

I agree that LLMs are made to be exploratory, and that is good: it lets them range over different topics instead of always saying the same thing. However, I do not agree it is a feature for code generation, where the output has to follow a strict ruleset (syntax, specification, tests). Whatever errors the model generates and people accept are mistakes that fall within that person's threshold of acceptance, traded off against the cost of fixing them. In some contexts people focus almost only on the short term, which leads to a lot of errors being let through.
Moreover, you cannot say compilers are deterministic: there are situations where they are not (at least from the user's point of view).
https://krystalgamer.github.io/high-level-game-patches/
GCC’s unwarranted behaviour
In order to keep the code as small as possible I was compiling the code with -Os. Everything was working fine until I started to remove some printfs and started to get some crashes. Moving function calls around also seemed to randomly fix the problem, this was an indication that somehow memory/stack corruption was happening. After a lot of testing, I figured out that if -O2/-O3/-Os were used then the problem would appear. The issue was caused by Interprocedural analysis or IPA. One of its functions is to determine whether registers are polluted across function calls and if not then re-use them.

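To make the quoted behaviour concrete, here is a rough, hypothetical sketch (my own, not code from the post). GCC enables -fipa-ra at -O2/-O3/-Os; it inspects which registers a visible callee actually clobbers and lets callers skip saving values around the call when it can prove that is safe. If that callee is later replaced by a patch or hook that only respects the normal calling convention, the caller's assumption breaks and values are silently corrupted:

```c
/* Hypothetical sketch -- function and variable names are mine.
 * Build with something like: gcc -Os -c sketch.c   (or -O2/-O3)
 * -fipa-ra is enabled by default at those levels. */

volatile int source;   /* keeps the loop from being optimized away */
volatile int calls;    /* gives helper() a side effect so the call is kept */

__attribute__((noinline))
static void helper(void)
{
    calls++;           /* touches very few registers, so IPA sees the rest as untouched */
}

int run(int n)
{
    int sum = 0;
    for (int i = 0; i < n; i++) {
        sum += source; /* value the caller may keep in a caller-saved register */
        helper();      /* with -fipa-ra, GCC may emit no save/restore around this
                          call; a patched-in replacement that clobbers that
                          register corrupts `sum` */
    }
    return sum;
}
```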

I have talked with the author to confirm what he meant by this and by other posts he has made on compilation. He confirmed that most (if not all) C compilers are not deterministic and pointed me here as an example. He added that optimizations are not applied in a deterministic order, and adding LTO worsens the problem.