Math rules (the distributive law) say that a * b + a * c == a * (b + c), which also extends to a * b + a * c + a * d == a * (b + c + d) and so on. This means the same calculation can be done with fewer multiplications and more additions. It might not be a huge difference, but I would have assumed the compiler could rewrite the left-hand form into the right-hand form to save some multiplications; that seems not to be the case.
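For concreteness, here is a minimal Go sketch of the two forms being compared (the variable names and values are my own, not taken from the original snippets):

```go
package main

import "fmt"

func main() {
	var a, b, c float64 = 2, 3, 4

	// Distributed form: two multiplications, one addition.
	left := a*b + a*c

	// Factored form: one multiplication, one addition.
	right := a * (b + c)

	fmt.Println(left, right) // 14 14 for these particular inputs
}
```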
In fact, write the same program in C and you'll see that for doubles GCC generates two multiplications for the second snippet (like Go does), unless you pass -ffast-math, which disables strict IEEE-754 arithmetic.
@ALTree is right: this optimization is not allowed for floats. Rounding happens at different points in the two expressions, which can affect the final result.
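To make that concrete, here is a small Go example (the values are my own, chosen to expose the rounding, not taken from the question) where the two mathematically equivalent forms give different float64 results:

```go
package main

import "fmt"

func main() {
	a, b, c := 100.0, 0.1, 0.2

	// a*b and a*c happen to round to exactly 10 and 20, so the sum is exactly 30.
	fmt.Println(a*b + a*c) // 30

	// b+c rounds to 0.30000000000000004, and that error is then scaled by a.
	fmt.Println(a * (b + c)) // 30.000000000000004
}
```

Note that the operands have to be variables: Go evaluates untyped constant expressions with arbitrary precision, so the difference only shows up once the values are actual float64s.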
Go does have some leeway in the spec to do optimizations kind of like this one:
An implementation may combine multiple floating-point operations into a single fused operation, possibly across statements, and produce a result that differs from the value obtained by executing and rounding the instructions individually.
But that only allows dropping a rounding, not moving it around.
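For illustration, Go's math.FMA computes x*y + z with a single rounding, which has the same effect as the fused operation the spec permits an implementation to produce implicitly. A small sketch (the example values are my own) showing that fusing can change the result:

```go
package main

import (
	"fmt"
	"math"
)

func main() {
	x, y, z := 0.1, 0.2, -0.02

	unfused := x*y + z         // x*y is rounded first, then the addition is rounded
	fused := math.FMA(x, y, z) // a single rounding over the whole expression

	// The two results differ (roughly 3.47e-18 vs 1.80e-18 on a conforming
	// float64 implementation) because the fused version drops the intermediate
	// rounding of x*y.
	fmt.Println(unfused, fused)
}
```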
Alright, my bad. I've learned something new today. I have been doing way too much math and did not have enough understanding of rounding to realise that it would produce different results. Thanks for correcting me 🙂