This went away when the gcc inliner was disabled. Most likely
the underlying problem has just been covered up, but I don't
care enough to investigate further. Note that the original
failure only occurred when using the x86 floating point stack,
and not when using SSE registers.
Remarks: this seems to be due to the extra (80-bit) precision of x86-32
floating point registers versus the 64-bit precision a value has once
stored to a stack slot. The test is testing 64-bit precision, and as far
as I can see it shouldn't matter whether you use stack stores or
registers, since both give values inside the Ada "model interval" (Ada
uses an interval arithmetic model for floating point). I wasn't able to
find anything obviously wrong with the LLVM-generated code. Currently I
suspect that the test itself is wrong, but I haven't finished analyzing
it yet.
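To illustrate the effect, here is a minimal C sketch (not the failing
Ada test; the values are arbitrary, chosen so the product is inexact in
64 bits). With x87 math (e.g. gcc -m32 -mfpmath=387) the expression may
be evaluated in an 80-bit register, while the store to the volatile slot
rounds it to 64 bits; with SSE math (-mfpmath=sse -msse2) both paths
should agree:

    #include <stdio.h>

    volatile double a = 1.0e16;         /* volatile: block constant folding */
    volatile double b = 1.0 + 0x1p-52;  /* one plus one ulp of a double */

    int main(void) {
        double in_reg = a * b;          /* may keep 80-bit extended precision */
        volatile double in_mem = a * b; /* the store rounds to 64 bits */
        /* On x87 the two results can differ; with SSE they are equal. */
        printf("%s\n", in_reg == in_mem ? "equal" : "differ");
        return 0;
    }

Note that both results are within one double ulp of the exact product,
which is why, as said above, both should lie inside the model interval.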
Yes, still a problem. In fact it now fails at -O2! It is not clear
whether this is a bug in LLVM or in the test. In order to resolve this I
wrote my own simulator of Ada floating point interval arithmetic, but
having done so I never found the time to apply it to analyzing this
test, d'oh!
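For reference, the core check such a simulator performs might look
roughly like the following C sketch (my simplifying assumptions, not
the actual simulator: the exact result is approximated in long double,
and adjacent doubles stand in for the bracketing model numbers, so this
is the tightest interval Ada could require):

    #include <math.h>
    #include <stdio.h>

    /* A result is acceptable if it lies between the two model numbers
       bracketing the mathematically exact result. */
    static int in_model_interval(double got, long double exact) {
        double lo = (double)exact;          /* nearest double to exact */
        double hi = lo;
        if ((long double)lo < exact)
            hi = nextafter(lo, INFINITY);   /* exact lies just above lo */
        else if ((long double)lo > exact)
            lo = nextafter(hi, -INFINITY);  /* exact lies just below hi */
        return lo <= got && got <= hi;
    }

    int main(void) {
        volatile double a = 1.0e16, b = 1.0 + 0x1p-52;
        long double exact = (long double)a * (long double)b;
        printf("%s\n", in_model_interval(a * b, exact) ? "inside" : "outside");
        return 0;
    }

Both the x87 and the SSE result pass this check, consistent with the
claim above that either should be acceptable.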