On some older Intel 32-bit hardware, under Linux, floating-point
operations don't always give correctly rounded results. Here's an
example involving addition, on SuSE Linux 10.2/Xeon.
Python 2.6a3+ (trunk:63521, May 21 2008, 15:40:39)
[GCC 4.1.2 20061115 (prerelease) (SUSE Linux)] on linux2
Type "help", "copyright", "credits" or "license" for more information.
The second result should really be 1e16+2., not 1e16+4. This appears to
be related to this GCC issue:
Various fixes are possible. One is to add the -ffloat-store flag to the
gcc options. Another is to use the information in fpu_control.h, if
available, to set the x87 precision control. Yet another is to sprinkle
'volatile' modifiers throughout floatobject.c.
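For concreteness, here is a minimal sketch of the fpu_control.h approach
(glibc-specific; the demo values are the assumed 2.9999 example from above,
and it only shows the effect when compiled for the x87 unit, e.g. with -m32
and without -mfpmath=sse):

#include <stdio.h>
#include <fpu_control.h>   /* glibc-specific header */

int main(void)
{
    volatile double x = 1e16, y = 2.9999;

    /* Default x87 setting: 64-bit extended precision, so the sum is
       rounded twice (first to extended, then to double on the store)
       and comes out as 1e16 + 4. */
    printf("before: %.17g\n", x + y);

    /* Set the x87 precision-control field to 53 bits (double). */
    fpu_control_t cw;
    _FPU_GETCW(cw);
    cw = (cw & ~_FPU_EXTENDED) | _FPU_DOUBLE;
    _FPU_SETCW(cw);

    /* The sum is now rounded once, directly to double: 1e16 + 2. */
    printf("after:  %.17g\n", x + y);
    return 0;
}

If CPython took this route, the control word would presumably be set once
at interpreter startup.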
It's not clear to me that this *should* be fixed, but I think the
problem should at least be documented. Hence this bug report.
"These represent machine-level double precision floating point numbers.
You are at the mercy of the underlying machine architecture (and C or
Java implementation) for the accepted range and handling of overflow."
If you want, ", precision" could be added to that sentence; I think it
is fine as it stands.
Okay, so this is definitely not a Python bug; it's a well-known
and well-documented problem with IA32 floating-point. And I accept
that it's really not Python's responsibility to document this, either.
Nevertheless, it was a surprise to me when my (supposedly IEEE 754
compliant) Pentium 4 box produced this. I probably shouldn't have
been surprised. I'm aware of issues with 80-bit extended precision when
programming in C, but naively expected that Python would be largely
immune from these, since it's always going to force intermediate results
from (80-bit) floating-point registers into (64-bit) memory slots.
There's an excellent recent article by David Monniaux, "The pitfalls of
verifying floating-point computations", available online,
that explains exactly what's going on here (it's a case of double
rounding, as described in section 3.1.2 of that paper).
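For concreteness, the double rounding in the example above goes as follows
(assuming the standard x87 extended format with a 64-bit significand): the
ulp of extended precision at 1e16 is 2^-10, and 2.9999 * 2^10 = 3071.8976,
so the first rounding (to extended) turns 1e16 + 2.9999 into exactly
1e16 + 3. That value is exactly halfway between the adjacent doubles
1e16 + 2 and 1e16 + 4, and the second rounding (to double, ties-to-even)
picks 1e16 + 4. A single correct rounding would have given 1e16 + 2,
since 2.9999 is closer to 2 than to 4.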
Do you think a documentation patch that added this reference, along with
the oft-quoted "What Every Computer Scientist Should Know About
Floating-Point Arithmetic" by David Goldberg, to Appendix B of the
tutorial would be acceptable?
One other thing that's worth mentioning: on Pentium 4 and later, the
gcc flags "-mfpmath=sse -msse2" appear to fix the problem, by forcing
gcc to use the SSE floating-point unit instead of the x87-derived one.
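For reference, the invocations might look like this (test.c standing in
for any small demo, such as the sketch earlier in this thread):

gcc -O2 -m32 test.c -o test                        # default: x87 math, double rounding possible
gcc -O2 -m32 -mfpmath=sse -msse2 test.c -o test    # scalar SSE2 math, a single rounding to double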
In any case, I guess this report should be closed as 'invalid', but I
hope that at least others who encounter this problem manage to find this
report.