Make conversions from long to float correctly rounded. #47416
Comments
If n is a Python long, then one might expect float(n) to return the closest float to n; currently it doesn't always do so. For example, with n = 295147905179352891391, the closest float to n is equal to n+1, but float(n) returns the next float down:

>>> n = 295147905179352891391
>>> float(n)
2.9514790517935283e+20
>>> long(float(n))
295147905179352825856L
>>> n - long(float(n))
65535L

It's fairly straightforward to fix PyLong_AsDouble to return the closest double to the given long. Having a correctly rounded float(n) can be useful for testing other floating-point code. |
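As a side note (my own illustration, not part of the report): at this magnitude consecutive doubles are 2**16 apart, so plain integer arithmetic is enough to see which candidate double is actually closest to n.

# Illustrative check: the value the unpatched float(n) returns is exactly 2**68,
# and the next representable double above it is 2**68 + 2**16.
n = 295147905179352891391

lower = 2**68            # what the unpatched float(n) returns, as an exact int
upper = 2**68 + 2**16    # the neighbouring double above it

print(n - lower)   # 65535
print(upper - n)   # 1  -> upper is the double closest to n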
I agree, longs should be correctly rounded when coerced to floats. There is an ugly (but amusing) workaround while people wait for this:

int(float(repr(295147905179352891391)[:-1]))

Though I assume this relies on the platform's strtod working correctly. |
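Wrapped up as a helper, the workaround looks like this (a sketch of mine, with a hypothetical name; it leans entirely on strtod producing correctly rounded string-to-float results):

# Sketch of the workaround as a function; relies on correctly rounded strtod.
def int_to_float_via_str(n):
    # repr() of a Python 2 long ends in 'L'; strip it before handing the
    # decimal string to float().  For a Python 3 int there is nothing to strip.
    return float(repr(n).rstrip('L'))

n = 295147905179352891391
print(int(int_to_float_via_str(n)) - n)   # 1 when the conversion rounds correctly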
You may use "if (nbits == (size_t)-1 && PyErr_Occurred())" to check for an error there. Anyway, interesting patch!

Python3 vanilla:
>>> n = 295147905179352891391; int(float(n)) - n
-65535
Python3 + your patch:
>>> int(float(n)) - n
1 |
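For heavier testing, something along these lines could be used to look for mis-rounded conversions (my sketch, not from the thread; it only checks nearness, not the ties-to-even choice):

# Brute-force check: flag any n for which a neighbouring double is strictly
# closer to n than float(n) is.
import random, struct

def neighbours(x):
    # The representable doubles immediately below and above a positive finite x,
    # obtained by stepping its IEEE 754 bit pattern by one.
    bits = struct.unpack('<Q', struct.pack('<d', x))[0]
    return (struct.unpack('<d', struct.pack('<Q', bits - 1))[0],
            struct.unpack('<d', struct.pack('<Q', bits + 1))[0])

for _ in range(100000):
    k = random.randint(55, 128)
    n = random.getrandbits(k) | (1 << (k - 1))   # exactly k bits, so n > 2**53
    f = float(n)
    err = abs(n - int(f))
    if any(abs(n - int(g)) < err for g in neighbours(f)):
        print("not correctly rounded:", n)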
Mark, I noticed that you replaced a call to _PyLong_AsScaledDouble with your own conversion logic. I also wonder whether round-to-nearest can be implemented without using floating-point arithmetic at all. I believe _PyLong_AsScaledDouble is written the way it is to support non-IEEE floating-point formats. Maybe it would be worthwhile to provide a simple IEEE-specific code path with a fallback for other platforms. |
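To make the integer-only idea concrete, here is a rough sketch of my own (not the patch, and not necessarily what was meant): round a positive int to the nearest double using only integer operations, plus one exact ldexp at the end, assuming IEEE 754 doubles.

# Integer-only round-half-to-even conversion of a positive int to the nearest
# double, assuming a 53-bit significand.
import math

def int_to_nearest_double(n):
    assert n > 0
    nbits = n.bit_length()
    if nbits <= 53:
        return float(n)                 # exactly representable, no rounding needed
    shift = nbits - 54                  # keep 53 significand bits plus 1 rounding bit
    q, r = divmod(n, 1 << shift)        # q has 54 bits; r holds the sticky bits
    if q & 1 and (r or q & 2):          # round half to even
        q += 1
    q >>= 1                             # at most 53 bits (or 2**53 after a carry)
    return math.ldexp(q, shift + 1)     # exact scaling; may raise OverflowError

print(int(int_to_nearest_double(295147905179352891391)) - 295147905179352891391)  # 1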
float(295147905179352891391L) gives a different result on Python 2.5 and on trunk (2.7/3.1), whereas the code is the same!? |
Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> reduce(lambda x,y: x*32768.0 + y, [256, 0, 0, 1, 32767])
2.9514790517935283e+20
>>> float(295147905179352891391L)
2.9514790517935289e+20
Python 2.7a0 (trunk:67679M, Dec 9 2008, 14:29:12)
>>> reduce(lambda x,y: x*32768.0 + y, [256, 0, 0, 1, 32767])
2.9514790517935283e+20
>>> float(295147905179352891391L)
2.9514790517935283e+20
Python 3.1a0 (py3k:67652M, Dec 9 2008, 13:08:19)
>>> float(295147905179352891391)
2.9514790517935283e+20
>>> digits=[256, 0, 0, 1, 32767]; x=0
>>> for d in digits:
... x*=32768.0
... x+= d
...
>>> x
2.9514790517935283e+20

All results are the same, except float(295147905179352891391L) in Python 2.5.1:

Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> x=295147905179352891391L
>>> long(float(long(x))) - x
1L |
Ok, I understand why different versions of the same code give different results: it depends on how the interpreter was compiled. Results with Python 2.5.1: ...

I'm unable to isolate the exact compiler flag which changes the result. |
Victor, what does ... give on your Ubuntu 2.5 machine? |
About -O0 vs -O1, I think that I understood the difference (by reading the generated code): in the -O0 version each intermediate result is written back to memory as a 64-bit double after every operation, whereas in the -O1 version the intermediates stay in the FPU registers. Intel uses 80-bit floats internally, but load/store uses 64-bit floats, hence the different rounding. Hey, floating point numbers are funny :-)

---

Python 2.5.1 (r251:54863, Jul 31 2008, 23:17:40)
>>> 1e16 + 2.999
10000000000000002.0
>>> 1e16 + 2.9999
10000000000000004.0

Same result with python 2.7/3.1. |
An interesting document: "Request for Comments: Rounding in PHP" |
Exactly. If your Intel machine is Pentium 4 or newer, you can get consistent results by using the SSE2 instructions instead of the x87 FPU.

Yup. |
On Tue, Dec 9, 2008 at 11:02 AM, Mark Dickinson <report@bugs.python.org> wrote:
The flags you may be looking for are -msse2 -mfpmath=sse |
[Alexander]

Thanks, Alexander!

[Alexander again, from an earlier post...]

You read my mind! I've got another issue open about making long ...

Well, I had other possible floating-point formats in mind when I wrote the code, and I wanted to keep it working for those. When FLT_RADIX is some other power of 2 (FLT_RADIX=16 is the only ... |
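As an aside (my note, not from the thread), the format the running interpreter actually uses is easy to inspect:

# 2 and 53 here indicate IEEE 754 binary64 doubles, the case a simple
# IEEE-specific code path would target.
import sys
print(sys.float_info.radix, sys.float_info.mant_dig)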
[Alexander]
The idea's attractive. The problem is finding an integer type that's guaranteed to be wide enough. (One could use two 32-bit integer variables, but that's going to get messy.) |
..
But Python already has an arbitrary precision integer type, why not use it? |
As you say, performance would suffer. What would using Python's integer type solve, that isn't already solved by the current patch?

I know the code isn't terribly readable; I'll add some comments. |
On Tue, Dec 9, 2008 at 12:39 PM, Mark Dickinson <report@bugs.python.org> wrote:
Speaking for myself, it would alleviate the irrational fear of floating-point arithmetic. Seriously, it is not obvious that your algorithm is correct and does not depend on the platform's floating-point quirks. On the other hand, an implementation that uses only integer arithmetic would be much easier to reason about. |
By the way, the algorithm here is essentially the same as an algorithm I've used elsewhere, so any mathematical defects that you find in this patch probably indicate a defect there as well.

In fact, the code *does* do integer arithmetic, except that one of the integers happens to be stored in a double.

I accept the code needs extra documentation; I was planning to put the equivalent Python code in the comments. |
Hmm. On closer inspection that's not quite true. After the line

x = x * PyLong_BASE + (dig & (PyLong_BASE - pmask));

x has a value of the form n * pmask, where pmask is a power of 2 and ... |
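The property being appealed to here, that a double used as an accumulator behaves exactly like an integer while its value stays below 2**53, is easy to check; this small demonstration is mine, reusing the digit list from the earlier reduce() examples:

# While the running value stays below 2**53, a double accumulator matches
# genuine integer arithmetic exactly.
PyLong_BASE = 1 << 15                 # 15-bit digits, as in Python 2.x longs

digits = [256, 0, 0, 1, 32767]        # 295147905179352891391, most significant first
x_float, x_int = 0.0, 0
for dig in digits[:3]:                # stop while the value still fits in 53 bits
    x_float = x_float * PyLong_BASE + dig
    x_int = x_int * PyLong_BASE + dig
assert x_float == x_int               # exact: no rounding has happened yet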
Thanks for your comments, Alexander. Here's a rewritten version of the patch that's better commented and ... |
Minor cleanup of long_as_double2.patch. |
Updated patch; cleanup of comments and slight refactoring of code.

Int->float conversions are even a speck faster than the current code, for ...

Also, retarget this for 2.7 and 3.1. |
Updated patch; applies cleanly to current trunk. No significant changes.

Note that there's now a new reason to apply this patch: it ensures that ...

One problem: if long->float conversions are correctly rounded, then ...

This problem only affects 64-bit machines: on 32-bit machines, all ... |
(Slightly updated version of) patch applied in r71772 (trunk), |