Optimize PyLong_AsDouble for single-digit longs #70476
The attached patch drastically speeds up PyLong_AsDouble for single-digit longs:

    -m timeit -s "x=2" "x*2.2 + 2 + x*2.5 + 1.0 - x / 2.0 + (x+0.1)/(x-0.1)*2 + (x+10)*(x-30)"

    with patch: 0.414
    without:    0.612

spectral_norm: 1.05x faster. The results are even better when paired with the patch from bpo-21955.
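The patch itself is not attached to this migrated thread. As a rough, hypothetical sketch of what such a single-digit fast path could look like (the helper name long_as_double_fast and the slow_path callback are made up for illustration, and the ob_digit/sdigit details are assumed from CPython's internal long representation):

    /* Hypothetical sketch, not the committed patch: handle longs that
     * consist of at most one internal base-2**30 digit directly, and
     * defer everything else to the existing general conversion code
     * (represented here by the slow_path callback). */
    static double
    long_as_double_fast(PyObject *v, double (*slow_path)(PyObject *))
    {
        Py_ssize_t size = Py_SIZE(v);   /* signed digit count */

        if (-1 <= size && size <= 1) {
            /* At most one 30-bit digit: the magnitude is far below 2**53,
             * so the cast to double is exact and no frexp/ldexp work is
             * needed. */
            sdigit val;
            if (size == 0)
                val = 0;
            else if (size < 0)
                val = -(sdigit)((PyLongObject *)v)->ob_digit[0];
            else
                val = (sdigit)((PyLongObject *)v)->ob_digit[0];
            return (double)val;
        }
        return slow_path(v);   /* multi-digit longs keep the old behaviour */
    }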
New changeset 986184c355e8 by Yury Selivanov in branch 'default':
Thanks a lot for the review, Serhiy!
Nice enhancement.

    /* Fast path; single digit will always fit decimal.

I'm not sure that "spectral_norm" can be qualified as a macro-benchmark. It's more a micro-benchmark on numerics, no? I'm just nitpicking, your patch is fine ;-)
Actually, please fix the comment. We don't want someone wondering what those "macro-benchmarks" are.
Additionally, "single digit will always fit a double"?
If spectral-norm and nbody aren't good benchmarks, then let's remove them from our benchmark suite. I'll remove that comment anyway, as it doesn't make a lot of sense :)
What's wrong with that phrase? Would this be better: "It's safe to cast a single-digit long (31 bits) to double"?
Sorry, I was a bit brief: the current comment says "decimal" instead of "double". It should be changed to "double".
On 06/02/2016 18:07, Yury Selivanov wrote:
Probably, yes.
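For context on why that wording is safe: a single CPython digit holds at most 30 bits of magnitude, while an IEEE 754 double has a 53-bit significand, so every single-digit value converts without rounding. A small standalone check, written here only to illustrate the argument (not part of the issue):

    #include <assert.h>
    #include <float.h>
    #include <stdint.h>
    #include <stdio.h>

    int main(void)
    {
        /* A single CPython digit holds at most 30 bits of magnitude; a
         * double's significand has DBL_MANT_DIG (53) bits, so any such
         * value, positive or negative, casts to double exactly. */
        int32_t max_single_digit = ((int32_t)1 << 30) - 1;

        assert(DBL_MANT_DIG >= 30);
        assert((int32_t)(double)max_single_digit == max_single_digit);
        assert((int32_t)(double)(-max_single_digit) == -max_single_digit);

        printf("single-digit values round-trip through double exactly\n");
        return 0;
    }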
Actually, let me refine that. nbody and spectral-norm don't make sense
New changeset cfb77ccdc23a by Yury Selivanov in branch 'default':
Oh, got it now, sorry. I rephrased the comment a bit, hopefully it's better now. Please check. Thanks!
The comment looks good to me -- I'll stay out of the benchmarking issue, I didn't check any of that. :)
Well, I *did* run the decimal/float milli-benchmark now, and it consistently shows at least a 15% improvement for floats. Given that the official benchmark suite does not seem to be very stable either (bpo-21955), I actually prefer small and well-understood benchmarks.
I'm not sure why this issue is open... Closing it. |