add fractions benchmarks #10
Conversation
…ng the same benchmarking code
I'm not sure that I understand the purpose of the change. Do you want to compare the performance of the fractions module with the performance of the decimal module? That's not really how the "performance" module is used: it is a set of benchmarks to compare the performance of two Python implementations. Maybe we can add the benchmark, use fractions by default, but not run it with decimal? I mean that you should run it with decimal manually.
    @VersionRange()
    def BM_Telco_Decimal(python, options):
        bm_path = Relative("bm_telco_fractions.py")
        return run_perf_script(python, options, bm_path, extra_args=['decimal'])
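The diff above passes `extra_args=['decimal']` through `run_perf_script()` to the benchmark script. The runner side isn't shown in this PR, so as a hypothetical sketch, the script would typically consume that extra positional argument with argparse, defaulting to fractions (the names and defaults here are assumptions, not code from the PR):

```python
import argparse

def parse_impl(argv=None):
    # Hypothetical sketch: consume the extra positional argument that
    # run_perf_script() appends (e.g. 'decimal'), defaulting to 'fractions'.
    parser = argparse.ArgumentParser()
    parser.add_argument('impl', nargs='?', default='fractions',
                        choices=['fractions', 'decimal'],
                        help="number implementation to benchmark")
    args = parser.parse_args(argv)
    return args.impl

print(parse_impl(['decimal']))  # → decimal
print(parse_impl([]))           # → fractions
```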
As I wrote in the comment, I don't think that it makes sense to test the decimal module?
Agreed, unless you want to point out the speed difference. But that was my question on bugs.python.org: What is the specific goal of this benchmark? :)
What I mean is that there is one benchmark implementation that is executed with two different backends, thus giving comparable results for two different stdlib libraries. Thus, I think it's good to execute both. The results are not directly comparable with the "bm_telco" benchmark because that does some unrelated processing along the way.
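The "one benchmark body, two backends" idea described above can be sketched as follows. This is an illustrative example, not the PR's actual benchmark body: the same workload runs against either `Fraction` or `Decimal`, so only the number type differs between the two timed runs:

```python
from decimal import Decimal
from fractions import Fraction

def benchmark_sum(backend_class, terms=50):
    # Illustrative workload: a partial sum of 1/i**2. The same code
    # path runs for both backends, so timings are directly comparable.
    total = backend_class(0)
    for i in range(1, terms + 1):
        total += backend_class(1) / backend_class(i * i)
    return total

# Both backends compute the same value (up to Decimal's rounding),
# which also validates the calculation across implementations.
frac_result = benchmark_sum(Fraction)
dec_result = benchmark_sum(Decimal)
assert abs(float(frac_result) - float(dec_result)) < 1e-9
```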
        return perf.perf_counter() - start

    def run_bench(n, impl):
I suggest renaming "n" to "loops" to be consistent with the other benchmarks.
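With that rename applied, the timing function would follow the convention the other benchmarks use: take a `loops` count, run the workload that many times, and return the total elapsed time so the harness can compute per-loop cost. A self-contained sketch (using `time.perf_counter` in place of `perf.perf_counter` so it runs without the external `perf` package, now published as `pyperf`; the inner workload is illustrative):

```python
import time
from fractions import Fraction

def run_bench(loops, backend_class):
    # 'loops' (renamed from 'n') is the number of workload iterations;
    # the function returns the total elapsed time for all of them.
    range_it = range(loops)
    start = time.perf_counter()
    for _ in range_it:
        total = backend_class(0)
        for i in range(1, 20):
            total += backend_class(1) / backend_class(i)
    return time.perf_counter() - start

elapsed = run_bench(10, Fraction)
assert elapsed >= 0.0
```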
…ake sure we make use of the final result (and validate the calculations by comparing all results)
I've updated the pull request.
    def find_benchmark_class(impl_name):
        if impl_name == 'fractions':
            from fractions import Fraction as backend_class
        elif impl_name == 'quicktions':
Hmm, what about gmpy2 rationals? As I understand it, this test suite is about regressions within Python itself. I'm not sure this would be a good precedent.
I can't see a reason why the benchmarks shouldn't support non-stdlib libraries that reimplement standard Python functionality, as long as it can be achieved with a reasonable amount of adaptation. The elementtree benchmark I've written also supports lxml (AFAICT, it's been modified since), you can pass the library to import by name.
And since you asked, yes, gmpy2 would certainly qualify, also for the original telco benchmark, I guess.
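The import-by-name approach described for the elementtree benchmark generalizes the `find_benchmark_class` diff above. As a hypothetical sketch (the mapping and helper name are assumptions, not PR code), a name-to-backend table lets stdlib `fractions` work out of the box while third-party `quicktions` or gmpy2's `mpq` are picked up only if installed:

```python
import importlib

# Hypothetical backend table: stdlib plus optional third-party libraries
# that reimplement the same rational-number functionality.
_BACKENDS = {
    'fractions': ('fractions', 'Fraction'),
    'quicktions': ('quicktions', 'Fraction'),  # third-party, optional
    'gmpy2': ('gmpy2', 'mpq'),                 # third-party, optional
}

def load_backend(impl_name):
    # Import the module lazily by name, so missing optional backends
    # only fail when actually requested.
    module_name, attr = _BACKENDS[impl_name]
    module = importlib.import_module(module_name)
    return getattr(module, attr)

cls = load_backend('fractions')
print(cls(1, 3) + cls(1, 6))  # → 1/2
```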
…run it against the "fractions" module by default
I've updated the pull request to address the comments.
I closed the CPython issue https://bugs.python.org/issue22458 to suggest continuing the discussion on this (GitHub) bug tracker, but please take a look at the discussion there.
Sorry, but I'm not really convinced that a fractions benchmark is helpful for comparing the performance of different Python implementations. If you still want a fractions benchmark, maybe you can send a pull request to my https://github.com/haypo/pymicrobench project, which is a much wider collection of random CPython (micro)benchmarks; that project has a different purpose. At least, I took your update on the Telco URL :-D I also wrote a longer description for each benchmark in the documentation. Here is the new documentation for telco: I'm closing the PR.
Add fractions benchmarks that compare the decimal and fractions modules using the same benchmarking code.
See https://bugs.python.org/issue22458