
add fractions benchmarks #10

Closed · wants to merge 9 commits

Conversation

@scoder commented Sep 1, 2016

Add fractions benchmarks that compare the decimal and fractions modules using the same benchmarking code.
See https://bugs.python.org/issue22458
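
For illustration, a minimal sketch of the idea (hypothetical, not the PR's actual bm_telco_fractions.py): a single benchmark body whose number type is swapped between decimal.Decimal and fractions.Fraction, so the two modules run the exact same workload.

import time
from decimal import Decimal
from fractions import Fraction

def bench_calls(loops, number_class):
    # Sum many small per-second "call charges", telco-style, using
    # whichever number type was passed in (Decimal or Fraction).
    rate = number_class(25) / number_class(1000)
    start = time.perf_counter()
    for _ in range(loops):
        total = number_class(0)
        for seconds in range(1, 100):
            total += rate * seconds
    return time.perf_counter() - start

if __name__ == '__main__':
    for cls in (Decimal, Fraction):
        print(cls.__name__, bench_calls(100, cls))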

@vstinner (Member) commented Sep 1, 2016

> Add fractions benchmarks that compare the decimal and fractions modules using the same benchmarking code.

I'm not sure that I understand the purpose of the change. Do you want to compare the performance of the fractions module with the performance of the decimal module?

That's not really how the "performance" module is used: it is a set of benchmarks for comparing the performance of two Python implementations.

Maybe we can add the benchmark and use fractions by default, but not run it with decimal automatically? I mean that you would have to run it with decimal manually.

@VersionRange()
def BM_Telco_Decimal(python, options):
    # Run the shared benchmark script with the 'decimal' backend.
    bm_path = Relative("bm_telco_fractions.py")
    return run_perf_script(python, options, bm_path, extra_args=['decimal'])
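
(A fractions counterpart would presumably look the same with extra_args=['fractions']; the function below is a hypothetical reconstruction mirroring the excerpt above, not code from the PR.)

@VersionRange()
def BM_Telco_Fractions(python, options):
    # Hypothetical default variant using the stdlib fractions backend.
    bm_path = Relative("bm_telco_fractions.py")
    return run_perf_script(python, options, bm_path, extra_args=['fractions'])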
@vstinner (Member) commented on the diff:

As I wrote in the comment, I don't think that it makes sense to test the decimal module?

Reply:

Agreed, unless you want to point out the speed difference. But that was my question on bugs.python.org: What is the specific goal of this benchmark? :)

@scoder (Author) commented Sep 1, 2016

What I mean is that there is one benchmark implementation that is executed with two different backends, giving comparable results for two different stdlib libraries. That is why I think it's good to execute both. The results are not directly comparable with the "bm_telco" benchmark, because that one does some unrelated processing along the way.

    return perf.perf_counter() - start


def run_bench(n, impl):
@vstinner (Member) commented on the diff:

I suggest renaming "n" to "loops", to be consistent with the other benchmarks.
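
A sketch of the suggested shape (hypothetical; only the two-line excerpt above is from the PR), where "loops" is the perf-style iteration count and impl is the backend's Fraction-like class:

import perf  # timing helper used by the suite, as in the excerpt above

def run_bench(loops, impl):
    # `loops` (formerly `n`): perf-style iteration count.
    # `impl`: the backend's Fraction-like class to exercise.
    start = perf.perf_counter()
    for _ in range(loops):
        total = impl(0)
        for i in range(1, 100):
            total += impl(1, i)  # sum 1/i with the chosen backend
    return perf.perf_counter() - start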

@scoder (Author) commented Sep 1, 2016

I've updated the pull request.

def find_benchmark_class(impl_name):
    if impl_name == 'fractions':
        from fractions import Fraction as backend_class
    elif impl_name == 'quicktions':
        from quicktions import Fraction as backend_class  # quicktions reimplements Fraction
    ...
A reviewer commented on the diff:

Hmm, what about gmpy2 rationals? As I understand it, this test suite is about regressions within Python itself. I'm not sure this would be a good precedent.

@scoder (Author) replied Sep 2, 2016:

I can't see a reason why the benchmarks shouldn't support non-stdlib libraries that reimplement standard Python functionality, as long as that can be achieved with a reasonable amount of adaptation. The elementtree benchmark I wrote also supports lxml (AFAICT it has been modified since); you can pass the library to import by name.

And since you asked, yes, gmpy2 would certainly qualify, also for the original telco benchmark, I guess.
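
As an illustration of that name-based dispatch, a minimal sketch (hypothetical; load_backend and the gmpy2 handling are assumptions, not code from the PR):

import importlib

def load_backend(name):
    # 'fractions' and 'quicktions' both export a Fraction class;
    # gmpy2 names its rational type 'mpq'.
    attr = 'mpq' if name == 'gmpy2' else 'Fraction'
    return getattr(importlib.import_module(name), attr)

load_backend('fractions') then behaves like find_benchmark_class() above, but any importable backend can be named on the command line.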

@scoder (Author) commented Sep 2, 2016

I've updated the pull request to address the comments.

@vstinner (Member) commented:

I closed the issue https://bugs.python.org/issue22458, but please take a look at the discussion there.

I closed the CPython issue to suggest continuing the discussion on this (GitHub) bug tracker.

@vstinner (Member) commented:

Sorry, but I'm not convinced that a fractions benchmark is really helpful for comparing the performance of different Python implementations.

If you still want a fractions benchmark, maybe you can send a pull request to my https://github.com/haypo/pymicrobench project, which is a much broader collection of assorted CPython (micro)benchmarks. That project has a different purpose.

At least I took your update of the Telco URL :-D I also wrote a longer description for each benchmark in the documentation. Here is the new documentation for telco:
http://pyperformance.readthedocs.io/benchmarks.html#telco

I'm closing the PR.

@vstinner closed this on Apr 13, 2017.