pyperf is incompatible with click #65
Can't you avoid importing the code that uses click in your benchmark? Or maybe mock sys.argv while it's imported. Why does your code parse command-line arguments even when it's not the main module?
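vstinner's "mock sys.argv when it's imported" suggestion can be sketched with only the standard library. The helper name and the toy module below are hypothetical, not part of pyperf or click; the toy module stands in for any library that inspects `sys.argv` at import time:

```python
import sys
import types
from unittest import mock

def import_with_clean_argv(name, source):
    """Execute module source under a neutral sys.argv, then restore it."""
    with mock.patch.object(sys, "argv", [name]):
        module = types.ModuleType(name)
        exec(source, module.__dict__)
    return module

# A toy stand-in for a module that reads sys.argv during import.
mod = import_with_clean_argv("toy", "import sys; seen_argv = list(sys.argv)")
print(mod.seen_argv)  # the module saw only ['toy'], not the real CLI args
```

The same `mock.patch.object` context manager works around a plain `import` statement too, as long as the import happens inside the `with` block.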
@vstinner I don't understand; my code is:

```python
import click
import pyperf

import module_to_benchmark


def function_factory(x, y):
    def some_function(x=x, y=y):
        module_to_benchmark.function_to_benchmark(x, y)
    return some_function


@click.command()  # the original snippet had @click.cli(), which does not exist in click
@click.option("-x")
@click.option("-y")
def main(x, y):
    func = function_factory(x, y)
    runner = pyperf.Runner()
    runner.bench_func(f"Benchmarking function with ({x}, {y})", func)


if __name__ == "__main__":
    main()
```

Because I'm passing arbitrary Python objects, it would require a very different approach to refactor this using subprocess calls. I think it would be preferable if pyperf did not insist on parsing the command line itself.
pyperf spawns multiple processes and uses the command line to pass arguments to worker processes: https://pyperf.readthedocs.io/en/latest/run_benchmark.html#pyperf-architecture You should avoid using click and pyperf in the same file.
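The master/worker architecture vstinner links to can be illustrated with a stdlib-only sketch. Nothing here is pyperf's actual internals; it only shows why a process that passes its options through `sys.argv` clashes with another argument parser in the same file:

```python
import subprocess
import sys

# Illustrative worker: reads its benchmark parameter from argv and reports
# a result on stdout, the way a spawned worker process would.
WORKER_CODE = "import sys; print(int(sys.argv[1]) * 2)"

def run_worker(value):
    # The "master" re-invokes the interpreter and passes inputs on the
    # command line; any extra argv entries owned by click would end up here.
    out = subprocess.run(
        [sys.executable, "-c", WORKER_CODE, str(value)],
        capture_output=True, text=True, check=True,
    )
    return int(out.stdout)

print(run_worker(21))
```

Because the worker's entire input arrives via `argv`, the master has to own command-line parsing end to end, which is why mixing in a second parser breaks.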
Presumably you should also avoid using `argparse` and pyperf in the same file, then. I think I don't really understand the point of pyperf's worker-process architecture for my use case. I suppose for my purposes it will be good enough to go back to using `timeit`.
Can't you move your benchmark code into a separate script, and import your functions in the benchmark script? Something like that: Maybe pyperf should have an option to not spawn worker processes, but run all benchmarks in the same process.
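The example vstinner refers to was not preserved in the thread. A stdlib-only sketch of the split he describes might look like the following; the file layout, `function_to_benchmark`, and `make_bench` are all hypothetical, and the pyperf calls are shown as comments so the snippet runs without pyperf installed:

```python
# bench.py -- contains only the benchmark, and never imports click, so it
# is safe for pyperf to re-execute in worker processes.

def function_to_benchmark(x, y):
    return x + y

def make_bench(x, y):
    # Bind the parameters as defaults so the closure needs no argv access.
    def some_function(x=x, y=y):
        return function_to_benchmark(x, y)
    return some_function

# In the real bench.py one would then write:
#     runner = pyperf.Runner()
#     runner.bench_func("add", make_bench(2, 3))
# while a separate cli.py keeps the click command and launches bench.py,
# passing x and y on bench.py's own command line.

print(make_bench(2, 3)())
```

The key property is that the file pyperf re-executes owns its whole command line, while the click-based entry point lives elsewhere.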
Most timeit users run the benchmark an arbitrary number of times (usually 3 to 5 times) and then pick their favorite result, like the minimum. pyperf does this for you, but tries to be smarter: it stores all values, ignores the first "warmup" value, and displays the mean with the standard deviation, rather than just the minimum.
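The statistics vstinner describes can be reproduced with only the standard library. This is a sketch of the idea (drop a warmup value, report mean and standard deviation instead of the minimum), not pyperf's actual implementation:

```python
import statistics
import timeit

# Collect several timing values, as a typical timeit user would.
values = timeit.repeat("sum(range(100))", repeat=6, number=10_000)

# The "pick the minimum" habit vs. pyperf's approach: ignore the first
# warmup value, then report mean +/- standard deviation of the rest.
warmup, *samples = values
print(f"min:  {min(values):.6f} s")
print(f"mean: {statistics.mean(samples):.6f} s "
      f"+- {statistics.stdev(samples):.6f} s")
```

Reporting a spread rather than a single best value makes it visible when a machine was too noisy for the numbers to mean much.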
This currently uses timeit because pyperf is incompatible with click, and it was easier to rewrite the timing parts with timeit than to rewrite the argument handling parts with argparse, and timeit serves our needs. See: psf/pyperf#65
When a script that uses `click` to handle arguments calls `pyperf.Runner.bench_func`, it raises an exception in `pyperf`'s argument-parsing logic. Here is a minimal reproducing script:

Here's a `requirements.txt`, pinned to the versions I have tried this with:
The error message is:
I believe the issue is that `pyperf.Runner` is directly using `argparse` to add its own command-line arguments, which is not really the behavior I would expect from a library. It might be a lot to ask, but I think a better option would be to refactor this into a config object that can also be constructed automatically from the parser, something like this:
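The config-object design the reporter suggests could be sketched as follows. All names here (`RunnerConfig`, `from_args`, the option names) are hypothetical illustrations of the idea, not pyperf's API:

```python
import argparse
from dataclasses import dataclass

@dataclass
class RunnerConfig:
    # Plain settings object; a click-based script constructs it directly.
    processes: int = 20
    values: int = 3

    @classmethod
    def from_args(cls, argv=None):
        # Optional path: build the same config from the command line, so
        # argparse is opt-in rather than baked into the Runner.
        parser = argparse.ArgumentParser()
        parser.add_argument("--processes", type=int, default=cls.processes)
        parser.add_argument("--values", type=int, default=cls.values)
        ns = parser.parse_args(argv)
        return cls(processes=ns.processes, values=ns.values)

# A click user never touches sys.argv; the config is built in code.
config = RunnerConfig(processes=4)
print(config)
```

The Runner would then accept a `RunnerConfig` instead of parsing `sys.argv` itself, keeping the CLI behavior as a thin wrapper.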
To avoid backwards-incompatibility issues, you could add a flag to `Runner` like `use_argparse=True`, which users of `click` could set to `False` to avoid this problem.
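The proposed opt-out flag could look like the mock below. This is the reporter's suggestion rendered as a sketch; `use_argparse` is not a real parameter of pyperf's `Runner`:

```python
import sys

class Runner:
    """Mock of the suggested API; not pyperf's actual constructor."""

    def __init__(self, use_argparse=True, argv=None):
        if use_argparse:
            # Default: preserve today's behavior and consume the command line.
            self.argv = list(sys.argv[1:] if argv is None else argv)
        else:
            # click users opt out, so the Runner never sees click's options.
            self.argv = []

runner = Runner(use_argparse=False)
print(runner.argv)  # []
```

With the default left at `True`, existing pyperf scripts would keep working unchanged while click-based scripts could sidestep the clash.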