Allow benchmarking of sequential functions #446
From my reading of the Writing benchmarks docs, setup functions are run once per benchmark. Sometimes, however, one benchmark's test is another benchmark's setup. When functions take a long time to run, it'd be great to be able to use the results of one benchmark as the setup for the next.
Here's my own use case:
Most of these are big operations that take some time to run, so it's annoying to have to run the whole pipeline again just to benchmark the final step. And the objects are big and complex, so pickling them saves some time but is still expensive.
Any suggestions?
Thanks!
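For concreteness, here is roughly what this pattern looks like as an asv benchmark today; `load_data`, `clean`, and `transform` are hypothetical stand-ins for the pipeline stages described above, and each benchmark's `setup` has to redo every earlier stage.

```python
import time

# Hypothetical stand-ins for the expensive pipeline stages.
def load_data():
    time.sleep(0.5)                      # pretend this is slow I/O
    return list(range(1000))

def clean(data):
    time.sleep(0.5)                      # pretend this is expensive
    return [x for x in data if x % 2 == 0]

def transform(data):
    time.sleep(0.5)                      # pretend this is expensive
    return [x * x for x in data]

class TimePipeline:
    def setup(self):
        # setup is run per benchmark, so the early stages are
        # recomputed for every benchmark that depends on them.
        self.raw = load_data()
        self.cleaned = clean(self.raw)

    def time_clean(self):
        clean(self.raw)

    def time_transform(self):
        transform(self.cleaned)
```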
Comments
This probably belongs on the feature wishlist; it is not directly supported right now. There is probably some overlap with #179 (the benchmark is logically a single benchmark with multiple returns, if the different parts cannot really be run separately). You can sort of cobble it together by running all of the steps in …
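The end of that comment is cut off in this copy, but asv's documented `setup_cache` hook is one way to cobble the steps together: asv runs `setup_cache` only once, pickles its return value, and passes it as the first argument to the benchmark. A minimal sketch, reusing the hypothetical stand-ins from above and adding a hypothetical `final_step`:

```python
def final_step(data):
    return sum(data)                     # hypothetical final stage

class TimeFinalStep:
    def setup_cache(self):
        # Run once (not per repeat); asv pickles the return value and
        # hands it to the benchmark, so the earlier stages are paid
        # for only once.
        return transform(clean(load_data()))

    def time_final_step(self, transformed):
        # Only the final stage is timed; 'transformed' is the
        # unpickled result of setup_cache.
        final_step(transformed)
```

One caveat: `setup_cache` persists its result by pickling, so for the big, complex objects described above it trades repeated computation for pickling cost. Alternatively, `setup_cache` can write files into the current working directory for the benchmarks to load themselves.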
@pv thanks! I'm always happy to hear that I didn't miss some completely obvious part of the docs that solves my problem. ;) I will try your hacky suggestion. Incidentally, if you have any pointers about where to start implementing this properly in asv, I might have a crack at it!
All benchmark running is done in `benchmarks.py` (manager process) and `benchmark.py` (benchmark runner/discovery process).
cc @mike-wendt, you might appreciate the hacky suggestion here.