ASV Benchmarking Pull Request Workflow #831
Conversation
An example of this workflow in action can be seen in this dummy PR on my fork.
ASV Benchmarking: Benchmark Comparison Results

Benchmarks that have improved:

Benchmarks that have stayed the same:
This is excellent and very timely for our project.
We can tackle some of my comments in another future PR.
The units in the before/after table columns and the fully spelled-out function calls look a little verbose.
Thanks for the review! Yeah, the table can definitely be trimmed a bit to look cleaner. I didn't do any processing of the data, simply returned the result of the comparison. I'll definitely look into at least removing the `Change` column.
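Trimming the posted table could be handled by a small post-processing step before the bot comments. A minimal sketch, assuming the results arrive as a Markdown pipe table (the column names here are illustrative, not the workflow's actual output):

```python
def drop_column(md_table: str, column: str) -> str:
    """Remove one column from a Markdown pipe table by header name."""
    lines = md_table.strip().splitlines()
    # Parse the header row to find the index of the column to drop
    header = [cell.strip() for cell in lines[0].strip("|").split("|")]
    idx = header.index(column)
    out = []
    for line in lines:
        cells = line.strip("|").split("|")
        del cells[idx]
        out.append("|" + "|".join(cells) + "|")
    return "\n".join(out)


table = (
    "| Change | Before | After |\n"
    "|---|---|---|\n"
    "| + | 1.2ms | 2.4ms |"
)
print(drop_column(table, "Change"))
```

This keeps the bot's comment logic unchanged and only narrows the table it posts.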
This is incredibly useful! Despite some issues (e.g. the Change column), I am fine with getting this merged and fixing those issues in a separate PR since this will not break any functionality and instead will be usable for other PRs immediately after the merge.
Overview
Introduces a GitHub Actions workflow for running ASV benchmarks on pull requests, with results compared against the `main` branch. The workflow is triggered using the `run-benchmark` label. By default, the workflow is skipped unless it is queued with that label. It can be re-queued by removing and re-adding the label. Once the workflow is complete, the results are posted (or updated) as a comment by a GitHub bot.
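The label-gated trigger described above could be sketched roughly as follows. This is a hedged illustration, not the actual workflow file from this PR; the job names, action versions, and `asv` invocation are assumptions:

```yaml
# Hypothetical sketch of a label-gated ASV benchmark workflow
name: ASV Benchmarking

on:
  pull_request:
    types: [labeled, synchronize]

jobs:
  benchmark:
    # Only run when the PR carries the `run-benchmark` label
    if: contains(github.event.pull_request.labels.*.name, 'run-benchmark')
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
        with:
          fetch-depth: 0  # asv needs git history to compare against main
      - name: Run benchmarks against main
        run: |
          pip install asv
          asv machine --yes
          asv continuous origin/main HEAD
```

Because the gate is the `if:` expression on the label set, removing and re-adding the label fires a fresh `labeled` event, which is what makes re-queuing work.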
New benchmarks, including those that test new functionality, can be run through this workflow as well. They will fail on older revisions where the new functionality raises an error, but that is expected.
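One way to make such benchmarks degrade gracefully is asv's convention that raising `NotImplementedError` in `setup()` marks the benchmark as skipped for that commit rather than failed. A sketch, with hypothetical module and function names:

```python
class TimeNewFeature:
    """Sketch of an ASV benchmark for a function that only exists on newer commits."""

    def setup(self):
        try:
            # Hypothetical import; the module and function names are assumptions
            from mypackage.new_module import new_feature
        except ImportError:
            # asv treats NotImplementedError raised in setup() as
            # "skip this benchmark for this commit" instead of a failure
            raise NotImplementedError("new_feature not available on this commit")
        self.func = new_feature

    def time_new_feature(self):
        self.func()
```

On the `main` side of the comparison the import fails and the benchmark is skipped; on the PR side it runs normally.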
Example screenshot of the output: