
Automatic benchmarking on pull requests #468

Draft
wants to merge 4 commits into base: master

Conversation

@MilesCranmer

This is an attempt to set up automatic benchmarking on every pull request to Yao.jl.

This PR creates a GitHub Action that uses AirspeedVelocity.jl (easy benchmarking with a CLI, based on Comonicon.jl) to automatically run the benchmarks file on all submitted PRs and create a table of the performance measurements. This is useful for catching performance regressions.

Your project was a bit tricky to set up because there are multiple libraries in one repo. So I created a main benchmark/benchmarks.jl file that includes the benchmarks from the lib folders. This works as follows (a rough sketch of the resulting file is shown after the list):

  1. Wraps each library's benchmarks inside a module.
  2. Creates a global SUITE with SUITE["YaoArrayRegister"] equal to the YaoArrayRegister benchmark suite, etc.
  3. Uses a macro to replace all instances of, e.g., using YaoArrayRegister with using Yao.YaoArrayRegister inside the included benchmarks, so that running the benchmarks only requires Yao to be installed (this makes it easier to work with AirspeedVelocity.jl, which measures performance across the revision history of the git repo).
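To make the layout concrete, here is a rough sketch of what the top-level file looks like (the paths are illustrative, the included file is assumed to define its own SUITE, and the import-rewriting macro from the PR is omitted):

# benchmark/benchmarks.jl -- illustrative sketch of the layout described above
using BenchmarkTools
using Yao

const SUITE = BenchmarkGroup()

# Each sub-library's benchmark file is wrapped in its own module so that
# top-level definitions from different files cannot collide.
module YaoArrayRegisterBenchmarks
    using Yao
    # In this sketch the included file is assumed to only need Yao; in the PR,
    # `using YaoArrayRegister` is rewritten to `using Yao.YaoArrayRegister`
    # so that only Yao itself has to be installed.
    include(joinpath(@__DIR__, "..", "lib", "YaoArrayRegister", "benchmark", "benchmarks.jl"))
end

# Expose each sub-suite under its own key in the global SUITE.
SUITE["YaoArrayRegister"] = YaoArrayRegisterBenchmarks.SUITE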

However, while the import step does seem to work, the benchmark files themselves are two years old and out of date: e.g., sprand_hermitian is no longer exported, and there is an instruct! call that is no longer defined for the passed types.

Please let me know how you'd like to proceed. E.g., perhaps you could update the benchmarks file to the current API, and then this automated benchmarking could work? Feel free to push to this PR branch.

Cheers,
Miles


An example GitHub PR comment is shown here: SymbolicML/DynamicExpressions.jl#27 (comment)

[screenshot of the example benchmark comment]

module YaoArrayRegisterBenchmarks
using ..BenchmarkUtils: replace_imports
using Yao
using Yao.YaoBlocks: sprand_hermitian
@MilesCranmer (Author) commented on this line:

(This line can probably be deleted once the benchmark is fixed)

@Roger-luo (Member)

Wow! Thanks for setting this up! Yeah, those benchmarks were made for this exact same feature, but because a lot of infrastructure was missing two years ago, they gradually got abandoned. I'd like to play with this a bit, and I actually have one question: how do you decide if the benchmark gets run or not? Or is there a separate option to turn the benchmarks off/on in the commit message or comments? I would expect this to drain the GitHub Actions capacity quickly, because the benchmarks in this repo are quite resource-heavy.

@codecov bot commented May 9, 2023

Codecov Report

Patch coverage has no change and project coverage change: +0.59% 🎉

Comparison is base (c60f6d9) 88.28% compared to head (e3f750f) 88.88%.

Additional details and impacted files
@@            Coverage Diff             @@
##           master     #468      +/-   ##
==========================================
+ Coverage   88.28%   88.88%   +0.59%     
==========================================
  Files          77       83       +6     
  Lines        4824     4840      +16     
==========================================
+ Hits         4259     4302      +43     
+ Misses        565      538      -27     

see 16 files with indirect coverage changes


@MilesCranmer (Author)

> how do you decide if the benchmark gets run or not?

Right now, all benchmarks defined in the SUITE variable in benchmark/benchmarks.jl are run. If your benchmarks are integration benchmarks rather than, e.g., timings of small kernels, maybe it's worth filtering the benchmarks down first? If you know from memory which benchmarks those would be, we could add a filter for them.
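For what it's worth, one low-tech way to do that filtering without touching AirspeedVelocity.jl would be to gate the heavy groups inside benchmark/benchmarks.jl itself. A minimal sketch, where the group contents and the environment-variable name are made up for illustration:

using BenchmarkTools

const SUITE = BenchmarkGroup()

# Cheap micro-benchmarks always run.
SUITE["statevec"] = @benchmarkable rand(ComplexF64, 1 << 10) .* 0.5

# Hypothetical opt-in gate for the resource-heavy integration benchmarks.
if get(ENV, "YAO_BENCH_HEAVY", "false") == "true"
    SUITE["heavy"] = @benchmarkable sum(abs2, rand(ComplexF64, 1 << 20))
end

The CI workflow would then simply leave the variable unset, while a local run could opt in.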

> Or is there a separate option to turn the benchmarks off/on in the commit message or comments? I would expect this to drain the GitHub Actions capacity quickly, because the benchmarks in this repo are quite resource-heavy.

I haven’t tried this, but that sounds like a good idea. If you look at benchmark_pr.yml, maybe there is some action that can search the PR for a particular message and skip the run if it is found?

@MilesCranmer (Author)

Maybe https://github.com/marketplace/actions/find-comment could be used to check for a comment on the PR disabling the benchmarks. Not sure if it’s worth it, though.

@MilesCranmer (Author)

Just thinking out loud, but it might be nice if AirspeedVelocity.jl had a CLI option to filter benchmarks, like:

benchpkg Yao -r master,patch-1 -e 'expensive1' -e 'expensive2' -e 'suite/expensive3*'

although I'm not sure how to implement this with Comonicon.jl.
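Independent of how Comonicon ends up exposing such a flag, the filtering itself could just be plain Julia over the BenchmarkTools suite. A hypothetical sketch that only handles top-level keys and a trailing '*' glob:

using BenchmarkTools

# Hypothetical helper: return a copy of `suite` without the excluded groups.
# Patterns are exact names, or prefixes ending in '*' (e.g. "expensive3*").
function exclude(suite::BenchmarkGroup, patterns::Vector{String})
    kept = BenchmarkGroup()
    for key in keys(suite)
        name = String(key)
        skip = any(patterns) do pat
            endswith(pat, "*") ? startswith(name, chop(pat)) : name == pat
        end
        skip || (kept[key] = suite[key])
    end
    return kept
end

# e.g. exclude(SUITE, ["expensive1", "expensive2"])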
