Create continuous benchmarks #39
Comments
The benchmarks created in the above commit now have these timings:
As this is built using a custom framework, some work has to be done to get it into a continuous benchmarking system.
I would assume that no big performance jumps are expected between commits, so running benchmarks on each commit could be quite redundant. Besides that, cloud systems are not the most reliable for finding small regressions in code performance: https://bheisler.github.io/post/benchmarking-in-the-cloud/. So until this codebase is developed by a bigger team and used in more performance-critical environments, I would propose running benchmarks locally once in a while.
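Running benchmarks locally does not need much machinery. The sketch below is a minimal, self-contained timing loop using only the standard library; the workload, warm-up, and run count are illustrative choices, not taken from the project's custom framework.

```rust
use std::time::{Duration, Instant};

/// Run `f` repeatedly and return the median wall-clock duration.
/// Median is used instead of mean so a single slow outlier run
/// (scheduler noise, page faults) does not skew the result.
fn bench<F: FnMut()>(mut f: F, runs: usize) -> Duration {
    // Warm up once so lazy initialisation does not pollute the first sample.
    f();
    let mut samples: Vec<Duration> = (0..runs)
        .map(|_| {
            let start = Instant::now();
            f();
            start.elapsed()
        })
        .collect();
    samples.sort();
    samples[samples.len() / 2]
}

fn main() {
    // Stand-in workload; in practice this would be e.g. opening a PDB file.
    let median = bench(
        || {
            let v: Vec<u64> = (0..10_000).collect();
            // black_box prevents the optimiser from deleting the work.
            std::hint::black_box(v.iter().sum::<u64>());
        },
        31,
    );
    println!("median: {:?}", median);
}
```

Comparing medians from two local runs of the same commit first gives a feel for the noise floor before attributing any difference to a code change.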
To keep an eye on performance and see when things unnecessarily regress, it would be cool to have continuous benchmarks upon pushing to the server. There is a GitHub Action made for this: https://github.com/marketplace/actions/continuous-benchmark. Keep in mind, though, that since lots of behaviour is not yet implemented, the run times will get higher over time as features are added.
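A workflow using that action could look roughly like the untested sketch below. It assumes the benchmarks are exposed through `cargo bench` and that their output is in a format the action's `cargo` tool understands; since this project uses a custom benchmark framework, some adapter work would be needed first.

```yaml
name: Benchmark
on:
  push:
    branches: [master]

jobs:
  benchmark:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Run benchmarks
        run: cargo bench | tee output.txt
      - name: Store benchmark result
        uses: benchmark-action/github-action-benchmark@v1
        with:
          tool: 'cargo'
          output-file-path: output.txt
          github-token: ${{ secrets.GITHUB_TOKEN }}
          auto-push: true
```

The action keeps a history of results on a `gh-pages` branch and can comment or fail the build when a benchmark regresses past a threshold.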
Proposal for benchmarks:

- `open`
- `pdb.apply_transformation`, rotation x 90°
- `pdb.remove_atom_by`, every odd numbered atom
- `save`
- `pdb.atoms`, calculate average B factor
- `validate`
- `pdb.clone`

All benchmarks should be run on `1ubq.pdb` and `pTLS-6484.pdb` to give an idea of the impact of the PDB size.