
Create continuous benchmarks #39

Closed
douweschulte opened this issue Jan 13, 2021 · 2 comments

Comments

@douweschulte
Owner

To keep an eye on performance and see when things unnecessarily regress, it would be cool to have continuous benchmarks run on every push. There is a GitHub Action made for this: https://github.com/marketplace/actions/continuous-benchmark. But keep in mind that lots of behaviour is not yet implemented, so the run times will get higher over time as features are added.

Proposal for benchmarks

  • open
  • pdb.apply_transformation, rotation x 90°
  • pdb.remove_atom_by, every odd numbered atom
  • save
  • pdb.atoms, calculate average B factor
  • validate
  • pdb.clone

All benchmarks should be run on 1ubq.pdb and pTLS-6484.pdb to give an idea of the impact of the PDB size.
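A minimal sketch of what a few of these could look like with the Criterion crate (https://crates.io/crates/criterion); the exact pdbtbx signatures used here (`open` returning the parsed PDB plus a list of warnings, `atoms()`, `b_factor()`, `atom_count()`) and the file paths are assumptions, not the crate's confirmed API:

```rust
use criterion::{criterion_group, criterion_main, Criterion};

fn pdb_benchmarks(c: &mut Criterion) {
    // "small" and "big" mirror the two proposed test files; the paths are placeholders.
    for (label, path) in [
        ("small", "example-pdbs/1ubq.pdb"),
        ("big", "example-pdbs/pTLS-6484.pdb"),
    ] {
        // Open: parse the file from scratch on every iteration.
        c.bench_function(&format!("open - {label}"), |b| b.iter(|| pdbtbx::open(path)));

        // Parse once for the in-memory benchmarks
        // (assumed to return the PDB plus a list of warnings).
        let (pdb, _warnings) = pdbtbx::open(path).expect("valid PDB file");

        // Clone: duplicate the whole structure.
        c.bench_function(&format!("clone - {label}"), |b| b.iter(|| pdb.clone()));

        // Iteration: calculate the average B factor over all atoms.
        c.bench_function(&format!("average B factor - {label}"), |b| {
            b.iter(|| {
                pdb.atoms().map(|atom| atom.b_factor()).sum::<f64>()
                    / pdb.atom_count() as f64
            })
        });
    }
}

criterion_group!(benches, pdb_benchmarks);
criterion_main!(benches);
```

Criterion would also give machine-readable output that could later be fed into a CI action, but any harness that records a mean and spread per benchmark would do.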

douweschulte added a commit that referenced this issue Jan 13, 2021
@douweschulte
Owner Author

The benchmarks created in the above commit now have these timings:

| Benchmark | small (1ubq.pdb): average ± σ | runs | big (pTLS-6484.pdb): average ± σ | runs |
| --- | --- | --- | --- | --- |
| Open | 4ms 883μs ± 921μs 509ns | 633 | 2s 913ms ± 104ms 260μs | 6 |
| Transformation | 12μs 586ns ± 38μs 721ns | 264555 | 3ms 47μs ± 348μs 701ns | 945 |
| Remove | 33μs 719ns ± 65μs 246ns | 80650 | 4ms 892μs ± 1331μs 326ns | 673 |
| Iteration | 11μs 805ns ± 33μs 152ns | 212770 | 3ms 17μs ± 829μs 609ns | 829 |
| Validate | 98ns ± 91ns | 1005 | 62ms 277μs ± 2ms 210μs | 1005 |
| Renumber | 12μs 137ns ± 28μs 964ns | 231486 | 2ms 194μs ± 729μs 230ns | 1317 |
| Clone | 57μs 690ns ± 74μs 75ns | 58167 | 12ms 479μs ± 2ms 656μs | 217 |
| Save | 11ms 565μs ± 890μs 219ns | 259 | 785ms 135μs ± 8ms 434μs | 8 |

As this is built using a custom framework, some work has to be done to integrate it into a continuous benchmarking system.
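For context, the kind of hand-rolled harness that prints the "average time over N runs" lines above could look roughly like this; this is a guess at the shape of the custom framework, not its actual code:

```rust
use std::time::{Duration, Instant};

/// Run `f` repeatedly until the time budget is spent, then print the
/// average run time plus its standard deviation.
fn bench<F: FnMut()>(name: &str, budget: Duration, mut f: F) {
    let mut samples: Vec<f64> = Vec::new();
    let start = Instant::now();
    while start.elapsed() < budget {
        let run = Instant::now();
        f();
        samples.push(run.elapsed().as_secs_f64());
    }
    let n = samples.len() as f64;
    let mean = samples.iter().sum::<f64>() / n;
    let sd = (samples.iter().map(|s| (s - mean).powi(2)).sum::<f64>() / n).sqrt();
    println!(
        "{name}: average time over {} runs:\n\t{:?} ± {:?}",
        samples.len(),
        Duration::from_secs_f64(mean),
        Duration::from_secs_f64(sd)
    );
}

fn main() {
    // Stand-in workload so the sketch runs on its own.
    let data: Vec<u64> = (0..100_000).collect();
    bench("Iteration - dummy", Duration::from_secs(1), || {
        std::hint::black_box(data.iter().sum::<u64>());
    });
}
```

Porting to a CI-friendly system would mainly mean swapping the `println!` for output in whatever format the benchmarking action expects.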

@douweschulte
Owner Author

I would assume that no big performance jumps are expected between commits, so running benchmarks on each commit could be quite redundant. Besides that, cloud systems are not the most reliable for finding small regressions in code performance: https://bheisler.github.io/post/benchmarking-in-the-cloud/. So until this codebase is developed by a bigger team and used in more performance-critical environments, I would propose running benchmarks locally once in a while.
