
Possible way to stabilize CI's benchmark measurements #897

Closed
cgewecke opened this issue Sep 29, 2020 · 5 comments

cgewecke (Contributor) commented Sep 29, 2020

In the last team meeting, the question of the CI benchmark's variability came up: it's common to see large-ish performance diffs from change sets that don't seem like they'd have much impact.

Per a comment in the benchmark action's repo, this might be caused by comparing runs that were executed on different physical machines in the cloud.

An alternative to comparing the current run against a previously cached version is to do the following (roughly sketched below):

  • run current branch benchmark --> save to file A
  • checkout master
  • run master benchmark --> save to file B
  • compare A and B
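
For illustration, a rough sketch of what that could look like as a single workflow job (the npm script name, the output files, and the compare script here are placeholders, not the actual setup):

```yaml
# Sketch only: the npm script name, output files, and compare script are placeholders.
benchmark:
  runs-on: ubuntu-latest
  steps:
    - uses: actions/checkout@v2
      with:
        fetch-depth: 0 # fetch full history so master can be checked out below
    - name: Run benchmark on the current branch
      run: |
        npm ci
        npm run benchmark > current.txt
    - name: Run benchmark on master on the same machine
      run: |
        git checkout master
        npm ci
        npm run benchmark > master.txt
    - name: Compare the two runs
      run: node scripts/compare-benchmarks.js master.txt current.txt
```

Since both runs execute on the same runner, machine-to-machine variance should largely drop out of the comparison.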

That strategy was proposed and implemented here:

ryanio (Contributor) commented Oct 1, 2020

I started down this path, but our data actually looks pretty stable according to the benchmark page.

Since others have expressed that the commit comments are a bit verbose, though, I will disable comment-always: PR #899
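
For reference, that is the comment-always input on the github-action-benchmark step; the relevant bit looks roughly like this (the output path is a placeholder):

```yaml
- uses: rhysd/github-action-benchmark@v1
  with:
    tool: 'benchmarkjs'
    output-file-path: benchmarks/output.txt # placeholder path
    comment-always: false # stop posting a comment on every commit
```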

cgewecke (Contributor, Author) commented Oct 1, 2020

@ryanio Out of curiosity, what does that threshold mean in practice? That a change has been introduced which results in a 50% slowdown?

ryanio (Contributor) commented Oct 1, 2020

Yes, so for the first benchmarked block, 9,422,905, which runs at around 1,700 ops/sec: set to 150%, it would have to exceed 2,550 ops/sec to trigger an alert comment.

ryanio (Contributor) commented Oct 1, 2020

Actually, that being the most stable one, the blocks with higher amplitudes (like 9,422,912) may trigger it pretty often. What do you think of keeping it at the default of 200%?

Edit: OK, I changed it back to the default.
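
For reference, that default corresponds to the alert-threshold input on the same step; roughly (placeholder path, other inputs omitted):

```yaml
- uses: rhysd/github-action-benchmark@v1
  with:
    tool: 'benchmarkjs'
    output-file-path: benchmarks/output.txt # placeholder path
    # '200%' is the default; with the ~1,700 ops/sec example above, a 150%
    # threshold corresponds to the 2,550 ops/sec trigger point mentioned.
    alert-threshold: '200%'
```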

holgerd77 (Member) commented:

This issue is referenced in #1204; will close here to reduce redundancy.
