Integrate CI for Running Benchmarks #436
/bounty $150
💎 $150 bounty created by tailcallhq
/attempt #436
The bounty is up for grabs! Everyone is welcome to /attempt #436.
Note: The user @cheikh2shift is already attempting to complete issue #436 and claim the bounty. If you attempt to complete the same issue, there is a chance that @cheikh2shift will complete the issue first and be awarded the bounty. We recommend discussing with @cheikh2shift and potentially collaborating on the same solution rather than creating an alternate one.
Using GitHub Actions to measure performance can be quite unreliable because of the way the runners execute. This is even mentioned in Criterion's FAQ.
This seems like an alternative to iai: https://github.com/Joining7943/iai-callgrind |
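For context, a minimal iai-callgrind benchmark looks roughly like the sketch below. This is illustrative only: it assumes the `iai-callgrind` crate and a Valgrind installation, the `fibonacci` function is a stand-in workload, and the macro names follow the crate's documented API but may differ between versions.

```rust
// Sketch of an iai-callgrind benchmark (not tied to a specific release).
use iai_callgrind::{library_benchmark, library_benchmark_group, main};
use std::hint::black_box;

// Stand-in workload; any deterministic function works.
fn fibonacci(n: u64) -> u64 {
    match n {
        0 | 1 => n,
        _ => fibonacci(n - 1) + fibonacci(n - 2),
    }
}

#[library_benchmark]
#[bench::small(15)]
fn bench_fibonacci(n: u64) -> u64 {
    // black_box prevents the compiler from optimizing the call away.
    black_box(fibonacci(n))
}

library_benchmark_group!(name = fib_group; benchmarks = bench_fibonacci);
main!(library_benchmark_groups = fib_group);
```

Because the harness counts instructions under callgrind instead of measuring wall-clock time, the results are deterministic even on noisy shared CI runners.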
/attempt #436
The approach that I have in mind is to use iai-like tools to track the following:

```
Instructions:            1733
L1 Hits:                 2358
L2 Hits:                    0
RAM Hits:                   3
Total read+write:        2361
Estimated Cycles:        2463
```

Since these are exact values, whenever there is an increase in any of the parameters, we could fail the build.
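The "fail on any increase" idea can be sketched as a plain comparison of metric snapshots. This is a hypothetical helper, not the actual CI logic; a real setup would parse these numbers out of iai's output rather than hard-code them.

```rust
use std::collections::BTreeMap;

/// Compare a PR's callgrind metrics against a stored baseline and
/// return the names of counters that increased. Any increase counts
/// as a regression, since these values are deterministic, unlike
/// wall-clock timings.
fn regressions(
    baseline: &BTreeMap<String, u64>,
    current: &BTreeMap<String, u64>,
) -> Vec<String> {
    current
        .iter()
        .filter(|(name, value)| {
            baseline.get(*name).map_or(false, |base| *value > base)
        })
        .map(|(name, _)| name.clone())
        .collect()
}

fn main() {
    let baseline = BTreeMap::from([
        ("Instructions".to_string(), 1733u64),
        ("Estimated Cycles".to_string(), 2463u64),
    ]);
    let current = BTreeMap::from([
        ("Instructions".to_string(), 1733u64),
        ("Estimated Cycles".to_string(), 2500u64),
    ]);
    let failed = regressions(&baseline, &current);
    // Estimated Cycles rose from 2463 to 2500, so the build should fail.
    assert_eq!(failed, vec!["Estimated Cycles".to_string()]);
    println!("regressed: {:?}", failed);
}
```

Metrics missing from the baseline are ignored here, so newly added benchmarks would not fail the first build that introduces them.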
@alankritdabral: The Tailcall Inc. team prefers to assign a single contributor to the issue rather than let anyone attempt it right away. We recommend waiting for a confirmation from a member before getting started. |
@alankritdabral would you be taking the approach that I have described above using IAI? |
Yes, I am taking the same approach @tusharmath
@tusharmath I am thinking of converting each bench file from Criterion to iai-callgrind so I can have accurate data to compare.
Edited Code:
Results:

```
~/Desktop/git/tailcall$ cargo bench --bench json_like_bench
```
@alankritdabral As discussed on Discord, we need both kinds of benchmarks. They signal orthogonal things, and it's possible for a reduction in instructions to still result in a performance regression.
Fixed in #762.
@tusharmath I'm a little late to the party here, but down the road, if you want a more robust continuous benchmarking solution, you might consider checking out Bencher: https://github.com/bencherdev/bencher |
This is a nice tool. We are a small team, so we rely on contributors to get things done. Happy to add a bounty for integrating the tool into our CI.
Thank you for the kind words! I would be more than happy to help with the integration. 😃 |
Overview
We need to enhance our CI pipeline to not only run our existing Criterion benchmarks but also ensure that the build time stays within acceptable limits. Moreover, a clear and concise report should be generated and published on the PR for quick and easy understanding.
Requirements
- Benchmark Run: The CI should run the existing Criterion benchmarks and compare the results against those from the `main` branch.
- Build Time Check: The CI should compare the build time of the current PR with the average build time of the last few successful builds on the `main` branch. If the build time increases by more than 10%, the CI should fail.
- Report: A clear, concise report should be published on the PR alongside the results from the `main` branch for easy comparison.
- Trigger: The checks should run when a `benchmark` label is added to a PR or commit.

Rationale
Performance is paramount for our project. It's essential to ensure that any changes do not adversely affect either the runtime performance (tracked by benchmarks) or the build time. By integrating these checks into our CI, we can maintain a high standard of performance and ensure that contributors receive immediate feedback on any potential issues.
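The build-time requirement can be sketched as a simple threshold check. The function name and the hard-coded timings below are illustrative; the real CI job would fetch durations from previous successful workflow runs on `main`.

```rust
/// Decide whether a PR's build time regressed by more than 10%
/// relative to the average of recent successful `main` builds.
/// The 1.10 threshold mirrors the "more than 10%" requirement.
fn build_time_regressed(main_build_secs: &[f64], pr_build_secs: f64) -> bool {
    if main_build_secs.is_empty() {
        // No history yet: nothing to compare against, so don't fail.
        return false;
    }
    let avg: f64 =
        main_build_secs.iter().sum::<f64>() / main_build_secs.len() as f64;
    pr_build_secs > avg * 1.10
}

fn main() {
    let recent_main = [100.0, 104.0, 96.0]; // average = 100.0 seconds
    assert!(!build_time_regressed(&recent_main, 109.0)); // within 10%
    assert!(build_time_regressed(&recent_main, 111.0)); // over 10%: fail CI
    println!("ok");
}
```

Averaging over several recent builds rather than comparing against a single run smooths out runner-to-runner noise, which is the same concern raised earlier in the thread about GitHub Actions timing variance.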