Add Bencher to track Benchmarks Performance Regression#3611
rv-jenkins merged 12 commits into develop from
Conversation
Force-pushed from 1338a26 to d56cb9a
…m sequentially to get confident execution results metrics
Add -v to kompile
Removing kompile-dir so that multiple iterations don't use the cache; removing the workaround for verbose execution output, as Bencher fixed this.
@radumereuta I've addressed your comments, and I'm now removing the kompiled directory after each iteration. That said, I think the appropriate solution would be a new frontend flag.
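The per-iteration cleanup described above can be pictured as follows. This is only a sketch: `run_iterations`, the directory naming, and the stand-in build step are hypothetical, not the PR's actual Makefile logic.

```python
import shutil
from pathlib import Path

# Sketch (hypothetical names): rerun a benchmark N times, deleting the
# kompiled directory between runs so kompile cannot reuse cached output
# and every iteration measures a cold build.
def run_iterations(defn: Path, build, n: int = 3) -> None:
    kompiled = defn.with_name(defn.stem + "-kompiled")
    for _ in range(n):
        build(defn, kompiled)        # stands in for `kompile <defn>`
        # ... run and time the benchmark execution here ...
        shutil.rmtree(kompiled)      # drop the cache for the next round

# Toy build step so the sketch runs without kompile installed.
def fake_build(defn: Path, out: Path) -> None:
    out.mkdir()
    (out / "definition.bin").touch()

run_iterations(Path("test.k"), fake_build)
print(Path("test-kompiled").exists())  # → False: no cache survives a run
```

A dedicated frontend flag could make this deletion unnecessary by telling kompile to skip its cache directly.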
radumereuta
left a comment
lgtm
Deleting the directory is the safest option. It's too easy to mess something up in the code and get erroneous results.
I would like it if someone else also had a look at the PR.
Great work, though.
F-WRunTime
left a comment
I can't comment on the lower-level Makefiles, but the changes to the workflow look reasonable.
Looks good, but where is the link to the results?
The link for each result is generated after every execution of .
Fixes #3517
Follow up of #3603
This PR introduces Bencher.dev to track performance regressions on our benchmarks.
The benchmarks already run in the Test-PR workflow as the "Performance Tests" CI job. This PR introduces a third-party tool that collects the JSON results of these executions and displays them in a graphical online dashboard.
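As a sketch of what such a JSON payload could look like, here is a result file shaped along the lines of Bencher's metric format. The benchmark names, timings, and the exact key names ("latency", "value") are assumptions for illustration; check Bencher's adapter documentation for the real schema.

```python
import json

# Hypothetical timings collected from the "Performance Tests" job:
# benchmark name -> measured value.
timings = {"imp/sum": 12.4, "wasm/fib": 30.1}

# Assumed shape: one object per benchmark, keyed by a measure
# (here "latency") holding a numeric "value".
bmf = {name: {"latency": {"value": v}} for name, v in timings.items()}

print(json.dumps(bmf, indent=2))
```

A file like this would be what the tool ingests after each benchmark run before plotting the series online.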
It will also set a threshold for the time/space that each test may use; if an execution exceeds this limit, the tool will post a comment on the PR warning the developer in which tests the regression was observed. The CI check will also fail, although this won't prevent the PR from being merged.
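The alerting idea can be pictured as a simple threshold comparison. The numbers and the flat 10% bound below are illustrative only; Bencher's actual thresholds are configurable and can be statistical rather than a fixed percentage.

```python
# Illustrative regression check: flag any benchmark whose new measurement
# exceeds the baseline by more than a configured fraction. Data is made up.
baseline = {"imp/sum": 10.0, "wasm/fib": 30.0}   # seconds on develop
current  = {"imp/sum": 10.2, "wasm/fib": 34.5}   # seconds on this PR

LIMIT = 0.10  # hypothetical 10% regression threshold

regressions = [
    name for name, secs in current.items()
    if secs > baseline[name] * (1 + LIMIT)
]
print(regressions)  # → ['wasm/fib']: this would trigger the PR comment
```

In CI, a non-empty `regressions` list corresponds to the failing (but non-blocking) check and the warning comment on the PR.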