Performance is crucial, so we should have a way to detect regressions or evaluate improvements.
Here is an idea that I have:
We add some more complicated "performance" unittests and exclude them from compilation by default.
They probably have to use a special API or mixin to report a unique name and their runtime.
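A minimal sketch of what such an API could look like, timing a delegate with `std.datetime.stopwatch` and printing a machine-readable line per test; the `perfTest` helper, the `PERF` report format, and the `PerfTest` version identifier are all made-up names for illustration:

```d
// Hypothetical helper for "performance" unittests.
// All names (perfTest, the PERF report format) are illustrative only.
import std.datetime.stopwatch : AutoStart, StopWatch;
import std.stdio : writefln;

/// Runs `fun`, measures its runtime and reports it under a unique name
/// so the CI can map results between master and the PR branch.
void perfTest(string name, void delegate() fun)
{
    auto sw = StopWatch(AutoStart.yes);
    fun();
    sw.stop();
    // one easily parsed line per test: PERF <name> <msecs>
    writefln("PERF %s %s", name, sw.peek.total!"msecs");
}

// excluded from normal builds, only compiled with -version=PerfTest
version (PerfTest) unittest
{
    import std.algorithm : sort;
    import std.array : array;
    import std.range : iota, retro;

    perfTest("sort.reverse-int-array", {
        auto a = iota(1_000_000).retro.array;
        a.sort();
    });
}
```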
On a PR, the CI checks out both the PR / feature branch and master, runs the "performance" tests for both several times in random order, then calculates the average for every test (that's why we need a unique name for mapping) and the difference between master and the PR / feature branch.
Some variance due to differing machine load probably has to be tolerated and shouldn't be reported.
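To make the averaging and tolerance logic concrete, here is a rough sketch of the comparison step the CI script could apply; it assumes the hypothetical `PERF` lines from above have already been parsed into per-test timing arrays, and the 5 % threshold is an arbitrary pick, not a spec:

```d
// Sketch: compare averaged timings of master vs. the PR branch.
// Assumes several runs per test name (runs shuffled by the CI);
// the 5 % tolerance for load-induced variance is an arbitrary pick.
import std.algorithm.iteration : mean;
import std.stdio : writefln;

enum tolerance = 0.05;

void compare(double[][string] master, double[][string] pr)
{
    foreach (name, runs; pr)
    {
        if (auto base = name in master)
        {
            immutable oldAvg = (*base).mean;
            immutable newAvg = runs.mean;
            immutable diff = (newAvg - oldAvg) / oldAvg;
            if (diff > tolerance)
                writefln("REGRESSION %s: %.1f %% slower", name, diff * 100);
            else if (diff < -tolerance)
                writefln("IMPROVEMENT %s: %.1f %% faster", name, -diff * 100);
            // anything within the tolerance band stays silent
        }
    }
}
```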
The CI could complain via a GitHub bot (like coverage does), email, or the CI status icon.
Maybe we then want to use a different CI for this, so that it's just additional info and doesn't block Travis.
Btw this is also a topic that often comes up in Phobos, but afaik it currently always depends on manual benchmarking, e.g.:

- std.algorithm.sort performance: dlang/phobos#3922
- std.regex JIT compiling: dlang/phobos#4120
- Faster pairwise summation: dlang/phobos#4069
- Faster topN: dlang/phobos#3934
@9il I think getting a basic solution to this problem could be quite useful for you when you try to benchmark your achievements with BLAS. Can you think of a simpler solution?