Performance testing #13875

Closed
karelz opened this issue Nov 14, 2014 · 2 comments
Labels
help wanted [up-for-grabs] Good issue for external contributors
Milestone
1.0.0-rtm

Comments

@karelz (Member) commented Nov 14, 2014

DISCUSSION MOVED to forums: http://forums.dotnetfoundation.org/t/performance-testing-best-practices/410 (per project guidance)

(on behalf of the CLR perf team)
We are fishing for best practices on how to do performance testing in open source, cross-platform projects.
We have a set of home-grown, reusable practices, but they depend heavily on monitoring on certified, dedicated machines in our perf lab (i.e. no interference from other processes or disk I/O).

Any pointers on how other open source projects handle perf test suites in their CI systems?

What we do today in-house:
There is a set of tests. Many of them are microbenchmarks, which are the easiest: each microbenchmark is wrapped in a runner that warms up the scenario, then runs it 5 times and measures the time. It prints the result with some basic statistics to the output/log. A tool then parses all the logs and displays them (through a DB) in an HTML-based UI with history (graphs for trends, etc.).
Currently we run everything on the same dedicated machines, so results are comparable and one can reason about changes over time.
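To make the flow above concrete, here is a minimal sketch of such a runner. It is not the actual CLR perf-lab tooling; the `MicroBenchmarkRunner` name, the `BENCH` log line format, and the choice of statistics are assumptions for illustration only. The idea is: warm up once, time a fixed number of iterations, and emit one machine-parsable line per benchmark that a log scraper can pick up.

```csharp
using System;
using System.Diagnostics;
using System.Linq;

// Hypothetical sketch, not the actual perf-lab runner: warm up the
// scenario once, time it 5 times, and print one machine-parsable line
// with basic statistics that a log-scraping tool could pick up.
static class MicroBenchmarkRunner
{
    public static void Run(string name, Action scenario, int iterations = 5)
    {
        scenario(); // warm-up pass: JIT compilation, caches, lazy initialization

        var samples = new double[iterations];
        var sw = new Stopwatch();
        for (int i = 0; i < iterations; i++)
        {
            sw.Restart();
            scenario();
            sw.Stop();
            samples[i] = sw.Elapsed.TotalMilliseconds;
        }

        double mean = samples.Average();
        double stdDev = Math.Sqrt(samples.Sum(s => (s - mean) * (s - mean)) / iterations);

        // One line per benchmark, easy for a log parser to scrape into a DB.
        Console.WriteLine(
            $"BENCH {name} min={samples.Min():F3}ms max={samples.Max():F3}ms " +
            $"mean={mean:F3}ms stddev={stdDev:F3}ms");
    }

    static void Main()
    {
        // Example microbenchmark: naive string concatenation in a loop.
        Run("StringConcat", () =>
        {
            string s = "";
            for (int i = 0; i < 10000; i++) s += "x";
        });
    }
}
```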

What we could do:
Keep running tests on dedicated machines and publish results somewhere. Is that the best practice?
To support the dev scenario on a local box, we could generalize the infrastructure to run on any machine and provide a tool that compares two runs (baseline vs. PR), so any dev can check the performance impact before submitting a PR when there is perf risk.
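A baseline-vs-PR comparison could be as simple as diffing those log lines. As a rough sketch (again hypothetical, assuming the `BENCH ... mean=...ms` format emitted by the runner above and a '.' decimal separator), a tool could parse two logs and report the relative change per benchmark:

```csharp
using System;
using System.Collections.Generic;
using System.IO;
using System.Linq;
using System.Text.RegularExpressions;

// Hypothetical comparison tool: parse the "BENCH <name> ... mean=<x>ms"
// lines from two log files (baseline run and PR run) and report the
// relative change of each benchmark's mean time.
static class BenchCompare
{
    static Dictionary<string, double> ParseLog(string path)
    {
        var means = new Dictionary<string, double>();
        var regex = new Regex(@"^BENCH (\S+) .*mean=([0-9.]+)ms");
        foreach (var line in File.ReadLines(path))
        {
            var match = regex.Match(line);
            if (match.Success)
                means[match.Groups[1].Value] = double.Parse(match.Groups[2].Value);
        }
        return means;
    }

    static void Main(string[] args)
    {
        // Usage: BenchCompare <baseline.log> <pr.log>
        var baseline = ParseLog(args[0]);
        var pr = ParseLog(args[1]);

        foreach (var name in baseline.Keys.Intersect(pr.Keys).OrderBy(n => n))
        {
            double changePct = (pr[name] - baseline[name]) / baseline[name] * 100.0;
            Console.WriteLine(
                $"{name,-30} baseline={baseline[name]:F3}ms pr={pr[name]:F3}ms " +
                $"change={changePct:+0.0;-0.0;0.0}%");
        }
    }
}
```

A CI job (or a dev locally) could run the suite twice and flag any benchmark whose change exceeds some chosen noise threshold, e.g. ±5%.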

@akoeplinger (Member) commented:
> Keep running tests on dedicated machines and publish results somewhere. Is that the best practice?

Sounds like a perfectly fine approach to me. E.g. here's a blog post from the HHVM team showing some of their benchmarks. I couldn't find out whether they publish this publicly for every commit/test run; if you do that, it's even better.

The Mono team also publishes GC perf test results; see the docs and an example result.

> To support the dev scenario on a local box, we could generalize the infrastructure to run on any machine and provide a tool that compares two runs (baseline vs. PR), so any dev can check the performance impact before submitting a PR when there is perf risk.

👍

Another interesting thing the HHVM team is doing is including major OSS projects in perf runs, e.g. https://github.com/hhvm/oss-performance. I'm not sure if you're already doing something similar internally, but I'd imagine seeing how a change impacts a complex application/framework like Orchard, RavenDB, Nancy, ServiceStack, etc. could be very helpful.

@Petermarcu (Member) commented:
Closing this issue. It's being discussed in the forums.

Olafski referenced this issue in Olafski/corefx on Jun 15, 2017 (release notes directory versioning)
@msftgits transferred this issue from dotnet/corefx on Jan 31, 2020
@msftgits added this to the 1.0.0-rtm milestone on Jan 31, 2020
@ghost locked as resolved and limited conversation to collaborators on Jan 8, 2021