Add basic runtime benchmark infrastructure #1423
Conversation
This is awesome! After this last RustConf I've been thinking about spending time on this kind of thing again, so let me know if I can help! I'm still waiting on some new Linux hardware to be able to use perf_event_open locally, but I'll be able to help with results/UI/regression detection/porting benchmarks/etc. soon.
I think that we first have to get the basic infrastructure in place (code layout, measurements, DB, etc.); I'm not sure how much of that is "parallelizable". But once that's in place, we will need to populate the benchmark suite and possibly build some new UI for that, and then it will definitely be good to cooperate :) It might still be a way off though; the infrastructure will probably go through a lot of iterations.
I've had a quick look, it all seems quite plausible.
Why do the benchmarks appear twice? E.g. collector/lolperf/benchmarks/nbody/src/nbody.rs and runtime/benchmarks/nbody/src/nbody.rs. Same story with other source files like measure.rs.
I have a strong preference for the structure of the runtime benchmarks and the compile time benchmarks to be similar. E.g. compile time benchmarks are in collector/benchmarks/, so the runtime ones should be in a related spot at the same level of nesting. This will make it easier to navigate. If this requires renaming some existing things, that's fine. (Likewise with the subcommand names and structures, as you suggested.)
I also want to make the structure similar to the compile-time benchmarks; I just put it into a separate directory for now to better separate it from the rest of the code. There's also one additional thing besides just a directory of benchmarks, namely the benchmarking library. I'm fine with changing the file/directory layout in any way we find better.
Oops, sorry, I accidentally committed another directory where I had stored it previously. And I was wondering why the diff was so large... 😅
(force-pushed from b9ccaa6 to 93bb62c)
I got some nasty diffs there because of a botched directory move; now it should be fine.
(force-pushed from 83c6be8 to 88ed11a)
@rylev Can you take a look please? :)
I'm annoyed by the inconsistency between the runtime-benchmarks and compile-benchmarks directory names, i.e. the use of time in one but not the other. compile-time-benchmarks would be more consistent, but longer. (There's still the inconsistency between runtime being a single word and compile time being two words but that's a pre-existing bug in English 😁)
run-benchmarks would also be more consistent, but then it sounds like a verb phrase, which is no good.
I don't know, maybe it's good enough as is.
Other than that, and a few minor comments above, this looks good to me.
We can also name it |
Library for defining and measuring (micro)benchmarks.
…p` runtime benchmark
Ok, |
I'm going to go ahead and merge it so that we can start experimenting with it.
This PR contains basic `collector` infrastructure for runtime benchmarks. It implements the ideas described in https://hackmd.io/j19WFamVQlmQMRigF9wrFw. There are no DB or UI changes yet; first I think that we should only support the benchmarks locally, for experimentation. Later I will add support for storing the results into the DB.
Currently the design looks like this:
- A new crate was added to the `collector` directory. It's called `benchlib` and it contains a small library for defining and measuring microbenchmarks (using perf counters with `perf_event_open` and walltime currently). It's used by `collector` to read benchmark results (transferred with JSON currently) and by the runtime benchmarks to define what should be measured.
- A new directory `collector/runtime-benchmarks` will contain runtime benchmarks that use `benchlib` to define a set of related benchmarks. The benchmarks are not executed right away; `benchlib` instead decides what to do once it parses CLI arguments (it could have commands e.g. to list available benchmarks and their core requirements, so that when you actually execute the benchmark, you can select how many cores to pin it to). A rough sketch of this flow is shown after the list.
- A new command `bench_runtime_local` was added to `collector`. It takes a path to `rustc`, uses it to compile the runtime benchmarks and then executes them. Eventually, it might make sense to instead create something like `bench_local comptime --scenarios ...` and `bench_local runtime ...`. A new `BenchNext` command shouldn't be needed, since it should decide from the `next_artifact` API call whether it should also run runtime benchmarks.

Best reviewed commit by commit.
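To make the design above a bit more concrete, here is a minimal, self-contained sketch of how such a runtime benchmark binary could behave: it registers named benchmarks, lets a CLI argument decide whether to list or run them, measures wall time, and prints each result as a JSON line for the collector to read. The command names, benchmark name, and output format are illustrative assumptions, not the actual `benchlib` API from this PR.

```rust
use std::collections::BTreeMap;
use std::env;
use std::time::Instant;

/// Returns the benchmarks defined by this binary, keyed by name.
/// The single toy workload stands in for real benchmarks like `nbody`.
fn benchmarks() -> BTreeMap<&'static str, fn() -> u64> {
    let mut map: BTreeMap<&'static str, fn() -> u64> = BTreeMap::new();
    map.insert("sum-1m", || (0u64..1_000_000).sum::<u64>());
    map
}

fn main() {
    let benches = benchmarks();
    let mode = env::args().nth(1).unwrap_or_else(|| "list".to_string());

    match mode.as_str() {
        // `list` lets the collector discover the available benchmarks
        // (and eventually their core requirements) without running them.
        "list" => {
            for name in benches.keys() {
                println!("{name}");
            }
        }
        // `run <name>` executes one benchmark and reports the result as a
        // JSON line on stdout, which the collector can parse.
        "run" => {
            let name = env::args().nth(2).expect("missing benchmark name");
            let func = *benches.get(name.as_str()).expect("unknown benchmark");
            let start = Instant::now();
            let value = func();
            let wall_time_ns = start.elapsed().as_nanos();
            println!(
                "{{\"benchmark\":\"{name}\",\"wall_time_ns\":{wall_time_ns},\"value\":{value}}}"
            );
        }
        other => eprintln!("unknown command: {other}"),
    }
}
```

In the real setup, `collector` would spawn such a binary built with the benchmarked `rustc`, read the JSON results from its stdout, and (later) store them in the DB.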
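For the measurement side, `benchlib` is described as reading perf counters via `perf_event_open`. Purely to illustrate that mechanism, here is roughly how counting retired instructions around a workload looks on Linux with the third-party `perf-event` crate; using that crate is an assumption for this sketch, not necessarily the dependency or API used in this PR.

```rust
use perf_event::Builder;

fn main() -> std::io::Result<()> {
    // Builder::new() sets up a counter for retired instructions of the
    // current process by default; the counter starts disabled.
    let mut counter = Builder::new().build()?;

    counter.enable()?;
    // The workload being measured would go here.
    let sum: u64 = (0..1_000_000u64).sum();
    counter.disable()?;

    println!("sum = {sum}, instructions retired = {}", counter.read()?);
    Ok(())
}
```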