Run history difference reporting / exporting #973
Comments
Hi @lahma, I like this idea, and I have even implemented a similar tool in the dotnet/performance repo (https://github.com/dotnet/performance/tree/master/src/tools/ResultsComparer). Maybe we should add it as a new global tool, similar to #1006? @AndreyAkinshin, what do you think about adding such command line tools to the BDN repo?
@adamsitnik Design suggestion: implement it as a subcommand of the existing tooling. This design is similar to other well-known dotnet tools.
How would you like that?
I love all the possibilities listed here. I'd also like to point out the case which is important to me. I usually have a baseline run that tests a system (multiple methods, sub-systems), and I try to see how the results have been affected by a change. So in my case it's more of an "overall rps change after I fine-tuned data-structure allocation patterns" instead of "which of the two methods is faster". I usually run multiple benchmarks stressing the library from the top and check that no regressions are introduced when I tweak some particular case. In short, I'll run the same benchmarks, and I usually want to see allocation and duration changes for the same benchmark over time.
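The "duration changes for the same benchmark over time" use case boils down to matching benchmarks by name across two runs and computing the relative change. A minimal Python sketch; the JSON field names used here (`Benchmarks`, `FullName`, `Statistics.Mean`) mirror the shape of BenchmarkDotNet's full JSON report, but treat them as assumptions of this example rather than a spec:

```python
import json

def load_report(path):
    """Load a BenchmarkDotNet full JSON report (shape assumed, see above)."""
    with open(path, encoding="utf-8") as f:
        return json.load(f)

def diff_runs(baseline, candidate):
    """Return {benchmark full name: percent change in mean time} for all
    benchmarks present in both runs. Negative means the candidate got faster."""
    base = {b["FullName"]: b["Statistics"]["Mean"] for b in baseline["Benchmarks"]}
    cand = {b["FullName"]: b["Statistics"]["Mean"] for b in candidate["Benchmarks"]}
    return {
        name: 100.0 * (cand[name] - base[name]) / base[name]
        for name in base.keys() & cand.keys()  # intersection: skip added/removed
    }
```

Allocation deltas could be reported the same way by reading a per-operation allocation figure instead of the mean, assuming the report exposes one.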
@AndreyAkinshin personally I would prefer a dedicated tool for every command. It would give us a better overview of what our users are using (NuGet stats) and cleaner, more Unix-like commands.

Commands with a single tool:

```
dotnet benchmark run abc.dll
dotnet benchmark compare x.json y.json
```

With dedicated tools:

```
dotnet benchmark abc.dll
dotnet compare x.json y.json
```

Also, in the future I would like to move some of our code to stand-alone tools, for example the disassembler (could be reused by others) and the profilers (could also be reused):

```
dotnet disassembler --processId 1234 --method My.Program.Main --depth 3
dotnet profiler start --type ETW
dotnet profiler stop --type ETW
```

@AndreyAkinshin what do you think about this idea in general?

Speaking of the files: as of today every run overwrites the previous result. I think that we should change it (maybe include a timestamp in the file name or something like that?). Also, I'd prefer JSON over CSV; it's more "type safe" to me ;p
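The "include a timestamp in the file name" idea can be sketched in a few lines. The yyyy-MM-dd-HH-mm-ss pattern matches the format suggested elsewhere in this thread; the helper name is hypothetical:

```python
from datetime import datetime
from pathlib import Path

def timestamped(path, now=None):
    """Append a yyyy-MM-dd-HH-mm-ss timestamp to a result file name so each
    run gets its own file instead of overwriting the previous one."""
    p = Path(path)
    stamp = (now or datetime.now()).strftime("%Y-%m-%d-%H-%M-%S")
    return p.with_name(f"{p.stem}_{stamp}{p.suffix}")

# e.g. timestamped("report.json") -> Path("report_2019-05-01-12-00-00.json")
# for a run started at 2019-05-01 12:00:00
```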
I think that the option "install one package and get all of the command line out of the box" is better than forcing users to install a separate NuGet package for each command. Also, I don't like this command line:
It's OK for us to reserve the
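The "one package, all commands" layout under discussion is essentially a single executable with subcommands. As a design illustration only, with Python's argparse standing in for whatever command line library BDN would actually use, and the subcommand names taken from the examples earlier in the thread:

```python
import argparse

def build_parser():
    """One tool, several subcommands: `benchmark run ...`, `benchmark compare ...`."""
    parser = argparse.ArgumentParser(prog="benchmark")
    sub = parser.add_subparsers(dest="command", required=True)

    run = sub.add_parser("run", help="run benchmarks from an assembly")
    run.add_argument("assembly")

    compare = sub.add_parser("compare", help="diff two result files")
    compare.add_argument("baseline")
    compare.add_argument("candidate")
    return parser
```

One NuGet install then ships every subcommand at once, which is the advantage argued for above.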
@AndreyAkinshin You are right. BTW, if we switch to System.CommandLine (#1016), it should be easier to write a single global tool that handles everything we want (it was designed for global tools, including support for auto-completing argument names!).
This would be super nice. Currently I try to compare the results in Azure DevOps but don't find a good way to do this.
It would be nice to have additional metrics reported with each summary, such as the net increase or decrease in each benchmark's mean execution time compared to the previous run.
Adding on to that idea, being able to plot changes in performance over time (even if it meant opening a file in a third-party program) would also be awesome and would greatly help development.
@adamsitnik asked me to follow up here after dotnet/performance#314 (review). I don't know if the new comparer tool will be the same as
This feature looks promising - is there any progress on this?
I was also looking for something like this. It would be wonderful if I could easily run BDN with a comparison CLI argument as part of a PR cycle, and have PR submissions report the net increase (or decrease) in performance.
@adamsitnik and @AndreyAkinshin is there any news around 'run history difference reporting / exporting'?
Same question
One of the biggest pain points I'm finding with BDN so far is there's no convenient way of comparing a before and after. So far, what I'm having to do is write a
+1 for this.
@Tarun047 I'm working on it right now. A huge refactoring is coming with a new serialization format + a lot of new features including various reports.
I'm filing this issue just to check whether it would be a valid feature and reasonable to implement on BenchmarkDotNet's side. I've built a small and ugly helper that produces the difference between two BenchmarkDotNet runs using CSV reports: https://github.com/lahma/BenchmarkDotNet.ResultDiff .
The parsing is ugly and brittle, but a feature that clearly states the difference between runs in percentages/absolute values seems beneficial to me. I've used it when optimizing the Jint library, and I feel it's a great way to easily communicate the difference. I see people throwing two sets of results (before/after) into optimization PRs, and if there are more than 3 rows to mentally diff, it gets burdensome (or maybe it's just me).
So what I would suggest is some form of exporter that keeps track of every run in an efficient raw data format, say normal_file_name_yyyy-MM-dd-HH-mm-ss.data, and then runs a diff of the oldest and newest files (by default), producing output similar to the tool I linked, which shows the actual difference in work done.
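The "diff the oldest and newest run by default" part of this proposal only needs the timestamped files to be selected correctly, and a yyyy-MM-dd-HH-mm-ss suffix sorts lexicographically, so plain string sorting suffices. A sketch, with the file naming pattern taken from the suggestion above and the .data extension assumed:

```python
import re

# Matches the proposed normal_file_name_yyyy-MM-dd-HH-mm-ss.data pattern.
STAMP = re.compile(r"_(\d{4}-\d{2}-\d{2}-\d{2}-\d{2}-\d{2})\.data$")

def oldest_and_newest(filenames):
    """Pick the oldest and newest timestamped result files from a listing.
    The stamp format orders correctly as a plain string, so no date parsing."""
    stamped = sorted(
        (m.group(1), name)
        for name in filenames
        if (m := STAMP.search(name))  # ignore files without a stamp
    )
    return stamped[0][1], stamped[-1][1]
```

Those two files would then be fed to the diffing step to produce the before/after report.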