
Add benchmark dotnet and comprehensive set of end to end benchmarks #1175

Merged (15 commits) on Apr 12, 2023

Conversation

james-s-tayler
Contributor

@james-s-tayler james-s-tayler commented May 16, 2021

What kind of change does this PR introduce?
This PR adds a full set of end-to-end benchmarks using BenchmarkDotNet for all of the return types Refit supports, parameterized across payload sizes spanning several orders of magnitude, serialization methods, HTTP verbs, and status codes.

Inspiration was taken from the stale PR already open for this (#898), which helped in getting past the initial hurdles of getting it running.

What is the current behavior?
Currently there are no benchmarks.

What is the new behavior?
We now benchmark all the things! Yay.

What might this PR break?
Nothing. It's a new csproj added to the solution.

Please check if the PR fulfills these requirements

  • Tests for the changes have been added (for bug fixes / features)
  • Docs have been added / updated (for bug fixes / features)

Other information:
@Sergio0694 With the original PR for this: it turns out you can't set the mock expectation in the [GlobalSetup] method, as it seems to get garbage collected, which causes Refit to return HTTP 404. Once I moved that into the benchmarks themselves, it all worked. I profiled just that setup to understand what overhead it adds, and it's negligible.

@clairernovotny I didn't update the docs yet with the results of the benchmarks; they are pretty hardware-specific, and I thought you might like to select some sort of standard hardware configuration to run them on. I will certainly post my set as a comment here once they finish running, though; it takes a fairly long time.

@james-s-tayler
Contributor Author

james-s-tayler commented May 16, 2021

Hmm, seems I need to make some adjustments, as it's complaining about the benchmark that returns `Task<IEnumerable>`.

Ok, it appears I've fixed that problem now with c4f35a7 by making the return type a `List<T>` instead.

@james-s-tayler
Contributor Author

Interesting. I found a memory leak in the benchmarks themselves. It turns out that moving the mockHttp handler expectation setup into the benchmark body itself caused a memory leak and massively slowed down the benchmarks. After reading through the BenchmarkDotNet docs I found the [IterationSetup] and [IterationCleanup] methods, and doing the setup there turns out to be the correct approach. The benchmark is blazingly fast in comparison.
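For illustration, here is a minimal sketch of that pattern, assuming RichardSzalay.MockHttp for the mock handler; the interface, route, and class names are hypothetical, not the actual benchmark code in this PR:

```csharp
using System;
using System.Net.Http;
using System.Threading.Tasks;
using BenchmarkDotNet.Attributes;
using Refit;
using RichardSzalay.MockHttp;

// Hypothetical Refit interface, for the sketch only.
public interface IUsersApi
{
    [Get("/users")]
    Task<string> GetUsersAsync();
}

public class EndToEndBenchmarks
{
    private MockHttpMessageHandler mockHttp;
    private IUsersApi api;

    [GlobalSetup]
    public void Setup()
    {
        mockHttp = new MockHttpMessageHandler();
        api = RestService.For<IUsersApi>(
            new HttpClient(mockHttp) { BaseAddress = new Uri("http://localhost") });
    }

    // Arm the handler outside the measured benchmark body, so its cost
    // is excluded from the measurement and nothing accumulates across
    // iterations.
    [IterationSetup]
    public void IterationSetup() =>
        mockHttp.When("/users").Respond("application/json", "[]");

    [IterationCleanup]
    public void IterationCleanup() => mockHttp.ResetBackendDefinitions();

    [Benchmark]
    public Task<string> Get() => api.GetUsersAsync();
}
```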

@james-s-tayler
Contributor Author

james-s-tayler commented May 16, 2021

Hmm, reading the docs more, I'm looking at adding the necessary pieces to easily control which sets of benchmarks you want to run, as the supplied set of [Params] runs an absolute tonne of benchmarks.

Ok, I've updated it so that you can pass --filter to specify which benchmarks to run and --runtimes to control which runtimes, and added a script to run each set of benchmarks. In addition, I've reduced the number of benchmarks each of those scripts runs to an acceptable balance of time versus information.

Think it's ready!
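The --filter and --runtimes switches are BenchmarkDotNet's standard console arguments; the project path and filter patterns below are illustrative examples, not the exact scripts added in this PR:

```shell
# Run only the GET benchmarks against a single runtime
dotnet run -c Release --project Refit.Benchmarks -- \
  --filter '*Get*' --runtimes net6.0

# Compare the same benchmarks across several runtimes
dotnet run -c Release --project Refit.Benchmarks -- \
  --filter '*Get*' --runtimes netcoreapp3.1 net5.0 net6.0
```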


<PropertyGroup>
<OutputType>Exe</OutputType>
<TargetFramework>netcoreapp3.1</TargetFramework>
Member

Can this be multi-targeted to support benchmarks on .NET Core 3.1, .NET 5 & 6?

Contributor Author

BenchmarkDotNet has excellent built-in support for running benchmarks on different runtimes, so I don't think it actually requires us to multi-target here. I believe there are two ways to do it: configure the benchmark in code, either through a custom config or by adding an attribute to the benchmark class for each runtime you want to run against; or supply a command-line argument selecting the runtime. The docs for the latter are here: https://benchmarkdotnet.org/articles/guides/console-args.html#runtimes

So that's supported as-is; we just need to pass the right arguments. I think that under the hood, to run each benchmark, it actually compiles it into a class, etc., which is how it's able to support targeting different runtimes that way.

I originally had it hardcoded as an attribute on the benchmark class itself, but that mandates running the benchmarks for all the specified runtimes, which can take quite a while if you're targeting a few of them.
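For reference, the attribute-based approach looks roughly like this (a sketch; the class name and benchmark body are illustrative):

```csharp
using BenchmarkDotNet.Attributes;
using BenchmarkDotNet.Jobs;

// Hard-codes the target runtimes on the class itself; every run then
// executes the benchmarks on all three, unlike the --runtimes switch.
[SimpleJob(RuntimeMoniker.NetCoreApp31)]
[SimpleJob(RuntimeMoniker.Net50)]
[SimpleJob(RuntimeMoniker.Net60)]
public class SerializationBenchmarks
{
    [Benchmark]
    public int Placeholder() => 42; // stand-in for the real benchmarks
}
```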

Contributor Author

Upon playing with this, it did indeed require me to multi-target; my initial understanding wasn't quite right. I've added the multi-targeting and accompanying scripts to run the benchmarks for each runtime.
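The multi-targeting itself is a small csproj change; a sketch of what it might look like:

```xml
<PropertyGroup>
  <OutputType>Exe</OutputType>
  <!-- Plural TargetFrameworks (with an "s") builds the project once per runtime -->
  <TargetFrameworks>netcoreapp3.1;net5.0;net6.0</TargetFrameworks>
</PropertyGroup>
```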

if (string.IsNullOrEmpty(fileName))
throw new ArgumentNullException(nameof(fileName));

responsePayload = File.ReadAllText(fileName);
Member

Should this handler be reading the file from disk each time? Or should we preload the files into memory during the setup stage and return cached bytes?

That would prevent the tests from being affected by disk I/O.

Contributor Author

This runs as part of [GlobalSetup], and the disk access happens in the constructor, so it runs only a single time and doesn't hit the disk or do I/O on each iteration during the benchmark. During the benchmarks it just returns the string it already holds in memory.
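As a sketch of the pattern being described (the type and member names are illustrative, not the exact handler in this PR):

```csharp
using System;
using System.IO;
using System.Net;
using System.Net.Http;
using System.Threading;
using System.Threading.Tasks;

public class StaticPayloadHandler : HttpMessageHandler
{
    private readonly string responsePayload;

    public StaticPayloadHandler(string fileName)
    {
        if (string.IsNullOrEmpty(fileName))
            throw new ArgumentNullException(nameof(fileName));

        // One-time disk read in the constructor, which runs during
        // [GlobalSetup]; benchmark iterations never touch the disk.
        responsePayload = File.ReadAllText(fileName);
    }

    protected override Task<HttpResponseMessage> SendAsync(
        HttpRequestMessage request, CancellationToken cancellationToken) =>
        Task.FromResult(new HttpResponseMessage(HttpStatusCode.OK)
        {
            Content = new StringContent(responsePayload)
        });
}
```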

Member

@clairernovotny clairernovotny left a comment

Looks good, but a few notes.

@clairernovotny
Member

Hi, just looking to see where this is at -- would love to get some benchmarks into Refit! :)

@james-s-tayler
Contributor Author

Hi @clairernovotny, sorry it's taken a while to respond. If you can provide some additional guidance on the recommended changes, I can look at finishing this off, as it's pretty close. I responded to your initial comments but didn't get any subsequent feedback, so I wasn't sure what to change to get it approved, and then life happened.

I suspect including a set of shell scripts to run the benchmarks against each runtime might do it. The framework itself handles the multi-targeting and you just need to provide the runtime as an argument, so a set of scripts for benchmarking against each runtime would cover the multi-targeting aspect.

@james-s-tayler
Contributor Author

@clairernovotny I've added the multi-targeting and accompanying scripts, so this should be good to review again. I only included net6.0, net5.0 and netcoreapp3.1, as net461 is of course quite different from the others and I just don't have the bandwidth to work through the differences right now, but I wanted to at least get the .NET Core / .NET 5+ side over the line and finally get some benchmarks into Refit :)

The results are pretty interesting.

@laurencee

This was one of the first things I went looking for to see whether this would be good for my use case; without this sort of information, it's difficult to figure out what the trade-offs are compared to hand-writing a client.

Ideally the README would include a small summary table of the output of these sorts of tests, comparing against a hand-crafted client or other libraries that do a similar thing (if there are any).

@glennawatson glennawatson merged commit b608846 into reactiveui:main Apr 12, 2023
@github-actions

This pull request has been automatically locked since there has not been any recent activity after it was closed. Please open a new issue for related bugs.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Apr 27, 2023