22 changes: 11 additions & 11 deletions docs/benchmarkdotnet.md
@@ -117,47 +117,47 @@

#### Filtering the Benchmarks

You can filter the benchmarks using `--filter $globPattern` console line argument. The filter is **case insensitive**.
You can filter the benchmarks using `--filter "$globPattern"` console line argument. The filter is **case insensitive**.

The glob patterns are applied to the full benchmark name: namespace.typeName.methodName. Examples (all run from the `src\benchmarks\micro` folder):

- Run all the benchmarks from the BenchmarksGame namespace:

```cmd
dotnet run -c Release -f net9.0 --filter BenchmarksGame*
dotnet run -c Release -f net9.0 --filter 'BenchmarksGame*'
```

- Run all the benchmarks with type name Richards:

```cmd
dotnet run -c Release -f net9.0 --filter *.Richards.*
dotnet run -c Release -f net9.0 --filter '*.Richards.*'
```

- Run all the benchmarks with method name ToStream:

```cmd
dotnet run -c Release -f net9.0 --filter *.ToStream
dotnet run -c Release -f net9.0 --filter '*.ToStream'
```

- Run ALL benchmarks:

```cmd
dotnet run -c Release -f net9.0 --filter *
dotnet run -c Release -f net9.0 --filter '*'
```

- You can provide many filters (logical disjunction):

```cmd
dotnet run -c Release -f net9.0 --filter System.Collections*.Dictionary* *.Perf_Dictionary.*
dotnet run -c Release -f net9.0 --filter 'System.Collections*.Dictionary*' '*.Perf_Dictionary.*'
```

- To print a **joined summary** for all of the benchmarks (by default printed per type), use `--join`:

```cmd
dotnet run -c Release -f net9.0 --filter BenchmarksGame* --join
dotnet run -c Release -f net9.0 --filter 'BenchmarksGame*' --join
```

Please remember that on **Unix** systems `*` is resolved to all files in current directory, so you need to escape it `'*'`.
Please remember that in most Unix-like shells, `*` is subject to pathname expansion, so you need to quote it, e.g. `'*'`.
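
For example, in bash the unquoted pattern is expanded before BenchmarkDotNet ever sees it (a minimal illustration; what the unquoted form expands to depends on the contents of your working directory):

```sh
# Unquoted: bash replaces * with the names of the files in the current
# directory, so BenchmarkDotNet receives those names instead of a glob.
dotnet run -c Release -f net9.0 --filter *

# Quoted: the literal * reaches BenchmarkDotNet and matches every benchmark.
dotnet run -c Release -f net9.0 --filter '*'
```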

#### Listing the Benchmarks

@@ -166,7 +166,7 @@
Example: Show the tree of all the benchmarks from the System.Threading namespace that can be run for .NET 9.0:

```cmd
dotnet run -c Release -f net9.0 --list tree --filter System.Threading*
dotnet run -c Release -f net9.0 --list tree --filter 'System.Threading*'
```

```log
@@ -261,7 +261,7 @@

You can do that by passing `--disasm` to the app or by using the `[DisassemblyDiagnoser(printAsm: true, printSource: true)]` attribute or by adding it to your config with `config.With(DisassemblyDiagnoser.Create(new DisassemblyDiagnoserConfig(printAsm: true, recursiveDepth: 1)))`.
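
As a rough sketch, wiring the diagnoser into a custom config could look like the following (an assumption-laden sketch: it reuses the `config.With(DisassemblyDiagnoser.Create(...))` call quoted above, whose exact signature varies across BenchmarkDotNet versions):

```csharp
using BenchmarkDotNet.Configs;
using BenchmarkDotNet.Diagnosers;
using BenchmarkDotNet.Running;

public class Program
{
    public static void Main(string[] args)
    {
        // Sketch only: attach the disassembly diagnoser to the default config,
        // mirroring the inline snippet above. The DisassemblyDiagnoser API
        // differs between BenchmarkDotNet versions.
        var config = DefaultConfig.Instance
            .With(DisassemblyDiagnoser.Create(
                new DisassemblyDiagnoserConfig(printAsm: true, recursiveDepth: 1)));

        BenchmarkSwitcher.FromAssembly(typeof(Program).Assembly).Run(args, config);
    }
}
```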

Example: `dotnet run -c Release -f net9.0 -- --filter System.Memory.Span<Int32>.Reverse -d`
Example: `dotnet run -c Release -f net9.0 -- --filter 'System.Memory.Span<Int32>.Reverse' -d`

```assembly
; System.Runtime.InteropServices.MemoryMarshal.GetReference[[System.Byte, System.Private.CoreLib]](System.Span`1<Byte>)
@@ -304,7 +304,7 @@
Example: run the Mann–Whitney U test with a relative ratio of 5% for `BinaryTrees_2`, comparing .NET 7.0 (base) vs .NET 8.0 (diff). .NET 7.0 will be the baseline because it was specified first.

```cmd
dotnet run -c Release -f net8.0 --filter *BinaryTrees_2* --runtimes net7.0 net8.0 --statisticalTest 5%
dotnet run -c Release -f net8.0 --filter '*BinaryTrees_2*' --runtimes net7.0 net8.0 --statisticalTest 5%
```

| Method | Toolchain | Mean | MannWhitney(5%) |
@@ -369,7 +369,7 @@
dotnet run -c Release -f net48 -- --clrVersion $theVersion
```

More info can be found in [BenchmarkDotNet issue #706](https://github.com/dotnet/BenchmarkDotNet/issues/706).

### Private CoreRT Build

20 changes: 10 additions & 10 deletions docs/benchmarking-workflow-dotnet-runtime.md
@@ -116,7 +116,7 @@ During the port from xunit-performance to BenchmarkDotNet, the namespaces, type
Please remember that you can filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](./benchmarkdotnet.md#Filtering-the-Benchmarks)):

```cmd
dotnet run -c Release -f net9.0 --filter System.Memory*
dotnet run -c Release -f net9.0 --filter 'System.Memory*'
```

(Run the above command on `src/benchmarks/micro/MicroBenchmarks.csproj`.)
@@ -136,7 +136,7 @@ C:\Projects\runtime> build -c Release
Every time you want to run the benchmarks against a local build of [dotnet/runtime](https://github.com/dotnet/runtime), you need to provide the path to CoreRun:

```cmd
dotnet run -c Release -f net9.0 --filter $someFilter \
dotnet run -c Release -f net9.0 --filter "$someFilter" \
--coreRun C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe
```

@@ -257,7 +257,7 @@ dotnet restore $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj -
dotnet build $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore /p:NuGetPackageRoot=$RunDir/performance/artifacts/packages /p:UseSharedCompilation=false /p:BuildInParallel=false /m:1

# Run
dotnet run --project $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter $TestToRun* --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir/artifacts/bin/aot/sgen/mini/mono-sgen --customruntimepack $RunDir/artifacts/bin/aot/pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir/artifacts/BenchmarkDotNet.Artifacts --packages $RunDir/performance/artifacts/packages --buildTimeout 1200
dotnet run --project $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter "$TestToRun*" --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir/artifacts/bin/aot/sgen/mini/mono-sgen --customruntimepack $RunDir/artifacts/bin/aot/pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir/artifacts/BenchmarkDotNet.Artifacts --packages $RunDir/performance/artifacts/packages --buildTimeout 1200
```

#### Running on Windows
@@ -294,7 +294,7 @@ dotnet restore $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj -
dotnet build $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore /p:NuGetPackageRoot=$RunDir\performance\artifacts\packages /p:UseSharedCompilation=false /p:BuildInParallel=false /m:1

# Run
dotnet run --project $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter $TestToRun* --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir\artifacts\bin\aot\sgen\mini\mono-sgen.exe --customruntimepack $RunDir\artifacts\bin\aot\pack -aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir\artifacts\BenchmarkDotNet.Artifacts --packages $RunDir\performance\artifacts\packages --buildTimeout 1200
dotnet run --project $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter "$TestToRun*" --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir\artifacts\bin\aot\sgen\mini\mono-sgen.exe --customruntimepack $RunDir\artifacts\bin\aot\pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir\artifacts\BenchmarkDotNet.Artifacts --packages $RunDir\performance\artifacts\packages --buildTimeout 1200
```

### dotnet runtime testing for MonoInterpreter
@@ -485,7 +485,7 @@ Preventing regressions is a fundamental part of our performance culture. The che
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net9.0 \
--artifacts "C:\results\before" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \
--filter System.IO.Pipes*
--filter 'System.IO.Pipes*'
```

Please try to **avoid running any resource-heavy processes** that could **spoil** the benchmark results while running the benchmarks.
@@ -500,7 +500,7 @@ C:\Projects\runtime\src\libraries\System.IO.Pipes\src> dotnet msbuild /p:Configu
C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net9.0 \
--artifacts "C:\results\after" \
--coreRun "C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \
--filter System.IO.Pipes*
--filter 'System.IO.Pipes*'
```

When you have the results, you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to find out how your changes have affected performance:
@@ -526,7 +526,7 @@ To run the benchmarks against the latest .NET Core SDK you can use the [benchmar
```cmd
C:\Projects\performance> py scripts\benchmarks_ci.py -f net9.0 \
--bdn-arguments="--artifacts "C:\results\latest_sdk"" \
--filter System.IO.Pipes*
--filter 'System.IO.Pipes*'
```

## Solving Regressions
@@ -544,7 +544,7 @@ The real performance investigation starts with profiling. We have a comprehensiv
To profile the benchmarked code and produce an ETW Trace file ([read more](./benchmarkdotnet.md#Profiling)):

```cmd
dotnet run -c Release -f net9.0 --profiler ETW --filter $YourFilter
dotnet run -c Release -f net9.0 --profiler ETW --filter "$YourFilter"
```

The benchmarking tool is going to print the path to the `.etl` trace file. You should open it with PerfView or Windows Performance Analyzer and start the analysis from there. If you are not familiar with PerfView, you should watch [PerfView Tutorial](https://channel9.msdn.com/Series/PerfView-Tutorial) by @vancem first. It's an investment that is going to pay off very quickly.
@@ -567,7 +567,7 @@ BenchmarkDotNet has some extra features that might be useful when doing performa

### Confirmation

When you identify and fix the regression, you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to confirm that you have solved the problem. Please remember that if the regression was found in a very common type like `Span<T>` and you are not sure which benchmarks to run, you can run all of them using `--filter *`.
When you identify and fix the regression, you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to confirm that you have solved the problem. Please remember that if the regression was found in a very common type like `Span<T>` and you are not sure which benchmarks to run, you can run all of them using `--filter '*'`.
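
For example, a before/after comparison could look like this sketch (the result paths are placeholders; run it from the `src/tools/ResultsComparer` directory and see its README for the authoritative options):

```cmd
dotnet run -c Release -- --base "C:\results\before" --diff "C:\results\after" --threshold 2%
```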

Please take a moment to consider how the regression managed to enter the product. Are we now properly protected?

@@ -612,7 +612,7 @@ Because the benchmarks are not in the [dotnet/runtime](https://github.com/dotnet
The first thing you need to do is send a PR with the new API to the [dotnet/runtime](https://github.com/dotnet/runtime) repository. Once your PR gets merged and a new NuGet package is published to the [dotnet/runtime](https://github.com/dotnet/runtime) NuGet feed, you should remove the reference to the `.dll` and install/update the package consumed by [MicroBenchmarks](../src/benchmarks/micro/MicroBenchmarks.csproj). You can do this by running the following script locally:

```cmd
/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter $YourFilter -f net9.0
/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter "$YourFilter" -f net9.0
```

This script will try to pull the latest .NET Core SDK from the [dotnet/runtime](https://github.com/dotnet/runtime) nightly build, which should contain the new API you just merged in your first PR, use it to build the MicroBenchmarks project, and then run the benchmarks that satisfy the filter you provided.

4 changes: 2 additions & 2 deletions docs/crank-to-helix-workflow.md
@@ -95,15 +95,15 @@ After installing crank as mentioned in the prerequisites, you will be able to in
Below is an example of a crank command that runs any benchmarks with Linq in the name on a Windows x64 queue. This command must be run in the performance repository, and the runtime repository must be located next to it so that you can navigate to it with `cd ../runtime`.

```cmd
crank --config .\helix.yml --scenario micro --profile win-x64 --variable bdnArgs="--filter *Linq*" --profile msft-internal --variable buildNumber="myalias-20230811.1"
crank --config .\helix.yml --scenario micro --profile win-x64 --variable bdnArgs="--filter '*Linq*'" --profile msft-internal --variable buildNumber="myalias-20230811.1"
```

An explanation for each argument:

- `--config .\helix.yml`: This tells crank what yaml file defines all the scenarios and jobs
- `--scenario micro`: Runs the microbenchmarks scenario
- `--profile win-x64`: Configures crank to use a local Windows x64 build of the runtime and sets the Helix queue to a Windows x64 queue.
- `--variable bdnArgs="--filter *Linq*"`: Sets arguments to pass to BenchmarkDotNet that will filter it to only Linq benchmarks
- `--variable bdnArgs="--filter '*Linq*'"`: Sets arguments to pass to BenchmarkDotNet that will filter it to only Linq benchmarks
- `--profile msft-internal`: Sets the crank agent endpoint to the internal hosted crank agent
- `--variable buildNumber="myalias-20230811.1"`: Sets the build number that will be associated with the results when they are uploaded to our storage accounts. You can use this to search for the run results in Azure Data Explorer. This build number does not have to follow any convention; the only recommendation is to include something unique to yourself so that it doesn't conflict with other build numbers.

2 changes: 1 addition & 1 deletion docs/microbenchmark-design-guidelines.md
@@ -110,7 +110,7 @@ public int[] Reverse()
Profile it using the [ETW Profiler](./benchmarkdotnet.md#Profiling):

```cmd
dotnet run -c Release -f netcoreapp3.1 --filter *.Reverse --profiler ETW
dotnet run -c Release -f netcoreapp3.1 --filter '*.Reverse' --profiler ETW
```

And open the produced trace file with [PerfView](https://github.com/Microsoft/perfview):
4 changes: 2 additions & 2 deletions scripts/BENCHMARKS_LOCAL_README.md
@@ -73,11 +73,11 @@ This group of arguments includes those that have a direct impact on the runtime

Here is an example command line that runs the MonoJIT RunType from a local runtime for the tests matching `*Span.IndexerBench.CoveredIndex2*`:

`python .\benchmarks_local.py --local-test-repo "<absolute path to runtime folder>/runtime" --run-types MonoJIT --filter *Span.IndexerBench.CoveredIndex2*`
`python .\benchmarks_local.py --local-test-repo "<absolute path to runtime folder>/runtime" --run-types MonoJIT --filter '*Span.IndexerBench.CoveredIndex2*'`

Here is an example command line that runs the MonoInterpreter and MonoJIT RunTypes using commits `dd079f53` and `69702c37` for the tests matching `*Span.IndexerBench.CoveredIndex2*` and `*WriteReadAsync*`, with the commits being cloned to the `--repo-storage-path` for building. It also passes `--join` to BenchmarkDotNet so that all the reports from a single run are joined into a single report:

`python .\benchmarks_local.py --commits dd079f53b95519c8398d8b0c6e796aaf7686b99a 69702c372a051580f76defc7ba899dde8fcd2723 --repo-storage-path "<absolute path to where you want to store runtime clones>" --run-types MonoInterpreter MonoJIT --filter *Span.IndexerBench.CoveredIndex2* *WriteReadAsync* --bdn-arguments="--join"`
`python .\benchmarks_local.py --commits dd079f53b95519c8398d8b0c6e796aaf7686b99a 69702c372a051580f76defc7ba899dde8fcd2723 --repo-storage-path "<absolute path to where you want to store runtime clones>" --run-types MonoInterpreter MonoJIT --filter '*Span.IndexerBench.CoveredIndex2*' '*WriteReadAsync*' --bdn-arguments="--join"`

- Note: There is not currently a way to block specific RunTypes from being run on specific hardware.

10 changes: 5 additions & 5 deletions src/benchmarks/micro/README.md
@@ -31,35 +31,35 @@ dotnet run -c Release -f net10.0 --list flat|tree
To filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](../../../docs/benchmarkdotnet.md#Filtering-the-Benchmarks)):

```cmd
dotnet run -c Release -f net10.0 --filter *Span*
dotnet run -c Release -f net10.0 --filter '*Span*'
```

To profile the benchmarked code and produce an ETW Trace file ([read more](../../../docs/benchmarkdotnet.md#Profiling)):

```cmd
dotnet run -c Release -f net10.0 --filter $YourFilter --profiler ETW
dotnet run -c Release -f net10.0 --filter "$YourFilter" --profiler ETW
```

To run the benchmarks for multiple runtimes ([read more](../../../docs/benchmarkdotnet.md#Multiple-Runtimes)):

```cmd
dotnet run -c Release -f net8.0 --filter * --runtimes net8.0 net10.0
dotnet run -c Release -f net8.0 --filter '*' --runtimes net8.0 net10.0
```

## Private Runtime Builds

If you contribute to [dotnet/runtime](https://github.com/dotnet/runtime) and want to benchmark **local builds of .NET Core**, you need to build [dotnet/runtime](https://github.com/dotnet/runtime) in Release (including tests, so a command similar to `build clr+libs+libs.tests -rc release -lc release`) and then provide the path(s) to CoreRun(s). The provided CoreRun(s) will be used to execute every benchmark in a dedicated process:

```cmd
dotnet run -c Release -f net10.0 --filter $YourFilter \
dotnet run -c Release -f net10.0 --filter "$YourFilter" \
--corerun C:\git\runtime\artifacts\bin\testhost\net10.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe
```

To make sure that your changes don't introduce any regressions, you can provide paths to CoreRuns with and without your changes and use the Statistical Test feature to detect regressions/improvements ([read more](../../../docs/benchmarkdotnet.md#Regressions)):

```cmd
dotnet run -c Release -f net10.0 \
--filter BenchmarksGame* \
--filter 'BenchmarksGame*' \
--statisticalTest 3ms \
--coreRun \
"C:\git\runtime_upstream\artifacts\bin\testhost\net10.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \
@@ -6,7 +6,7 @@
<TargetFrameworks Condition="'$(TargetFrameworks)' == ''">net9.0</TargetFrameworks>
<ImplicitUsings>enable</ImplicitUsings>
<AllowUnsafeBlocks>true</AllowUnsafeBlocks>
<StartArguments>--filter *</StartArguments>
<StartArguments>--filter '*'</StartArguments>
</PropertyGroup>

<ItemGroup>
2 changes: 1 addition & 1 deletion src/tools/ResultsComparer/README.md
@@ -51,7 +51,7 @@ Sample usage:

```cmd
dotnet run -c Release matrix decompress --input D:\results\Performance-Runs.zip --output D:\results\net7.0-preview3
dotnet run -c Release matrix --input D:\results\net7.0-preview3 --base net7.0-preview2 --diff net7.0-preview3 --threshold 10% --noise 2ns --filter System.IO*
dotnet run -c Release matrix --input D:\results\net7.0-preview3 --base net7.0-preview2 --diff net7.0-preview3 --threshold 10% --noise 2ns --filter 'System.IO*'
```

Sample results: