From 438cbba4d628d0cfd5529ca8922ce28c991a756a Mon Sep 17 00:00:00 2001
From: xtqqczze <45661989+xtqqczze@users.noreply.github.com>
Date: Tue, 30 Sep 2025 18:00:42 +0100
Subject: [PATCH 1/2] Quote `--filter` strings

---
 docs/benchmarkdotnet.md                      | 20 +++++++++----------
 docs/benchmarking-workflow-dotnet-runtime.md | 20 +++++++++----------
 docs/crank-to-helix-workflow.md              |  4 ++--
 docs/microbenchmark-design-guidelines.md     |  2 +-
 scripts/BENCHMARKS_LOCAL_README.md           |  4 ++--
 src/benchmarks/micro/README.md               | 10 +++++-----
 .../PowerShell.Benchmarks.csproj             |  2 +-
 src/tools/ResultsComparer/README.md          |  2 +-
 8 files changed, 32 insertions(+), 32 deletions(-)

diff --git a/docs/benchmarkdotnet.md b/docs/benchmarkdotnet.md
index 2d2c018e98e..49c0fa47a0b 100644
--- a/docs/benchmarkdotnet.md
+++ b/docs/benchmarkdotnet.md
@@ -117,44 +117,44 @@ And select one of the benchmarks from the list by either entering its number or
 
 #### Filtering the Benchmarks
 
-You can filter the benchmarks using `--filter $globPattern` console line argument. The filter is **case insensitive**.
+You can filter the benchmarks using the `--filter "$globPattern"` command-line argument. The filter is **case insensitive**.
 The glob patterns are applied to full benchmark name: namespace.typeName.methodName.
 
 Examples (all in the `src\benchmarks\micro` folder):
 - Run all the benchmarks from BenchmarksGame namespace:
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter BenchmarksGame*
+dotnet run -c Release -f net9.0 --filter 'BenchmarksGame*'
 ```
 
 - Run all the benchmarks with type name Richards:
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter *.Richards.*
+dotnet run -c Release -f net9.0 --filter '*.Richards.*'
 ```
 
 - Run all the benchmarks with method name ToStream:
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter *.ToStream
+dotnet run -c Release -f net9.0 --filter '*.ToStream'
 ```
 
 - Run ALL benchmarks:
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter *
+dotnet run -c Release -f net9.0 --filter '*'
 ```
 
 - You can provide many filters (logical disjunction):
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter System.Collections*.Dictionary* *.Perf_Dictionary.*
+dotnet run -c Release -f net9.0 --filter 'System.Collections*.Dictionary*' '*.Perf_Dictionary.*'
 ```
 
 - To print a **joined summary** for all of the benchmarks (by default printed per type), use `--join`:
 
 ```cmd
-dotnet run -c Release -f net9.0 --filter BenchmarksGame* --join
+dotnet run -c Release -f net9.0 --filter 'BenchmarksGame*' --join
 ```
 
 Please remember that on **Unix** systems `*` is resolved to all files in current directory, so you need to escape it `'*'`.
@@ -166,7 +166,7 @@ To print the list of all available benchmarks you need to pass `--list [tree/fla
 Example: Show the tree of all the benchmarks from System.Threading namespace that can be run for .NET 7.0:
 
 ```cmd
-dotnet run -c Release -f net9.0 --list tree --filter System.Threading*
+dotnet run -c Release -f net9.0 --list tree --filter 'System.Threading*'
 ```
 
 ```log
@@ -261,7 +261,7 @@ If you want to disassemble the benchmarked code, you need to use the [Disassembl
 You can do that by passing `--disassm` to the app or by using `[DisassemblyDiagnoser(printAsm: true, printSource: true)]` attribute or by adding it to your config with `config.With(DisassemblyDiagnoser.Create(new DisassemblyDiagnoserConfig(printAsm: true, recursiveDepth: 1))`.
 
-Example: `dotnet run -c Release -f net9.0 -- --filter System.Memory.Span.Reverse -d` +Example: `dotnet run -c Release -f net9.0 -- --filter 'System.Memory.Span.Reverse' -d` ```assembly ; System.Runtime.InteropServices.MemoryMarshal.GetReference[[System.Byte, System.Private.CoreLib]](System.Span`1) @@ -304,7 +304,7 @@ To perform a Mann–Whitney U Test and display the results in a dedicated column Example: run Mann–Whitney U test with relative ratio of 5% for `BinaryTrees_2` for .NET 7.0 (base) vs .NET 8.0 (diff). .NET 7.0 will be baseline because it was first. ```cmd -dotnet run -c Release -f net8.0 --filter *BinaryTrees_2* --runtimes net7.0 net8.0 --statisticalTest 5% +dotnet run -c Release -f net8.0 --filter '*BinaryTrees_2*' --runtimes net7.0 net8.0 --statisticalTest 5% ``` | Method | Toolchain | Mean | MannWhitney(5%) | diff --git a/docs/benchmarking-workflow-dotnet-runtime.md b/docs/benchmarking-workflow-dotnet-runtime.md index 735d710db9d..de30e0ff6a9 100644 --- a/docs/benchmarking-workflow-dotnet-runtime.md +++ b/docs/benchmarking-workflow-dotnet-runtime.md @@ -116,7 +116,7 @@ During the port from xunit-performance to BenchmarkDotNet, the namespaces, type Please remember that you can filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](./benchmarkdotnet.md#Filtering-the-Benchmarks)): ```cmd -dotnet run -c Release -f net9.0 --filter System.Memory* +dotnet run -c Release -f net9.0 --filter 'System.Memory*' ``` (Run the above command on `src/benchmarks/micro/MicroBenchmarks.csproj`.) @@ -136,7 +136,7 @@ C:\Projects\runtime> build -c Release Every time you want to run the benchmarks against local build of [dotnet/runtime](https://github.com/dotnet/runtime) you need to provide the path to CoreRun: ```cmd -dotnet run -c Release -f net9.0 --filter $someFilter \ +dotnet run -c Release -f net9.0 --filter "$someFilter" \ --coreRun C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe ``` @@ -257,7 +257,7 @@ dotnet restore $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj - dotnet build $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore /p:NuGetPackageRoot=$RunDir/performance/artifacts/packages /p:UseSharedCompilation=false /p:BuildInParallel=false /m:1 # Run -dotnet run --project $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter $TestToRun* --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir/artifacts/bin/aot/sgen/mini/mono-sgen --customruntimepack $RunDir/artifacts/bin/aot/pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir/artifacts/BenchmarkDotNet.Artifacts --packages $RunDir/performance/artifacts/packages --buildTimeout 1200 +dotnet run --project $RunDir/performance/src/benchmarks/micro/MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter "$TestToRun*" --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir/artifacts/bin/aot/sgen/mini/mono-sgen --customruntimepack $RunDir/artifacts/bin/aot/pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir/artifacts/BenchmarkDotNet.Artifacts --packages $RunDir/performance/artifacts/packages --buildTimeout 1200 ``` #### 
Running on Windows
@@ -294,7 +294,7 @@ dotnet restore $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj -
 dotnet build $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore /p:NuGetPackageRoot=$RunDir\performance\artifacts\packages /p:UseSharedCompilation=false /p:BuildInParallel=false /m:1
 
 # Run
-dotnet run --project $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter $TestToRun* --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir\artifacts\bin\aot\sgen\mini\mono-sgen.exe --customruntimepack $RunDir\artifacts\bin\aot\pack -aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir\artifacts\BenchmarkDotNet.Artifacts --packages $RunDir\performance\artifacts\packages --buildTimeout 1200
+dotnet run --project $RunDir\performance\src\benchmarks\micro\MicroBenchmarks.csproj --configuration Release --framework net9.0 --no-restore --no-build -- --filter "$TestToRun*" --anyCategories Libraries Runtime "" --category-exclusion-filter NoAOT NoWASM --runtimes monoaotllvm --aotcompilerpath $RunDir\artifacts\bin\aot\sgen\mini\mono-sgen.exe --customruntimepack $RunDir\artifacts\bin\aot\pack --aotcompilermode llvm --logBuildOutput --generateBinLog "" --artifacts $RunDir\artifacts\BenchmarkDotNet.Artifacts --packages $RunDir\performance\artifacts\packages --buildTimeout 1200
 ```
 
 ### dotnet runtime testing for MonoInterpreter
@@ -485,7 +485,7 @@ Preventing regressions is a fundamental part of our performance culture. The che
 C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net9.0 \
     --artifacts "C:\results\before" \
     --coreRun "C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \
-    --filter System.IO.Pipes*
+    --filter 'System.IO.Pipes*'
 ```
 
 Please try to **avoid running any resource-heavy processes** that could **spoil** the benchmark results while running the benchmarks.
@@ -500,7 +500,7 @@ C:\Projects\runtime\src\libraries\System.IO.Pipes\src> dotnet msbuild /p:Configu
 C:\Projects\performance\src\benchmarks\micro> dotnet run -c Release -f net9.0 \
     --artifacts "C:\results\after" \
     --coreRun "C:\Projects\runtime\artifacts\bin\testhost\net9.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \
-    --filter System.IO.Pipes*
+    --filter 'System.IO.Pipes*'
 ```
 
 When you have the results you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to find out how your changes have affected the performance:
@@ -526,7 +526,7 @@ To run the benchmarks against the latest .NET Core SDK you can use the [benchmar
 ```cmd
 C:\Projects\performance> py scripts\benchmarks_ci.py -f net9.0 \
     --bdn-arguments="--artifacts "C:\results\latest_sdk"" \
-    --filter System.IO.Pipes*
+    --filter 'System.IO.Pipes*'
 ```
 
 ## Solving Regressions
@@ -544,7 +544,7 @@ The real performance investigation starts with profiling. We have a comprehensiv
 To profile the benchmarked code and produce an ETW Trace file ([read more](./benchmarkdotnet.md#Profiling)):
 
 ```cmd
-dotnet run -c Release -f net9.0 --profiler ETW --filter $YourFilter
+dotnet run -c Release -f net9.0 --profiler ETW --filter "$YourFilter"
 ```
 
 The benchmarking tool is going to print the path to the `.etl` trace file. You should open it with PerfView or Windows Performance Analyzer and start the analysis from there. 
If you are not familiar with PerfView, you should watch [PerfView Tutorial](https://channel9.msdn.com/Series/PerfView-Tutorial) by @vancem first. It's an investment that is going to pay off very quickly. @@ -567,7 +567,7 @@ BenchmarkDotNet has some extra features that might be useful when doing performa ### Confirmation -When you identify and fix the regression, you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to confirm that you have solved the problem. Please remember that if the regression was found in a very common type like `Span` and you are not sure which benchmarks to run, you can run all of them using `--filter *`. +When you identify and fix the regression, you should use [ResultsComparer](../src/tools/ResultsComparer/README.md) to confirm that you have solved the problem. Please remember that if the regression was found in a very common type like `Span` and you are not sure which benchmarks to run, you can run all of them using `--filter '*'`. Please take a moment to consider how the regression managed to enter the product. Are we now properly protected? @@ -612,7 +612,7 @@ Because the benchmarks are not in the [dotnet/runtime](https://github.com/dotnet The first thing you need to do is send a PR with the new API to the [dotnet/runtime](https://github.com/dotnet/runtime) repository. Once your PR gets merged and a new NuGet package is published to the [dotnet/runtime](https://github.com/dotnet/runtime) NuGet feed, you should remove the Reference to a `.dll` and install/update the package consumed by [MicroBenchmarks](../src/benchmarks/micro/MicroBenchmarks.csproj). You can do this by running the following script locally: ```cmd -/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter $YourFilter -f net9.0 +/home/adsitnik/projects/performance>python3 ./scripts/benchmarks_ci.py --filter "$YourFilter" -f net9.0 ```cmd This script will try to pull the latest .NET Core SDK from [dotnet/runtime](https://github.com/dotnet/runtime) nightly build, which should contain the new API that you just merged in your first PR, and use that to build MicroBenchmarks project and then run the benchmarks that satisfy the filter you provided. diff --git a/docs/crank-to-helix-workflow.md b/docs/crank-to-helix-workflow.md index bae1ee3c4f1..55f30b59c7d 100644 --- a/docs/crank-to-helix-workflow.md +++ b/docs/crank-to-helix-workflow.md @@ -95,7 +95,7 @@ After installing crank as mentioned in the prerequisites, you will be able to in Below is an example of a crank command which will run any benchmarks with Linq in the name on a Windows x64 queue. This command must be run in the performance repository, and the runtime repository must be located next to it so that you could navigate to it with `cd ../runtime`. ```cmd -crank --config .\helix.yml --scenario micro --profile win-x64 --variable bdnArgs="--filter *Linq*" --profile msft-internal --variable buildNumber="myalias-20230811.1" +crank --config .\helix.yml --scenario micro --profile win-x64 --variable bdnArgs="--filter '*Linq*'" --profile msft-internal --variable buildNumber="myalias-20230811.1" ``` An explanation for each argument: @@ -103,7 +103,7 @@ An explanation for each argument: - `--config .\helix.yml`: This tells crank what yaml file defines all the scenarios and jobs - `--scenario micro`: Runs the microbenchmarks scenario - `--profile win-x64`: Configures crank to a local Windows x64 build of the runtime, and sets the Helix Queue to a Windows x64 queue. 
-- `--variable bdnArgs="--filter *Linq*"`: Sets arguments to pass to BenchmarkDotNet that will filter it to only Linq benchmarks +- `--variable bdnArgs="--filter '*Linq*'"`: Sets arguments to pass to BenchmarkDotNet that will filter it to only Linq benchmarks - `--profile msft-internal`: Sets the crank agent endpoint to the internal hosted crank agent - `--variable buildNumber="myalias-20230811.1"`: Sets the build number which will be associated with the results when it gets uploaded to our storage accounts. You can use this to search for the run results in Azure Data Explorer. This build number does not have to follow any convention, the only recommendation would be to include something unique to yourself so that it doesn't conflict with other build numbers. diff --git a/docs/microbenchmark-design-guidelines.md b/docs/microbenchmark-design-guidelines.md index 3a85347702c..477deb64299 100644 --- a/docs/microbenchmark-design-guidelines.md +++ b/docs/microbenchmark-design-guidelines.md @@ -110,7 +110,7 @@ public int[] Reverse() Profile it using the [ETW Profiler](./benchmarkdotnet.md#Profiling): ```cmd -dotnet run -c Release -f netcoreapp3.1 --filter *.Reverse --profiler ETW +dotnet run -c Release -f netcoreapp3.1 --filter '*.Reverse' --profiler ETW ``` And open the produced trace file with [PerfView](https://github.com/Microsoft/perfview): diff --git a/scripts/BENCHMARKS_LOCAL_README.md b/scripts/BENCHMARKS_LOCAL_README.md index 1bccdacca59..3a71f0c8996 100644 --- a/scripts/BENCHMARKS_LOCAL_README.md +++ b/scripts/BENCHMARKS_LOCAL_README.md @@ -73,11 +73,11 @@ This group of arguments includes those that have a direct impact on the runtime Here is an example command line that runs the MonoJIT RunType from a local runtime for the tests matching `*Span.IndexerBench.CoveredIndex2*`: -`python .\benchmarks_local.py --local-test-repo "/runtime" --run-types MonoJIT --filter *Span.IndexerBench.CoveredIndex2*` +`python .\benchmarks_local.py --local-test-repo "/runtime" --run-types MonoJIT --filter '*Span.IndexerBench.CoveredIndex2*'` Here is an example command line that runs the MonoInterpreter and MonoJIT RunTypes using commits `dd079f53` and `69702c37` for the tests `*Span.IndexerBench.CoveredIndex2*` with the commits being cloned to the `--repo-storage-path` for building, it also passes `--join` to BenchmarkDotNet so all the reports from a single run will be joined into a single report: -`python .\benchmarks_local.py --commits dd079f53b95519c8398d8b0c6e796aaf7686b99a 69702c372a051580f76defc7ba899dde8fcd2723 --repo-storage-path "" --run-types MonoInterpreter MonoJIT --filter *Span.IndexerBench.CoveredIndex2* *WriteReadAsync* --bdn-arguments="--join"` +`python .\benchmarks_local.py --commits dd079f53b95519c8398d8b0c6e796aaf7686b99a 69702c372a051580f76defc7ba899dde8fcd2723 --repo-storage-path "" --run-types MonoInterpreter MonoJIT --filter '*Span.IndexerBench.CoveredIndex2*' '*WriteReadAsync*' --bdn-arguments="--join"` - Note: There is not currently a way to block specific RunTypes from being run on specific hardware. 
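Note on the two-layer quoting used in the `bdnArgs` and multi-`--filter` examples above, since it recurs throughout this patch. A minimal sh sketch (the temp directory, file name, and filter are illustrative only):

```sh
#!/bin/sh
cd "$(mktemp -d)"

# Layer 1: the invoking shell strips only the OUTER double quotes.
# printf shows the exact argv the tool (crank / benchmarks_local.py) would receive.
printf '%s\n' --variable bdnArgs="--filter '*Linq*'"
#   --variable
#   bdnArgs=--filter '*Linq*'    <- the inner single quotes survive as literal characters

# Layer 2: if that value is later re-expanded by a shell, the inner single
# quotes keep the glob from matching file names in the current directory.
touch LinqDemo.txt
sh -c "echo --filter '*Linq*'"   # prints: --filter *Linq*
sh -c "echo --filter *Linq*"     # prints: --filter LinqDemo.txt
```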
diff --git a/src/benchmarks/micro/README.md b/src/benchmarks/micro/README.md index c8208b66442..9c9929c04cf 100644 --- a/src/benchmarks/micro/README.md +++ b/src/benchmarks/micro/README.md @@ -31,19 +31,19 @@ dotnet run -c Release -f net10.0 --list flat|tree To filter the benchmarks using a glob pattern applied to namespace.typeName.methodName ([read more](../../../docs/benchmarkdotnet.md#Filtering-the-Benchmarks)): ```cmd -dotnet run -c Release -f net10.0 --filter *Span* +dotnet run -c Release -f net10.0 --filter '*Span*' ``` To profile the benchmarked code and produce an ETW Trace file ([read more](../../../docs/benchmarkdotnet.md#Profiling)): ```cmd -dotnet run -c Release -f net10.0 --filter $YourFilter --profiler ETW +dotnet run -c Release -f net10.0 --filter "$YourFilter" --profiler ETW ``` To run the benchmarks for multiple runtimes ([read more](../../../docs/benchmarkdotnet.md#Multiple-Runtimes)): ```cmd -dotnet run -c Release -f net8.0 --filter * --runtimes net8.0 net10.0 +dotnet run -c Release -f net8.0 --filter '*' --runtimes net8.0 net10.0 ``` ## Private Runtime Builds @@ -51,7 +51,7 @@ dotnet run -c Release -f net8.0 --filter * --runtimes net8.0 net10.0 If you contribute to [dotnet/runtime](https://github.com/dotnet/runtime) and want to benchmark **local builds of .NET Core** you need to build [dotnet/runtime](https://github.com/dotnet/runtime) in Release (including tests - so a command similar to `build clr+libs+libs.tests -rc release -lc release`) and then provide the path(s) to CoreRun(s). Provided CoreRun(s) will be used to execute every benchmark in a dedicated process: ```cmd -dotnet run -c Release -f net10.0 --filter $YourFilter \ +dotnet run -c Release -f net10.0 --filter "$YourFilter" \ --corerun C:\git\runtime\artifacts\bin\testhost\net10.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe ``` @@ -59,7 +59,7 @@ To make sure that your changes don't introduce any regressions, you can provide ```cmd dotnet run -c Release -f net10.0 \ - --filter BenchmarksGame* \ + --filter 'BenchmarksGame*' \ --statisticalTest 3ms \ --coreRun \ "C:\git\runtime_upstream\artifacts\bin\testhost\net10.0-windows-Release-x64\shared\Microsoft.NETCore.App\9.0.0\CoreRun.exe" \ diff --git a/src/benchmarks/real-world/PowerShell.Benchmarks/PowerShell.Benchmarks.csproj b/src/benchmarks/real-world/PowerShell.Benchmarks/PowerShell.Benchmarks.csproj index 9d9d5e0cc72..f87c9ad7d03 100644 --- a/src/benchmarks/real-world/PowerShell.Benchmarks/PowerShell.Benchmarks.csproj +++ b/src/benchmarks/real-world/PowerShell.Benchmarks/PowerShell.Benchmarks.csproj @@ -6,7 +6,7 @@ net9.0 enable true - --filter * + --filter '*' diff --git a/src/tools/ResultsComparer/README.md b/src/tools/ResultsComparer/README.md index e0ef7fd876f..c84e97d9cf1 100644 --- a/src/tools/ResultsComparer/README.md +++ b/src/tools/ResultsComparer/README.md @@ -51,7 +51,7 @@ Sample usage: ```cmd dotnet run -c Release matrix decompress --input D:\results\Performance-Runs.zip --output D:\results\net7.0-preview3 -dotnet run -c Release matrix --input D:\results\net7.0-preview3 --base net7.0-preview2 --diff net7.0-preview3 --threshold 10% --noise 2ns --filter System.IO* +dotnet run -c Release matrix --input D:\results\net7.0-preview3 --base net7.0-preview2 --diff net7.0-preview3 --threshold 10% --noise 2ns --filter 'System.IO*' ``` Sample results: From f02542a1084361a8cb86962fc6fff86f62c66377 Mon Sep 17 00:00:00 2001 From: xtqqczze <45661989+xtqqczze@users.noreply.github.com> Date: Tue, 30 Sep 2025 18:06:36 +0100 Subject: [PATCH 
2/2] Amend comment for accuracy

---
 docs/benchmarkdotnet.md | 2 +-
 1 file changed, 1 insertion(+), 1 deletion(-)

diff --git a/docs/benchmarkdotnet.md b/docs/benchmarkdotnet.md
index 49c0fa47a0b..183337bfac1 100644
--- a/docs/benchmarkdotnet.md
+++ b/docs/benchmarkdotnet.md
@@ -157,7 +157,7 @@ dotnet run -c Release -f net9.0 --filter 'System.Collections*.Dictionary*' '*.Pe
 dotnet run -c Release -f net9.0 --filter 'BenchmarksGame*' --join
 ```
 
-Please remember that on **Unix** systems `*` is resolved to all files in current directory, so you need to escape it `'*'`.
+Please remember that in most Unix-like shells, `*` is subject to pathname expansion, so you need to quote it, e.g. `'*'`.
 
 #### Listing the Benchmarks
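To make the amended note concrete, here is a throwaway demo of the pathname expansion that the quoting in this series guards against (directory and file names are illustrative):

```sh
#!/bin/sh
cd "$(mktemp -d)"
touch BenchmarksGame.cs System.Text.Json.cs

echo dotnet run --filter *     # glob expands: dotnet run --filter BenchmarksGame.cs System.Text.Json.cs
echo dotnet run --filter '*'   # quoted:       dotnet run --filter *

# When nothing matches, bash by default passes the pattern through literally,
# while zsh reports "no matches found"; hence the "most Unix-like shells" wording.
# An unquoted filter can therefore appear to work until a matching file shows up.
```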