Measure the performance cost of antivirus software on Windows workloads.
Docs: Architecture · Metrics · Workloads
- Windows 10/11 VM (clean snapshot recommended)
- Administrator privileges
- Internet access during setup (downloads toolchains and source repos)
- ~20 GB free disk space on `C:\`
```powershell
cd C:\projects\av-benchmark\src
dotnet publish AvBench.Cli -c Release -o C:\tools\avbench
dotnet publish AvBench.Compare -c Release -o C:\tools\avbench
```

This produces `C:\tools\avbench\avbench.exe` and `C:\tools\avbench\avbench-compare.exe`.
All commands run in an elevated PowerShell terminal.
```powershell
C:\tools\avbench\avbench.exe setup --bench-dir C:\bench
```

This installs Git, Rust 1.85.0, the .NET SDK, and VS Build Tools; clones the ripgrep and Roslyn repos; hydrates dependencies; builds the unsigned `noop.exe`; creates the archive zip; and writes `C:\bench\suite-manifest.json`.
To set up only specific workloads:
```powershell
C:\tools\avbench\avbench.exe setup --bench-dir C:\bench -w microbench
C:\tools\avbench\avbench.exe setup --bench-dir C:\bench -w ripgrep,roslyn
```

After setup completes, take a VM snapshot. Restore to this snapshot before each run.
```powershell
C:\tools\avbench\avbench.exe run --name baseline-os --bench-dir C:\bench --output C:\results
C:\tools\avbench\avbench.exe run --name defender-default --bench-dir C:\bench --output C:\results
```

AV product and version are auto-detected from Windows Security Center. To override:

```powershell
C:\tools\avbench\avbench.exe run --name eset-default --bench-dir C:\bench --output C:\results --av-product "ESET Security" --av-version "19.1.12.0"
```

To run only specific workloads:

```powershell
C:\tools\avbench\avbench.exe run --name defender-default --bench-dir C:\bench --output C:\results -w microbench
C:\tools\avbench\avbench.exe run --name defender-default --bench-dir C:\bench --output C:\results -w ripgrep,roslyn
C:\tools\avbench\avbench.exe run --name defender-default --bench-dir C:\bench --output C:\results -w file-create-delete
```

`avbench setup` accepts workload families only: `ripgrep`, `roslyn`, `microbench`, or `all`.
`avbench run` accepts those workload families plus specific microbench scenario IDs such as `file-create-delete`.
Copy C:\results from each VM to the host, then:
```powershell
C:\tools\avbench\avbench-compare.exe --baseline C:\compare\baseline-os --input C:\compare\defender-default --output C:\compare\report
```

Compare multiple AV configs at once:

```powershell
C:\tools\avbench\avbench-compare.exe --baseline C:\compare\baseline-os --input C:\compare\defender-default C:\compare\eset-default --output C:\compare\report
```

Outputs:
| File | Description |
|---|---|
| `compare.csv` | Comparison spreadsheet with per-scenario median slowdown, first-run slowdown, all-runs mean wall time, kernel CPU shift, system disk deltas, CV, and status |
| `summary.md` | Markdown report with fixed-order scenario tables, first-run wall time, all-runs mean wall time, steady-state and first-run cross-AV tables, plus ranked callouts for slowdowns, noisy rows, anomalies, and failures |
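If you post-process `compare.csv` yourself, a short sketch like the one below can flag noisy scenarios. The column names used here (`scenario`, `median_slowdown`, `cv`, `status`) are illustrative assumptions, not the tool's documented schema; check the actual header row of your `compare.csv` first.

```python
import csv
import io

# Hypothetical compare.csv excerpt -- real column names may differ.
sample = """scenario,median_slowdown,cv,status
ripgrep-clean-build,1.36,0.02,ok
file-create-delete,4.10,0.31,noisy
"""

rows = list(csv.DictReader(io.StringIO(sample)))

# Flag scenarios whose coefficient of variation exceeds 10%.
noisy = [r["scenario"] for r in rows if float(r["cv"]) > 0.10]
print(noisy)
```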
For statistically reliable results, run multiple sessions per AV configuration. Each session should start from the same VM snapshot:
Restore snapshot → avbench run ... → copy results → Restore snapshot → avbench run ... → copy results → ...
Collect 3–5 sessions per configuration. avbench-compare aggregates all run.json files found under each input directory, computing mean/median/CV across sessions.
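As a rough illustration of the statistics involved (not avbench's actual implementation), median slowdown and coefficient of variation across sessions can be computed like this, using made-up wall-time numbers:

```python
import statistics

# Illustrative only: wall-clock seconds for one scenario across 4 sessions
# (hypothetical numbers, not real avbench output).
baseline = [41.2, 40.8, 41.5, 41.0]
with_av = [55.9, 54.7, 56.3, 55.1]

def cv(samples):
    """Coefficient of variation: sample stdev as a fraction of the mean."""
    return statistics.stdev(samples) / statistics.mean(samples)

# Median-over-median ratio is robust to a single outlier session.
slowdown = statistics.median(with_av) / statistics.median(baseline)
print(f"median slowdown: {slowdown:.2f}x")
print(f"baseline CV: {cv(baseline):.1%}, AV CV: {cv(with_av):.1%}")
```

A low CV (a few percent) indicates the sessions agree; a high CV means the slowdown figure for that scenario should be treated with caution.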
```
C:\results\                  # --output directory
├── suite-manifest.json
├── runs.csv
├── ripgrep-clean-build\
│   ├── run.json
│   ├── stdout.log
│   └── stderr.log
├── ripgrep-incremental-build\
│   └── ...
├── roslyn-clean-build\
│   └── ...
├── file-create-delete\
│   └── ...
├── mem-alloc-protect\
│   └── ...
└── ... (31 scenario folders total)
```
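The per-scenario `run.json` files are what avbench-compare aggregates. As a sketch of that discovery step, the snippet below builds a miniature results tree and collects every `run.json` keyed by its scenario folder; the JSON fields shown (`scenario`, `wall_seconds`) are hypothetical, for illustration only:

```python
import json
import tempfile
from pathlib import Path

# Build a tiny fake results tree (stand-in for C:\results).
root = Path(tempfile.mkdtemp())
for scenario, secs in [("ripgrep-clean-build", 41.2), ("file-create-delete", 3.7)]:
    d = root / scenario
    d.mkdir()
    (d / "run.json").write_text(json.dumps({"scenario": scenario, "wall_seconds": secs}))

# Discovery: every run.json under the root belongs to one scenario folder.
runs = {p.parent.name: json.loads(p.read_text()) for p in sorted(root.rglob("run.json"))}
print(sorted(runs))  # scenario folder names
```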
| Option | Default | Description |
|---|---|---|
| `--bench-dir` | `C:\bench` | Root directory for repos and manifests |
| `--ripgrep-ref` | latest release | Optional branch/tag/SHA for ripgrep |
| `-w, --workload` | `all` | Workload families only: `ripgrep`, `roslyn`, `microbench`, or `all` |
Exit codes: `0` = success, `1` = error, `2` = reboot required (re-run setup after the reboot).
| Option | Default | Description |
|---|---|---|
| `--name` | required | Label for this AV config (e.g., `baseline-os`, `defender-default`) |
| `--bench-dir` | `C:\bench` | Where setup stored repos and manifest |
| `--output` | `./results` | Where to write result folders |
| `-w, --workload` | `all` | Workload families to run, or specific microbench scenario IDs such as `file-create-delete` |
| `--av-product` | auto-detect | Override AV product name |
| `--av-version` | auto-detect | Override AV version string |
| Option | Default | Description |
|---|---|---|
| `--baseline` | required | Result directory for the no-AV baseline |
| `--input` | required | One or more result directories to compare |
| `--output` | required | Where to write `compare.csv` and `summary.md` |
| `--rebuild` | `false` | Regenerate `runs.csv` in each results directory from `run.json` files before comparing |
Use `--rebuild` after replacing a single scenario's `run.json` to refresh `runs.csv`, `compare.csv`, and `summary.md` without re-running the full suite:

```powershell
C:\tools\avbench\avbench-compare.exe --baseline C:\compare\baseline-os --input C:\compare\defender-default --output C:\compare\report --rebuild
```