A simple, cross-platform CLI tool for timing command-line benchmarks.
Measure how long any command takes to execute — once, multiple times with statistics, or in batch from a file. Works on Linux, macOS, and Windows.
```sh
cargo install --path .
```

Or build from source:

```sh
cargo build --release
```

Benchmark a single command:
```sh
lazymark echo hello
# ⏱ echo hello ··· 12.345ms
```

Run it multiple times with statistics:

```sh
lazymark -n 10 echo hello
# ⏱ echo hello (10 runs)
# Min: 11.200ms  Max: 15.800ms  Mean: 12.500ms  StdDev: 1.200ms
```

Note: place lazymark flags (`-n`, `--show-output`, etc.) before the command to benchmark.
Read commands from a file:

```sh
lazymark -f benchmarks.json
lazymark -f commands.csv -n 5 --log results.json --log-format json
```

Launch interactive mode by running `lazymark` with no arguments:
```text
$ lazymark
lazymark interactive mode. Type a command to benchmark it.
Use :help for available commands, :quit to exit.
lazymark> cargo build
⏱ cargo build ··· 2.345s
lazymark> :n 5 cargo test
Iterations set to 5
⏱ cargo test (5 runs)
Min: 1.200s  Max: 1.800s  Mean: 1.450s  StdDev: 0.230s
lazymark> :quit
```
| Command | Description |
|---|---|
| `:quit`, `:q` | Exit |
| `:iterations N`, `:n N` | Set iteration count |
| `:n N COMMAND` | Set iterations and run command |
| `:show-output` | Show command stdout/stderr |
| `:hide-output` | Hide command output (default) |
| `:log PATH` | Log results to file |
| `:log-format FORMAT` | Set log format (csv, json, text) |
| `:log-stop` | Stop logging |
| `:history` | Show session benchmark history |
| `:help`, `:h` | Show help |
```text
Usage: lazymark [OPTIONS] [COMMAND]...

Arguments:
  [COMMAND]...  Command and arguments to benchmark

Options:
  -f, --file <PATH>        Read commands from file (csv, newline, json, yaml)
  -n, --iterations <N>     Number of iterations per command [default: 1]
      --show-output        Show stdout/stderr of benchmarked commands
  -l, --log <PATH>         Log results to a file
      --log-format <FORMAT>  Log format: csv, json, text [default: csv]
  -h, --help               Print help
  -V, --version            Print version
```
CSV — comma-separated commands, one or more per line:

```csv
echo hello,echo world
cargo build
```

Newline — one command per line. Empty lines and `#` comments are ignored:
```text
echo hello
# This is a comment
cargo build
cargo test
```
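The skip rules for the newline format are easy to emulate. Here is a minimal sketch of a parser with the same behavior (blank lines and `#` comment lines dropped) — the function name is illustrative, not lazymark's actual API:

```python
def parse_newline_commands(text: str) -> list[str]:
    """Return one command per non-empty, non-comment line."""
    commands = []
    for line in text.splitlines():
        line = line.strip()
        # Skip blank lines and lines starting with '#'
        if not line or line.startswith("#"):
            continue
        commands.append(line)
    return commands

sample = """echo hello

# This is a comment
cargo build
cargo test
"""
print(parse_newline_commands(sample))
```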
JSON:

```json
{
  "benchmarks": [
    {
      "name": "build-and-test",
      "commands": ["cargo build", "cargo test"]
    },
    {
      "commands": ["echo hello"]
    }
  ]
}
```

When multiple commands are in the same object, their execution times are summed and reported as a group.
YAML:

```yaml
benchmarks:
  - name: build-and-test
    commands:
      - cargo build
      - cargo test
  - commands:
      - echo hello
```

Same grouping behavior as JSON — commands in the same entry have their times summed.
Log benchmark results to a file in any mode:
```sh
# Direct mode
lazymark -n 5 --log results.csv echo hello

# File mode with JSON log
lazymark -f benchmarks.json --log results.json --log-format json
```

In interactive mode, use the `:log` and `:log-format` commands.
CSV (default) — appends rows, writing the header when the file is new:

```csv
name,command,iteration,duration_secs
,echo hello,1,0.012345
```

JSON — a valid JSON array, rewritten on each append:
```json
[
  {
    "name": null,
    "command": "echo hello",
    "iteration": 1,
    "duration_secs": 0.012345
  }
]
```

Text — one result per line, appended:

```text
echo hello: 0.012345s
```
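Because the CSV log is plain rows under the header shown above, downstream tooling can aggregate it with the standard library alone. A hedged sketch — the column names come from the example header, but the aggregation logic here is illustrative, not part of lazymark:

```python
import csv
import io
from statistics import mean

# Sample log content matching the CSV layout shown above.
log = """name,command,iteration,duration_secs
,echo hello,1,0.012345
,echo hello,2,0.011110
"""

# Collect durations per command.
durations: dict[str, list[float]] = {}
for row in csv.DictReader(io.StringIO(log)):
    durations.setdefault(row["command"], []).append(float(row["duration_secs"]))

for command, times in durations.items():
    print(f"{command}: mean {mean(times):.6f}s over {len(times)} runs")
```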
With `-n N` where N > 1, lazymark reports:
- Min / Max — fastest and slowest runs
- Mean — arithmetic average
- StdDev — sample standard deviation (n−1)
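These statistics match the definitions in Python's `statistics` module, whose `stdev()` also uses the sample (n−1) denominator. A quick sketch with made-up timings, not lazymark's internals:

```python
from statistics import mean, stdev

# Hypothetical per-run durations in seconds for -n 5
runs = [1.2, 1.5, 1.4, 1.8, 1.35]

print(f"Min:    {min(runs):.3f}s")
print(f"Max:    {max(runs):.3f}s")
print(f"Mean:   {mean(runs):.3f}s")
# stdev() divides by n-1 (sample standard deviation)
print(f"StdDev: {stdev(runs):.3f}s")
```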
Duration display adapts to magnitude:
| Range | Format |
|---|---|
| < 1 second | milliseconds (123.456ms) |
| < 1 minute | seconds (12.345s) |
| ≥ 1 minute | minutes + seconds (1m 30.500s) |
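The thresholds in the table can be sketched as a small formatting function — an approximation of the display rule, not lazymark's actual code:

```python
def format_duration(secs: float) -> str:
    """Pick a display unit based on magnitude, per the table above."""
    if secs < 1.0:
        return f"{secs * 1000:.3f}ms"
    if secs < 60.0:
        return f"{secs:.3f}s"
    minutes, rest = divmod(secs, 60.0)
    return f"{int(minutes)}m {rest:.3f}s"

print(format_duration(0.123456))  # milliseconds range
print(format_duration(12.345))    # seconds range
print(format_duration(90.5))      # minutes + seconds range
```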