# httpress

A fast, flexible HTTP benchmarking library and CLI tool built in Rust.
Features • Installation • Quick Start • Library Usage • CLI Usage • Examples
## Features

- Simple Builder API - Fluent, type-safe configuration
- High Performance - Async Rust with minimal overhead
- Flexible Rate Control - Fixed rates or dynamic rate functions
- Custom Request Generation - Generate requests dynamically per-worker
- Hook System - Inject custom logic before/after requests
- Detailed Metrics - Latency percentiles, throughput, status codes
- Concurrent Workers - Configurable parallelism
- Adaptive Testing - Duration-based or request-count-based
- Retry Logic - Smart retry with hook-based control
- Library + CLI - Use as Rust library or standalone tool
## Installation

Add to your Cargo.toml:

```toml
[dependencies]
httpress = "0.5"
tokio = { version = "1", features = ["full"] }
```

To install the standalone CLI tool:

```bash
cargo install httpress
```

## Quick Start

Run a benchmark against a URL for a fixed duration:

```rust
use httpress::Benchmark;
use std::time::Duration;
let results = Benchmark::builder()
    .url("http://localhost:3000")
    .concurrency(50)
    .duration(Duration::from_secs(10))
    .build()?
    .run()
    .await?;

results.print();
```

Or send a fixed number of requests instead of running for a set duration:

```rust
let results = Benchmark::builder()
    .url("http://localhost:3000")
    .concurrency(50)
    .requests(1000)
    .build()?
    .run()
    .await?;
```
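Both snippets assume they already run inside an async context. For reference, a complete program might look like the sketch below; the concrete error type returned by `build()` and `run()` is not spelled out in this README, so a boxed error is assumed purely for illustration:

```rust
use std::time::Duration;

use httpress::Benchmark;

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    // Duration-based run: 50 concurrent workers for 10 seconds.
    let results = Benchmark::builder()
        .url("http://localhost:3000")
        .concurrency(50)
        .duration(Duration::from_secs(10))
        .build()?
        .run()
        .await?;

    // Print a summary of the collected metrics.
    results.print();
    Ok(())
}
```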
## Library Usage

### Custom Request Generation

Generate each request dynamically per worker with `request_fn`: the closure receives a `RequestContext` and returns the `RequestConfig` to send.

```rust
.request_fn(|ctx: RequestContext| {
    let user_id = ctx.request_number % 100;
    RequestConfig {
        url: format!("http://localhost:3000/user/{}", user_id),
        method: HttpMethod::Get,
        headers: HashMap::new(),
        body: None,
    }
})
```
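The same pattern can spread load across several endpoints. The sketch below is a drop-in variant of the snippet above that only reuses the fields already shown; the endpoint paths are placeholders, and `request_number` is assumed to be a plain integer counter:

```rust
.request_fn(|ctx: RequestContext| {
    // Hypothetical endpoints, purely for illustration.
    let paths = ["/", "/health", "/api/items"];
    let path = paths[ctx.request_number as usize % paths.len()];
    RequestConfig {
        url: format!("http://localhost:3000{}", path),
        method: HttpMethod::Get,
        headers: HashMap::new(),
        body: None,
    }
})
```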
### Dynamic Rate Control

Instead of a fixed rate, supply a `rate_fn`; the closure receives a `RateContext` and returns the target rate in requests per second.

```rust
.rate_fn(|ctx: RateContext| {
    let progress = (ctx.elapsed.as_secs_f64() / 10.0).min(1.0);
    100.0 + (900.0 * progress) // Ramp from 100 to 1000 req/s
})
```
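Assuming the closure is free to return any target rate, other load shapes are just as easy to express; for example, a square wave that alternates between a baseline and a burst (the 30-second period and both rates are arbitrary values for illustration):

```rust
.rate_fn(|ctx: RateContext| {
    // Alternate between baseline and burst load every 30 seconds.
    if (ctx.elapsed.as_secs() / 30) % 2 == 0 {
        200.0 // baseline, req/s
    } else {
        1500.0 // burst, req/s
    }
})
```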
### Hooks and Retries

Inject custom logic around each request with hooks. This `after_request` hook retries any response with a 5xx status:

```rust
.after_request(|ctx: AfterRequestContext| {
    if let Some(status) = ctx.status {
        if status >= 500 {
            return HookAction::Retry;
        }
    }
    HookAction::Continue
})
```
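Putting the pieces together, the builder methods above can be chained on a single benchmark. The sketch below is one plausible combination rather than a prescribed setup: it assumes the methods compose freely on one builder, that `request_fn` supplies the full target URL (so no base `.url()` is set), that the context and hook types are exported from the crate root, and that the errors convert into a boxed error:

```rust
use std::collections::HashMap;
use std::time::Duration;

// Import paths are assumed here; check docs.rs/httpress for the actual ones.
use httpress::{
    AfterRequestContext, Benchmark, HookAction, HttpMethod, RateContext, RequestConfig,
    RequestContext,
};

#[tokio::main]
async fn main() -> Result<(), Box<dyn std::error::Error>> {
    let results = Benchmark::builder()
        .concurrency(50)
        .duration(Duration::from_secs(60))
        // Spread requests across 100 user IDs.
        .request_fn(|ctx: RequestContext| {
            RequestConfig {
                url: format!("http://localhost:3000/user/{}", ctx.request_number % 100),
                method: HttpMethod::Get,
                headers: HashMap::new(),
                body: None,
            }
        })
        // Ramp from 100 to 1000 req/s over the first 10 seconds.
        .rate_fn(|ctx: RateContext| {
            let progress = (ctx.elapsed.as_secs_f64() / 10.0).min(1.0);
            100.0 + 900.0 * progress
        })
        // Retry server errors, accept everything else.
        .after_request(|ctx: AfterRequestContext| {
            if let Some(status) = ctx.status {
                if status >= 500 {
                    return HookAction::Retry;
                }
            }
            HookAction::Continue
        })
        .build()?
        .run()
        .await?;

    results.print();
    Ok(())
}
```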
For complete API documentation, see docs.rs/httpress.
## CLI Usage

```bash
# Run benchmark with 100 concurrent connections for 30 seconds
httpress http://example.com -c 100 -d 30s

# Fixed number of requests with rate limiting
httpress http://example.com -n 10000 -r 1000

# POST request with headers and body
httpress http://example.com/api -m POST \
-H "Content-Type: application/json" \
-b '{"key": "value"}'| Flag | Description | Default |
|---|---|---|
-n, --requests |
Total number of requests | - |
-d, --duration |
Test duration (e.g. 10s, 1m) | - |
-c, --concurrency |
Concurrent connections | 10 |
-r, --rate |
Rate limit (req/s) | - |
-m, --method |
HTTP method | GET |
-H, --header |
HTTP header (repeatable) | - |
-b, --body |
Request body | - |
-t, --timeout |
Request timeout in seconds | 30 |
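The flags can be combined; for example, a one-minute run at 500 req/s over 200 connections with a 5-second per-request timeout (an illustrative combination built only from the options above; exact flag interactions are not documented here):

```bash
httpress http://example.com/api -c 200 -d 1m -r 500 -t 5
```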
Example output:

```
--- Benchmark Complete ---
Requests:   1000 total, 1000 success, 0 errors
Duration:   0.06s
Throughput: 16185.07 req/s

Latency:
  Min:  245us
  Max:  2.41ms
  Mean: 612us
  p50:  544us
  p90:  955us
  p95:  1.08ms
  p99:  1.61ms

Status codes:
  200: 1000
```
## Examples

The examples/ directory contains:
- basic_benchmark.rs - Simple benchmark example
- custom_requests.rs - Dynamic request generation with request_fn
- rate_ramping.rs - Rate control with rate_fn
- hooks_metrics.rs - Custom metrics collection using hooks
Run examples with:

```bash
cargo run --example basic_benchmark
```