Thundra

High-fidelity HTTP benchmarking for engineers who care about real numbers.


Maintainer: codewithevilxd
Portfolio: nishantdev.space
Email: codewithevilxd@gmail.com


Why Thundra

Thundra is a Rust-native HTTP benchmarking toolkit (CLI + library) designed for practical load testing:

  • sustained concurrency with low runtime overhead
  • precise latency percentiles (p50, p90, p95, p99)
  • dynamic request generation and rate shaping
  • hook system for custom retry/circuit-breaker behavior
  • human-readable or machine-readable JSON output

Use it in two modes:

  • as a CLI for fast terminal-driven benchmarks
  • as a library in integration tests and performance pipelines

Demo

Quick Usage Preview (GIF)

[Thundra usage demo GIF]

Full Demo Video (MP4)

[Download or play the full terminal demo video]




Install

CLI

cargo install thundra

Library

[dependencies]
thundra = "1"
tokio = { version = "1", features = ["full"] }

Build from source

git clone https://github.com/codewithevilxd/thundra.git
cd thundra
cargo build --release

The binary will be available at:

target/release/thundra

Quick Start

1) Fast CLI benchmark

thundra https://httpbin.org/get -c 100 -d 20s -r 1000

2) JSON output for automation

thundra https://httpbin.org/get -c 80 -d 15s -o json

3) Minimal library usage

use std::time::Duration;
use thundra::{Benchmark, Result};

#[tokio::main]
async fn main() -> Result<()> {
    let results = Benchmark::builder()
        .url("http://localhost:3000")
        .concurrency(50)
        .duration(Duration::from_secs(10))
        .build()?
        .run()
        .await?;

    results.print();
    Ok(())
}

CLI Deep Dive

Core syntax

thundra <url> [flags]

High-value commands

# fixed request budget
thundra https://api.example.com/health -n 10000 -c 50

# duration-driven run
thundra https://api.example.com/health -d 30s -c 120

# post workload
thundra https://api.example.com/v1/items \
  -m POST \
  -H "Content-Type: application/json" \
  -b '{"name":"thundra"}' \
  -c 40 -n 5000

# insecure tls for internal env only
thundra https://staging.internal.local -k -d 15s

Flags reference

Flag               Meaning                             Default
-c, --concurrency  concurrent workers                  10
-n, --requests     total-requests stop condition       none
-d, --duration     duration stop condition (10s, 1m)   none
-r, --rate         fixed request rate (req/s)          none
-m, --method       HTTP method                         GET
-H, --header       repeatable request header           none
-b, --body         request body                        none
-t, --timeout      per-request timeout (seconds)       30
-k, --insecure     skip TLS verification               false
-o, --output       text or json                        text
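For scripting around the -d and -t flags, it can help to parse the 10s / 1m style duration strings shown above. The suffix set below (s, m, h) is an assumption for illustration, not Thundra's exact grammar; check the CLI's help output for the authoritative list.

```rust
use std::time::Duration;

// Parses "10s"-/"1m"-style strings into a std Duration.
// The accepted suffixes here are illustrative, not Thundra's exact grammar.
fn parse_duration(s: &str) -> Option<Duration> {
    let (num, unit) = s.split_at(s.len().checked_sub(1)?);
    let value: u64 = num.parse().ok()?;
    match unit {
        "s" => Some(Duration::from_secs(value)),
        "m" => Some(Duration::from_secs(value * 60)),
        "h" => Some(Duration::from_secs(value * 3600)),
        _ => None,
    }
}

fn main() {
    assert_eq!(parse_duration("10s"), Some(Duration::from_secs(10)));
    assert_eq!(parse_duration("1m"), Some(Duration::from_secs(60)));
    assert_eq!(parse_duration("2x"), None); // unknown suffix is rejected
    println!("duration parsing ok");
}
```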

Shell completions

# bash
source <(thundra completions bash)

# zsh
source <(thundra completions zsh)

# fish
thundra completions fish | source

# powershell
Invoke-Expression (& thundra completions powershell)

# elvish
eval (thundra completions elvish | slurp)

Library Deep Dive

Builder model

The Benchmark::builder() API supports:

  • stop by request count, duration, or run-until-interrupt
  • static request config (url, method, header, body)
  • dynamic request generation via request_fn
  • fixed rate via rate or dynamic rate via rate_fn
  • before/after hooks with retry control

Dynamic request generation

use std::collections::HashMap;
use thundra::{Benchmark, HttpMethod, RequestConfig, RequestContext};

let bench = Benchmark::builder()
    .request_fn(|ctx: RequestContext| {
        let shard = ctx.request_number % 8;
        RequestConfig {
            url: format!("http://localhost:3000/items/{}", shard),
            method: HttpMethod::Get,
            headers: HashMap::new(),
            body: None,
        }
    })
    .concurrency(32)
    .requests(20_000)
    .build()?;
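The modulo sharding above spreads the fixed request budget evenly across the 8 item endpoints. A quick stand-alone check, independent of Thundra, confirms the distribution for the 20,000-request budget:

```rust
fn main() {
    // Count how many of 20,000 sequential request numbers land on each
    // of the 8 shards used by the request_fn above.
    let mut counts = [0u32; 8];
    for request_number in 0..20_000u64 {
        counts[(request_number % 8) as usize] += 1;
    }
    // 20,000 divides evenly by 8, so every shard gets exactly 2,500 requests.
    assert!(counts.iter().all(|&c| c == 2_500));
    println!("{:?}", counts);
}
```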

Production-like headers/body

use thundra::{Benchmark, HttpMethod};

let bench = Benchmark::builder()
    .url("https://api.example.com/v1/orders")
    .method(HttpMethod::Post)
    .header("Authorization", "Bearer token")
    .header("Content-Type", "application/json")
    .body(r#"{"sku":"ABC-001","qty":2}"#)
    .requests(5000)
    .concurrency(64)
    .build()?;

Rate Control Patterns

Fixed rate (stable load)

let bench = Benchmark::builder()
    .url("http://localhost:3000")
    .rate(1500)
    .duration(std::time::Duration::from_secs(60))
    .build()?;

Dynamic ramp (warm-up + peak)

use thundra::{Benchmark, RateContext};

let bench = Benchmark::builder()
    .url("http://localhost:3000")
    .rate_fn(|ctx: RateContext| {
        let t = ctx.elapsed.as_secs_f64();
        if t < 10.0 {
            200.0 + t * 80.0
        } else if t < 30.0 {
            1000.0
        } else {
            600.0
        }
    })
    .duration(std::time::Duration::from_secs(45))
    .build()?;
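The ramp profile above is easy to verify in isolation before wiring it into a run. Here the closure body is extracted into a plain function (the RateContext wrapper is dropped so the shape can be spot-checked without the library):

```rust
// Pure version of the ramp passed to rate_fn: a linear warm-up for 10s,
// a 1000 req/s peak until 30s, then a 600 req/s cool-down.
fn ramp(elapsed_secs: f64) -> f64 {
    let t = elapsed_secs;
    if t < 10.0 {
        200.0 + t * 80.0
    } else if t < 30.0 {
        1000.0
    } else {
        600.0
    }
}

fn main() {
    assert_eq!(ramp(0.0), 200.0);   // start of warm-up
    assert_eq!(ramp(5.0), 600.0);   // halfway up the ramp
    assert_eq!(ramp(15.0), 1000.0); // peak
    assert_eq!(ramp(40.0), 600.0);  // cool-down
    println!("ramp profile ok");
}
```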

Hooks and Retry Control

Before-request hook (circuit-breaker style)

use thundra::{BeforeRequestContext, Benchmark, HookAction};

let bench = Benchmark::builder()
    .url("http://localhost:3000")
    .before_request(|ctx: BeforeRequestContext| {
        let failure_rate = ctx.failed_requests as f64 / ctx.total_requests.max(1) as f64;
        if ctx.total_requests > 500 && failure_rate > 0.40 {
            HookAction::Abort
        } else {
            HookAction::Continue
        }
    })
    .build()?;
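The abort condition in the hook is a pure function of two counters, so it can be unit-tested without a live target. This stand-alone version mirrors the closure above:

```rust
// Mirrors the before_request hook: trip the breaker once enough traffic
// has flowed (more than 500 requests) and over 40% of it has failed.
fn should_abort(total_requests: u64, failed_requests: u64) -> bool {
    let failure_rate = failed_requests as f64 / total_requests.max(1) as f64;
    total_requests > 500 && failure_rate > 0.40
}

fn main() {
    assert!(!should_abort(0, 0));       // no traffic yet
    assert!(!should_abort(400, 300));   // failing hard, but under the warm-up floor
    assert!(!should_abort(1_000, 100)); // 10% failures: healthy
    assert!(should_abort(1_000, 500));  // 50% failures past the floor: abort
    println!("breaker logic ok");
}
```

The 500-request floor keeps a few early failures from aborting a run before the sample is meaningful.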

After-request hook (retry on 5xx)

use thundra::{AfterRequestContext, Benchmark, HookAction};

let bench = Benchmark::builder()
    .url("http://localhost:3000")
    .after_request(|ctx: AfterRequestContext| {
        if let Some(status) = ctx.status {
            if status >= 500 {
                return HookAction::Retry;
            }
        }
        HookAction::Continue
    })
    .max_retries(3)
    .build()?;

Result Model

Thundra returns a rich BenchmarkResults value with:

  • total/success/failed request counts
  • throughput (req/s)
  • latency stats (min, max, mean, p50, p90, p95, p99)
  • status code distribution
  • total transferred bytes

Sample JSON:

{
  "total_requests": 120000,
  "successful_requests": 119980,
  "failed_requests": 20,
  "duration": "20.00s",
  "throughput": 6000.0,
  "latency_min": "220us",
  "latency_max": "18.20ms",
  "latency_mean": "1.80ms",
  "latency_p50": "1.50ms",
  "latency_p90": "2.90ms",
  "latency_p95": "3.40ms",
  "latency_p99": "5.90ms",
  "status_codes": {
    "200": 119980,
    "500": 20
  },
  "total_bytes": 9876543
}
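The counters in the sample are internally consistent, which is exactly the kind of invariant worth asserting when consuming Thundra's JSON in CI. The numbers below are copied from the sample above; the check itself is a sketch, not part of the library:

```rust
// Sanity-checks a result set: success + failed must partition the total,
// and throughput must equal total requests over wall-clock duration.
fn consistent(total: u64, successful: u64, failed: u64, duration_secs: f64, throughput: f64) -> bool {
    successful + failed == total
        && (total as f64 / duration_secs - throughput).abs() < 1e-9
}

fn main() {
    // Values from the sample JSON; the status-code distribution
    // (200: 119,980 and 500: 20) also sums to the same total.
    assert!(consistent(120_000, 119_980, 20, 20.0, 6_000.0));
    println!("sample results consistent");
}
```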

Performance Workflow

A recommended practical flow:

  1. run baseline with moderate concurrency (-c 20) and fixed duration
  2. increase concurrency in steps (20 -> 50 -> 100 -> 200)
  3. track p99 and failure rate, not just throughput
  4. capture JSON output in CI for trend regression
  5. apply dynamic rate ramps to simulate real traffic profiles
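Step 3 can be automated: record one (concurrency, p99, failure-rate) sample per step and flag the first step where latency or errors regress. The sweep data and the knee rule below (p99 more than doubles, or failures exceed 1%) are hypothetical illustrations, not Thundra defaults:

```rust
/// Returns the index of the first step whose p99 more than doubled versus
/// the previous step, or whose failure rate exceeds 1%.
/// Both thresholds are illustrative choices, not Thundra defaults.
fn find_knee(samples: &[(u32, f64, f64)]) -> Option<usize> {
    for i in 1..samples.len() {
        let (_, prev_p99, _) = samples[i - 1];
        let (_, p99, fail_rate) = samples[i];
        if p99 > prev_p99 * 2.0 || fail_rate > 0.01 {
            return Some(i);
        }
    }
    None
}

fn main() {
    // Hypothetical sweep: (concurrency, p99 in ms, failure rate).
    let sweep = [
        (20, 2.1, 0.000),
        (50, 2.8, 0.000),
        (100, 3.9, 0.002),
        (200, 9.5, 0.030), // p99 more than doubles and failures spike
    ];
    assert_eq!(find_knee(&sweep), Some(3));
    println!("knee at concurrency {}", sweep[3].0);
}
```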

Examples

Built-in examples live in examples/:

  • basic_benchmark.rs
  • custom_requests.rs
  • rate_ramping.rs
  • hooks_metrics.rs
  • test_server.rs

Run:

cargo run --example basic_benchmark
cargo run --example custom_requests
cargo run --example rate_ramping
cargo run --example hooks_metrics

Development

# format
cargo fmt --all

# lint
cargo clippy --all-targets --all-features -- -D warnings

# tests
cargo test --all-features

On some Windows environments, app control policies may block generated test binaries. If that happens, point Cargo at a target directory in a trusted location:

cargo test --all-features --target-dir "%LOCALAPPDATA%\thundra-target"

Roadmap

  • coordinated omission correction
  • HDR histogram export
  • HTTP/2 support
  • HTTP/3 support
  • latency breakdown (DNS, TCP, TLS, TTFB)
  • warm-up and cool-down phases
  • multi-step scenario support

If you build something cool with Thundra, share it with me at codewithevilxd@gmail.com.
