A high-performance, production-ready forward proxy server built in Rust with HTTP/2+ support, automatic HTTPS, and hot config reload.
- HTTP/2+ Support: Native HTTP/2 with automatic fallback to HTTP/1.1
- Automatic HTTPS: Self-signed certificates for development, Let's Encrypt for production
- Hot Config Reload: Update configuration without dropping connections
- Simple Configuration: Custom config format with comment support
- Load Balancing: Round-robin, weighted, and health-checked backends
- WebSocket Support: Full WebSocket proxy capabilities
- Middleware: Authentication (Basic, API Key, JWT), Rate Limiting, JSON Logging
- Health Checks: Kubernetes-compatible liveness and readiness probes
- High Performance: Built on Tokio and Hyper for maximum throughput
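The rate-limiting middleware is configured with `requests_per_second` and `burst_size` (see the configuration below). As a rough mental model, this is token-bucket behavior. Here is a minimal, illustrative sketch of those semantics; the struct and method names are made up and this is not Soli's actual implementation:

```rust
use std::time::Instant;

/// Illustrative token bucket: `rate` tokens refill per second,
/// with capacity `burst`. Each admitted request costs one token.
struct TokenBucket {
    rate: f64,
    burst: f64,
    tokens: f64,
    last: Instant,
}

impl TokenBucket {
    fn new(rate: f64, burst: f64) -> Self {
        Self { rate, burst, tokens: burst, last: Instant::now() }
    }

    /// Returns true if the request is admitted, false if rate-limited.
    fn try_acquire(&mut self) -> bool {
        let now = Instant::now();
        let elapsed = now.duration_since(self.last).as_secs_f64();
        self.last = now;
        // Refill proportionally to elapsed time, capped at the burst size.
        self.tokens = (self.tokens + elapsed * self.rate).min(self.burst);
        if self.tokens >= 1.0 {
            self.tokens -= 1.0;
            true
        } else {
            false
        }
    }
}

fn main() {
    // Tiny numbers for the demo: 1 req/s refill, burst of 3.
    // The first three requests ride the burst; the fourth is rejected.
    let mut bucket = TokenBucket::new(1.0, 3.0);
    let admitted: Vec<bool> = (0..4).map(|_| bucket.try_acquire()).collect();
    assert_eq!(admitted, [true, true, true, false]);
}
```

With the production values (`requests_per_second = 1000`, `burst_size = 2000`), such a bucket admits short spikes of up to 2000 requests while sustaining 1000 req/s.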
```sh
# Build and run in dev mode
cargo run -- dev

# Or with custom config
SOLI_CONFIG_PATH=./proxy.conf cargo run -- dev

# Build release
cargo build --release

# Run in production mode (requires Let's Encrypt config)
cargo run -- prod
```

`config.toml`:

```toml
[server]
bind = "0.0.0.0:8080"
https_port = 8443
worker_threads = "auto"

[tls]
mode = "auto"  # "auto" for dev, "letsencrypt" for production

[letsencrypt]
email = "admin@example.com"
staging = false

[logging]
level = "info"
format = "json"

[metrics]
enabled = true
endpoint = "/metrics"

[health]
enabled = true
liveness_path = "/health/live"
readiness_path = "/health/ready"

[rate_limiting]
enabled = true
requests_per_second = 1000
burst_size = 2000
```

`proxy.conf`:

```
# Comments are supported
default -> http://localhost:3000
/api/* -> http://localhost:8080
/ws -> ws://localhost:9000

# Load balancing
/api/* -> http://10.0.0.10:8080, http://10.0.0.11:8080, http://10.0.0.12:8080

# Weighted routing
/api/heavy -> weight:70 http://heavy:8080, weight:30 http://light:8080

# Regex routing
~^/users/(\d+)$ -> http://user-service:8080/users/$1

# Headers to add
headers {
    X-Forwarded-For: $client_ip
    X-Forwarded-Proto: $scheme
}

# Authentication
/auth/* {
    auth: basic
    realm: "Restricted"
}
```
```
┌─────────────────────────────────────────────────────┐
│                  Soli Proxy Server                  │
├─────────────────────────────────────────────────────┤
│ ┌─────────────┐  ┌─────────────┐  ┌──────────────┐  │
│ │   Config    │  │  TLS/HTTPS  │  │   HTTP/2+    │  │
│ │   Manager   │  │   Handler   │  │   Listener   │  │
│ │ (hot reload)│  │ (rcgen/LE)  │  │ (tokio/hyper)│  │
│ └─────────────┘  └─────────────┘  └──────────────┘  │
│        │                │                │          │
│        └────────────────┼────────────────┘          │
│                         │                           │
│                 ┌──────▼──────┐                     │
│                 │   Router    │                     │
│                 │ (matching)  │                     │
│                 └─────────────┘                     │
│                         │                           │
│        ┌────────────────┼────────────────┐          │
│        │                │                │          │
│   ┌────▼────┐     ┌─────▼─────┐     ┌────▼────┐     │
│   │  Auth   │     │   Rate    │     │ Logging │     │
│   │ Middle  │     │   Limit   │     │  JSON   │     │
│   └─────────┘     └───────────┘     └─────────┘     │
└─────────────────────────────────────────────────────┘
```
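The Router box matches incoming paths against the rules in `proxy.conf`. A minimal sketch of prefix-wildcard matching with longest-pattern-wins follows; this is illustrative only, and the real matcher also handles regex rules and the `default` fallback:

```rust
/// Match a request path against rules like "/api/*" or "/ws".
/// Wildcard patterns match by prefix; others match exactly.
/// Among all matches, the longest pattern wins.
fn route<'a>(rules: &[(&'a str, &'a str)], path: &str) -> Option<&'a str> {
    let mut best: Option<(usize, &'a str)> = None;
    for &(pattern, backend) in rules {
        let matched = if let Some(prefix) = pattern.strip_suffix("/*") {
            path.starts_with(prefix) // "/api/*" matches "/api/users"
        } else {
            path == pattern // "/ws" matches only "/ws"
        };
        if matched && best.map_or(true, |(len, _)| pattern.len() > len) {
            best = Some((pattern.len(), backend));
        }
    }
    best.map(|(_, backend)| backend)
}

fn main() {
    let rules = [
        ("/api/*", "http://localhost:8080"),
        ("/ws", "ws://localhost:9000"),
    ];
    assert_eq!(route(&rules, "/api/users"), Some("http://localhost:8080"));
    assert_eq!(route(&rules, "/ws"), Some("ws://localhost:9000"));
    assert_eq!(route(&rules, "/other"), None); // would fall through to `default`
}
```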
```
soli-proxy [dev|prod] [OPTIONS]

Modes:
  dev     Development mode with self-signed certificates
  prod    Production mode with Let's Encrypt support

Environment Variables:
  SOLI_CONFIG_PATH    Path to proxy.conf (default: ./proxy.conf)
```

Project layout:

```
soli-proxy/
├── Cargo.toml
├── config.toml            # Main configuration
├── proxy.conf             # Proxy rules
├── src/
│   ├── main.rs            # Entry point
│   ├── lib.rs             # Library root
│   ├── bin/
│   │   ├── httptest.rs    # End-to-end proxy throughput test
│   │   └── hash-password.rs
│   ├── config/            # Config parsing & hot reload
│   ├── server/            # HTTP/HTTPS server
│   ├── admin/             # Admin API server
│   ├── acme/              # ACME / Let's Encrypt
│   ├── tls.rs             # TLS & certificate management
│   ├── circuit_breaker.rs
│   ├── metrics.rs         # Prometheus-format metrics
│   ├── pool.rs            # Connection pool
│   ├── auth.rs            # Authentication
│   ├── app/               # App management & blue-green deploy
│   └── shutdown.rs        # Graceful shutdown
├── benches/
│   ├── routing.rs         # Rule matching & scaling benchmarks
│   ├── components.rs      # Circuit breaker, load balancer, metrics
│   └── config_parsing.rs  # Config file parsing benchmarks
└── scripts/               # Helper scripts
```
Built on Tokio and Hyper with SO_REUSEPORT multi-listener architecture.
| Endpoint | Throughput | p50 | p95 | p99 |
|---|---|---|---|---|
| Proxy (default route → backend) | 228,196 req/s | 0.64 ms | 0.92 ms | 1.20 ms |
| Admin API (GET /api/v1/status) | 508,049 req/s | 0.37 ms | 0.58 ms | 0.71 ms |

| Component | Operation | Time |
|---|---|---|
| Routing | Domain match | 54 ns |
| Routing | Regex match | 57 ns |
| Routing | 500 rules worst-case | 587 ns |
| Circuit breaker | is_available (1k targets) | 18 ns |
| Load balancer | select_index (round-robin) | 1.6 ns |
| Metrics | record_request | 29 ns |
| Metrics | format_metrics (1k requests) | 601 ns |
| Config parsing | 5 rules | 6.9 µs |
| Config parsing | 100 rules | 45 µs |
```sh
# Criterion micro-benchmarks (routing, components, config parsing)
cargo bench

# End-to-end proxy throughput test
cargo run --release --bin httptest -- --requests 50000 --concurrency 200
```

Configuration changes are detected automatically:
- File watcher monitors proxy.conf
- On change, config is reloaded atomically
- New connections use new config
- Existing connections continue with old config
- Graceful draining of old connections
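The reload steps above can be sketched as a shared, atomically swapped config snapshot. This is a standard-library-only illustration of the pattern, not the actual implementation (which pairs the swap with a file watcher on `proxy.conf`); all names here are made up:

```rust
use std::sync::{Arc, RwLock};

#[derive(Debug, Clone)]
struct Config {
    routes: Vec<(String, String)>,
}

/// Shared handle: each connection takes a cheap snapshot (one Arc clone)
/// and keeps using it even after a reload swaps in a new config.
#[derive(Clone)]
struct ConfigHandle(Arc<RwLock<Arc<Config>>>);

impl ConfigHandle {
    fn new(initial: Config) -> Self {
        ConfigHandle(Arc::new(RwLock::new(Arc::new(initial))))
    }

    /// Snapshot the current config; held for a connection's lifetime.
    fn snapshot(&self) -> Arc<Config> {
        self.0.read().unwrap().clone()
    }

    /// Atomically replace the config; in-flight snapshots are unaffected.
    fn reload(&self, next: Config) {
        *self.0.write().unwrap() = Arc::new(next);
    }
}

fn main() {
    let handle = ConfigHandle::new(Config {
        routes: vec![("default".into(), "http://localhost:3000".into())],
    });

    // An existing connection grabs a snapshot...
    let old = handle.snapshot();

    // ...then the file watcher detects a change and reloads.
    handle.reload(Config {
        routes: vec![("default".into(), "http://localhost:4000".into())],
    });

    // New connections see the new config; the old one drains on its snapshot.
    assert_eq!(handle.snapshot().routes[0].1, "http://localhost:4000");
    assert_eq!(old.routes[0].1, "http://localhost:3000");
}
```

The old `Arc<Config>` is dropped only when the last draining connection releases it, which is what makes the reload safe without dropping connections.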
This project uses Conventional Commits for semantic release. Use the format `type(scope): description` (e.g. `feat(proxy): add retry`). Allowed types: feat, fix, docs, style, refactor, perf, test, chore, ci, build.

Optional setup:
- Commit template (shown as a reminder in the commit message box): `git config commit.template .gitmessage`
- Auto-fix non-conventional messages (prepends `chore:` if the first line doesn't match): `cp scripts/git-hooks/prepare-commit-msg .git/hooks/prepare-commit-msg && chmod +x .git/hooks/prepare-commit-msg`
MIT