This repository contains the solution to the coding challenge: a lightweight HTTP load balancer written in Rust, designed for high throughput and low latency. It distributes incoming HTTP requests across multiple backend servers using configurable routing strategies.
Build, test, and run:

```shell
cargo build --release
cargo test
./load-balancer \
    --port 3000 \
    --target-servers http://localhost:9000,http://localhost:9001 \
    --routing-policy round-robin
```

Usage:

```
load-balancer [OPTIONS]
```
```
Options:
  -p, --port <PORT>
          Port to listen on [default: 3000]
  -t, --target-servers <SERVERS>
          Comma-separated list of backend servers
          Example: http://server1:8000,http://server2:8000
  -r, --routing-policy <POLICY>
          Load balancing strategy [default: round-robin]
          Possible values:
          - round-robin: Distribute requests evenly
          - random:      Random server selection
      --target-servers-health-path <PATH>
          Path to check backend server health [default: /health]
      --health-checker-polling-seconds <SECONDS>
          Polling interval for health checks in seconds [default: 10]
  -h, --help
          Print help
  -V, --version
          Print version
```
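The `--target-servers-health-path` and `--health-checker-polling-seconds` options above describe a background poller that periodically probes each backend and records whether it is healthy. A minimal sketch of that polling loop follows; the function name, the shared status map, and the injected `check` callback are illustrative assumptions, not the project's actual API:

```rust
use std::collections::HashMap;
use std::sync::{Arc, Mutex};
use std::thread;
use std::time::Duration;

/// Hypothetical health checker: spawns a thread that polls every backend
/// at a fixed interval and records the result in a shared map.
fn spawn_health_checker(
    backends: Vec<String>,
    poll_secs: u64,
    check: impl Fn(&str) -> bool + Send + 'static,
) -> Arc<Mutex<HashMap<String, bool>>> {
    let status = Arc::new(Mutex::new(HashMap::new()));
    let shared = Arc::clone(&status);
    thread::spawn(move || loop {
        for backend in &backends {
            // In the real balancer this would be an HTTP GET to
            // {backend}{health_path}, treating a 200 response as healthy.
            let healthy = check(backend);
            shared.lock().unwrap().insert(backend.clone(), healthy);
        }
        thread::sleep(Duration::from_secs(poll_secs));
    });
    status
}

fn main() {
    // Stub check that always reports healthy, standing in for a real probe.
    let status = spawn_health_checker(
        vec!["http://localhost:9000".into(), "http://localhost:9001".into()],
        10,
        |_backend| true,
    );
    thread::sleep(Duration::from_millis(200));
    println!("{:?}", status.lock().unwrap());
}
```

The router would then consult the shared map and skip backends currently marked unhealthy.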
You can start a full mock environment with dummy backend servers using Docker:

```shell
make all
```

This will launch:
- Wakanda-LB on port 3000
- Two dummy backend servers on ports 9000 and 9001
```
┌─────────────┐
│   Client    │
└──────┬──────┘
       │ ← HTTP Request
       ▼
┌─────────────────┐
│   Wakanda-LB    │
│   (Port 3000)   │
│                 │
│  ┌───────────┐  │
│  │  Routing  │  │ ← Round Robin / Random
│  │  Strategy │  │
│  └───────────┘  │
└────┬────────┬───┘
     │        │
     ▼        ▼
┌─────────┐ ┌─────────┐
│ Backend │ │ Backend │
│ Server  │ │ Server  │
│  :9000  │ │  :9001  │
└─────────┘ └─────────┘
```
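The round-robin strategy shown in the diagram can be sketched with an atomic counter that advances on every request and wraps around the backend list. The `RoundRobin` type below is an illustrative assumption, not the project's actual selector:

```rust
use std::sync::atomic::{AtomicUsize, Ordering};

/// Hypothetical round-robin selector: each call to `next` atomically
/// advances a counter and picks the backend at that index, modulo the
/// number of backends, so requests are distributed evenly.
struct RoundRobin {
    counter: AtomicUsize,
    backends: Vec<String>,
}

impl RoundRobin {
    fn new(backends: Vec<String>) -> Self {
        Self { counter: AtomicUsize::new(0), backends }
    }

    fn next(&self) -> &str {
        // fetch_add returns the previous value, so the first call picks index 0.
        let i = self.counter.fetch_add(1, Ordering::Relaxed) % self.backends.len();
        &self.backends[i]
    }
}

fn main() {
    let rr = RoundRobin::new(vec![
        "http://localhost:9000".into(),
        "http://localhost:9001".into(),
    ]);
    println!("{}", rr.next()); // http://localhost:9000
    println!("{}", rr.next()); // http://localhost:9001
    println!("{}", rr.next()); // http://localhost:9000
}
```

The random policy would instead pick a uniformly random index on each request; the atomic counter makes the round-robin variant safe to share across worker threads without a lock.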
Run the included benchmark script against a running instance:

```shell
./benchmark.sh <max_request_count> <url>
```