A Rust library for writing HAProxy SPOE agents (SPOA — Stream Processing Offload Agent).
HAProxy's Stream Processing Offload Engine suspends an HTTP request, sends a NOTIFY frame
to the agent with key-value arguments, and waits for an ACK containing variables to set back.
This library handles all framing, handshake, pipelining, and connection lifecycle — you write
only the handler logic.
A comprehensive blog post covering the architecture in depth is available: *haproxy-spoe-rs: A Rust SPOA Agent Library for HAProxy*.
## Quick start

```rust
use haproxy_spoe::{Agent, Scope, TypedData};
use tokio::net::TcpListener;

#[tokio::main]
async fn main() {
    let listener = TcpListener::bind("0.0.0.0:9000").await.unwrap();
    Agent::new(|req| {
        let Some(msg) = req.get_message("check-client-ip") else { return };
        let Some(TypedData::IPv4(ip)) = msg.get("ip") else { return };
        let score = if ip.octets()[0] == 10 { 100i32 } else { 0 };
        req.set_var(Scope::Session, "ip_score", TypedData::Int32(score));
    })
    .serve(listener)
    .await
    .unwrap();
}
```

The handler is a plain `Fn(&mut Request)` — synchronous, no async, no trait boilerplate.
For graceful shutdown, wrap `agent.serve()` in a `tokio::select!` with signal futures (see
`examples/ip_reputation.rs`).
## HAProxy configuration

```
frontend http-in
    bind *:80
    filter spoe engine ip-reputation config /etc/haproxy/ip-reputation.conf
    http-request deny if { var(sess.ip.ip_score) -m int ge 80 }
```

```
# /etc/haproxy/ip-reputation.conf
[ip-reputation]
spoe-agent ip-reputation
    messages check-client-ip
    option var-prefix ip
    timeout hello 100ms
    timeout idle 30s
    timeout processing 15ms
    use-backend spoe-backend

spoe-message check-client-ip
    args ip=src
    event on-frontend-http-request
```

```
backend spoe-backend
    mode tcp
    server spoa 127.0.0.1:9000
```
## Dependencies

| Crate | Role |
|---|---|
| `tokio` | Async I/O and task runtime |
| `mimalloc` | Global allocator (better multi-threaded allocation) |
| `log = "0.4"` | Logging facade — zero overhead when no subscriber is registered |
No serialization framework, no codec crate. The SPOE wire format is implemented directly.
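As a flavor of what "implemented directly" means, here is a sketch of the SPOP variable-length integer encoding described in HAProxy's SPOE specification (`doc/SPOE.txt`). The function names are illustrative, not this crate's public API:

```rust
// SPOP varint (HAProxy doc/SPOE.txt): values < 240 take one byte; larger
// values spill 4 bits into the first byte, then 7 bits per continuation
// byte whose high bit marks "more bytes follow".
fn encode_varint(mut v: u64, out: &mut Vec<u8>) {
    if v < 240 {
        out.push(v as u8);
        return;
    }
    out.push(v as u8 | 0xF0);
    v = (v - 240) >> 4;
    while v >= 128 {
        out.push(v as u8 | 0x80);
        v = (v - 128) >> 7;
    }
    out.push(v as u8);
}

/// Returns (value, bytes consumed), or None if the buffer is truncated.
fn decode_varint(buf: &[u8]) -> Option<(u64, usize)> {
    let mut v = *buf.first()? as u64;
    if v < 240 {
        return Some((v, 1));
    }
    let mut shift = 4;
    for (i, &b) in buf[1..].iter().enumerate() {
        v += (b as u64) << shift;
        shift += 7;
        if b & 0x80 == 0 {
            return Some((v, i + 2));
        }
    }
    None // continuation bit set on the last available byte
}

fn main() {
    // Round-trip a few values across the size-class boundaries.
    let mut buf = Vec::new();
    for &n in &[0u64, 239, 240, 2288, 1_000_000] {
        buf.clear();
        encode_varint(n, &mut buf);
        let (decoded, len) = decode_varint(&buf).unwrap();
        assert_eq!((decoded, len), (n, buf.len()));
        println!("{n} -> {} byte(s)", buf.len());
    }
}
```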
## Architecture

One tokio task per TCP connection, one `JoinSet::spawn` per NOTIFY frame. The write path
uses an `mpsc::unbounded_channel` and a dedicated writer task with `BufWriter`: handler tasks
push encoded ACK bytes via a non-blocking channel send; the writer task drains with `try_recv`
and issues one `flush()` per burst — multiple ACKs per syscall under pipelining load.
## Benchmarks

Benchmark (`cargo run --release --example bench -- [FRAMES] [CONNECTIONS]`) against the
Go reference implementation:
| Config | Rust | Go | Speedup |
|---|---|---|---|
| 1 connection, 200 000 frames | ~687 000 fps | ~244 000 fps | ~2.8× |
| 4 connections, 1 000 000 frames | ~1 740 000 fps | ~354 000 fps | ~4.9× |
The Go bottleneck is one `conn.Write()` syscall per ACK. The Rust writer task batches
ACKs into one `flush()` per burst.
## Testing

```shell
cargo test     # 65 unit + integration tests
make vtc       # VTest end-to-end test (real HAProxy required)
make vtc-cov   # combined LLVM coverage report
```

Line coverage: 95.33% (unit tests + VTest against the `ip_reputation` example).
## Runtime configuration

The `ip_reputation` example is configured entirely through environment variables — no config
file needed.
| Variable | Default | Description |
|---|---|---|
| `SPOE_ADDR` | `0.0.0.0:9000` | TCP address the agent listens on |
| `TOKIO_WORKER_THREADS` | number of CPU cores | Tokio async worker threads; read automatically by the runtime |
| `RUST_LOG` | (logging off) | Log filter, e.g. `warn` or `haproxy_spoe=debug`. Requires an `env_logger` subscriber in the binary. |
Most operational tuning (connection limits, timeouts, rate limiting) lives on the HAProxy side in the SPOE config — see HAProxy configuration above.
## Container

```shell
podman build -f Containerfile -t spoe-agent .
podman run -e SPOE_ADDR=0.0.0.0:9000 \
    -e TOKIO_WORKER_THREADS=4 \
    -e RUST_LOG=warn \
    -p 9000:9000 spoe-agent
```
## License

MIT