"I don't shadow test. I shadow WIN."
— Harvey Specter, probably
A distributed shadow-mode traffic mirror with divergence analysis.
Because your rewrite says it works. Specter makes it prove it.
Specter sits in front of your services and plays the long game.
Every request that comes in gets forwarded to your live service as normal. Simultaneously, a silent copy gets fired at your shadow service — a canary, a rewrite, a new version, whatever you're testing. The shadow response never reaches the client. Instead, Specter compares the two, logs every divergence, and builds a statistical profile of how your new service behaves under real production traffic.
No fake load tests. No synthetic data. The real thing — with zero risk.
"Anyone can do it with perfect data. You want to be great? Test against the messy stuff." — Harvey Specter (we're paraphrasing)
Every team doing a service rewrite, database migration, or language port faces the same problem: you can't know if the new thing is correct until real traffic hits it — but you can't risk real traffic hitting it until you know it's correct.
That's the catch-22. Specter breaks it.
| Without Specter | With Specter |
|---|---|
| "It passed staging" 🤞 | "It matched 99.97% of production traffic" ✅ |
| Find bugs after cutover | Find bugs before cutover |
| Blind confidence | Evidence-based confidence |
| Sleepless deploy nights | Boring deploy afternoons |
Three commands to run Specter locally with Docker Compose:
```sh
git clone https://github.com/Dubjay/specter.git && cd specter
docker compose -f docker/docker-compose.yaml up --build
curl -s -H "X-User-ID: user-123" http://127.0.0.1:8080/profile
```

Optional: inspect aggregated divergence stats:

```sh
curl -s http://127.0.0.1:8080/api/stats
```

The terminal UI polls `GET /api/stats` every second and shows live divergence metrics.
For a one-command local demo (starts mock upstreams + Specter proxy + TUI):

```sh
make tui-demo
```

If Specter is already running on `:8080` (for example via Docker Compose):

```sh
TERM=xterm-256color go run ./cmd/specter --config internal/config/specter.yaml --ui tui
```

Start the two mock upstreams and Specter proxy:
```sh
go run ./cmd/testserver --port 3000 --mode live
go run ./cmd/testserver --port 3001 --mode shadow
go run ./cmd/specter --config internal/config/specter.yaml --ui proxy
```

Generate some traffic in another terminal:

```sh
for i in $(seq 1 20); do
  curl -s -H "X-User-ID: user-$i" http://127.0.0.1:8080/profile > /dev/null
done
```

Then open the TUI:

```sh
TERM=xterm-256color go run ./cmd/specter --config internal/config/specter.yaml --ui tui
```

Key bindings:

- `j`/`k` or `↑`/`↓`: move selection / scroll
- `Enter`: open the selected divergence drill-down
- `Esc` or `b`: return to the dashboard from the drill-down
- `q` (or `Ctrl+C`): quit
Specter config lives in YAML (examples: `internal/config/specter.yaml`, `internal/config/specter-1.yaml`, `internal/config/specter-2.yaml`).
- `listen` (string): Address and port for Specter to bind to, for example `":8080"`. Default: `":8080"`.
- `live_target` (string): Base URL for the live/primary upstream service that serves client traffic.
- `shadow_target` (string): Base URL for the shadow/candidate upstream service used for mirrored requests.
- `routing_key` (string): HTTP header name used for deterministic routing/ring ownership, e.g. `"X-User-ID"`. Default: `"X-User-ID"`.
- `node_name` (string): Unique node identifier in the Specter cluster.
- `bind_addr` (string): Gossip/memberlist bind address (host:port), for example `"0.0.0.0:7946"`.
- `peers` (array of strings): Seed peers to join on startup, e.g. `["10.0.0.10:7946", "10.0.0.11:7946"]`.
- `backend` (string): Storage backend. Supported values are `"badger"` and `"postgres"`. Default: `"badger"`.
- `badger_path` (string): Filesystem path for local BadgerDB data. Used when `backend: "badger"`. Default: `"./data/specter"`.
- `postgres_dsn` (string): DSN/connection string for Postgres. Used when `backend: "postgres"`.
- `rate` (float): Fraction of requests to mirror, from `0.0` to `1.0` (`1.0` mirrors all requests). Default: `1.0`.
- `divergence_only` (bool): If `true`, only stores events where the live and shadow responses diverge. Default: `false`.
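Putting the keys above together, a config might look like the following. The key names come from the reference above; the `specter` section name is implied by `specter.routing_key`, but the `cluster`, `storage`, and `sampling` section names are assumptions, so check the shipped examples under `internal/config/` for the exact layout.

```yaml
specter:
  listen: ":8080"
  live_target: "http://127.0.0.1:3000"
  shadow_target: "http://127.0.0.1:3001"
  routing_key: "X-User-ID"

cluster:
  node_name: "node-a"
  bind_addr: "0.0.0.0:7946"
  peers: ["10.0.0.10:7946", "10.0.0.11:7946"]

storage:
  backend: "badger"
  badger_path: "./data/specter"

sampling:
  rate: 1.0
  divergence_only: false
```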
Specter uses consistent hashing with virtual nodes: each cluster node is inserted many times on a hash ring, and each request key (from specter.routing_key, such as X-User-ID) is hashed to the nearest clockwise position. That determines request ownership consistently across the cluster, minimizes remapping when nodes join/leave, and keeps traffic distribution smoother than single-slot-per-node hashing.
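The ring mechanics can be sketched in a few dozen lines of Go. This is a minimal illustration, assuming FNV-1a hashing and a `node#replica` virtual-node key scheme; Specter's actual hash function and ring layout may differ.

```go
package main

import (
	"fmt"
	"hash/fnv"
	"sort"
)

// Ring is a minimal consistent-hash ring with virtual nodes.
type Ring struct {
	keys   []uint32          // sorted ring positions
	owners map[uint32]string // ring position -> node name
}

// hashKey maps a string to a position on the ring using FNV-1a.
func hashKey(s string) uint32 {
	h := fnv.New32a()
	h.Write([]byte(s))
	return h.Sum32()
}

// NewRing inserts each node `replicas` times (the virtual nodes),
// which smooths the traffic distribution across the cluster.
func NewRing(replicas int, nodes ...string) *Ring {
	r := &Ring{owners: make(map[uint32]string)}
	for _, n := range nodes {
		for i := 0; i < replicas; i++ {
			h := hashKey(fmt.Sprintf("%s#%d", n, i))
			r.keys = append(r.keys, h)
			r.owners[h] = n
		}
	}
	sort.Slice(r.keys, func(i, j int) bool { return r.keys[i] < r.keys[j] })
	return r
}

// Owner returns the node at the first ring position clockwise of the
// key's hash. Requests with the same routing key always land on the
// same node, and removing a node only remaps that node's slice of keys.
func (r *Ring) Owner(key string) string {
	h := hashKey(key)
	i := sort.Search(len(r.keys), func(i int) bool { return r.keys[i] >= h })
	if i == len(r.keys) { // wrapped past the last position: back to the start
		i = 0
	}
	return r.owners[r.keys[i]]
}

func main() {
	ring := NewRing(64, "node-a", "node-b", "node-c")
	fmt.Printf("user-123 -> %s\n", ring.Owner("user-123"))
	fmt.Println("deterministic:", ring.Owner("user-42") == ring.Owner("user-42"))
}
```

In Specter's case the lookup key would be the value of the `routing_key` header (e.g. `X-User-ID`), so every request from one user is owned by the same cluster node.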
Contributions are welcome.
- Fork and clone the repo.
- Create a feature branch.
- Run checks locally:

  ```sh
  go test ./...
  ```

- Open a pull request with a clear summary of the change and any behavior/UX impact.

If your change adds or modifies behavior, include corresponding tests under `internal/...` where appropriate.
