Distributed HTTP request tracer for Node.js — no code changes required.
```
reqflow proxy :4000 → http://localhost:3000  (api, rate=1.0)
reqflow api   :4001

method  url                  status  dur    trace
──────────────────────────────────────────────────────────
GET     /users               200     23ms   [a1b2c3d4]
POST    /orders              201     41ms   [e5f6a7b8]
GET     /orders/99/items     200     18ms   [e5f6a7b8]
GET     /users/42            404      8ms   [c9d0e1f2]
POST    /payments            500     61ms   [f3a4b5c6]

✖ 1 server error
```
You have two Node.js services calling each other. A request fails and you want to know which service caused it and how long each hop took. Your options are:
- Add `console.log` everywhere and grep through interleaved output from multiple processes
- Install an APM SDK in every service, redeploy everything, and wait for traces to appear in a cloud dashboard
- Use `cls-rtracer` — which only traces within a single process and requires code changes in every service
None of these are good when you just want to see what's happening locally in the next five minutes.
reqflow sits in front of your services as a transparent HTTP proxy. It intercepts requests, injects traceparent headers, records spans, and stitches them into a waterfall timeline — without touching a single line of your application code.
```sh
npm install -g reqflow
```

Requires Node.js ≥ 18.
```sh
reqflow start --target http://localhost:3000 --port 4000
```

Point all traffic to :4000 instead of :3000. reqflow forwards everything unchanged and records spans in the background. Your service doesn't know it's being proxied.
```sh
reqflow traces
```

Shows a numbered list of recent traces — timestamp, span count, root request, status code, and total duration.
```sh
reqflow show a1b2c3d4
```

Renders a proportional timing waterfall in your terminal. Partial traceId prefixes work — no need to copy the full 32-character ID.
Run one proxy per service. When Service A calls Service B, it passes the trace headers forward at the network layer. Both proxies see the same traceId and their spans are stitched into one tree.
```
Client → reqflow:4000 → Service A
                          ↓ (traceparent header forwarded)
         reqflow:4002 → Service B
```
Both proxies report to the same collector (:4001). reqflow show <id> renders the full cross-service waterfall. For a ready-to-use Docker setup, see docker-compose.example.yml.
```sh
reqflow export a1b2c3d4 --out exports/segment.json
```

Output is compatible with PutTraceSegments and the X-Ray daemon UDP format.
Flags for `reqflow start`:

| Flag | Default | Description |
|---|---|---|
| `--target <url>` | (required) | URL of the service to proxy |
| `--port <port>` | `4000` | Port the proxy listens on |
| `--api-port <port>` | `4001` | Port for the collector API |
| `--service <n>` | `service` | Label attached to every span from this proxy |
| `--sample-rate <rate>` | `1.0` | Fraction of requests to trace — 0.1 traces ~10% |
Flags for `reqflow traces`:

| Flag | Default | Description |
|---|---|---|
| `--limit <n>` | `20` | Max traces to display |
| `--api <url>` | `http://localhost:4001` | Collector API to query |
Flags for `reqflow show`:

| Flag | Default | Description |
|---|---|---|
| `--api <url>` | `http://localhost:4001` | Collector API to query |
Flags for `reqflow export`:

| Flag | Default | Description |
|---|---|---|
| `--api <url>` | `http://localhost:4001` | Collector API to query |
| `--out <file>` | — | Write to file instead of stdout |
Every request through the proxy goes through six steps:
| Step | What happens |
|---|---|
| Intercept | The proxy receives the request before it reaches your service |
| Assign | A traceId is read from an incoming traceparent header, or a new one is generated |
| Inject | traceparent, x-trace-id, and x-span-id are added to the forwarded request |
| Record | After the response, a span is stored: service name, method, URL, status, duration, timestamp, parentId |
| Stitch | Spans sharing a traceId are assembled into a parent→child tree via parentId references |
| Render | reqflow show draws a proportional timing bar for each span, sorted by start time |
Sampling is deterministic: the same traceId always produces the same decision, so a trace is either fully sampled or fully dropped — no partial traces where only some service spans are recorded.
Place a .reqflowrc in your project root:
```json
{
  "target": "http://localhost:3000",
  "port": 4000,
  "apiPort": 4001,
  "service": "api",
  "sampleRate": 1.0,
  "maxTraces": 500,
  "skipPaths": ["/health", "/healthz", "/metrics", "/ping"],
  "store": "memory"
}
```

Priority order: CLI flags > `.reqflowrc` > environment variables > defaults.
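That precedence can be expressed as a plain object spread, ordered from weakest to strongest layer. A minimal sketch — the function name `resolveConfig` is illustrative, and each layer is assumed to be a partial config object:

```javascript
// Later spreads win: defaults < environment variables < .reqflowrc < CLI flags
function resolveConfig({ defaults = {}, env = {}, file = {}, cli = {} }) {
  return { ...defaults, ...env, ...file, ...cli };
}

const config = resolveConfig({
  defaults: { port: 4000, sampleRate: 1.0 },
  env: { port: 5000 },        // e.g. REQFLOW_PORT=5000
  file: { sampleRate: 0.5 },  // e.g. from .reqflowrc
  cli: { port: 6000 },        // e.g. --port 6000
});
console.log(config); // { port: 6000, sampleRate: 0.5 }
```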
Environment variables:
| Variable | Maps to |
|---|---|
| `REQFLOW_TARGET` | `target` |
| `REQFLOW_PORT` | `port` |
| `REQFLOW_API_PORT` | `apiPort` |
| `REQFLOW_SERVICE` | `service` |
| `REQFLOW_SAMPLE_RATE` | `sampleRate` |
| `REQFLOW_STORE` | `store` (`memory` or `redis`) |
| `REDIS_URL` | Redis connection string when `store=redis` |
reqflow traces, reqflow show, and reqflow export are read-only. They query the collector API and render data — they never modify anything.
reqflow start writes spans to the collector as requests pass through. It will:
- Forward all requests unchanged to your target service — your service sees the same request, plus trace headers
- Skip paths in `skipPaths` entirely — no tracing, no recording, no overhead
- Emit a 502 span if the target service is unreachable, so connection errors appear in the waterfall
The collector API is unauthenticated. The default port is :4001. Don't expose it outside localhost — anyone who can reach it can read all captured trace data including request URLs and headers.
The memory store holds all spans in process memory. At the default maxTraces: 500, this is a few MB at most for typical payloads. For sustained high traffic, use the Redis store.
reqflow never modifies your service's responses. It only intercepts at the proxy layer — status codes, headers, and bodies are forwarded byte-for-byte to the original caller.
```sh
# 1. Start your service as normal
node server.js

# 2. Start reqflow in front of it
reqflow start --target http://localhost:3000 --port 4000 --service api

# 3. Point traffic at the proxy, not your service directly
curl http://localhost:4000/users
curl http://localhost:4000/users/1
curl -X POST http://localhost:4000/orders -H "Content-Type: application/json" -d '{"item":"book"}'

# 4. List what was captured
reqflow traces

# 5. Show a waterfall (any traceId prefix works)
reqflow show a1b2c3d4

# 6. Export to X-Ray format
reqflow export a1b2c3d4 --out exports/segment.json
```

- The collector API has no authentication. Don't expose `:4001` outside your local network.
- The memory store keeps up to `maxTraces` traces in-process. For high-traffic scenarios, use the Redis store with a TTL.
- reqflow intercepts inbound requests only. Outbound calls your service makes are traced only when the downstream service also has a reqflow proxy in front of it.
- WebSocket upgrades and long-polling connections are forwarded but not recorded as spans.
- Node.js ≥ 18
- npm ≥ 9
```sh
git clone https://github.com/santoshkumar-in/reqflow.git
cd reqflow
npm install
```

```sh
# Run directly
node bin/reqflow.js start --target http://localhost:3000 --port 4000

# Or link it so `reqflow` resolves as a command globally on your machine
npm link
reqflow start --target http://localhost:3000 --port 4000
```

`npm link` creates a symlink from your global `bin` to `bin/reqflow.js`. Run `npm unlink -g reqflow` to remove it.
```
bin/
└── reqflow.js           # CLI entry point — wires Commander commands
src/
├── proxy.js             # HTTP intercept proxy — header injection, span emission
├── collector.js         # Span store + trace tree stitching (async, pluggable backend)
├── tracer.js            # ID generation, W3C traceparent parsing, header normalisation
├── waterfall.js         # Terminal waterfall renderer — proportional timing bars
├── sampler.js           # Deterministic request sampling via traceId bucket
├── exporter.js          # AWS X-Ray segment format serialisation
├── index.js             # Public API re-exports
├── store/
│   ├── memory.js        # Bounded in-memory store (default)
│   └── redis.js         # Redis-backed store with TTL + sorted index
├── commands/
│   ├── start.js         # `reqflow start` — proxy + collector + HTTP API
│   ├── traces.js        # `reqflow traces` — fetch and render trace list
│   ├── show.js          # `reqflow show` — fetch and render waterfall
│   └── export.js        # `reqflow export` — serialise trace to X-Ray JSON
└── utils/
    ├── config.js        # .reqflowrc loader — CLI > file > env > defaults
    └── logger.js        # Structured stderr logger with level filter and JSON mode
__tests__/
├── tracer.test.js       # ID generation, header extraction, normalisation, propagation
├── collector.test.js    # addSpan, buildTree, eviction, circular refs
├── proxy.test.js        # Header injection, skip paths, sampler integration
├── waterfall.test.js    # Smoke tests for all edge case inputs
├── sampler.test.js      # Distribution, determinism, describeSampling
├── exporter.test.js     # X-Ray segment shape, fault/error flags, subsegments
└── config.test.js       # DEFAULTS, readEnv, loadConfig merging and clamping
```
src/proxy.js is where traces begin. The proxyReq handler assigns traceId/spanId, injects headers, and stores context on req._reqflow. The proxyRes handler reads that context and fires onSpan(). The error handler emits a 502 span so failures are visible in the waterfall. DEFAULT_SKIP_PATHS is an exported constant — override it per-instance via the skipPaths option.
src/collector.js receives spans and assembles trace trees. buildTree() converts the flat span list into a parent→child nested structure using a byId map and a visited Set to guard against circular parentId references. All store calls are async so the memory and Redis backends are interchangeable without touching collector logic.
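A `buildTree()` along those lines might look like this — a simplified sketch of the idea, not the actual collector code (in this version a span whose parent chain is cyclic is demoted to a root rather than attached):

```javascript
// Assemble a flat span list into a parent→child tree.
function buildTree(spans) {
  const byId = new Map(spans.map((s) => [s.spanId, { ...s, children: [] }]));
  const roots = [];
  for (const node of byId.values()) {
    const parent = node.parentId ? byId.get(node.parentId) : undefined;
    if (parent && !createsCycle(byId, node)) parent.children.push(node);
    else roots.push(node); // missing parent or cyclic chain → treat as root
  }
  return roots;
}

// Walk the parentId chain from `node`; revisiting a spanId means a cycle.
function createsCycle(byId, node) {
  const visited = new Set();
  for (let cur = node; cur; cur = cur.parentId ? byId.get(cur.parentId) : undefined) {
    if (visited.has(cur.spanId)) return true;
    visited.add(cur.spanId);
  }
  return false;
}

const tree = buildTree([
  { spanId: "a", service: "api", parentId: null },
  { spanId: "b", service: "db", parentId: "a" },
]);
console.log(tree[0].children[0].service); // "db"
```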
src/tracer.js handles header parsing. normaliseHeaders() lowercases every key before any lookup — this is what makes Traceparent, TRACEPARENT, and traceparent all work correctly on HTTP/1.1. extractContext() always prefers W3C traceparent over the custom x-trace-id headers when both are present.
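The two behaviours — case-insensitive lookup and W3C-first preference — can be sketched as follows. The function bodies here are illustrative, not copied from `src/tracer.js`:

```javascript
// Lowercase every header key so lookups are case-insensitive,
// matching HTTP/1.1's case-insensitive field-name semantics.
function normaliseHeaders(headers) {
  return Object.fromEntries(
    Object.entries(headers).map(([k, v]) => [k.toLowerCase(), v])
  );
}

// Prefer the W3C traceparent header; fall back to the custom x-trace-id.
function extractContext(rawHeaders) {
  const h = normaliseHeaders(rawHeaders);
  const m = h["traceparent"] &&
    /^00-([0-9a-f]{32})-([0-9a-f]{16})-[0-9a-f]{2}$/.exec(h["traceparent"]);
  if (m) return { traceId: m[1], parentSpanId: m[2] };
  if (h["x-trace-id"]) return { traceId: h["x-trace-id"], parentSpanId: h["x-span-id"] };
  return null;
}

const ctx = extractContext({
  Traceparent: "00-a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6-b7ad6b7169203331-01",
  "x-trace-id": "ffffffffffffffffffffffffffffffff", // ignored: traceparent wins
});
console.log(ctx.traceId); // "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"
```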
src/sampler.js uses the first 8 hex characters of the traceId as a deterministic bucket: parseInt(traceId.slice(0,8), 16) / 0xffffffff. A trace is always fully sampled or fully dropped — you never get partial traces where one service recorded a span and another didn't.
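That bucketing scheme fits in a few lines. The helper name `shouldSample` and the strict `<` at the boundary are assumptions; the bucket formula is the one quoted above:

```javascript
// Deterministic sampling: map the first 8 hex chars of the traceId
// to a bucket in [0, 1], then compare it against the sample rate.
function shouldSample(traceId, rate) {
  const bucket = parseInt(traceId.slice(0, 8), 16) / 0xffffffff;
  return bucket < rate;
}

const id = "a1b2c3d4e5f6a7b8c9d0e1f2a3b4c5d6"; // bucket ≈ 0.632
console.log(shouldSample(id, 1.0)); // true  — traced at rate 1.0
console.log(shouldSample(id, 0.5)); // false — same id, same decision every time
```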
```sh
npm test
```

Tests use Jest with `--experimental-vm-modules` for ESM support. To test a specific file:

```sh
node --experimental-vm-modules node_modules/.bin/jest tracer
```

To run with coverage:

```sh
npm run test:coverage
```

No linter is configured by default. To add ESLint:

```sh
npm install --save-dev eslint
npx eslint --init
```

The most useful local test is running reqflow against an actual HTTP service and watching it capture real traces.
Single service — basic tracing
Start a minimal Express app in a scratch directory:
```sh
mkdir /tmp/test-svc && cd /tmp/test-svc
npm init -y && npm pkg set type=module && npm install express
```

Paste this into `server.js`:
```js
import express from "express";
const app = express();
app.use(express.json());
app.get("/users", (req, res) => res.json([{ id: 1, name: "Alice" }]));
app.get("/users/:id", (req, res) => res.json({ id: req.params.id }));
app.post("/orders", (req, res) => res.status(201).json({ orderId: "ord_001" }));
app.get("/health", (req, res) => res.json({ ok: true }));
app.listen(3000, () => console.log("service :3000"));
```

```sh
node server.js
# → service :3000
```

In a second terminal, start reqflow in front of it:
```sh
cd /path/to/reqflow
node bin/reqflow.js start --target http://localhost:3000 --port 4000 --service api
```

In a third terminal, send traffic through the proxy — not to :3000 directly:
```sh
curl http://localhost:4000/users
curl http://localhost:4000/users/1
curl -X POST http://localhost:4000/orders -H "Content-Type: application/json" -d '{"item":"book"}'
curl http://localhost:4000/users/999
```

Then inspect:

```sh
node bin/reqflow.js traces --api http://localhost:4001
node bin/reqflow.js show <traceId-prefix> --api http://localhost:4001
```

Confirm that `GET /health` does not appear in `reqflow traces` — it's in `DEFAULT_SKIP_PATHS` and is forwarded silently without recording.
Multi-service — cross-service stitching
Add a second service that calls the first. Paste into service-b.js inside /tmp/test-svc:
```js
import express from "express";
const app = express();
app.get("/cart", async (req, res) => {
  // Forward all incoming trace headers so spans stitch into one trace
  const fwd = {};
  for (const h of ["traceparent", "x-trace-id", "x-span-id", "x-parent-span-id"])
    if (req.headers[h]) fwd[h] = req.headers[h];
  // Call service-a through its reqflow proxy (:4000), not directly (:3000)
  const user = await fetch("http://localhost:4000/users/1", { headers: fwd }).then(r => r.json());
  res.json({ cart: [], user });
});
app.listen(3001, () => console.log("service-b :3001"));
```

```sh
node /tmp/test-svc/service-b.js
```

Start a second reqflow proxy for service-b:
```sh
node bin/reqflow.js start \
  --target http://localhost:3001 \
  --port 4002 \
  --api-port 4003 \
  --service cart
```

Hit service-b through its proxy and check both collectors:
```sh
curl http://localhost:4002/cart
node bin/reqflow.js traces --api http://localhost:4001   # api spans
node bin/reqflow.js traces --api http://localhost:4003   # cart spans
node bin/reqflow.js show <traceId-prefix> --api http://localhost:4001
```

The same traceId appears in both collectors — the cross-service span tree is stitched via the forwarded traceparent header.
Sampling — verify capture rate
```sh
node bin/reqflow.js start --target http://localhost:3000 --port 4000 --sample-rate 0.3
for i in $(seq 1 30); do curl -s http://localhost:4000/users > /dev/null; done
node bin/reqflow.js traces --api http://localhost:4001
# Expect roughly 9 traces captured (~30%)
```

Sampling is deterministic — the same traceId always produces the same decision, so re-sending an identical request won't flip it from sampled to dropped.
Redis store — spans survive proxy restarts
```sh
# Requires Redis (brew install redis && brew services start redis on macOS)
REQFLOW_STORE=redis REDIS_URL=redis://localhost:6379 \
  node bin/reqflow.js start --target http://localhost:3000 --port 4000

curl http://localhost:4000/users
# Ctrl-C the proxy, restart it with the same command, then:
node bin/reqflow.js traces --api http://localhost:4001
# The trace from before the restart is still there
```

X-Ray export — verify segment format
```sh
node bin/reqflow.js export <traceId-prefix> --out /tmp/seg.json --api http://localhost:4001
cat /tmp/seg.json
```

The output is a JSON array. The root segment should have `trace_id` in `1-xxxxxxxx-xxxxxxxxxxxxxxxxxxxxxxxx` format, plus `id`, `name`, `start_time`, `end_time`, `http`, `fault`, and `error` fields. Child spans appear as subsegments.
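For orientation, an exported array might look roughly like this. All values are illustrative; only the field names follow the list above and the X-Ray segment document format:

```json
[
  {
    "trace_id": "1-5f84c7a1-a1b2c3d4e5f6a7b8c9d0e1f2",
    "id": "b7ad6b7169203331",
    "name": "api",
    "start_time": 1602504609.123,
    "end_time": 1602504609.146,
    "http": {
      "request": { "method": "GET", "url": "/users" },
      "response": { "status": 200 }
    },
    "fault": false,
    "error": false,
    "subsegments": [
      {
        "id": "0000000000000001",
        "name": "cart",
        "start_time": 1602504609.130,
        "end_time": 1602504609.141
      }
    ]
  }
]
```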
Create a .reqflowrc in any directory you're testing from to avoid repeating flags:
```json
{
  "target": "http://localhost:3000",
  "port": 4000,
  "service": "dev",
  "sampleRate": 1.0,
  "maxTraces": 100
}
```

Then run `node bin/reqflow.js start` with no flags — the config file is found automatically by walking up from the current directory.
Bug reports and pull requests are welcome. For significant changes, open an issue first to discuss what you'd like to change.
When contributing:
- New header formats belong in `src/tracer.js` — keep `extractContext()` the single source of truth for context extraction
- Store backends live in `src/store/` and must implement the async `{ set, get, keys, size, clear }` interface to remain interchangeable
- The `onSpan` callback is the only coupling point between `proxy.js` and `collector.js` — keep it that way
MIT © Santosh Kumar