perf!: keep-alive, request body stream and faster proxyFetch (#124)
mitata-based benchmarks for httpxy server, proxyFetch, fast-proxy, @fastify/http-proxy, and http-proxy-3 with pre-bench validation.
1. Default keep-alive agents (http/https) with 256 maxSockets for connection reuse — skipped for HTTP/2 incoming requests.
2. LRU URL parse cache (256 entries) avoids repeated `new URL()` for the same string targets in server.ts, fetch.ts, and parseAddr.
3. Single header copy in `setupOutgoing` — merge req.headers and options.headers in one pass instead of two spread operations.

httpxy server is now on par with fast-proxy (~50µs/req vs ~43µs GET).
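The keep-alive change (item 1) can be sketched as a minimal standalone illustration of socket reuse with a shared agent. This is not httpxy's actual code; the 256/64 pool limits are taken from the PR description:

```typescript
import http from "node:http";
import type { AddressInfo } from "node:net";

// Shared keep-alive agent, mirroring the defaults described in the PR
// (256 max sockets, 64 max free sockets).
const agent = new http.Agent({ keepAlive: true, maxSockets: 256, maxFreeSockets: 64 });

function get(url: string): Promise<{ status: number; reused: boolean }> {
  return new Promise((resolve, reject) => {
    const req = http.get(url, { agent }, (res) => {
      res.resume();
      // req.reusedSocket is true when the agent handed back a pooled socket.
      res.on("end", () =>
        resolve({ status: res.statusCode ?? 0, reused: req.reusedSocket === true }),
      );
    });
    req.on("error", reject);
  });
}

const server = http.createServer((_req, res) => res.end("ok"));
await new Promise<void>((r) => server.listen(0, () => r()));
const { port } = server.address() as AddressInfo;

const first = await get(`http://127.0.0.1:${port}/`);
const second = await get(`http://127.0.0.1:${port}/`);
console.log(first.reused, second.reused); // → false true (second request reuses the pooled socket)

agent.destroy();
server.close();
```

Without the keep-alive agent, both requests would open fresh sockets, which is the per-request overhead this PR removes.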
📝 Walkthrough

Adds a Docker-based benchmarking suite and multiple proxy implementations, introduces default keep-alive HTTP/HTTPS agents and agent-selection changes in core utilities/fetch, updates docs to document agent behavior, and adjusts tests and workspace configuration accordingly.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant BenchRunner as Bench Runner
    participant Docker as Docker
    participant Target as Target Server
    participant Proxy as Proxy Server
    participant Bombardier as Bombardier
    participant Parser as Result Parser
    User->>BenchRunner: start benchmark
    BenchRunner->>Docker: build httpxy-bench image
    Docker-->>BenchRunner: image ready
    BenchRunner->>Docker: start target container
    Docker->>Target: launch (port 3000)
    Target-->>Docker: healthy
    BenchRunner->>Docker: start proxy containers
    Docker->>Proxy: launch (ports 3001-3006)
    Proxy-->>Docker: healthy
    BenchRunner->>Bombardier: run GET/POST load
    Bombardier->>Proxy: generate requests
    Proxy->>Target: forward requests
    Target-->>Proxy: responses
    Bombardier-->>BenchRunner: JSON results
    BenchRunner->>Parser: parse metrics (RPS, latency)
    Parser-->>BenchRunner: BenchResult
    BenchRunner->>User: print markdown tables
    BenchRunner->>Docker: cleanup containers
    Docker-->>BenchRunner: removed
```
Codecov Report

❌ Patch coverage is

Additional details and impacted files

```diff
@@            Coverage Diff             @@
##             main     #124      +/-   ##
==========================================
- Coverage   95.74%   94.91%   -0.84%
==========================================
  Files           8        8
  Lines         752      786      +34
  Branches      303      320      +17
==========================================
+ Hits          720      746      +26
- Misses         30       35       +5
- Partials        2        5       +3
```

☔ View full report in Codecov by Sentry.
- Default keep-alive agents (http/https) for connection reuse in proxyFetch
- Sync _toBuffer fast path avoids async overhead for string/ArrayBuffer bodies
- Response headers built from rawHeaders array pairs instead of Headers object
- Request headers: fast path for plain object (Object.assign) skipping Headers API

proxyFetch: ~170µs -> ~57µs GET, ~165µs -> ~55µs POST 1KB (3x faster)
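The sync fast path described above might look like this — a hypothetical helper, not httpxy's actual `_toBuffer`:

```typescript
// Hypothetical sketch of the sync fast path: string, ArrayBuffer, and
// TypedArray bodies become Buffers without awaiting, so simple requests
// skip a microtask hop. Other body types (streams, Blobs) signal a
// fallback to the async path by returning null.
function toBufferSync(body: unknown): Buffer | null {
  if (typeof body === "string") return Buffer.from(body);
  if (body instanceof ArrayBuffer) return Buffer.from(body);
  if (ArrayBuffer.isView(body)) {
    // Covers TypedArrays and DataView; wraps the same memory, no copy.
    return Buffer.from(body.buffer, body.byteOffset, body.byteLength);
  }
  return null;
}

console.log(toBufferSync("hi")?.toString()); // "hi"
console.log(toBufferSync(new Uint8Array([104, 105]))?.toString()); // "hi"
console.log(toBufferSync(new ReadableStream())); // null → async path
```

The point of the design is that the common body types never force the caller onto an `await`, so the hot path stays synchronous.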
…cache

- Restore original header merge order in setupOutgoing so options.headers still overrides :authority host for HTTP/2 requests
- Stream request bodies in proxyFetch instead of buffering (buffer only when followRedirects is enabled for 307/308 replay)
- Remove URL parse cache (cloning is slower than parsing, mutable cache is fragile)
- Document default keep-alive agent and agent: false opt-out in README
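The restored merge order can be sketched as follows. This is a simplified illustration with an invented function name; the real `setupOutgoing` handles more cases:

```typescript
import type { IncomingHttpHeaders } from "node:http";

// Simplified sketch of a single-pass merge with the restored order:
// the HTTP/2 :authority pseudo-header becomes the host first, then
// options.headers is applied, so an explicit host in options still wins.
function mergeHeaders(
  reqHeaders: IncomingHttpHeaders,
  optionsHeaders?: Record<string, string>,
): Record<string, unknown> {
  const out: Record<string, unknown> = {};
  for (const key of Object.keys(reqHeaders)) {
    if (key === ":authority") {
      out.host = reqHeaders[key]; // HTTP/2 pseudo-header → host
    } else if (!key.startsWith(":")) {
      out[key] = reqHeaders[key]; // drop other HTTP/2 pseudo-headers
    }
  }
  if (optionsHeaders) Object.assign(out, optionsHeaders); // options override last
  return out;
}

console.log(mergeHeaders({ ":authority": "a.example", accept: "*/*" }, { host: "b.example" }));
// → { host: "b.example", accept: "*/*" }
```

One pass over `req.headers` plus one `Object.assign` replaces the two object spreads while keeping the override precedence intact.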
Unify Headers and Array branches into single iterable loop.
Actionable comments posted: 6
🧹 Nitpick comments (4)
bench/src/target.ts (1)
11-20: Consider adding error handling for aborted requests.

If the client disconnects mid-request, the `error` event fires but is unhandled. For a benchmark target this is low risk, but adding a handler prevents potential uncaught exceptions.

Suggested fix

```diff
 const chunks: Buffer[] = [];
+req.on("error", () => {
+  res.destroy();
+});
 req.on("data", (c) => chunks.push(c));
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@bench/src/target.ts` around lines 11 - 20, The request stream handlers (the chunk collection block using chunks, req.on("data") and req.on("end")) lack handlers for aborted/error cases; add req.on("error", ...) and req.on("aborted", ...) handlers alongside the existing data/end listeners to stop collecting, clean up any state, and ensure the response is ended or destroyed safely (check res.writableEnded before calling res.end/destroy) to avoid uncaught exceptions when the client disconnects mid-request.

bench/Dockerfile (2)
4-4: Pin `pnpm` version for reproducible builds.

Using `pnpm@latest` can lead to non-reproducible builds if pnpm introduces breaking changes. Consider pinning to a specific version.

Suggested fix

```diff
-RUN corepack enable && corepack prepare pnpm@latest --activate
+RUN corepack enable && corepack prepare pnpm@9 --activate
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bench/Dockerfile` at line 4, The Dockerfile currently enables Corepack and installs pnpm using the floating tag "pnpm@latest" (the RUN line invoking corepack prepare pnpm@latest --activate), which risks non-reproducible builds; change this to pin a concrete version by replacing "pnpm@latest" with a specific version string (or introduce a build ARG like PNPM_VERSION and use it in the corepack prepare invocation) so the RUN command consistently installs the same pnpm release across builds.
1-13: Consider adding a non-root user for defense-in-depth.

Static analysis flagged running as root. While this is benchmark tooling (lower risk), adding a non-root user is a good practice for defense-in-depth.
Suggested fix
```diff
 COPY src/ src/
 COPY bench/ bench/
 COPY tsconfig.json ./
+
+RUN adduser --disabled-password --gecos "" benchuser
+USER benchuser
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bench/Dockerfile` around lines 1 - 13, Add a non-root user in the Dockerfile: create a group and user (e.g., "bench"), set HOME, chown the WORKDIR (/app) and any binaries copied (like /usr/local/bin/bombardier) to that user, and switch to that user with USER before running any install or runtime steps; ensure RUN corepack prepare/pnpm install and subsequent COPY/WORKDIR operations occur with proper ownership so the non-root user can access files and execute binaries.

bench/test.ts (1)
68-77: `collectBody` helper duplicated from `bench/src/httpxy-fetch.ts`.

This is a minor duplication. For a benchmark suite, this is acceptable, though you could extract it to a shared utility if more benchmark files need it in the future.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@bench/test.ts` around lines 68 - 77, The collectBody function is duplicated from bench/src/httpxy-fetch.ts; remove the duplicate and instead import and reuse the single implementation (or extract it into a new shared helper module) so benchmarks share the same code. Locate the collectBody declaration in bench/test.ts and replace it with an import from the existing bench/src/httpxy-fetch.ts export (or create a new bench/utils/http.ts exporting collectBody and update both bench/test.ts and bench/src/httpxy-fetch.ts to import it). Ensure the exported symbol name is collectBody so callers need no other changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@bench/bench.ts`:
- Line 23: The POST workload constant POST_BODY is only ~41 bytes but the run
label and summary claim "~1KB JSON"; either replace POST_BODY with a generated
~1 KiB JSON payload (e.g., repeat or expand the message string to reach ~1024
bytes) or change the run/summary strings to reflect the actual size. Locate
POST_BODY and the associated run label/summary usages (the POST case labels that
say "~1KB JSON" around the POST workload definitions) and update them
consistently so the payload size and label match; apply the same change to the
other POST references noted in the file.
- Line 15: The CLI option "sequential" currently defaults to false causing
parallel shared-target runs; change its default to true by updating the option
definition (sequential: { type: "boolean", short: "s", default: true }) so
isolated runs are the default, and make the corresponding change for the
duplicated option instance referenced later in the file; if you still want
parallel comparisons, instead modify the parallel-run logic to provision a
separate target per proxy rather than driving a single shared target.
- Around line 45-55: The cleanup() function currently removes any Docker
container matching the 'bench-' name prefix which is unsafe; change it to only
remove containers tracked by the local containers array (or alternatively start
containers with a per-run label and filter on that label). Specifically, update
the cleanup implementation to use the containers: string[] array (and e.g. its
entries or a map of started IDs) rather than execSync('docker ps -q --filter
"name=bench-"'), and when creating containers ensure you push the created
container IDs into containers so cleanup only calls docker rm -f on those IDs
(or if you choose the label approach, ensure container startup uses a unique
per-run label and change the execSync filter to use that label).
- Around line 169-177: parseResult currently ignores failed HTTP status counts
and transport errors, so update parseResult(json: string): BenchResult to detect
and fail on any non-2xx or transport errors present in the parsed
BombardierResult (symbol: parseResult, return type: BenchResult, input:
BombardierResult). After parsing const { result: r } = JSON.parse(json) as {
result: BombardierResult }, inspect r.statusCodes (or equivalent map of status
counts) and any error/transport fields (e.g., r.errors, r.transportErrors,
r.connectionErrors) and if any non-2xx count or any transport/error count is > 0
throw a descriptive Error (or return a failing result) so the run fails;
otherwise continue to compute rps, avgLatency, p50, p99 and bytesPerSec as
before.
In `@bench/src/http-proxy.ts`:
- Around line 7-10: The proxy instance created by httpProxy.createProxyServer
(symbol: proxy) lacks an 'error' listener so proxy errors can crash the process;
add proxy.on('error', ...) that handles errors emitted by proxy (reference
createProxyServer and proxy.web) and properly responds to the client: set an
appropriate status (e.g., 502), ensure headers aren't already sent, send a
minimal error body, and log the error; this prevents uncaught exceptions and
returns a safe HTTP error when the target is unreachable.
In `@bench/src/httpxy-server.ts`:
- Around line 8-10: The call to proxy.web inside the http.createServer callback
can return a rejected Promise and cause unhandled rejections; update the server
request handler (the http.createServer callback that calls proxy.web(req, res))
to handle the returned Promise by attaching a .catch handler (or using
async/await) that logs the error and sends an appropriate error response (e.g.,
502) to the client, ensuring the response is ended and errors from proxy.web are
not left unhandled.
⛔ Files ignored due to path filters (1)

- `pnpm-lock.yaml` is excluded by `!**/pnpm-lock.yaml`

📒 Files selected for processing (17)

- CHANGELOG.md
- README.md
- bench/Dockerfile
- bench/bench.ts
- bench/package.json
- bench/src/fast-proxy.ts
- bench/src/fastify.ts
- bench/src/http-proxy-3.ts
- bench/src/http-proxy.ts
- bench/src/httpxy-fetch.ts
- bench/src/httpxy-server.ts
- bench/src/target.ts
- bench/test.ts
- pnpm-workspace.yaml
- src/_utils.ts
- src/fetch.ts
- test/_utils.test.ts
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
src/fetch.ts (1)
28-32: ⚠️ Potential issue | 🟡 Minor

Update JSDoc to reflect the new default agent behavior.

The comment states `Default: false (no agent, no keep-alive)` but the implementation now defaults to shared keep-alive agents when `opts.agent` is not provided (lines 159-165). This inconsistency will mislead API consumers.

📝 Proposed fix

```diff
 /**
  * HTTP agent for connection pooling / reuse.
- * Default: `false` (no agent, no keep-alive).
+ * Default: shared keep-alive agent (256 maxSockets, 64 maxFreeSockets).
+ * Set to `false` to disable connection pooling.
  */
 agent?: any;
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/fetch.ts` around lines 28 - 32, The JSDoc for the agent property is out of date: update the comment above the agent?: any property to state that when opts.agent is not provided the implementation uses shared keep-alive HTTP(S) agents (i.e., a shared keep-alive agent is created and reused) rather than defaulting to false/no-agent; mention opts.agent as the override so callers know supplying opts.agent disables the shared default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/fetch.ts`:
- Around line 375-378: In the Readable-body branch inside src/fetch.ts (the
block handling "if (body instanceof Readable)"), replace the current
body.on("error", reject) handler with an error handler that both destroys the
outgoing request and rejects the promise (e.g., call req.destroy(err) then
reject(err)) so the socket is closed on stream errors; keep piping
(body.pipe(req)) but ensure the new handler references the same req and reject
symbols used in that scope.
Ensures the socket is properly closed when the request body stream errors.
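A runnable sketch of that behavior (names are illustrative, not httpxy's internals): on a body stream error, the outgoing request is destroyed so its socket closes, and only then is the promise rejected.

```typescript
import http from "node:http";
import type { AddressInfo } from "node:net";
import { Readable } from "node:stream";

// Send a streamed body; on a stream error, destroy the request
// (closing its socket) before rejecting, instead of only rejecting
// and leaving the socket hanging.
function send(body: Readable, url: string): Promise<string> {
  return new Promise((resolve, reject) => {
    const req = http.request(url, { method: "POST" }, (res) => {
      let data = "";
      res.on("data", (c) => (data += c));
      res.on("end", () => resolve(data));
    });
    req.on("error", () => {}); // destroy(err) re-emits here; promise already rejected
    body.on("error", (err) => {
      req.destroy(err); // close the upstream socket
      reject(err);
    });
    body.pipe(req);
  });
}

const server = http.createServer((req, res) => {
  req.on("error", () => res.destroy()); // client aborted mid-body
  req.on("data", () => {});
  req.on("end", () => res.end("done"));
});
await new Promise<void>((r) => server.listen(0, () => r()));
const { port } = server.address() as AddressInfo;
const url = `http://127.0.0.1:${port}/`;

const ok = await send(Readable.from(["ok"]), url);
console.log(ok); // "done"

const failing = new Readable({
  read() {
    this.destroy(new Error("boom"));
  },
});
const outcome = await send(failing, url).then(
  () => "resolved",
  (e: Error) => e.message,
);
console.log(outcome); // "boom"

server.close();
```

Without the `req.destroy(err)` call, the errored stream would stop piping but the half-written request socket could linger until a timeout.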
… detection

- Default to sequential mode to avoid shared-target contention
- Generate actual ~1KB JSON POST body matching the label
- Scope cleanup to only tracked container IDs
- Fail on non-2xx responses and transport errors in parseResult
This PR improves httpxy's throughput by reducing per-request allocations, enabling connection reuse, and streaming request bodies.
Connection reuse via default keep-alive agents
`ProxyServer` and `proxyFetch` now use shared `http.Agent`/`https.Agent` instances with `keepAlive: true` (256 max sockets, 64 max free sockets) instead of creating a new socket per request. HTTP/2 incoming requests are excluded to avoid conflicts with stream lifecycle. Explicit `agent` option still takes precedence. Set `agent: false` to opt out and restore the previous per-request connection behavior.

Streaming request bodies in `proxyFetch`

`proxyFetch` now pipes `ReadableStream` and `Blob` request bodies directly to the upstream request instead of buffering them in memory. When `followRedirects` is enabled, bodies are still buffered for 307/308 replay. `string`, `ArrayBuffer`, and `TypedArray` bodies are converted to `Buffer` synchronously in both paths.

Single-pass header merge in `setupOutgoing`

Previously, outgoing headers were built with two object spreads (`{ ...req.headers }` then `{ ...outgoing.headers, ...options.headers }`). Now `req.headers` and `options.headers` are merged in a single pass while preserving the original merge order (`:authority` → host override happens before `options.headers`, so explicit `host` in options still wins).

`proxyFetch` optimizations

- Response headers are built from `rawHeaders` pairs instead of iterating `res.headers` and constructing a `Headers` object.
- When `init.headers` is a plain `Record` (most common programmatic use), headers are merged with `Object.assign` instead of wrapping in a `Headers` instance.
- `ProxyServer`.

Benchmark suite
Added a Docker-based benchmark (`bench/`) comparing httpxy against `fast-proxy`, `@fastify/http-proxy`, `http-proxy`, and `http-proxy-3` using bombardier. Includes a validation test to ensure all implementations produce correct results before benchmarking.

Benchmarks

`bench/bench.ts -s -d 60s -c 128`

GET (no body)

POST (~1KB JSON)
Acknowledgements
Performance optimizations were inspired by analysis of fast-proxy and @fastify/http-proxy.
Summary by CodeRabbit
Documentation

- `agent` option defaults, new keep-alive behavior, and an Acknowledgements section.

New Features

- `false`).

Bug Fixes

Tests