
perf!: keep-alive, request body stream and faster proxyFetch#124

Merged
pi0 merged 16 commits into main from perf-improvements on Mar 26, 2026

Conversation


@pi0 pi0 commented Mar 25, 2026

This PR improves httpxy's throughput by reducing per-request allocations, enabling connection reuse, and streaming request bodies.

Connection reuse via default keep-alive agents

ProxyServer and proxyFetch now use shared http.Agent / https.Agent instances with keepAlive: true (256 max sockets, 64 max free sockets) instead of creating a new socket per request. HTTP/2 incoming requests are excluded to avoid conflicts with stream lifecycle. Explicit agent option still takes precedence. Set agent: false to opt out and restore the previous per-request connection behavior.
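
The selection rules above can be sketched in a few lines. This is a minimal TypeScript sketch, not the PR's actual code: `defaultAgents` mirrors the name the PR exports from `src/_utils.ts`, while `selectAgent` is a hypothetical helper used here for illustration only.

```typescript
import http from "node:http";
import https from "node:https";

// Shared keep-alive pools (256 max sockets, 64 max free sockets), reused
// across requests instead of opening a new socket per request.
const defaultAgents = {
  "http:": new http.Agent({ keepAlive: true, maxSockets: 256, maxFreeSockets: 64 }),
  "https:": new https.Agent({ keepAlive: true, maxSockets: 256, maxFreeSockets: 64 }),
};

// Hypothetical helper illustrating the precedence described above: an explicit
// `agent` option always wins (including `agent: false` to opt out), HTTP/2
// incoming requests get no agent, everything else uses the shared pool.
function selectAgent(
  explicit: http.Agent | https.Agent | false | undefined,
  protocol: "http:" | "https:",
  isHttp2: boolean,
): http.Agent | https.Agent | false {
  if (explicit !== undefined) return explicit;
  if (isHttp2) return false;
  return defaultAgents[protocol];
}
```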

Streaming request bodies in proxyFetch

proxyFetch now pipes ReadableStream and Blob request bodies directly to the upstream request instead of buffering them in memory. When followRedirects is enabled, bodies are still buffered for 307/308 replay. string, ArrayBuffer, and TypedArray bodies are converted to Buffer synchronously in both paths.
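
That body-handling split can be sketched as follows. `prepareBody` is a hypothetical name for illustration (the PR's internal helper is `_toNodeStream`), and the sketch assumes Node 18+ where `Blob` and `Response` are global.

```typescript
import { Readable } from "node:stream";

// Hypothetical sketch: in-memory bodies become Buffers synchronously; streams
// are piped through, unless redirects may require a 307/308 replay, in which
// case they are buffered up front.
async function prepareBody(
  body: string | ArrayBuffer | Uint8Array | Blob | ReadableStream | undefined,
  followRedirects: boolean,
): Promise<Buffer | Readable | undefined> {
  if (body === undefined) return undefined;
  // Synchronous fast path for in-memory bodies.
  if (typeof body === "string") return Buffer.from(body);
  if (body instanceof ArrayBuffer) return Buffer.from(body);
  if (ArrayBuffer.isView(body)) {
    return Buffer.from(body.buffer, body.byteOffset, body.byteLength);
  }
  // Streamable bodies: buffer only when a redirect replay might be needed.
  if (followRedirects) {
    const ab = body instanceof Blob
      ? await body.arrayBuffer()
      : await new Response(body).arrayBuffer();
    return Buffer.from(ab);
  }
  const webStream = body instanceof Blob ? body.stream() : body;
  return Readable.fromWeb(webStream as any);
}
```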

Single-pass header merge in setupOutgoing

Previously, outgoing headers were built with two object spreads ({ ...req.headers } then { ...outgoing.headers, ...options.headers }). Now req.headers and options.headers are merged in a single pass while preserving the original merge order (:authority → host override happens before options.headers, so explicit host in options still wins).
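
Conceptually the single pass looks like this (`mergeHeaders` is a hypothetical standalone sketch of the behavior, not the PR's exact `setupOutgoing` code):

```typescript
// One pass over req.headers, then the :authority -> host override, then
// options.headers last, so an explicit `host` in options still wins.
function mergeHeaders(
  reqHeaders: Record<string, string | string[] | undefined>,
  optionHeaders: Record<string, string> | undefined,
  authority?: string,
): Record<string, string | string[] | undefined> {
  const out: Record<string, string | string[] | undefined> = {};
  for (const key of Object.keys(reqHeaders)) {
    if (key.startsWith(":")) continue; // drop HTTP/2 pseudo-headers
    out[key] = reqHeaders[key];
  }
  if (authority) out.host = authority; // :authority -> host, before options
  if (optionHeaders) Object.assign(out, optionHeaders); // options win
  return out;
}
```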

proxyFetch optimizations

  • Raw header pairs for response — Response headers are built from rawHeaders pairs instead of iterating res.headers and constructing a Headers object.
  • Plain-object header fast-path — When init.headers is a plain Record (most common programmatic use), headers are merged with Object.assign instead of wrapping in a Headers instance.
  • Default keep-alive agent — Same shared agent pool as ProxyServer.
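
The raw-header fast path relies on Node exposing response headers as a flat `[name1, value1, name2, value2, ...]` array. A small sketch (hypothetical helper name, for illustration):

```typescript
// Convert Node's flat rawHeaders array into name/value pairs, skipping the
// intermediate Headers object entirely.
function rawHeadersToPairs(rawHeaders: string[]): [string, string][] {
  const pairs: [string, string][] = [];
  for (let i = 0; i < rawHeaders.length; i += 2) {
    pairs.push([rawHeaders[i], rawHeaders[i + 1]]);
  }
  return pairs;
}
// The pairs are valid HeadersInit, so they can be passed straight to
// `new Response(body, { headers: pairs })`.
```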

Benchmark suite

Added a Docker-based benchmark (bench/) comparing httpxy against fast-proxy, @fastify/http-proxy, http-proxy, and http-proxy-3 using bombardier. Includes a validation test to ensure all implementations produce correct results before benchmarking.

Benchmarks

bench/bench.ts -s -d 60s -c 128

Duration: 60s | Connections: 128 | Mode: sequential

GET (no body)

| Proxy | Req/s | Scale | Avg | P50 | P99 | Throughput |
| --- | --- | --- | --- | --- | --- | --- |
| httpxy.server | 19694 | 1.00x | 6µs | 5µs | 33µs | 3.6MB/s |
| fast-proxy | 19664 | 1.00x | 7µs | 4µs | 38µs | 3.6MB/s |
| @fastify/http-proxy | 18957 | 0.96x | 7µs | 4µs | 44µs | 3.5MB/s |
| httpxy.proxyFetch | 15433 | 0.78x | 8µs | 6µs | 34µs | 2.8MB/s |
| http-proxy-3 | 13010 | 0.66x | 10µs | 10µs | 13µs | 2.0MB/s |
| http-proxy | 12893 | 0.65x | 10µs | 10µs | 13µs | 2.0MB/s |

POST (~1KB JSON)

| Proxy | Req/s | Scale | Avg | P50 | P99 | Throughput |
| --- | --- | --- | --- | --- | --- | --- |
| httpxy.server | 17316 | 1.00x | 7µs | 6µs | 31µs | 20.6MB/s |
| fast-proxy | 15365 | 0.89x | 8µs | 5µs | 42µs | 18.3MB/s |
| @fastify/http-proxy | 15117 | 0.87x | 8µs | 5µs | 47µs | 18.1MB/s |
| httpxy.proxyFetch | 13179 | 0.76x | 10µs | 7µs | 41µs | 15.7MB/s |
| http-proxy-3 | 11487 | 0.66x | 11µs | 11µs | 15µs | 13.4MB/s |
| http-proxy | 11052 | 0.64x | 12µs | 11µs | 14µs | 12.9MB/s |

Acknowledgements

Performance optimizations were inspired by analysis of fast-proxy and @fastify/http-proxy.

Summary by CodeRabbit

  • Documentation

    • README updated with clarified agent option defaults, new keep-alive behavior, and an Acknowledgements section.
    • CHANGELOG formatting fix to correctly render wildcard characters.
  • New Features

    • Default keep-alive connection pooling enabled for improved performance.
    • Option to explicitly disable connection reuse (`agent: false`).
  • Bug Fixes

    • More robust request/response header handling and improved request body streaming.
  • Tests

    • Tests updated to validate new agent/connection behavior.

pi0 and others added 3 commits March 25, 2026 22:48
mitata-based benchmarks for httpxy server, proxyFetch, fast-proxy,
@fastify/http-proxy, and http-proxy-3 with pre-bench validation.
1. Default keep-alive agents (http/https) with 256 maxSockets for
   connection reuse — skipped for HTTP/2 incoming requests.
2. LRU URL parse cache (256 entries) avoids repeated `new URL()` for
   the same string targets in server.ts, fetch.ts, and parseAddr.
3. Single header copy in `setupOutgoing` — merge req.headers and
   options.headers in one pass instead of two spread operations.

httpxy server is now on par with fast-proxy (~50µs/req vs ~43µs GET).

coderabbitai Bot commented Mar 25, 2026

📝 Walkthrough

Adds a Docker-based benchmarking suite and multiple proxy implementations, introduces default keep-alive HTTP/HTTPS agents and agent-selection changes in core utilities/fetch, updates docs to document agent behavior, and adjusts tests and workspace configuration accordingly.

Changes

| Cohort | File(s) | Summary |
| --- | --- | --- |
| Documentation | CHANGELOG.md, README.md | Escape markdown wildcard in changelog; update Options table to document agent as `http.Agent \| false`, default keep-alive behavior, formatting tweaks, and add Acknowledgements. |
| Benchmarking infra & tooling | bench/Dockerfile, bench/package.json, pnpm-workspace.yaml | Add Dockerfile for bench image, bench package manifest, and PNPM workspace entry for bench. |
| Benchmark runner & tests | bench/bench.ts, bench/test.ts | Add benchmark runner that builds/runs containers, health-checks services, invokes bombardier, parses results to Markdown tables; add test runner to validate proxy implementations. |
| Benchmark server implementations | bench/src/target.ts, bench/src/httpxy-server.ts, bench/src/httpxy-fetch.ts, bench/src/fast-proxy.ts, bench/src/fastify.ts, bench/src/http-proxy.ts, bench/src/http-proxy-3.ts | Add multiple proxy server entrypoints and an echo target server reading PORT/TARGET from env; each forwards requests to the upstream target. |
| Core library | src/_utils.ts | Introduce exported `defaultAgents` (http/https keep-alive Agents); change `setupOutgoing` to use explicit `options.agent` when provided, otherwise select a protocol-specific default agent or `false` for HTTP/2; change header assignment to iterate keys. |
| Fetch implementation | src/fetch.ts | Use `defaultAgents` when `opts.agent` is omitted; normalize `opts.agent` to allow explicit `false`; refactor header normalization, add `_toNodeStream`, buffer-for-redirects-only logic, accept Readable bodies, and forward raw response headers from `res.rawHeaders`. |
| Tests | test/_utils.test.ts | Update tests to expect default keep-alive agents when `agent` is unspecified and add/adjust tests for explicit `agent: false` behavior. |

Sequence Diagram(s)

```mermaid
sequenceDiagram
    actor User
    participant BenchRunner as Bench Runner
    participant Docker as Docker
    participant Target as Target Server
    participant Proxy as Proxy Server
    participant Bombardier as Bombardier
    participant Parser as Result Parser

    User->>BenchRunner: start benchmark
    BenchRunner->>Docker: build httpxy-bench image
    Docker-->>BenchRunner: image ready
    BenchRunner->>Docker: start target container
    Docker->>Target: launch (port 3000)
    Target-->>Docker: healthy
    BenchRunner->>Docker: start proxy containers
    Docker->>Proxy: launch (ports 3001-3006)
    Proxy-->>Docker: healthy
    BenchRunner->>Bombardier: run GET/POST load
    Bombardier->>Proxy: generate requests
    Proxy->>Target: forward requests
    Target-->>Proxy: responses
    Bombardier-->>BenchRunner: JSON results
    BenchRunner->>Parser: parse metrics (RPS, latency)
    Parser-->>BenchRunner: BenchResult
    BenchRunner->>User: print markdown tables
    BenchRunner->>Docker: cleanup containers
    Docker-->>BenchRunner: removed
```

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

Possibly related PRs

Suggested reviewers

  • sapphi-red

Poem

🐰 I hop with joy, benchmarks in tow,

Keep-alive agents warm and slow,
Docker pots and proxies play,
Metrics dance at break of day,
Logs and tables neatly show.

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

| Check name | Status | Explanation | Resolution |
| --- | --- | --- | --- |
| Docstring Coverage | ⚠️ Warning | Docstring coverage is 27.59%, below the required threshold of 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold. |

✅ Passed checks (2 passed)

| Check name | Status | Explanation |
| --- | --- | --- |
| Description Check | ✅ Passed | Check skipped: CodeRabbit's high-level summary is enabled. |
| Title check | ✅ Passed | The PR title "perf!: keep-alive, request body stream and faster proxyFetch" accurately summarizes the main performance improvements in the changeset: keep-alive agent pooling, request body streaming, and proxyFetch optimizations. |



codecov Bot commented Mar 25, 2026

Codecov Report

❌ Patch coverage is 89.47368% with 6 lines in your changes missing coverage. Please review.
✅ Project coverage is 94.91%. Comparing base (1de0a6c) to head (d9d81c5).
⚠️ Report is 3 commits behind head on main.

| Files with missing lines | Patch % | Lines |
| --- | --- | --- |
| src/fetch.ts | 87.23% | 4 Missing and 2 partials ⚠️ |
Additional details and impacted files
```diff
@@            Coverage Diff             @@
##             main     #124      +/-   ##
==========================================
- Coverage   95.74%   94.91%   -0.84%
==========================================
  Files           8        8
  Lines         752      786      +34
  Branches      303      320      +17
==========================================
+ Hits          720      746      +26
- Misses         30       35       +5
- Partials        2        5       +3
```


- Default keep-alive agents (http/https) for connection reuse in proxyFetch
- Sync _toBuffer fast path avoids async overhead for string/ArrayBuffer bodies
- Response headers built from rawHeaders array pairs instead of Headers object
- Request headers: fast path for plain object (Object.assign) skipping Headers API

proxyFetch: ~170µs -> ~57µs GET, ~165µs -> ~55µs POST 1KB (3x faster)
@pi0 pi0 changed the title perf: connection pooling, URL cache, single header copy perf: perf improvements Mar 25, 2026
@pi0 pi0 force-pushed the perf-improvements branch from 71f5ba4 to 2e69768 Compare March 26, 2026 09:12
@pi0 pi0 force-pushed the perf-improvements branch from 740e264 to 7989012 Compare March 26, 2026 09:18
@pi0 pi0 changed the title perf: perf improvements perf: keep-alive agents, URL cache, faster header merge and proxyFetch Mar 26, 2026
…cache

- Restore original header merge order in setupOutgoing so options.headers
  still overrides :authority host for HTTP/2 requests
- Stream request bodies in proxyFetch instead of buffering (buffer only
  when followRedirects is enabled for 307/308 replay)
- Remove URL parse cache (cloning is slower than parsing, mutable cache
  is fragile)
- Document default keep-alive agent and agent: false opt-out in README
@pi0 pi0 changed the title perf: keep-alive agents, URL cache, faster header merge and proxyFetch perf: keep-alive agents, streaming request bodies, faster header merge and proxyFetch Mar 26, 2026
@pi0 pi0 marked this pull request as ready for review March 26, 2026 09:54
@pi0 pi0 changed the title perf: keep-alive agents, streaming request bodies, faster header merge and proxyFetch perf: keep-alive, request body stream and faster proxyFetch Mar 26, 2026
pi0 and others added 2 commits March 26, 2026 10:58
Unify Headers and Array branches into single iterable loop.

@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (4)
bench/src/target.ts (1)

11-20: Consider adding error handling for aborted requests.

If the client disconnects mid-request, the error event fires but is unhandled. For a benchmark target this is low risk, but adding a handler prevents potential uncaught exceptions.

Suggested fix:

```diff
   const chunks: Buffer[] = [];
+  req.on("error", () => {
+    res.destroy();
+  });
   req.on("data", (c) => chunks.push(c));
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bench/src/target.ts` around lines 11 - 20, The request stream handlers (the
chunk collection block using chunks, req.on("data") and req.on("end")) lack
handlers for aborted/error cases; add req.on("error", ...) and req.on("aborted",
...) handlers alongside the existing data/end listeners to stop collecting,
clean up any state, and ensure the response is ended or destroyed safely (check
res.writableEnded before calling res.end/destroy) to avoid uncaught exceptions
when the client disconnects mid-request.
bench/Dockerfile (2)

4-4: Pin pnpm version for reproducible builds.

Using pnpm@latest can lead to non-reproducible builds if pnpm introduces breaking changes. Consider pinning to a specific version.

Suggested fix:

```diff
-RUN corepack enable && corepack prepare pnpm@latest --activate
+RUN corepack enable && corepack prepare pnpm@9 --activate
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bench/Dockerfile` at line 4, The Dockerfile currently enables Corepack and
installs pnpm using the floating tag "pnpm@latest" (the RUN line invoking
corepack prepare pnpm@latest --activate), which risks non-reproducible builds;
change this to pin a concrete version by replacing "pnpm@latest" with a specific
version string (or introduce a build ARG like PNPM_VERSION and use it in the
corepack prepare invocation) so the RUN command consistently installs the same
pnpm release across builds.

1-13: Consider adding a non-root user for defense-in-depth.

Static analysis flagged running as root. While this is benchmark tooling (lower risk), adding a non-root user is a good practice for defense-in-depth.

Suggested fix:

```diff
 COPY src/ src/
 COPY bench/ bench/
 COPY tsconfig.json ./
+
+RUN adduser --disabled-password --gecos "" benchuser
+USER benchuser
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bench/Dockerfile` around lines 1 - 13, Add a non-root user in the Dockerfile:
create a group and user (e.g., "bench"), set HOME, chown the WORKDIR (/app) and
any binaries copied (like /usr/local/bin/bombardier) to that user, and switch to
that user with USER before running any install or runtime steps; ensure RUN
corepack prepare/pnpm install and subsequent COPY/WORKDIR operations occur with
proper ownership so the non-root user can access files and execute binaries.
bench/test.ts (1)

68-77: collectBody helper duplicated from bench/src/httpxy-fetch.ts.

This is a minor duplication. For a benchmark suite, this is acceptable, though you could extract it to a shared utility if more benchmark files need it in the future.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@bench/test.ts` around lines 68 - 77, The collectBody function is duplicated
from bench/src/httpxy-fetch.ts; remove the duplicate and instead import and
reuse the single implementation (or extract it into a new shared helper module)
so benchmarks share the same code. Locate the collectBody declaration in
bench/test.ts and replace it with an import from the existing
bench/src/httpxy-fetch.ts export (or create a new bench/utils/http.ts exporting
collectBody and update both bench/test.ts and bench/src/httpxy-fetch.ts to
import it). Ensure the exported symbol name is collectBody so callers need no
other changes.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@bench/bench.ts`:
- Line 23: The POST workload constant POST_BODY is only ~41 bytes but the run
label and summary claim "~1KB JSON"; either replace POST_BODY with a generated
~1 KiB JSON payload (e.g., repeat or expand the message string to reach ~1024
bytes) or change the run/summary strings to reflect the actual size. Locate
POST_BODY and the associated run label/summary usages (the POST case labels that
say "~1KB JSON" around the POST workload definitions) and update them
consistently so the payload size and label match; apply the same change to the
other POST references noted in the file.
- Line 15: The CLI option "sequential" currently defaults to false causing
parallel shared-target runs; change its default to true by updating the option
definition (sequential: { type: "boolean", short: "s", default: true }) so
isolated runs are the default, and make the corresponding change for the
duplicated option instance referenced later in the file; if you still want
parallel comparisons, instead modify the parallel-run logic to provision a
separate target per proxy rather than driving a single shared target.
- Around line 45-55: The cleanup() function currently removes any Docker
container matching the 'bench-' name prefix which is unsafe; change it to only
remove containers tracked by the local containers array (or alternatively start
containers with a per-run label and filter on that label). Specifically, update
the cleanup implementation to use the containers: string[] array (and e.g. its
entries or a map of started IDs) rather than execSync('docker ps -q --filter
"name=bench-"'), and when creating containers ensure you push the created
container IDs into containers so cleanup only calls docker rm -f on those IDs
(or if you choose the label approach, ensure container startup uses a unique
per-run label and change the execSync filter to use that label).
- Around line 169-177: parseResult currently ignores failed HTTP status counts
and transport errors, so update parseResult(json: string): BenchResult to detect
and fail on any non-2xx or transport errors present in the parsed
BombardierResult (symbol: parseResult, return type: BenchResult, input:
BombardierResult). After parsing const { result: r } = JSON.parse(json) as {
result: BombardierResult }, inspect r.statusCodes (or equivalent map of status
counts) and any error/transport fields (e.g., r.errors, r.transportErrors,
r.connectionErrors) and if any non-2xx count or any transport/error count is > 0
throw a descriptive Error (or return a failing result) so the run fails;
otherwise continue to compute rps, avgLatency, p50, p99 and bytesPerSec as
before.

In `@bench/src/http-proxy.ts`:
- Around line 7-10: The proxy instance created by httpProxy.createProxyServer
(symbol: proxy) lacks an 'error' listener so proxy errors can crash the process;
add proxy.on('error', ...) that handles errors emitted by proxy (reference
createProxyServer and proxy.web) and properly responds to the client: set an
appropriate status (e.g., 502), ensure headers aren't already sent, send a
minimal error body, and log the error; this prevents uncaught exceptions and
returns a safe HTTP error when the target is unreachable.

In `@bench/src/httpxy-server.ts`:
- Around line 8-10: The call to proxy.web inside the http.createServer callback
can return a rejected Promise and cause unhandled rejections; update the server
request handler (the http.createServer callback that calls proxy.web(req, res))
to handle the returned Promise by attaching a .catch handler (or using
async/await) that logs the error and sends an appropriate error response (e.g.,
502) to the client, ensuring the response is ended and errors from proxy.web are
not left unhandled.



📥 Commits

Reviewing files that changed from the base of the PR and between 04aaaba and 5332bef.

⛔ Files ignored due to path filters (1)
  • pnpm-lock.yaml is excluded by !**/pnpm-lock.yaml
📒 Files selected for processing (17)
  • CHANGELOG.md
  • README.md
  • bench/Dockerfile
  • bench/bench.ts
  • bench/package.json
  • bench/src/fast-proxy.ts
  • bench/src/fastify.ts
  • bench/src/http-proxy-3.ts
  • bench/src/http-proxy.ts
  • bench/src/httpxy-fetch.ts
  • bench/src/httpxy-server.ts
  • bench/src/target.ts
  • bench/test.ts
  • pnpm-workspace.yaml
  • src/_utils.ts
  • src/fetch.ts
  • test/_utils.test.ts


@coderabbitai coderabbitai Bot left a comment


Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/fetch.ts (1)

28-32: ⚠️ Potential issue | 🟡 Minor

Update JSDoc to reflect the new default agent behavior.

The comment states Default: false (no agent, no keep-alive) but the implementation now defaults to shared keep-alive agents when opts.agent is not provided (lines 159-165). This inconsistency will mislead API consumers.

📝 Proposed fix

```diff
   /**
    * HTTP agent for connection pooling / reuse.
-   * Default: `false` (no agent, no keep-alive).
+   * Default: shared keep-alive agent (256 maxSockets, 64 maxFreeSockets).
+   * Set to `false` to disable connection pooling.
    */
   agent?: any;
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/fetch.ts` around lines 28 - 32, The JSDoc for the agent property is out
of date: update the comment above the agent?: any property to state that when
opts.agent is not provided the implementation uses shared keep-alive HTTP(S)
agents (i.e., a shared keep-alive agent is created and reused) rather than
defaulting to false/no-agent; mention opts.agent as the override so callers know
supplying opts.agent disables the shared default.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@src/fetch.ts`:
- Around line 375-378: In the Readable-body branch inside src/fetch.ts (the
block handling "if (body instanceof Readable)"), replace the current
body.on("error", reject) handler with an error handler that both destroys the
outgoing request and rejects the promise (e.g., call req.destroy(err) then
reject(err)) so the socket is closed on stream errors; keep piping
(body.pipe(req)) but ensure the new handler references the same req and reject
symbols used in that scope.



📥 Commits

Reviewing files that changed from the base of the PR and between 5332bef and b0ad03c.

📒 Files selected for processing (1)
  • src/fetch.ts

pi0 added 2 commits March 26, 2026 11:12
Ensures the socket is properly closed when the request body stream errors.
… detection

- Default to sequential mode to avoid shared-target contention
- Generate actual ~1KB JSON POST body matching the label
- Scope cleanup to only tracked container IDs
- Fail on non-2xx responses and transport errors in parseResult
@pi0 pi0 changed the title perf: keep-alive, request body stream and faster proxyFetch perf!: keep-alive, request body stream and faster proxyFetch Mar 26, 2026
@pi0 pi0 merged commit 8aa34ee into main Mar 26, 2026
5 of 7 checks passed
@pi0 pi0 deleted the perf-improvements branch March 26, 2026 10:33
