Phase 1: Local MITM proxy with CA lifecycle and trace capture#2
Conversation
- CA generation (RSA 4096 root, ECDSA P-256 leaf) via @peculiar/x509
- CA manager with auto-generation, expiry detection, secure key storage
- LRU cert cache (max 100 domains, 25-day expiry)
- CONNECT proxy with per-domain TLS terminators (Bun-compatible pattern)
- SNI parser for ClientHello hostname extraction
- HTTP/1.1 parser with chunked transfer decoding
- SSE stream reassembler for Anthropic and OpenAI formats
- Trace emitter with typed EventEmitter for captured API traces
- Response completion detection via Content-Length/chunked terminator
- Main process wired with CA init, proxy toggle, trace counter
- AGENTS.md updated with key Bun TLS learnings
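The "LRU cert cache (max 100 domains, 25-day expiry)" item above can be sketched as a tiny least-recently-used eviction. Everything here (`CacheEntry`, `evictLRU`, the string stand-in for the key/cert pair) is illustrative, not the repository's actual code:

```typescript
// Minimal LRU-eviction sketch (illustrative; not the real cert-cache code).
interface CacheEntry {
  pair: string;        // stand-in for the real key/cert pair
  expiresAt: number;   // epoch ms
  lastUsed: number;    // epoch ms, updated on every cache hit
}

const MAX_CACHE_SIZE = 100;
const cache = new Map<string, CacheEntry>();

function evictLRU(): void {
  let oldestKey: string | null = null;
  let oldestUsed = Infinity;
  for (const [key, entry] of cache) {
    if (entry.lastUsed < oldestUsed) {
      oldestUsed = entry.lastUsed;
      oldestKey = key;
    }
  }
  if (oldestKey !== null) cache.delete(oldestKey);
}

cache.set("a.example", { pair: "A", expiresAt: 0, lastUsed: 1 });
cache.set("b.example", { pair: "B", expiresAt: 0, lastUsed: 5 });
evictLRU(); // drops a.example, the least recently used entry
```

A real implementation would also honor `expiresAt` on reads, as the review's cache snippet below does.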
Warning: Rate limit exceeded

Your organization is not enrolled in usage-based pricing. Contact your admin to enable usage-based pricing to continue reviews beyond the rate limit, or try again in 11 minutes and 29 seconds.

⌛ How to resolve this issue? After the wait time has elapsed, a review can be triggered using the
We recommend that you space out your commits to avoid hitting the rate limit.

🚦 How do rate limits work? CodeRabbit enforces hourly rate limits for each developer per organization. Our paid plans have higher rate limits than the trial, open-source, and free plans. In all cases, we re-allow further reviews after a brief timeout. Please see our FAQ for further information.

ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
⛔ Files ignored due to path filters (2)
📒 Files selected for processing (1)
📝 Walkthrough

Adds a CONNECT-based TLS‑terminating MITM proxy with CA/key generation and leaf cert caching, HTTP/SSE parsing and reassembly, trace emission/types, SSE token extraction, tray integration for CA/proxy status, and related docs and runtime dependency declarations.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes
🚥 Pre-merge checks: 2 passed, 1 failed
❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
- Regenerate tray icon as black silhouette with alpha (macOS template)
- Add @2x retina variant (44x44)
- Remove title text from menu bar (icon only)
- Remove setTitle calls that showed "AgentTap (ON)" text
Actionable comments posted: 11
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bun/ca/ca-manager.ts`:
- Around line 31-42: The current load path reads CA_KEY_PATH/CA_CERT_PATH and
blindly returns them when isCertExpired(cert) is false, which lets corrupted
ca.pem be used; update the CA loading logic in ca-manager.ts (the block using
CA_KEY_PATH, CA_CERT_PATH, isCertExpired, regenerateCA, loaded, CA_DIR) to
validate readability and parseability of both key and cert before accepting
them: wrap the Bun.file(...).text() and certificate validation in a try/catch
(or otherwise check parse success), treat any I/O or parse error as an
expired/invalid cert and call regenerateCA(), and apply the same defensive
checks to the later similar block around lines 75-83 so unreadable/corrupted
files trigger regeneration instead of being loaded.
- Around line 12-20: The CA private key is being persisted under HOME via the
CA_DIR/CA_KEY_PATH/CA_CERT_PATH constants; change CA_DIR to a path inside the
application's sandboxed container (e.g., the app's data/userData directory or a
dedicated secure storage directory provided by the runtime) and update
CA_KEY_PATH and CA_CERT_PATH to join against that sandboxed CA_DIR instead of
process.env.HOME; ensure the code that reads/writes the key (references to
CA_KEY_PATH and CA_CERT_PATH) uses the new location, add a migration step to
move existing keys into the sandbox if present, and enforce strict file
permissions when creating the key files.
In `@src/bun/ca/cert-cache.ts`:
- Around line 19-39: Current code can trigger multiple simultaneous
generateLeafCert calls for the same domain; add a deduplication layer by
introducing an inFlight map (e.g., inFlightPromises: Map<string,
Promise<CertificatePair>>) and check it before calling generateLeafCert in the
cache-get path, so if inFlightPromises.get(domain) exists you await and return
that promise's result instead of starting a new generation; when starting
generation, set inFlightPromises.set(domain, promise), await it, then remove the
entry in finally and only then write the resolved pair into cache (using the
existing cache.set logic and evictLRU/MAX_CACHE_SIZE handling); also ensure
rejected promises are removed from inFlightPromises so subsequent requests can
retry.
In `@src/bun/ca/cert-generator.ts`:
- Around line 116-121: pemToDer currently returns the underlying Buffer's entire
ArrayBuffer which may include extra bytes or an offset and can break
crypto.subtle.importKey; change pemToDer to create the base64-decoded Buffer,
then return a sliced ArrayBuffer that uses the Buffer's byteOffset and
byteLength (i.e., slice buf.buffer from buf.byteOffset to buf.byteOffset +
buf.byteLength) so the returned ArrayBuffer contains exactly the DER bytes
consumed by crypto.subtle.importKey.
In `@src/bun/proxy/proxy-server.ts`:
- Around line 192-200: The upstream close handler currently calls
clientSocket.destroy(), which aborts and discards buffered writes when
upstream.destroy() triggers; change the handler to call clientSocket.end() (or
use clientSocket.end() with a callback) so the socket performs a graceful close
and flushes any queued response bytes. Locate the upstream 'close'/'end' event
handler that invokes clientSocket.destroy() (referencing clientSocket and
upstream.destroy in proxy-server.ts) and replace destroy() calls with end(),
ensuring both places where upstream.destroy() is invoked (the non-chunked and
chunked response branches around where finalize() is called) use the updated
graceful close behavior.
- Around line 91-100: The promise returned around connectProxy.listen never
rejects on bind errors because the "error" handler only logs; update the Promise
in startProxy so its executor accepts (resolve, reject), attach a one-time
"error" listener on connectProxy that calls reject(err) (or rejects with a
descriptive Error) and a one-time "listening" listener that calls resolve();
ensure you remove the opposite listener on success/failure (or use once) so
connectProxy's "error" handler no longer just logs but propagates the failure to
the Promise; reference connectProxy and the startProxy promise/listen call to
locate where to add reject behavior.
- Around line 56-84: Wrap the async CONNECT handler (the callback passed to
clientSocket.once("data", async (chunk) => {...})) in a try/catch and handle
failures from getOrCreateDomainTLS(domain) and any TLS initialization steps: on
error, write an HTTP/1.1 502 Bad Gateway response (e.g.
clientSocket.write("HTTP/1.1 502 Bad Gateway\r\n\r\n")), then destroy/end the
client socket; ensure any resources (like any partially created local sockets)
are cleaned up and that promise rejections from getOrCreateDomainTLS are caught
so they don't remain unhandled.
In `@src/bun/proxy/sni-parser.ts`:
- Around line 25-27: The code reads a 16-bit length with
data.readUInt16BE(offset) into cipherSuitesLen without first ensuring there are
at least 2 bytes available; add a bounds guard before that call (e.g., check
offset + 2 <= data.length or offset <= data.length - 2) and return null if the
buffer is too short, then proceed to advance offset and validate the full cipher
suites slice (offset + cipherSuitesLen) fits within data.length before using it;
update the logic around the cipherSuitesLen/readUInt16BE usage in the SNI/TLS
parsing function so it gracefully handles truncated input instead of throwing
RangeError.
In `@src/bun/sse/sse-reassembler.ts`:
- Line 16: The split in sse-reassembler.ts currently uses raw.split("\n\n")
(assigned to blocks) which fails for CRLF-delimited streams; normalize CRLF or
use a regex like raw.split(/\r?\n\r?\n/) (or first replace(/\r\n/g, "\n") then
split("\n\n")) to correctly separate SSE events, and apply the same change to
the other parsing block(s) in the same file (the code region around lines 25-38)
so all places that split raw event chunks handle both "\n\n" and "\r\n\r\n".
- Around line 51-55: The provider detection is using case-sensitive includes on
domain which can miss mixed-case hostnames; normalize domain to lowercase before
checks by replacing usages of domain.includes("anthropic") and
domain.includes("openai") with a lowercase comparison (e.g., compute a local
lowercasedDomain = domain.toLowerCase() and use
lowercasedDomain.includes("anthropic") / lowercasedDomain.includes("openai")),
and update any related branches (reassembleAnthropic, reassembleOpenAI) to use
this normalized value so provider-specific reassembly runs reliably for
mixed-case hostnames.
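The last prompt above (case-insensitive provider detection) reduces to normalizing the hostname once before matching. A minimal sketch — `detectProvider` is a hypothetical name, not the file's actual API:

```typescript
// Hypothetical helper: normalize the hostname once, then match providers.
function detectProvider(domain: string): "anthropic" | "openai" | "unknown" {
  const lowercasedDomain = domain.toLowerCase();
  if (lowercasedDomain.includes("anthropic")) return "anthropic";
  if (lowercasedDomain.includes("openai")) return "openai";
  return "unknown";
}

const a = detectProvider("API.Anthropic.com"); // mixed case still matches
const b = detectProvider("api.OpenAI.com");
const c = detectProvider("example.com");
```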
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 181f8af8-571c-487e-92b6-8cef3ade3d72
⛔ Files ignored due to path filters (1)
bun.lock is excluded by !**/*.lock
📒 Files selected for processing (13)
- AGENTS.md
- package.json
- src/bun/ca/ca-manager.ts
- src/bun/ca/cert-cache.ts
- src/bun/ca/cert-generator.ts
- src/bun/capture/trace-emitter.ts
- src/bun/capture/types.ts
- src/bun/index.ts
- src/bun/proxy/http-parser.ts
- src/bun/proxy/proxy-server.ts
- src/bun/proxy/sni-parser.ts
- src/bun/sse/sse-reassembler.ts
- src/shared/types.ts
```ts
const CA_DIR = join(
  process.env.HOME ?? "~",
  "Library",
  "Application Support",
  "AgentTap",
  "ca",
);
const CA_KEY_PATH = join(CA_DIR, "ca-key.pem");
const CA_CERT_PATH = join(CA_DIR, "ca.pem");
```
Do not persist the CA private key under $HOME/Library/....
This stores the MITM root key outside the app container, weakening the trust boundary around the most sensitive secret in the system. Based on learnings: never store the custom CA private key outside the app's sandboxed container.
```ts
if (existsSync(CA_KEY_PATH) && existsSync(CA_CERT_PATH)) {
  const key = await Bun.file(CA_KEY_PATH).text();
  const cert = await Bun.file(CA_CERT_PATH).text();

  if (isCertExpired(cert)) {
    console.log("[CA] Certificate expired, regenerating...");
    return regenerateCA();
  }

  loaded = { key, cert };
  console.log("[CA] Loaded existing CA from", CA_DIR);
  return loaded;
```
Treat unreadable CA certs as invalid.
If ca.pem is corrupted, isCertExpired() returns false, so Lines 31-42 load broken material instead of regenerating it. That defers the failure into proxy startup and leaf issuance.
Suggested fix:

```diff
 function isCertExpired(pem: string): boolean {
   try {
     const crypto = require("node:crypto");
     const cert = new crypto.X509Certificate(pem);
     return new Date(cert.validTo) < new Date();
   } catch {
-    return false;
+    return true;
   }
 }
```

Also applies to: 75-83
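The fixed behavior can be verified directly: an unparseable PEM now reads as "expired", so the caller regenerates instead of loading garbage. This uses Node's `X509Certificate` (available in Node ≥ 15.6 and Bun); the function mirrors the review's suggestion, exercised only on the error path here:

```typescript
import { X509Certificate } from "node:crypto";

// Returns true for expired OR unparseable certs, so a corrupted ca.pem
// triggers regeneration instead of being loaded.
function isCertExpired(pem: string): boolean {
  try {
    const cert = new X509Certificate(pem);
    return new Date(cert.validTo) < new Date();
  } catch {
    return true; // unreadable/corrupted material is treated as invalid
  }
}

const corrupted = isCertExpired("not a pem at all");
```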
```ts
const existing = cache.get(domain);
const now = Date.now();

if (existing && existing.expiresAt > now) {
  existing.lastUsed = now;
  return existing.pair;
}

const pair = await generateLeafCert(domain, caCert, caKey);

if (cache.size >= MAX_CACHE_SIZE) {
  evictLRU();
}

cache.set(domain, {
  pair,
  expiresAt: now + 25 * 86400_000, // 25 days (leaf certs valid 30)
  lastUsed: now,
});

return pair;
```
Deduplicate concurrent cache misses per domain.
Multiple simultaneous misses for the same domain will each generate a new cert. This is avoidable work on a hot path.
Proposed fix:

```diff
 const cache = new Map<string, CacheEntry>();
+const inflight = new Map<string, Promise<KeyCertPair>>();
@@
   if (existing && existing.expiresAt > now) {
     existing.lastUsed = now;
     return existing.pair;
   }
-  const pair = await generateLeafCert(domain, caCert, caKey);
+  const pending = inflight.get(domain);
+  if (pending) return pending;
+
+  const generation = generateLeafCert(domain, caCert, caKey).finally(() => {
+    inflight.delete(domain);
+  });
+  inflight.set(domain, generation);
+  const pair = await generation;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```ts
const cache = new Map<string, CacheEntry>();
const inflight = new Map<string, Promise<KeyCertPair>>();

const existing = cache.get(domain);
const now = Date.now();

if (existing && existing.expiresAt > now) {
  existing.lastUsed = now;
  return existing.pair;
}

const pending = inflight.get(domain);
if (pending) return pending;

const generation = generateLeafCert(domain, caCert, caKey).finally(() => {
  inflight.delete(domain);
});
inflight.set(domain, generation);
const pair = await generation;

if (cache.size >= MAX_CACHE_SIZE) {
  evictLRU();
}

cache.set(domain, {
  pair,
  expiresAt: now + 25 * 86400_000, // 25 days (leaf certs valid 30)
  lastUsed: now,
});

return pair;
```
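The in-flight deduplication pattern is easy to exercise in isolation. In this sketch `generateLeafCert` is a stub standing in for the real (expensive) generation; the assertion is that two concurrent misses for the same domain trigger exactly one generation:

```typescript
// Sketch of per-domain in-flight deduplication (illustrative names).
const inflight = new Map<string, Promise<string>>();
let generations = 0;

async function generateLeafCert(domain: string): Promise<string> {
  generations++;
  await new Promise((r) => setTimeout(r, 10)); // simulate async work
  return `cert-for-${domain}`;
}

function getCert(domain: string): Promise<string> {
  const pending = inflight.get(domain);
  if (pending) return pending; // second caller awaits the same promise

  const generation = generateLeafCert(domain).finally(() => {
    inflight.delete(domain); // rejected promises are removed too, so retries work
  });
  inflight.set(domain, generation);
  return generation;
}

// Two concurrent misses for the same domain -> one generation.
const [a, b] = await Promise.all([
  getCert("api.example.com"),
  getCert("api.example.com"),
]);
```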
```ts
function pemToDer(pem: string): ArrayBuffer {
  const b64 = pem
    .replace(/-----BEGIN [\w\s]+-----/, "")
    .replace(/-----END [\w\s]+-----/, "")
    .replace(/\s/g, "");
  return Buffer.from(b64, "base64").buffer;
```
🧩 Analysis chain (repository: GeiserX/AgentTap)

🏁 Script executed:

```shell
# First, let's examine the actual file content at the specified lines
cat -n src/bun/ca/cert-generator.ts | sed -n '110,130p'
```

🏁 Script executed:

```shell
# Check if pemToDer is used with crypto.subtle.importKey
rg -n "pemToDer|importKey" src/bun/ca/cert-generator.ts -A 3 -B 1
```

🏁 Script executed:

```shell
# Understand Buffer semantics with offset - let's check Node/Bun docs or any test files
fd -t f "buffer|crypto" src --type f | head -20
```
Return the exact DER slice from pemToDer().
Buffer.from(b64, "base64").buffer exposes the entire backing ArrayBuffer. When passed to crypto.subtle.importKey() at line 68, this can cause CA key import to fail if the underlying buffer has any offset or extra data beyond the valid DER bytes.
Suggested fix:

```diff
 function pemToDer(pem: string): ArrayBuffer {
   const b64 = pem
     .replace(/-----BEGIN [\w\s]+-----/, "")
     .replace(/-----END [\w\s]+-----/, "")
     .replace(/\s/g, "");
-  return Buffer.from(b64, "base64").buffer;
+  const der = Buffer.from(b64, "base64");
+  return der.buffer.slice(der.byteOffset, der.byteOffset + der.byteLength);
 }
```

📝 Committable suggestion
```ts
function pemToDer(pem: string): ArrayBuffer {
  const b64 = pem
    .replace(/-----BEGIN [\w\s]+-----/, "")
    .replace(/-----END [\w\s]+-----/, "")
    .replace(/\s/g, "");
  const der = Buffer.from(b64, "base64");
  return der.buffer.slice(der.byteOffset, der.byteOffset + der.byteLength);
}
```
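A quick runnable check of the slicing behavior the fix relies on: `Buffer.from` often allocates from Node's shared pool, so the naive `.buffer` can be far larger than the decoded bytes. The PEM content below is dummy test data, not a real certificate:

```typescript
// pemToDer per the suggested fix; the PEM here is dummy test data.
function pemToDer(pem: string): ArrayBuffer {
  const b64 = pem
    .replace(/-----BEGIN [\w\s]+-----/, "")
    .replace(/-----END [\w\s]+-----/, "")
    .replace(/\s/g, "");
  const der = Buffer.from(b64, "base64");
  // Slice so the result holds exactly the DER bytes, not the pooled allocation.
  return der.buffer.slice(der.byteOffset, der.byteOffset + der.byteLength);
}

const payload = Buffer.from([0x30, 0x03, 0x02, 0x01, 0x05]); // tiny dummy DER
const pem =
  "-----BEGIN TEST-----\n" + payload.toString("base64") + "\n-----END TEST-----\n";
const der = pemToDer(pem);
const bytes = new Uint8Array(der);
```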
```ts
clientSocket.once("data", async (chunk) => {
  const headStr = chunk.toString("latin1");

  if (headStr.startsWith("CONNECT ")) {
    const target = headStr.split("\r\n")[0].split(" ")[1];
    const domain = target.split(":")[0];

    // Get or create a per-domain TLS terminator
    const tlsPort = await getOrCreateDomainTLS(domain);

    // Send 200 to establish tunnel
    clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");

    // Wait for the TLS ClientHello, then bridge to the domain's TLS server
    clientSocket.once("data", (clientHello) => {
      const local = net.connect({ host: "127.0.0.1", port: tlsPort }, () => {
        // Forward the ClientHello
        local.write(clientHello);

        // Bidirectional relay for the rest of the TLS handshake + data
        clientSocket.on("data", (c) => local.write(c));
        local.on("data", (c) => clientSocket.write(c));
        local.on("end", () => clientSocket.end());
        clientSocket.on("end", () => local.end());
      });

      local.on("error", () => clientSocket.destroy());
      clientSocket.on("error", () => local.destroy());
    });
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -name "proxy-server.ts" -type f 2>/dev/null | head -20
```

🏁 Script executed:

```shell
cd src/bun/proxy 2>/dev/null && wc -l proxy-server.ts && echo "---" && sed -n '56,84p' proxy-server.ts
```

🏁 Script executed:

```shell
cd src/bun/proxy 2>/dev/null && sed -n '1,110p' proxy-server.ts | cat -n
```
Add try/catch around CONNECT setup to handle TLS initialization failures gracefully.
The await getOrCreateDomainTLS(domain) call at line 64 lacks error handling. If leaf certificate generation or TLS server startup fails, the promise rejection goes unhandled and the client never receives an error response. Wrap the entire async callback (lines 56–87) in try/catch to send a 502 response on failure, per the requirement that all proxy operations must handle TLS errors gracefully.
Suggested fix:

```ts
clientSocket.once("data", async (chunk) => {
  try {
    const headStr = chunk.toString("latin1");

    if (headStr.startsWith("CONNECT ")) {
      const target = headStr.split("\r\n")[0].split(" ")[1];
      const domain = target.split(":")[0];

      const tlsPort = await getOrCreateDomainTLS(domain);

      clientSocket.write("HTTP/1.1 200 Connection Established\r\n\r\n");

      clientSocket.once("data", (clientHello) => {
        const local = net.connect({ host: "127.0.0.1", port: tlsPort }, () => {
          local.write(clientHello);
        });

        clientSocket.on("data", (c) => local.write(c));
        local.on("data", (c) => clientSocket.write(c));
        local.on("end", () => clientSocket.end());
        clientSocket.on("end", () => local.end());
        local.on("error", () => clientSocket.destroy());
        clientSocket.on("error", () => local.destroy());
      });
    } else {
      clientSocket.end("HTTP/1.1 400 Bad Request\r\n\r\nOnly CONNECT supported\r\n");
    }
  } catch (err) {
    console.error("[Proxy] CONNECT setup failed:", err);
    clientSocket.end("HTTP/1.1 502 Bad Gateway\r\n\r\n");
  }
});
```
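The shape of that fix can be isolated and tested: a wrapper that converts handler rejections into a 502 write. `Sink` here is a stand-in for the real `net.Socket`, and `withConnectErrorHandling` is an illustrative name, not code from the repository:

```typescript
// Illustrative: any object that can write/end like a socket.
type Sink = { write(data: string): void; end(data?: string): void };

function withConnectErrorHandling(
  handler: (socket: Sink, chunk: Buffer) => Promise<void>,
) {
  return async (socket: Sink, chunk: Buffer) => {
    try {
      await handler(socket, chunk);
    } catch {
      // TLS init / cert generation failures become a clean 502 instead of
      // an unhandled rejection that leaves the client hanging.
      socket.end("HTTP/1.1 502 Bad Gateway\r\n\r\n");
    }
  };
}

const written: string[] = [];
const fakeSocket: Sink = {
  write: (d) => written.push(d),
  end: (d) => { if (d) written.push(d); },
};

await withConnectErrorHandling(async () => {
  throw new Error("simulated getOrCreateDomainTLS failure");
})(fakeSocket, Buffer.from("CONNECT api.example.com:443 HTTP/1.1\r\n\r\n"));
```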
```ts
connectProxy.on("error", (err) => {
  console.error("[Proxy] Error:", err.message);
});

await new Promise<void>((resolve) => {
  connectProxy!.listen(port, "127.0.0.1", () => {
    console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
    resolve();
  });
});
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -type f -name "proxy-server.ts" | head -20
```

🏁 Script executed:

```shell
cat -n ./src/bun/proxy/proxy-server.ts | sed -n '80,110p'
```

🏁 Script executed:

```shell
cat -n ./src/bun/proxy/proxy-server.ts | sed -n '1,95p'
```
Reject startProxy() when listen() fails.
If bind fails, the error handler (lines 91–93) only logs—it doesn't reject the promise. startProxy() never settles, leaving the tray stuck in "starting" and the caller unable to reach its .catch() path.
Suggested fix:

```diff
-await new Promise<void>((resolve) => {
-  connectProxy!.listen(port, "127.0.0.1", () => {
-    console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
-    resolve();
-  });
-});
+await new Promise<void>((resolve, reject) => {
+  const server = connectProxy!;
+  const onError = (err: Error) => reject(err);
+  server.once("error", onError);
+  server.listen(port, "127.0.0.1", () => {
+    server.off("error", onError);
+    console.log(`[Proxy] CONNECT proxy on 127.0.0.1:${port}`);
+    resolve();
+  });
+});
```
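The promisified-listen pattern can be verified end to end: bind once, then attempt the same port again and observe a rejection (EADDRINUSE) instead of a hang. Written against Node's `net` module, which Bun also implements; `listenAsync` is an illustrative helper name:

```typescript
import net from "node:net";

// Resolve on "listening", reject on "error" (e.g. EADDRINUSE),
// detaching the unused listener either way.
function listenAsync(server: net.Server, port: number): Promise<void> {
  return new Promise<void>((resolve, reject) => {
    const onError = (err: Error) => reject(err);
    server.once("error", onError);
    server.listen(port, "127.0.0.1", () => {
      server.off("error", onError);
      resolve();
    });
  });
}

const first = net.createServer();
await listenAsync(first, 0); // port 0 -> OS picks a free port
const port = (first.address() as net.AddressInfo).port;

const second = net.createServer();
let bindFailed = false;
try {
  await listenAsync(second, port); // same port -> should reject
} catch {
  bindFailed = true;
}
first.close();
```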
```ts
if (contentLength >= 0 && bodyBytes >= contentLength) {
  finalize();
  upstream.destroy();
} else if (isChunked) {
  const fullBuf = Buffer.concat(responseChunks as unknown as Uint8Array[]);
  const bodyBuf = fullBuf.subarray(headerEndIndex);
  if (bodyBuf.includes(CHUNKED_TERMINATOR)) {
    finalize();
    upstream.destroy();
```
🧩 Analysis chain

🏁 Script executed:

```shell
find . -name "proxy-server.ts" -path "*/src/bun/proxy/*" | head -5
```

🏁 Script executed:

```shell
cat -n src/bun/proxy/proxy-server.ts | sed -n '180,210p'
```

🏁 Script executed:

```shell
cat -n src/bun/proxy/proxy-server.ts | sed -n '230,250p'
```
Replace clientSocket.destroy() with graceful close in upstream close handler.
Line 243 calls destroy() on the client socket, which abruptly terminates the connection and discards buffered writes. When upstream.destroy() is triggered at lines 194 or 200, it fires the close event that immediately calls clientSocket.destroy(), potentially losing response bytes still in the write queue. Use clientSocket.end() instead to flush buffered data before closing.
Fix:

```diff
-upstream.on("close", () => { finalize(); clientSocket.destroy(); });
+upstream.on("close", () => {
+  finalize();
+  if (!clientSocket.writableEnded) clientSocket.end();
+});
```

Also applies to: 242-243
```ts
const cipherSuitesLen = data.readUInt16BE(offset);
offset += 2 + cipherSuitesLen;
if (offset >= data.length) return null;
```
🧩 Analysis chain

🏁 Script executed:

```shell
#!/bin/bash
# Verify current guard coverage around cipher-suite length parsing
rg -n -C2 'Cipher suites|readUInt16BE\(offset\)' src/bun/proxy/sni-parser.ts
```

🏁 Script executed:

```shell
cat -n src/bun/proxy/sni-parser.ts | head -70
```

🌐 Web query: Node.js Buffer readUInt16BE RangeError out-of-bounds behavior

💡 Result: Calling buf.readUInt16BE(offset) on a Node.js Buffer with an out-of-bounds offset (offset < 0 or offset > buf.length - 2) throws a RangeError with the message "The value of \"offset\" is out of range. It must be >= 0 and <= [buf.length - 2]. Received [offset]". This behavior is consistent across Node.js versions. The official documentation states that offset must satisfy 0 <= offset <= buf.length - 2, and examples explicitly show it throwing ERR_OUT_OF_RANGE for invalid offsets, such as buf.readUInt16BE(2) on a 3-byte buffer. In some contexts or older versions, out-of-bounds access might manifest as ERR_BUFFER_OUT_OF_BOUNDS, but for readUInt16BE specifically the error is ERR_OUT_OF_RANGE, from the bounds check on the offset parameter before reading. Example from sources:

```js
const buf = Buffer.from([0x12, 0x34, 0x56]);
console.log(buf.readUInt16BE(2).toString(16));
// Throws RangeError [ERR_OUT_OF_RANGE]: The value of "offset" is out of range.
// It must be >= 0 and <= 1. Received 2
```

Citations:
- 1: https://bun.com/reference/node/buffer/Buffer/readUInt16BE
- 2: https://nodejs.org/docs/latest-v16.x/api/buffer.html
- 3: https://www.geeksforgeeks.org/node-js/node-js-buffer-readuint16be-method/
- 4: https://nodejs.org/api/buffer.html
- 5: https://bun.com/reference/node/buffer/Buffer/readUInt16LE
Add bounds guard before readUInt16BE(offset) on line 25.
Line 25 throws RangeError when offset + 2 > data.length because readUInt16BE requires offset ≤ length - 2. The check on line 22 only validates offset < length, which is insufficient. The function design expects graceful handling of truncated input.
Proposed fix:

```diff
   // Cipher suites: length(2) + data
+  if (offset + 2 > data.length) return null;
   const cipherSuitesLen = data.readUInt16BE(offset);
   offset += 2 + cipherSuitesLen;
   if (offset >= data.length) return null;
```

📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/proxy/sni-parser.ts` around lines 25 - 27, The code reads a 16-bit
length with data.readUInt16BE(offset) into cipherSuitesLen without first
ensuring there are at least 2 bytes available; add a bounds guard before that
call (e.g., check offset + 2 <= data.length or offset <= data.length - 2) and
return null if the buffer is too short, then proceed to advance offset and
validate the full cipher suites slice (offset + cipherSuitesLen) fits within
data.length before using it; update the logic around the
cipherSuitesLen/readUInt16BE usage in the SNI/TLS parsing function so it
gracefully handles truncated input instead of throwing RangeError.
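The pattern the prompt describes — validate both the 2-byte length field and the payload it announces before advancing — can be sketched as a reusable walker. Names here are illustrative; the real parser lives in `src/bun/proxy/sni-parser.ts`:

```typescript
// Sketch: skip one TLS length-prefixed field (u16 length + payload),
// returning the next offset, or null if the buffer is truncated.
function skipLengthPrefixed16(data: Buffer, offset: number): number | null {
  if (offset + 2 > data.length) return null; // length field must fit
  const len = data.readUInt16BE(offset);
  const next = offset + 2 + len;
  if (next > data.length) return null;       // announced payload must fit too
  return next;
}

// A truncated buffer yields null instead of a RangeError:
skipLengthPrefixed16(Buffer.from([0x00]), 0); // → null
```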
```ts
 */
export function parseSSEStream(raw: string): SSEChunk[] {
  const chunks: SSEChunk[] = [];
  const blocks = raw.split("\n\n");
```
Handle CRLF SSE delimiters to avoid chunk coalescing.
Line 16 only splits on "\n\n". Streams using "\r\n\r\n" won’t split correctly, so multiple SSE events can collapse into one parsed chunk.
Proposed fix
```diff
 export function parseSSEStream(raw: string): SSEChunk[] {
   const chunks: SSEChunk[] = [];
-  const blocks = raw.split("\n\n");
+  const normalized = raw.replace(/\r\n/g, "\n");
+  const blocks = normalized.split(/\n\n+/);
@@
-  for (const line of block.split("\n")) {
+  for (const line of block.split("\n")) {
```

Also applies to: 25-38
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/sse/sse-reassembler.ts` at line 16, The split in sse-reassembler.ts
currently uses raw.split("\n\n") (assigned to blocks) which fails for
CRLF-delimited streams; normalize CRLF or use a regex like
raw.split(/\r?\n\r?\n/) (or first replace(/\r\n/g, "\n") then split("\n\n")) to
correctly separate SSE events, and apply the same change to the other parsing
block(s) in the same file (the code region around lines 25-38) so all places
that split raw event chunks handle both "\n\n" and "\r\n\r\n".
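A minimal sketch of the CRLF-tolerant splitting the prompt asks for, assuming a simple `SSEChunk` shape (the PR's actual type may differ):

```typescript
// Assumed chunk shape for illustration only.
interface SSEChunk { event?: string; data: string; }

function parseSSEStream(raw: string): SSEChunk[] {
  const chunks: SSEChunk[] = [];
  // SSE events end at a blank line; individual lines may end in \r\n or \n,
  // so split on either delimiter style.
  for (const block of raw.split(/\r?\n\r?\n/)) {
    let event: string | undefined;
    const data: string[] = [];
    for (const line of block.split(/\r?\n/)) {
      if (line.startsWith("event:")) event = line.slice(6).trim();
      else if (line.startsWith("data:")) data.push(line.slice(5).trimStart());
    }
    if (data.length > 0) chunks.push({ event, data: data.join("\n") });
  }
  return chunks;
}
```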
```ts
if (domain.includes("anthropic")) {
  return reassembleAnthropic(chunks);
}
if (domain.includes("openai")) {
  return reassembleOpenAI(chunks);
```
Normalize domain before provider detection.
Line 51 and Line 54 perform case-sensitive checks. Hostnames are case-insensitive, so mixed-case inputs can skip provider-specific reassembly.
Proposed fix
```diff
 export function reassembleSSE(
   chunks: SSEChunk[],
   domain: string,
 ): ReassembledSSE {
-  if (domain.includes("anthropic")) {
+  const normalizedDomain = domain.toLowerCase();
+  if (normalizedDomain.includes("anthropic")) {
     return reassembleAnthropic(chunks);
   }
-  if (domain.includes("openai")) {
+  if (normalizedDomain.includes("openai")) {
     return reassembleOpenAI(chunks);
   }
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/sse/sse-reassembler.ts` around lines 51 - 55, The provider detection
is using case-sensitive includes on domain which can miss mixed-case hostnames;
normalize domain to lowercase before checks by replacing usages of
domain.includes("anthropic") and domain.includes("openai") with a lowercase
comparison (e.g., compute a local lowercasedDomain = domain.toLowerCase() and
use lowercasedDomain.includes("anthropic") /
lowercasedDomain.includes("openai")), and update any related branches
(reassembleAnthropic, reassembleOpenAI) to use this normalized value so
provider-specific reassembly runs reliably for mixed-case hostnames.
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/bun/index.ts`:
- Around line 81-86: The tray item currently sets label to only the bind address
using state.proxyPort so the proxyStatus (starting/running/error) never appears;
update the tray item construction (the object with type:"normal", label,
action:"proxy-port", enabled:false) to include state.proxyStatus in the label
(or map state.proxyStatus to a human string) so the label becomes e.g. `Proxy:
127.0.0.1:${state.proxyPort} — ${state.proxyStatus}` (or similar), ensuring you
reference the proxyStatus property from state where the tray items are built.
- Around line 188-195: The log is printing raw trace.request.url (which may
contain sensitive query params); update the trace handler registered in
traceEmitter.on("trace", ...) to redact the URL's query string before logging:
parse trace.request.url (using URL in a try/catch to handle absolute URLs and
fall back to stripping anything after '?' for non-absolute values), replace the
query portion with either an empty string or a placeholder like "?REDACTED", and
use that redactedUrl in the console.log call while leaving the rest of the
message (provider, model, method, status, duration, tokens) unchanged.
- Around line 122-133: Prevent stuck "starting" and duplicate starts by first
guarding in the start path: before calling startProxy, check state.captureStatus
(and/or state.proxyStatus) and return early if captureStatus !== "inactive" or
proxyStatus !== "idle" to avoid scheduling multiple starts; then wrap the
startProxy(state.proxyPort) call with a local timeout watchdog that, if the
start promise neither resolves nor rejects within a short window (e.g., 10s),
sets state.captureStatus = "error", state.proxyStatus = "error", calls
buildTrayMenu(), and clears the watchdog once the promise settles; keep
references to state.proxyStatus, state.captureStatus, startProxy and
buildTrayMenu so the fix is easy to locate.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: d03e1b36-4813-40f9-93eb-721edc39a26a
⛔ Files ignored due to path filters (2)
- `src/views/mainview/assets/tray-icon-template.png` is excluded by `!**/*.png`
- `src/views/mainview/assets/tray-icon-template@2x.png` is excluded by `!**/*.png`
📒 Files selected for processing (2)
- `electrobun.config.ts`
- `src/bun/index.ts`
✅ Files skipped from review due to trivial changes (1)
- electrobun.config.ts
```ts
{
  type: "normal",
  label: `Proxy: 127.0.0.1:${state.proxyPort}`,
  action: "proxy-port",
  enabled: false,
},
```
Render proxyStatus in the tray item.
Line 83 only shows the bind address, so starting / running / error never becomes visible even though the state tracks it.
Proposed fix
```diff
 {
   type: "normal",
-  label: `Proxy: 127.0.0.1:${state.proxyPort}`,
+  label: `Proxy: ${state.proxyStatus} · 127.0.0.1:${state.proxyPort}`,
   action: "proxy-port",
   enabled: false,
 },
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/index.ts` around lines 81 - 86, The tray item currently sets label to
only the bind address using state.proxyPort so the proxyStatus
(starting/running/error) never appears; update the tray item construction (the
object with type:"normal", label, action:"proxy-port", enabled:false) to include
state.proxyStatus in the label (or map state.proxyStatus to a human string) so
the label becomes e.g. `Proxy: 127.0.0.1:${state.proxyPort} —
${state.proxyStatus}` (or similar), ensuring you reference the proxyStatus
property from state where the tray items are built.
```ts
state.proxyStatus = "starting";
startProxy(state.proxyPort).then(() => {
  state.captureStatus = "active";
  state.proxyStatus = "running";
  console.log("[AgentTap] Capture started on port", state.proxyPort);
  buildTrayMenu();
}).catch((err: unknown) => {
  state.captureStatus = "error";
  state.proxyStatus = "error";
  console.error("[AgentTap] Failed to start proxy:", err);
  buildTrayMenu();
});
```
This startup path can get stuck in starting.
This code assumes startProxy() rejects on failure, but src/bun/proxy/proxy-server.ts:49-103 only logs server errors and never rejects the startup promise. A bind failure leaves the tray in starting, and repeated clicks can schedule multiple starts while captureStatus is still "inactive".
Local guard to add here
```diff
-  state.proxyStatus = "starting";
-  startProxy(state.proxyPort).then(() => {
+  if (state.proxyStatus === "starting") return;
+  state.proxyStatus = "starting";
+  buildTrayMenu();
+  void startProxy(state.proxyPort).then(() => {
     state.captureStatus = "active";
     state.proxyStatus = "running";
     console.log("[AgentTap] Capture started on port", state.proxyPort);
     buildTrayMenu();
   }).catch((err: unknown) => {
```

This still needs `startProxy()` to reject listen / server startup errors instead of only logging them.
📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/index.ts` around lines 122 - 133, Prevent stuck "starting" and
duplicate starts by first guarding in the start path: before calling startProxy,
check state.captureStatus (and/or state.proxyStatus) and return early if
captureStatus !== "inactive" or proxyStatus !== "idle" to avoid scheduling
multiple starts; then wrap the startProxy(state.proxyPort) call with a local
timeout watchdog that, if the start promise neither resolves nor rejects within
a short window (e.g., 10s), sets state.captureStatus = "error",
state.proxyStatus = "error", calls buildTrayMenu(), and clears the watchdog once
the promise settles; keep references to state.proxyStatus, state.captureStatus,
startProxy and buildTrayMenu so the fix is easy to locate.
```ts
traceEmitter.on("trace", (trace) => {
  state.tracesCount++;
  const tokens = trace.tokensIn || trace.tokensOut
    ? ` (${trace.tokensIn ?? "?"}→${trace.tokensOut ?? "?"} tokens)`
    : "";
  console.log(
    `[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${trace.request.url} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
  );
```
Do not log raw captured URLs.
trace.request.url can include sensitive query params. Writing it verbatim to console/system logs creates an avoidable data leak path.
Redact the query string before logging
```diff
 traceEmitter.on("trace", (trace) => {
   state.tracesCount++;
+  const safeUrl = (() => {
+    try {
+      const url = new URL(trace.request.url);
+      return `${url.origin}${url.pathname}`;
+    } catch {
+      return trace.request.url.split("?")[0];
+    }
+  })();
   const tokens = trace.tokensIn || trace.tokensOut
     ? ` (${trace.tokensIn ?? "?"}→${trace.tokensOut ?? "?"} tokens)`
     : "";
   console.log(
-    `[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${trace.request.url} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
+    `[Trace] ${trace.provider} ${trace.model ?? "unknown"} ${trace.request.method} ${safeUrl} → ${trace.response.status} ${trace.durationMs}ms${tokens}`,
   );
```

📝 Committable suggestion
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@src/bun/index.ts` around lines 188 - 195, The log is printing raw
trace.request.url (which may contain sensitive query params); update the trace
handler registered in traceEmitter.on("trace", ...) to redact the URL's query
string before logging: parse trace.request.url (using URL in a try/catch to
handle absolute URLs and fall back to stripping anything after '?' for
non-absolute values), replace the query portion with either an empty string or a
placeholder like "?REDACTED", and use that redactedUrl in the console.log call
while leaving the rest of the message (provider, model, method, status,
duration, tokens) unchanged.
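The redaction described above can be factored into a small helper; this is a sketch of the suggested approach, not the PR's code:

```typescript
// Strip the query string from a URL before logging: parse absolute URLs
// with the WHATWG URL API, and fall back to cutting at "?" for
// relative or malformed values.
function redactUrl(rawUrl: string): string {
  try {
    const url = new URL(rawUrl);
    return `${url.origin}${url.pathname}`;
  } catch {
    return rawUrl.split("?")[0];
  }
}

redactUrl("https://api.anthropic.com/v1/messages?key=secret");
// → "https://api.anthropic.com/v1/messages"
```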
Summary
- CA lifecycle via `@peculiar/x509`: auto-generation on first run, expiry detection, secure key storage (0600)
- Per-domain TLS terminators (due to `node:tls` limitations), bidirectional data relay via manual `write()` forwarding
- Typed `EventEmitter` for trace events

Architecture
New files
- `src/bun/ca/cert-generator.ts`
- `src/bun/ca/ca-manager.ts`
- `src/bun/ca/cert-cache.ts`
- `src/bun/proxy/proxy-server.ts`
- `src/bun/proxy/sni-parser.ts`
- `src/bun/proxy/http-parser.ts`
- `src/bun/sse/sse-reassembler.ts`
- `src/bun/capture/types.ts`
- `src/bun/capture/trace-emitter.ts`

Test plan
- `openssl verify` chain validation passes
- `curl --proxy --cacert` through proxy to `api.anthropic.com` returns 401 (expected without API key)
- `traceEmitter.on("trace")` receives `CapturedTrace` with correct provider, domain, status, duration
- `dechunk()` with chunked responses

Summary by CodeRabbit
New Features
Documentation
Chores