📝 Walkthrough

Split proxy/auth logic into dedicated controllers and modules, added a Claude-compatible builtin proxy runtime and adapter code, removed the special-case "local" provider rules, introduced a configurable share-command prefix persisted in the Web UI, wired new Web/MCP proxy control actions, and added/updated unit and E2E tests covering the new proxy and UI behavior.

Changes
Sequence Diagram

```mermaid
sequenceDiagram
    actor Client
    participant Proxy as Builtin Claude Proxy Runtime
    participant Upstream as Upstream Provider (responses API)
    Client->>Proxy: POST /v1/messages (Anthropic format)
    Proxy->>Proxy: Convert Anthropic → responses<br/>(system→instructions, messages→input, tools→parameters)
    Proxy->>Upstream: POST /v1/responses (responses format with auth)
    Upstream-->>Proxy: responses (output_text / function_call / usage)
    Proxy->>Proxy: Convert responses → Anthropic message/stream<br/>(map output_text, function_call→tool_use, include usage/stop_reason)
    Proxy-->>Client: Anthropic message or SSE stream
    Client->>Proxy: GET /v1/models
    Proxy->>Upstream: GET /v1/models
    Upstream-->>Proxy: { data: [...] }
    Proxy->>Proxy: Reshape to Anthropic-style models payload
    Proxy-->>Client: Models payload
```
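The first conversion step in the diagram (Anthropic `/v1/messages` body → responses request) can be sketched roughly as below. This is an illustrative sketch, not the adapter's actual code: only the field mappings (system→instructions, messages→input, max_tokens→max_output_tokens, tools→function tools) come from this review; the helper name and content-shaping details are assumptions.

```javascript
// Hedged sketch of the Anthropic → responses request mapping.
// Field names follow the two public APIs; details are illustrative.
function sketchAnthropicToResponses(anthropicBody) {
  const { system, messages = [], max_tokens, model, tools } = anthropicBody;
  const request = {
    model,
    // system → instructions (only the plain-string form is handled here)
    instructions: typeof system === 'string' ? system : undefined,
    // max_tokens → max_output_tokens
    max_output_tokens: max_tokens,
    // messages → input items; plain strings are wrapped into content blocks
    input: messages.map((msg) => ({
      role: msg.role,
      content: typeof msg.content === 'string'
        ? [{ type: msg.role === 'assistant' ? 'output_text' : 'input_text', text: msg.content }]
        : msg.content
    }))
  };
  if (Array.isArray(tools)) {
    // Anthropic tool definitions → responses function tools
    request.tools = tools.map((tool) => ({
      type: 'function',
      name: tool.name,
      description: tool.description,
      parameters: tool.input_schema
    }));
  }
  return request;
}
```

The E2E assertions later in this review (instructions === 'system prompt', max_output_tokens === 128, tools[0].type === 'function') check exactly these mappings.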
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs
Suggested reviewers
🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)
✅ Passed checks (2 passed)
Actionable comments posted: 4
🧹 Nitpick comments (2)
tests/e2e/test-claude-proxy.js (1)
**205-212**: Avoid order-dependent assertions on upstream calls.

Asserting `upstreamMessages[0]` and `[1]` is brittle if request ordering changes; match by payload shape instead.

Proposed patch:

```diff
 const upstreamMessages = upstream.requests.filter((item) => item.path === '/v1/responses');
 assert(upstreamMessages.length >= 2, 'claude proxy should hit upstream /v1/responses');
-assert(upstreamMessages[0].headers.authorization === 'Bearer sk-claude-upstream', 'claude proxy should use provider auth for upstream');
-assert(upstreamMessages[0].body.instructions === 'system prompt', 'claude proxy should map system prompt to responses instructions');
-assert(upstreamMessages[0].body.max_output_tokens === 128, 'claude proxy should map max_tokens to max_output_tokens');
-assert(Array.isArray(upstreamMessages[0].body.input), 'claude proxy should map anthropic messages into responses input array');
-assert(upstreamMessages[1].body.tool_choice && upstreamMessages[1].body.tool_choice.name === 'lookup', 'claude proxy should map tool_choice to responses tool_choice');
-assert(Array.isArray(upstreamMessages[1].body.tools) && upstreamMessages[1].body.tools[0].type === 'function', 'claude proxy should map anthropic tools into responses tools');
+const plainReq = upstreamMessages.find((item) => item.body && item.body.instructions === 'system prompt');
+const toolReq = upstreamMessages.find((item) => item.body && Array.isArray(item.body.tools) && item.body.tools.length > 0);
+assert(plainReq && plainReq.headers.authorization === 'Bearer sk-claude-upstream', 'claude proxy should use provider auth for upstream');
+assert(plainReq.body.max_output_tokens === 128, 'claude proxy should map max_tokens to max_output_tokens');
+assert(Array.isArray(plainReq.body.input), 'claude proxy should map anthropic messages into responses input array');
+assert(toolReq && toolReq.body.tool_choice && toolReq.body.tool_choice.name === 'lookup', 'claude proxy should map tool_choice to responses tool_choice');
+assert(Array.isArray(toolReq.body.tools) && toolReq.body.tools[0].type === 'function', 'claude proxy should map anthropic tools into responses tools');
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@tests/e2e/test-claude-proxy.js` around lines 205-212: the test uses order-dependent assertions on upstreamMessages[0] and [1], which is brittle; refactor the checks to find matching requests by payload shape instead of index. Replace direct index access of upstreamMessages with searches (e.g., Array.prototype.find or filter) that locate the request where body.instructions === 'system prompt' (validate authorization, max_output_tokens, and input array) and separately locate the request containing tool_choice.name === 'lookup' (validate tool_choice and tools[0].type === 'function'); keep the same assertion semantics but use these shape-based lookups so ordering changes won't break the test.

tests/unit/claude-proxy-adapter.test.mjs (1)
**13-53**: Prefer importing the adapter from a real module.

Reading `cli.js`, slicing a `#region`, and executing it with `Function(...)` makes this test fragile to harmless refactors in `cli.js` and bypasses the actual module boundary. Extracting these helpers into a small shared module would make both `cli.js` and this suite exercise the same implementation path.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/claude-proxy-adapter.test.mjs` around lines 13 - 53, The test is brittle because it slices and evals a region from cli.js using extractByRegion/adapterSrc and a Function(...) wrapper; instead import the adapter helpers from a real module: move the implementations for buildBuiltinClaudeResponsesRequest, buildAnthropicMessageFromResponses, buildAnthropicStreamEvents, and buildAnthropicModelsPayload into a small shared module (or export them from the CLI module), then change the test to import that module and call the real functions directly rather than using extractByRegion/adapterSrc and the dynamic Function invocation.
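One way to realize that extraction, sketched under assumptions: a dedicated CommonJS module exporting the helpers by name, which both `cli.js` and the test import. The file path and the reshape body below are illustrative; only the exported function name comes from the review comment.

```javascript
// Hypothetical lib/claude-proxy-adapter.js — file path and payload details
// are assumptions; only the function name comes from the review.
function buildAnthropicModelsPayload(upstreamModels = []) {
  // Reshape an upstream { data: [...] } model list into an
  // Anthropic-style models payload (illustrative shape).
  return {
    data: upstreamModels.map((model) => ({
      type: 'model',
      id: model.id,
      display_name: model.display_name || model.id
    })),
    has_more: false
  };
}

module.exports = { buildAnthropicModelsPayload };

// cli.js and tests/unit/claude-proxy-adapter.test.mjs would then share it:
//   const { buildAnthropicModelsPayload } = require('./lib/claude-proxy-adapter');
```

With that in place, the test can drop the region extraction and `Function(...)` evaluation entirely.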
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli.js`:
- Around line 7153-7190: The current transport.request handler (inside the
Promise that creates upstreamReq/upstreamRes) buffers all upstreamRes 'data'
into chunks and JSON-parses on 'end', which prevents streaming (stream: true /
/responses SSE) from delivering incremental events; change the logic in the
upstreamRes 'data' handler to bypass full buffering when a streaming request is
detected (e.g., requestOptions.stream || path includes '/responses' or SSE
Accept header) and instead forward each incoming chunk immediately as an
SSE/data fragment to the caller (or to the response writable) while still
enforcing MAX_API_BODY_SIZE and handling errors via upstreamReq.destroy/reject;
keep the existing behavior only for non-streaming responses so resolve with
rawBody/payload on 'end' otherwise.
- Around line 7681-7688: Wrap the calls to readJsonRequestBody and
buildBuiltinClaudeResponsesRequest in a local try/catch so client-side
parsing/validation errors (malformed JSON, oversized body, missing fields) are
caught before calling requestBuiltinClaudeProxyUpstream; on such errors return a
4xx HTTP response (400 for invalid JSON/missing fields, 413 for oversized body)
with the existing invalid_request_error body shape instead of letting the error
bubble to the global catch that returns 502. Apply the same change to the other
handler that invokes readJsonRequestBody/buildBuiltinClaudeResponsesRequest (the
similar block around the second requestBuiltinClaudeProxyUpstream call) so both
paths classify client errors as 4xx.
- Around line 7265-7295: The loop over normalizeAnthropicContentBlocks currently
silently converts any unmapped block into a fake text payload; instead, detect
unsupported block types (anything not handled by the 'text', 'tool_use', or
'tool_result' branches) and fail fast by throwing a clear error (including the
offending block.type and any identifying block.id or block.tool_use_id) rather
than pushing a placeholder string; update the code in the loop that currently
pushes the buffered fake text to call flushBuffered() and then throw a
descriptive Error (or return/propagate an error) referencing the offending
block, while keeping existing behavior for text (uses buffered.push with
textType), tool_use (uses safeJsonStringify and crypto-based call_id), and
tool_result (uses stringifyAnthropicToolResultContent and tool_use_id) so
multimodal/tool payloads are not corrupted.
In `@tests/e2e/test-claude-proxy.js`:
- Around line 4-34: requestRaw has no timeout and can hang tests; update the
requestRaw function to set a client-side timeout and fail fast: attach a timeout
(e.g. via req.setTimeout(...) or via req.on('socket', s => s.setTimeout(...)))
and handle the 'timeout' event by aborting the request
(req.abort()/req.destroy()) and rejecting the Promise with a clear timeout
error; also ensure you remove/ignore any further response handlers after timeout
to avoid memory leaks. Reference: function requestRaw, local variables req and
response handlers.
---
Nitpick comments:
In `@tests/e2e/test-claude-proxy.js`:
- Around line 205-212: The test uses order-dependent assertions on
upstreamMessages[0] and [1], which is brittle; refactor the checks to find
matching requests by payload shape instead of index. In the
tests/e2e/test-claude-proxy.js assertions, replace direct index access of
upstreamMessages with searches (e.g., Array.prototype.find or filter) that
locate the request where body.instructions==='system prompt' (validate
authorization, max_output_tokens, and input array) and separately locate the
request containing tool_choice.name==='lookup' (validate tool_choice and
tools[0].type==='function'); keep the same assertion semantics but use these
shape-based lookups so ordering changes won't break the test.
In `@tests/unit/claude-proxy-adapter.test.mjs`:
- Around line 13-53: The test is brittle because it slices and evals a region
from cli.js using extractByRegion/adapterSrc and a Function(...) wrapper;
instead import the adapter helpers from a real module: move the implementations
for buildBuiltinClaudeResponsesRequest, buildAnthropicMessageFromResponses,
buildAnthropicStreamEvents, and buildAnthropicModelsPayload into a small shared
module (or export them from the CLI module), then change the test to import that
module and call the real functions directly rather than using
extractByRegion/adapterSrc and the dynamic Function invocation.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 8d1530d9-4e5e-4a22-9a4c-5ad32de75692
📒 Files selected for processing (5)
- cli.js
- tests/e2e/run.js
- tests/e2e/test-claude-proxy.js
- tests/unit/claude-proxy-adapter.test.mjs
- tests/unit/run.mjs
📜 Review details
🧰 Additional context used
🪛 GitHub Actions: ci
tests/unit/run.mjs
[error] 57-57: Unit test runner exited with failure after the parity test assertion error.
🔇 Additional comments (5)
tests/e2e/run.js (1)
**18-18**: Good E2E wiring for the new Claude proxy flow. Import + execution placement is clean and keeps the existing lifecycle intact.
Also applies to: 129-129
tests/e2e/test-claude-proxy.js (1)
**145-204**: Strong protocol-level coverage in this E2E. The models, non-streaming message, and streaming SSE assertions provide good end-to-end confidence for the adapter path.
tests/unit/run.mjs (1)
**44-44**: This concern is not supported by the code. `claude-proxy-adapter.test.mjs` has top-level file I/O (`fs.readFileSync` at line 14) and dynamic function evaluation, but no global state mutations, mocking, or cleanup issues that would leak into subsequent test modules. The regex search confirmed no stubbing, mocking, prototype pollution, or process environment mutations at the top level.

> Likely an incorrect or invalid review comment.

tests/unit/claude-proxy-adapter.test.mjs (2)
**62-210**: Nice happy-path coverage across the adapter surface. These cases hit the core request mapping, reverse mapping, streaming, and model reshaping paths, which should catch most regressions in the new bridge logic.

**1-60**: No action needed; `test` is properly provided by the test runner. `tests/unit/run.mjs` explicitly sets `globalThis.test` on line 8, so bare `test(...)` calls in this file work as expected.
```js
return new Promise((resolve, reject) => {
  const upstreamReq = transport.request({
    protocol: targetUrl.protocol,
    hostname: targetUrl.hostname,
    port: targetUrl.port || (targetUrl.protocol === 'https:' ? 443 : 80),
    method: requestOptions.method || 'POST',
    path: `${targetUrl.pathname}${targetUrl.search}`,
    headers,
    agent: targetUrl.protocol === 'https:' ? HTTPS_KEEP_ALIVE_AGENT : HTTP_KEEP_ALIVE_AGENT
  }, (upstreamRes) => {
    const chunks = [];
    let total = 0;
    upstreamRes.on('data', (chunk) => {
      total += chunk.length;
      if (total > MAX_API_BODY_SIZE) {
        upstreamReq.destroy(new Error(`upstream body too large (${MAX_API_BODY_SIZE} bytes max)`));
        return;
      }
      chunks.push(chunk);
    });
    upstreamRes.on('error', reject);
    upstreamRes.on('end', () => {
      const rawBody = Buffer.concat(chunks).toString('utf-8');
      let payload = null;
      if (rawBody.trim()) {
        try {
          payload = JSON.parse(rawBody);
        } catch (_) {
          payload = null;
        }
      }
      resolve({
        statusCode: upstreamRes.statusCode || 502,
        headers: upstreamRes.headers,
        rawBody,
        payload
      });
    });
```
stream: true is currently buffered, not streamed.
The proxy waits for the full upstream /responses payload to finish, then synthesizes SSE events afterward. That means Claude clients see no incremental output at all and can hit idle timeouts on long generations, even though they requested streaming.
Also applies to: 7701-7704
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli.js` around lines 7153 - 7190, The current transport.request handler
(inside the Promise that creates upstreamReq/upstreamRes) buffers all
upstreamRes 'data' into chunks and JSON-parses on 'end', which prevents
streaming (stream: true / /responses SSE) from delivering incremental events;
change the logic in the upstreamRes 'data' handler to bypass full buffering when
a streaming request is detected (e.g., requestOptions.stream || path includes
'/responses' or SSE Accept header) and instead forward each incoming chunk
immediately as an SSE/data fragment to the caller (or to the response writable)
while still enforcing MAX_API_BODY_SIZE and handling errors via
upstreamReq.destroy/reject; keep the existing behavior only for non-streaming
responses so resolve with rawBody/payload on 'end' otherwise.
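The fix the prompt describes could look roughly like this sketch: detect a streaming request up front, then either forward chunks immediately or keep the existing buffering. Option and callback names here are illustrative, not cli.js's actual API; only the size-limit behavior mirrors the excerpt above.

```javascript
// Illustrative sketch, not cli.js's implementation.
function isStreamingUpstreamRequest(requestOptions = {}, headers = {}) {
  if (requestOptions.stream === true) return true;
  const accept = String(headers.accept || headers.Accept || '');
  return accept.includes('text/event-stream');
}

function handleUpstreamResponse(upstreamRes, { streaming, maxBytes, onChunk, onDone, onError }) {
  const chunks = [];
  let total = 0;
  upstreamRes.on('data', (chunk) => {
    total += chunk.length;
    if (total > maxBytes) {
      onError(new Error(`upstream body too large (${maxBytes} bytes max)`));
      return;
    }
    if (streaming) {
      onChunk(chunk); // forward immediately (e.g. as an SSE fragment)
    } else {
      chunks.push(chunk); // keep existing buffering for non-streaming calls
    }
  });
  upstreamRes.on('end', () => {
    onDone(streaming ? null : Buffer.concat(chunks).toString('utf-8'));
  });
}
```

In the streaming branch the caller would translate each upstream chunk into Anthropic SSE events on the fly instead of synthesizing them after `end`.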
```js
const payload = await readJsonRequestBody(req);
const upstreamRequestBody = buildBuiltinClaudeResponsesRequest(payload);
const upstreamResponse = await requestBuiltinClaudeProxyUpstream(upstream, {
  method: 'POST',
  pathSuffix: 'responses',
  body: upstreamRequestBody,
  authHeader: authResult.authHeader,
  timeoutMs: settings.timeoutMs
```
Return 4xx for bad client requests instead of collapsing them into 502.
readJsonRequestBody() and buildBuiltinClaudeResponsesRequest() throw for malformed JSON, oversized bodies, or missing required fields, but those exceptions currently fall through to the server-level catch and get reported as 502 api_error. That misclassifies client input bugs as upstream failures and breaks callers that rely on 400/413 invalid_request_error.
Suggested fix:

```diff
-    const payload = await readJsonRequestBody(req);
-    const upstreamRequestBody = buildBuiltinClaudeResponsesRequest(payload);
+    let payload;
+    let upstreamRequestBody;
+    try {
+      payload = await readJsonRequestBody(req);
+      upstreamRequestBody = buildBuiltinClaudeResponsesRequest(payload);
+    } catch (err) {
+      const message = err && err.message ? err.message : 'invalid request';
+      const statusCode = /too large/i.test(message) ? 413 : 400;
+      writeAnthropicProxyError(res, statusCode, message, 'invalid_request_error');
+      return;
+    }
```

Also applies to: 7718-7724
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli.js` around lines 7681 - 7688, Wrap the calls to readJsonRequestBody and
buildBuiltinClaudeResponsesRequest in a local try/catch so client-side
parsing/validation errors (malformed JSON, oversized body, missing fields) are
caught before calling requestBuiltinClaudeProxyUpstream; on such errors return a
4xx HTTP response (400 for invalid JSON/missing fields, 413 for oversized body)
with the existing invalid_request_error body shape instead of letting the error
bubble to the global catch that returns 502. Apply the same change to the other
handler that invokes readJsonRequestBody/buildBuiltinClaudeResponsesRequest (the
similar block around the second requestBuiltinClaudeProxyUpstream call) so both
paths classify client errors as 4xx.
```js
function requestRaw(port, pathname, options = {}) {
  return new Promise((resolve, reject) => {
    const body = options.body !== undefined ? JSON.stringify(options.body) : '';
    const req = http.request({
      hostname: '127.0.0.1',
      port,
      path: pathname,
      method: options.method || (body ? 'POST' : 'GET'),
      headers: {
        ...(options.headers || {}),
        ...(body ? {
          'Content-Type': 'application/json',
          'Content-Length': Buffer.byteLength(body)
        } : {})
      }
    }, (res) => {
      let responseBody = '';
      res.setEncoding('utf-8');
      res.on('data', chunk => responseBody += chunk);
      res.on('end', () => resolve({
        statusCode: res.statusCode || 0,
        headers: res.headers,
        body: responseBody
      }));
    });
    req.on('error', reject);
    if (body) {
      req.write(body);
    }
    req.end();
  });
```
Add HTTP client timeouts to prevent E2E hangs.
requestRaw has no timeout, so a stalled socket can block the whole suite indefinitely.
Proposed patch:

```diff
 function requestRaw(port, pathname, options = {}) {
   return new Promise((resolve, reject) => {
+    const timeoutMs = Number.isFinite(options.timeoutMs) ? options.timeoutMs : 10000;
     const body = options.body !== undefined ? JSON.stringify(options.body) : '';
     const req = http.request({
       hostname: '127.0.0.1',
       port,
       path: pathname,
@@
     }, (res) => {
       let responseBody = '';
       res.setEncoding('utf-8');
       res.on('data', chunk => responseBody += chunk);
       res.on('end', () => resolve({
@@
         body: responseBody
       }));
     });
     req.on('error', reject);
+    req.setTimeout(timeoutMs, () => {
+      req.destroy(new Error(`request timeout after ${timeoutMs}ms: ${pathname}`));
+    });
     if (body) {
       req.write(body);
     }
     req.end();
   });
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@tests/e2e/test-claude-proxy.js` around lines 4 - 34, requestRaw has no
timeout and can hang tests; update the requestRaw function to set a client-side
timeout and fail fast: attach a timeout (e.g. via req.setTimeout(...) or via
req.on('socket', s => s.setTimeout(...))) and handle the 'timeout' event by
aborting the request (req.abort()/req.destroy()) and rejecting the Promise with
a clear timeout error; also ensure you remove/ignore any further response
handlers after timeout to avoid memory leaks. Reference: function requestRaw,
local variables req and response handlers.
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
cli.js (1)
**9204-9221**: ⚠️ Potential issue | 🟠 Major

Export can serialize a `local` active state that cannot be restored.

This skips managed providers from `providers`, but it still exports `currentProvider`, `currentModel`, and `currentModels` verbatim. If the active provider is `local`, a later import with `applyCurrent` can reach `cmdSwitch('local')`, which reads raw `config.toml` and fails because that provider is synthetic and never persisted.

Suggested fix:

```diff
 function buildExportPayload(includeKeys) {
   const { config } = readConfigOrVirtualDefault();
   const providers = config.model_providers || {};
   const providerData = {};
   for (const [name, provider] of Object.entries(providers)) {
     if (isBuiltinManagedProvider(name)) {
       continue;
     }
     providerData[name] = {
       baseUrl: provider.base_url || '',
       apiKey: includeKeys ? (provider.preferred_auth_method || '') : null
     };
   }
+  const rawCurrentProvider = typeof config.model_provider === 'string' ? config.model_provider : '';
+  const currentProvider = isBuiltinManagedProvider(rawCurrentProvider) ? '' : rawCurrentProvider;
+  const rawCurrentModels = readCurrentModels();
+  const currentModels = Object.fromEntries(
+    Object.entries(rawCurrentModels).filter(([name]) => !isBuiltinManagedProvider(name))
+  );
   return {
     version: 1,
-    currentProvider: config.model_provider || '',
-    currentModel: config.model || '',
+    currentProvider,
+    currentModel: currentProvider ? (config.model || '') : '',
     providers: providerData,
     models: readModels(),
-    currentModels: readCurrentModels()
+    currentModels
   };
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 9204 - 9221, The export currently skips managed providers when building providerData but still exports currentProvider/currentModel/currentModels verbatim, which can reference a synthetic managed provider like "local"; change the return construction to clear or omit currentProvider/currentModel/currentModels when the configured provider is a built-in managed provider (use isBuiltinManagedProvider(config.model_provider) or check that config.model_provider exists in the built providerData map) so imports cannot attempt to switch to a non-persisted provider; update the logic near providerData creation and the final return to only export active state when the provider is present in providerData.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Outside diff comments:
In `@cli.js`:
- Around line 9204-9221: The export currently skips managed providers when
building providerData but still exports
currentProvider/currentModel/currentModels verbatim, which can reference a
synthetic managed provider like "local"; change the return construction to clear
or omit currentProvider/currentModel/currentModels when the configured provider
is a built-in managed provider (use
isBuiltinManagedProvider(config.model_provider) or check that
config.model_provider exists in the built providerData map) so imports cannot
attempt to switch to a non-persisted provider; update the logic near
providerData creation and the final return to only export active state when the
provider is present in providerData.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: cbcd4570-bd0e-46f8-83fa-5eb27ce12e44
📒 Files selected for processing (9)
- cli.js
- tests/unit/provider-share-command.test.mjs
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/app.js
- web-ui/modules/app.computed.dashboard.mjs
- web-ui/modules/app.methods.codex-config.mjs
- web-ui/modules/app.methods.providers.mjs
- web-ui/modules/app.methods.startup-claude.mjs
- web-ui/partials/index/panel-config-codex.html
📜 Review details
🧰 Additional context used
🪛 GitHub Actions: ci
tests/unit/web-ui-behavior-parity.test.mjs
[error] 363-363: AssertionError in web UI behavior parity test: captured bundled app skeleton exposes unexpected extra data keys compared to parity baseline. Unexpected keys: localProxyRunning, localProxyRuntime, localProxySettings, localProxyStatusLoading.
🪛 HTMLHint (1.9.2)
web-ui/partials/index/panel-config-codex.html
[error] 76-76: Special characters must be escaped : [ > ].
(spec-char-escape)
🔇 Additional comments (11)
cli.js (3)
**7189-7226**: `stream: true` is still buffered end-to-end. This still collects the entire upstream `/responses` body before returning, so the Anthropic SSE path only starts after completion. Long generations will still arrive as one burst and can trip idle timeouts on Claude clients.

**7301-7331**: Reject unsupported Anthropic blocks instead of inventing placeholder text. Unknown block types are still rewritten into fake text like `[unsupported anthropic block: ...]`, which silently corrupts multimodal/tool payloads instead of failing the request.

**7717-7725**: Malformed client requests still collapse into `502 api_error`. `readJsonRequestBody()` and `buildBuiltinClaudeResponsesRequest()` can throw here, but this handler still lets those exceptions fall through to the server-level catch. Bad JSON, oversized bodies, and missing required fields should return `400/413 invalid_request_error`, not an upstream failure.

web-ui/modules/app.methods.startup-claude.mjs (1)
**71-73**: LGTM! The defensive type check and silent flag ensure graceful handling during startup. The placement after the `providersList` assignment is correct since `loadBuiltinLocalProxyStatus` may call `getFirstNonLocalProviderName()`, which depends on the providers list being populated.

web-ui/partials/index/panel-config-codex.html (1)
**62-92**: LGTM! The new local proxy upstream configuration section is well-structured. The HTMLHint warning about the `>` character on line 76 is a false positive: the `>` operator inside Vue's `{{ }}` interpolation is valid JavaScript/Vue template syntax, not raw HTML that needs escaping.

web-ui/app.js (1)
**210-220**: LGTM! The new reactive state fields are properly initialized with sensible defaults that align with the fallback values used in computed properties (`localProxyListenUrl`) and methods. The default port 8318 and host '127.0.0.1' provide consistent behavior before the API response populates runtime data.

tests/unit/web-ui-behavior-parity.test.mjs (1)
**323-326**: LGTM! The whitelist additions correctly account for all new local proxy state, methods, and computed properties introduced in this PR. This resolves the pipeline failure that was occurring due to unexpected extra keys in the parity test.
Also applies to: 378-388, 436-439
tests/unit/provider-share-command.test.mjs (1)
**1067-1273**: LGTM! The three new tests comprehensively validate the local proxy feature:

- Virtual local provider acceptance without explicit config block
- Read-only/non-editable/non-deletable flags for the local provider
- Proxy restart sequence (stop → start) before applying local provider config

The test at lines 1186-1273 correctly validates the exact API call sequence and state updates, matching the implementation in `applyCodexConfigDirect`.

web-ui/modules/app.methods.codex-config.mjs (1)
**542-592**: LGTM! The local proxy handling logic correctly:

- Validates that the upstream provider exists and isn't local-like
- Stops the existing proxy before starting with new settings
- Updates state (`localProxyRunning`, `localProxySettings`, `localProxyRuntime`) only on successful start
- Early-returns on failure while preserving the `finally` block execution that resets `codexApplying`

The defensive type checks on `isLocalLikeProvider` are harmless redundancy that ensures robustness if the method order changes.

web-ui/modules/app.computed.dashboard.mjs (1)
**23-52**: LGTM! The computed properties correctly implement:

- Case-insensitive provider name matching using `.toLowerCase()` consistently
- Defensive fallbacks for `localProxyListenUrl` (runtime → settings → defaults)
- IPv6 address handling with bracket wrapping
- Sorting to display the local provider first in the list

Note: The context snippet shows `cli.js` has inconsistent normalization (`isDefaultLocalProvider` doesn't use `toLowerCase()`). The web-ui's consistent use of case-insensitive matching is the safer approach.

Also applies to: 59-70
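The fallback chain and IPv6 bracketing described above can be sketched like this. The property names follow the review text and the defaults (host '127.0.0.1', port 8318) come from the `app.js` comment earlier; the exact computed-property body is an assumption.

```javascript
// Illustrative sketch of the localProxyListenUrl fallback chain
// (runtime → settings → defaults) with IPv6 bracket wrapping.
function buildLocalProxyListenUrl(runtime, settings) {
  const host = (runtime && runtime.host) || (settings && settings.host) || '127.0.0.1';
  const port = (runtime && runtime.port) || (settings && settings.port) || 8318;
  // Bare IPv6 addresses contain ':' and must be bracketed inside a URL.
  const needsBrackets = host.includes(':') && !host.startsWith('[');
  return `http://${needsBrackets ? `[${host}]` : host}:${port}`;
}
```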
web-ui/modules/app.methods.providers.mjs (1)
**170-268**: LGTM! The new provider methods are well-implemented:

- `getFirstNonLocalProviderName()`: Provides a sensible fallback for upstream provider selection
- `loadBuiltinLocalProxyStatus()`: Properly handles the API response, merges settings, and falls back to the first non-local provider when none is configured
- `onLocalProxyUpstreamChange()`: Validates the upstream selection (not empty, not local-like), saves the config, and triggers appropriate follow-up actions
- `providerPillState()` for local providers: Correctly reflects configuration and running state

The validation logic in `onLocalProxyUpstreamChange` (lines 212-220) mirrors the validation in `applyCodexConfigDirect`, ensuring consistent behavior across both code paths.
96035ab to 3b8027b
Actionable comments posted: 1
♻️ Duplicate comments (3)
cli.js (3)
**7239-7269**: ⚠️ Potential issue | 🟠 Major

Reject unsupported Anthropic blocks instead of fabricating text.

The fallback here silently rewrites any unmapped block into `[unsupported anthropic block: ...]`. That corrupts multimodal or future Anthropic payloads and changes model behavior instead of returning a clear client error.

Suggested fix:

```diff
-      buffered.push({
-        type: textType,
-        text: `[unsupported anthropic block: ${typeof block.type === 'string' ? block.type : 'unknown'}]`
-      });
+      flushBuffered();
+      throw new Error(
+        `unsupported anthropic content block: ${typeof block.type === 'string' ? block.type : 'unknown'}`
+      );
```
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 7239 - 7269, The loop that iterates normalizeAnthropicContentBlocks(message.content) must not fabricate text for unsupported blocks; instead detect when a block is unrecognized (the current fallback that pushes into buffered with type textType and text `[unsupported anthropic block: ...]`) and immediately reject/throw a clear client error including the offending block type and any identifying id; update the logic around flushBuffered/target to throw or return an explicit error when encountering such unsupported blocks (reference the loop using normalizeAnthropicContentBlocks, the buffered/target arrays, flushBuffered, and stringifyAnthropicToolResultContent/function_call handling) so multimodal or future Anthropic payloads are not silently corrupted.
**7094-7176**: ⚠️ Potential issue | 🟠 Major

`stream: true` is still synthesized after completion.

`buildBuiltinClaudeResponsesRequest()` never forwards `payload.stream`, `requestBuiltinClaudeProxyUpstream()` buffers the entire `/responses` body into `chunks`, and `writeAnthropicStreamEvents()` only runs after that promise resolves. Claude clients still get fake SSE after completion instead of incremental output, so long generations can still hit idle timeouts.

Quick verification:

```bash
#!/bin/bash
rg -n "payload\.stream|requestBody\.stream|const chunks = \[\]|chunks\.push|writeAnthropicStreamEvents" cli.js
```

Also applies to: 7295-7357, 7655-7678
**7026-7038**: ⚠️ Potential issue | 🟠 Major

Bad requests still fall through as upstream `502` errors.

Malformed JSON, oversized bodies, and request-shape validation failures still bubble into the server-level catch, so callers see `502 api_error` instead of `400/413 invalid_request_error`. The `req.destroy()` inside `readJsonRequestBody()` also tears down oversized requests before the handler can consistently return a proper 413.

Also applies to: 7655-7663, 7691-7698
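One way to address both halves of this finding (the 502 misclassification and the socket teardown) is a body reader that rejects with a status-tagged error instead of destroying the request stream. This is a hedged sketch: the function name and the `httpStatus` convention are assumptions, not cli.js's actual API.

```javascript
// Illustrative sketch: read and parse a JSON body, tagging failures with an
// httpStatus so the handler can answer 400/413 before the generic 502 catch.
function readJsonBodyWithLimit(req, maxBytes) {
  return new Promise((resolve, reject) => {
    const chunks = [];
    let total = 0;
    let failed = false;
    req.on('data', (chunk) => {
      if (failed) return;
      total += chunk.length;
      if (total > maxBytes) {
        failed = true;
        const err = new Error(`request body too large (${maxBytes} bytes max)`);
        err.httpStatus = 413; // oversized body → 413
        reject(err);
        return;
      }
      chunks.push(chunk);
    });
    req.on('end', () => {
      if (failed) return;
      try {
        resolve(JSON.parse(Buffer.concat(chunks).toString('utf-8') || '{}'));
      } catch (_) {
        const err = new Error('invalid JSON body');
        err.httpStatus = 400; // malformed JSON → 400
        reject(err);
      }
    });
    req.on('error', reject);
  });
}
```

The handler can then map `err.httpStatus || 400` into the existing `invalid_request_error` body shape and reserve 502 for genuine upstream failures.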
🧹 Nitpick comments (1)
tests/unit/claude-proxy-adapter.test.mjs (1)
**13-53**: Brittle region-extraction + `Function` eval makes tests refactor-sensitive.

The test couples to comment markers and source layout in `cli.js` (lines 13-53), so non-behavioral refactors can break the test harness. The four adapter functions (`buildBuiltinClaudeResponsesRequest`, `buildAnthropicMessageFromResponses`, `buildAnthropicStreamEvents`, `buildAnthropicModelsPayload`) are defined directly within the `//#region builtinClaudeProxyAdapter` block in `cli.js` (lines 7178-7564) and are not exported from a separate module. Extract these into a dedicated module and import them directly in both `cli.js` and the test file.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@tests/unit/claude-proxy-adapter.test.mjs` around lines 13 - 53, The test is brittle because it extracts source via extractByRegion and uses Function(...) eval to reconstruct the adapter; instead, move the four functions (buildBuiltinClaudeResponsesRequest, buildAnthropicMessageFromResponses, buildAnthropicStreamEvents, buildAnthropicModelsPayload) out of the // `#region` builtinClaudeProxyAdapter block into a new dedicated module (e.g., claude-proxy-adapter.js) that exports those named functions; update cli.js to import those functions and use them where previously defined, and update tests to directly import the new module and eliminate extractByRegion/Function eval, keeping any small test-only helpers (readNonNegativeInteger/isPlainObject) or import/compose them if needed so tests no longer rely on comment markers or eval.
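The extraction could look roughly like this. The module path and the trivial body are placeholders; the real implementations live inside `cli.js` today:

```javascript
// Hypothetical cli/claude-proxy-adapter.js: export the adapter functions so
// both cli.js and the tests can import them directly, with no region
// extraction or Function() eval.
function buildAnthropicModelsPayload(upstreamModels) {
  // Placeholder mapping for illustration only.
  const data = Array.isArray(upstreamModels && upstreamModels.data)
    ? upstreamModels.data
    : [];
  return { data: data.map((model) => ({ type: 'model', id: model.id })) };
}

// The real module would export all four adapter functions, e.g.:
// module.exports = { buildAnthropicModelsPayload, /* ...the other three */ };
```

The test file would then `require()` the module instead of slicing `cli.js` source by comment markers.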
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli.js`:
- Around line 7148-7164: In requestBuiltinClaudeProxyUpstream(), don't treat
invalid upstream JSON as a successful response; when JSON.parse(rawBody) throws,
reject the upstream promise (or resolve with an explicit error) instead of
setting payload = null so the /v1/messages handler cannot build a synthetic
Anthropic success from an empty payload. Change the JSON parsing block inside
the upstreamRes 'end' handler to catch parse errors and call reject(new
Error(...)) or resolve({ error: true, message: 'Invalid upstream JSON',
statusCode: upstreamRes.statusCode, rawBody }) so callers (e.g., the
/v1/messages handling logic) can surface a protocol/parsing error rather than
generating a fake success; apply the same change to the other identical block
around the 7675–7687 region.
---
Duplicate comments:
In `@cli.js`:
- Around line 7239-7269: The loop that iterates
normalizeAnthropicContentBlocks(message.content) must not fabricate text for
unsupported blocks; instead detect when a block is unrecognized (the current
fallback that pushes into buffered with type textType and text `[unsupported
anthropic block: ...]`) and immediately reject/throw a clear client error
including the offending block type and any identifying id; update the logic
around flushBuffered/target to throw or return an explicit error when
encountering such unsupported blocks (reference the loop using
normalizeAnthropicContentBlocks, the buffered/target arrays, flushBuffered, and
stringifyAnthropicToolResultContent/function_call handling) so multimodal or
future Anthropic payloads are not silently corrupted.
---
Nitpick comments:
In `@tests/unit/claude-proxy-adapter.test.mjs`:
- Around line 13-53: The test is brittle because it extracts source via
extractByRegion and uses Function(...) eval to reconstruct the adapter; instead,
move the four functions (buildBuiltinClaudeResponsesRequest,
buildAnthropicMessageFromResponses, buildAnthropicStreamEvents,
buildAnthropicModelsPayload) out of the // `#region` builtinClaudeProxyAdapter
block into a new dedicated module (e.g., claude-proxy-adapter.js) that exports
those named functions; update cli.js to import those functions and use them
where previously defined, and update tests to directly import the new module and
eliminate extractByRegion/Function eval, keeping any small test-only helpers
(readNonNegativeInteger/isPlainObject) or import/compose them if needed so tests
no longer rely on comment markers or eval.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 48b19e89-9fa6-4a9a-af60-8f200f466078
📒 Files selected for processing (15)
- cli.js
- tests/e2e/run.js
- tests/e2e/test-claude-proxy.js
- tests/e2e/test-config.js
- tests/unit/claude-proxy-adapter.test.mjs
- tests/unit/config-tabs-ui.test.mjs
- tests/unit/provider-share-command.test.mjs
- tests/unit/run.mjs
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/app.js
- web-ui/modules/app.computed.dashboard.mjs
- web-ui/modules/app.methods.providers.mjs
- web-ui/modules/app.methods.session-actions.mjs
- web-ui/partials/index/panel-config-codex.html
- web-ui/partials/index/panel-settings.html
💤 Files with no reviewable changes (1)
- web-ui/modules/app.computed.dashboard.mjs
✅ Files skipped from review due to trivial changes (1)
- tests/e2e/run.js
🚧 Files skipped from review as they are similar to previous changes (3)
- tests/unit/run.mjs
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/modules/app.methods.providers.mjs
📜 Review details
🔇 Additional comments (11)
tests/e2e/test-claude-proxy.js (2)
4-35: `requestRaw` still lacks a client timeout (can hang the suite).

This concern appears unchanged from earlier feedback and remains a reliability risk for stalled connections.

117-225: Claude-proxy E2E scenario coverage is comprehensive.

Start/status/models/non-stream/stream/mapping assertions plus cleanup provide solid end-to-end confidence for the new runtime.
web-ui/app.js (1)
207-207: Share-prefix state wiring looks correct.

Default state + mounted restore with normalization is consistent with the session-action methods and avoids invalid persisted values.
Also applies to: 363-363
web-ui/partials/index/panel-config-codex.html (1)
196-197: Provider share button gating is aligned with permission logic.

The loading/disabled/title bindings now reflect runtime eligibility and give clearer feedback.
Also applies to: 199-199
web-ui/partials/index/panel-settings.html (1)
156-167: New share-command prefix setting is integrated cleanly.

The selector values and change handler match the normalization/persistence contract used by command generation.
tests/e2e/test-config.js (1)
314-320: E2E updates correctly cover `local` as a normal provider.

Add/list/delete assertions provide good regression protection for the reserved-name policy change.
Also applies to: 1463-1468
tests/unit/config-tabs-ui.test.mjs (1)
158-161: UI parity tests were updated in the right places.

The new assertions cover both the device setting and dynamic share-button behavior.
Also applies to: 240-243
web-ui/modules/app.methods.session-actions.mjs (1)
156-175: Prefix-aware share-command implementation is coherent.

Normalization, persistence, and command construction paths are consistent and preserve backward-compatible defaults.
Also applies to: 282-282, 286-291, 305-305, 315-315
tests/unit/provider-share-command.test.mjs (1)
56-96: Expanded unit coverage is strong and targeted.

The new test paths validate both command-prefix behavior and provider/config edge cases introduced by this PR.
Also applies to: 159-159, 172-202, 1127-1256
tests/unit/claude-proxy-adapter.test.mjs (1)
1-6: Remove this suggestion: it would break the test suite.

The repository uses a custom test harness (`run.mjs`) that injects `test` into `globalThis` and collects tests into an array before execution (lines 7–8 and 55–64 of `run.mjs`). Importing `test` from `node:test` would bypass this harness entirely: tests registered with Node's test module would not be collected or executed by the custom runner, causing test failures.

This change would require either (1) migrating the entire test suite to use Node's built-in test runner, or (2) configuring the custom harness to work with both approaches, neither of which is addressed in the current PR.

> Likely an incorrect or invalid review comment.

cli.js (1)
13021-13027: Nice shutdown wiring for the new Claude proxy runtime.

Stopping the Claude-compatible proxy alongside the existing builtin proxy in both exit paths should prevent orphan listeners and stale local ports after `run` or MCP shutdown.

Also applies to: 15275-15284
Force-pushed: 5131ed5 to 31f4e28

Force-pushed: 31f4e28 to c1dc100
Actionable comments posted: 3
♻️ Duplicate comments (5)
cli.js (4)
7655-7663: ⚠️ Potential issue | 🟠 Major

Return 4xx for bad client requests instead of collapsing them into 502.

`readJsonRequestBody()` and `buildBuiltinClaudeResponsesRequest()` can fail on malformed JSON, oversized bodies, or missing required fields, but this path lets those exceptions fall through to the server-level 502 handler. Catch them locally and return `400`/`413 invalid_request_error`.

Suggested fix

```diff
-  const payload = await readJsonRequestBody(req);
-  const upstreamRequestBody = buildBuiltinClaudeResponsesRequest(payload);
+  let payload;
+  let upstreamRequestBody;
+  try {
+    payload = await readJsonRequestBody(req);
+    upstreamRequestBody = buildBuiltinClaudeResponsesRequest(payload);
+  } catch (err) {
+    const message = err && err.message ? err.message : 'invalid request';
+    const statusCode = /too large/i.test(message) ? 413 : 400;
+    writeAnthropicProxyError(res, statusCode, message, 'invalid_request_error');
+    return;
+  }
   const upstreamResponse = await requestBuiltinClaudeProxyUpstream(upstream, {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 7655 - 7663, The request handling block allows exceptions thrown by readJsonRequestBody(...) or buildBuiltinClaudeResponsesRequest(...) to bubble up and become a 502; wrap those two calls in a local try/catch, detect client errors (malformed JSON, too-large body, missing fields) and respond with appropriate 4xx (use 413 for oversized, 400 with an "invalid_request_error" code for malformed/missing fields) by setting the response status and error body instead of letting requestBuiltinClaudeProxyUpstream(...) run; ensure errors from requestBuiltinClaudeProxyUpstream still propagate to the existing 502 handler so only parse/validation errors are handled locally.
7239-7269: ⚠️ Potential issue | 🟠 Major

Fail fast on unsupported Anthropic content blocks.

Anthropic message content is not text-only here: official docs call out `text`, `image`, `tool_use`, and `tool_result` blocks. Rewriting unknown blocks into a literal placeholder string mutates the request and feeds fabricated content to the model instead of returning a clear client error. (docs.anthropic.com)
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 7239 - 7269, The code currently converts unsupported Anthropic blocks into a fabricated text placeholder; instead, update the loop that iterates normalizeAnthropicContentBlocks(message.content) to fail fast when encountering any block.type other than the supported ones ('text', 'tool_use', 'tool_result', and any explicitly handled types like 'image' if you intend to support it) by throwing or returning a clear, descriptive error (e.g., include the block.type and block id) rather than pushing a placeholder into buffered/target; adjust the handling around buffered, flushBuffered, and target so you flush buffered content before throwing if needed, and keep uses of safeJsonStringify and stringifyAnthropicToolResultContent unchanged for supported block types.
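A fail-fast check along these lines could replace the placeholder fallback. The supported-type set follows this finding; names and the error shape are illustrative:

```javascript
// Reject unknown Anthropic content blocks up front instead of rewriting
// them into placeholder text.
const SUPPORTED_BLOCK_TYPES = new Set(['text', 'tool_use', 'tool_result']);

function assertSupportedBlocks(blocks) {
  for (const block of blocks) {
    if (!block || !SUPPORTED_BLOCK_TYPES.has(block.type)) {
      const type = block && block.type ? block.type : 'unknown';
      const id = block && block.id ? ` (id: ${block.id})` : '';
      throw new Error(`unsupported anthropic content block: ${type}${id}`);
    }
  }
}
```

The thrown error would then be surfaced as a `400 invalid_request_error` by the handler rather than silently corrupting the forwarded payload.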
7148-7156: ⚠️ Potential issue | 🟠 Major

Reject malformed upstream JSON instead of treating it as success.

If upstream returns a 2xx with invalid JSON, this silently sets `payload = null`. The `/v1/messages` path then builds a synthetic Anthropic success from `{}` instead of surfacing a protocol error.

Suggested fix

```diff
       if (rawBody.trim()) {
         try {
           payload = JSON.parse(rawBody);
-        } catch (_) {
-          payload = null;
+        } catch (err) {
+          reject(new Error(`invalid upstream JSON: ${err.message}`));
+          return;
         }
       }
```
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 7148 - 7156, The current upstreamRes.on('end') handler swallows JSON parse errors by setting payload = null which leads the /v1/messages flow to construct a synthetic success from an empty object; change this so malformed JSON is treated as an error: when JSON.parse(rawBody) throws, propagate a failure (e.g., call the request's reject/error path or emit an error) instead of setting payload = null, and ensure the caller handling the response (the /v1/messages branch that inspects payload / constructs an Anthropic response) receives and surfaces that parse error rather than treating it as a 2xx success; look for rawBody, payload and the upstreamRes.on('end') block to implement this.
7127-7164: ⚠️ Potential issue | 🟠 Major

`stream: true` is still buffered instead of streamed.

This helper always accumulates the full upstream `/responses` body and resolves only on `end`, so Claude clients never receive incremental tokens. OpenAI’s Responses API streams over SSE only when `stream=true`, and those events need to be forwarded progressively rather than synthesized after completion. (platform.openai.com)
Verify each finding against the current code and only fix it if needed. In `@cli.js` around lines 7127 - 7164, The current transport.request handler always buffers the entire upstreamRes body (chunks + Buffer.concat in upstreamRes.on('end')) which prevents streaming when requestOptions.stream === true; change the logic in the transport.request callback (the upstreamReq/upstreamRes handling) to detect requestOptions.stream and, when true, forward upstreamRes headers/status immediately and stream upstreamRes 'data' events directly to the client (or caller) as they arrive instead of accumulating them: stop using chunks/Buffer.concat in that path, enforce MAX_API_BODY_SIZE incrementally (destroy upstreamReq if exceeded), handle upstreamRes 'error' and 'close' to end the forwarded stream and only resolve when the stream is finished or aborted, and keep the existing buffered JSON parsing/resolve behavior only for the non-streaming branch.tests/e2e/test-claude-proxy.js (1)
4-34: ⚠️ Potential issue | 🟠 Major

Add a client timeout to `requestRaw` to prevent E2E hangs.

`requestRaw` currently has no timeout, so a stalled socket can block the suite indefinitely.

Proposed fix

```diff
 function requestRaw(port, pathname, options = {}) {
   return new Promise((resolve, reject) => {
+    const timeoutMs = Number.isFinite(options.timeoutMs) ? options.timeoutMs : 10000;
+    let settled = false;
+    const finish = (fn, value) => {
+      if (settled) return;
+      settled = true;
+      fn(value);
+    };
     const body = options.body !== undefined ? JSON.stringify(options.body) : '';
     const req = http.request({
       hostname: '127.0.0.1',
       port,
@@
     }, (res) => {
       let responseBody = '';
       res.setEncoding('utf-8');
       res.on('data', chunk => responseBody += chunk);
-      res.on('end', () => resolve({
+      res.on('end', () => finish(resolve, {
         statusCode: res.statusCode || 0,
         headers: res.headers,
         body: responseBody
       }));
     });
-    req.on('error', reject);
+    req.on('error', (err) => finish(reject, err));
+    req.setTimeout(timeoutMs, () => {
+      req.destroy(new Error(`request timeout after ${timeoutMs}ms: ${pathname}`));
+    });
     if (body) { req.write(body); }
     req.end();
   });
 }
```

```bash
#!/bin/bash
# Verify whether requestRaw currently sets a request timeout.
sed -n '1,60p' tests/e2e/test-claude-proxy.js
rg -n 'req\.setTimeout|function requestRaw|http\.request' tests/e2e/test-claude-proxy.js
```
Verify each finding against the current code and only fix it if needed. In `@tests/e2e/test-claude-proxy.js` around lines 4 - 34, The requestRaw helper lacks a timeout so stalled sockets hang the E2E suite; update function requestRaw to set a client-side timeout on the http request (use req.setTimeout or a manual timer) with a reasonable default (e.g. 30_000 ms), on timeout abort/destroy the request (req.abort() or req.destroy(new Error('timeout'))) and reject the promise with a clear timeout error; ensure you clear the timer and remove/guard against double resolve/reject when the response 'end' or 'error' fires (reference symbols: function requestRaw, local variable req, response handlers res.on('data'/'end'), and req.on('error')).
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli.js`:
- Around line 6964-6982: The code currently selects currentProvider before
checking wire_api and can reject a usable alternate provider; fix by filtering
the candidate providers to only those whose normalizeWireApi(provider.wire_api)
=== 'responses' before calling resolveBuiltinProxyProviderName. Use
readConfigOrVirtualDefault()'s providers object, build a filteredProviders map
(keeping keys for which normalizeWireApi(...) returns 'responses'), then call
resolveBuiltinProxyProviderName(settings.provider, filteredProviders,
currentProvider) (or pass an empty currentProvider if the trimmed
currentProvider isn't in filteredProviders) and proceed to validate
providerName, BUILTIN_PROXY_PROVIDER_NAME, and provider from that
filteredProviders set.
In `@tests/unit/web-ui-behavior-parity.test.mjs`:
- Around line 462-465: The test currently asserts strict equality between
missingCurrentComputedKeys and allowedMissingCurrentComputedKeys which is
brittle; instead, mirror the earlier allowlist approach by building a Set from
allowedMissingCurrentComputedKeys, filter missingCurrentComputedKeys to find any
unexpected entries (e.g., unexpectedMissingCurrentComputedKeys =
missingCurrentComputedKeys.filter(k => !allowedMissingSet.has(k))) and assert
that this filtered list is empty; update the assertion that references
missingCurrentComputedKeys/allowedMissingCurrentComputedKeys to use this
allowlist-filter pattern (use the same naming style as
allowedExtraComputedKeySet/unexpectedExtraCurrentComputedKeys).
In `@web-ui/modules/app.methods.session-actions.mjs`:
- Around line 164-167: The helper getShareCommandPrefixInvocation() currently
returns 'npm start' which can drop subsequent CLI args; change its return to
'npm start --' when the normalized prefix isn't 'codexmate' so arguments are
reliably forwarded (use normalizeShareCommandPrefix(this.shareCommandPrefix) as
currently done); also update any other places that build the share command using
the same logic or directly returning 'npm start' (i.e., other usages of
getShareCommandPrefixInvocation / normalizeShareCommandPrefix /
shareCommandPrefix) so they consistently use 'npm start --' for non-'codexmate'
prefixes.
---
Duplicate comments:
In `@cli.js`:
- Around line 7655-7663: The request handling block allows exceptions thrown by
readJsonRequestBody(...) or buildBuiltinClaudeResponsesRequest(...) to bubble up
and become a 502; wrap those two calls in a local try/catch, detect client
errors (malformed JSON, too-large body, missing fields) and respond with
appropriate 4xx (use 413 for oversized, 400 with an "invalid_request_error" code
for malformed/missing fields) by setting the response status and error body
instead of letting requestBuiltinClaudeProxyUpstream(...) run; ensure errors
from requestBuiltinClaudeProxyUpstream still propagate to the existing 502
handler so only parse/validation errors are handled locally.
- Around line 7239-7269: The code currently converts unsupported Anthropic
blocks into a fabricated text placeholder; instead, update the loop that
iterates normalizeAnthropicContentBlocks(message.content) to fail fast when
encountering any block.type other than the supported ones ('text', 'tool_use',
'tool_result', and any explicitly handled types like 'image' if you intend to
support it) by throwing or returning a clear, descriptive error (e.g., include
the block.type and block id) rather than pushing a placeholder into
buffered/target; adjust the handling around buffered, flushBuffered, and target
so you flush buffered content before throwing if needed, and keep uses of
safeJsonStringify and stringifyAnthropicToolResultContent unchanged for
supported block types.
- Around line 7148-7156: The current upstreamRes.on('end') handler swallows JSON
parse errors by setting payload = null which leads the /v1/messages flow to
construct a synthetic success from an empty object; change this so malformed
JSON is treated as an error: when JSON.parse(rawBody) throws, propagate a
failure (e.g., call the request's reject/error path or emit an error) instead of
setting payload = null, and ensure the caller handling the response (the
/v1/messages branch that inspects payload / constructs an Anthropic response)
receives and surfaces that parse error rather than treating it as a 2xx success;
look for rawBody, payload and the upstreamRes.on('end') block to implement this.
- Around line 7127-7164: The current transport.request handler always buffers
the entire upstreamRes body (chunks + Buffer.concat in upstreamRes.on('end'))
which prevents streaming when requestOptions.stream === true; change the logic
in the transport.request callback (the upstreamReq/upstreamRes handling) to
detect requestOptions.stream and, when true, forward upstreamRes headers/status
immediately and stream upstreamRes 'data' events directly to the client (or
caller) as they arrive instead of accumulating them: stop using
chunks/Buffer.concat in that path, enforce MAX_API_BODY_SIZE incrementally
(destroy upstreamReq if exceeded), handle upstreamRes 'error' and 'close' to end
the forwarded stream and only resolve when the stream is finished or aborted,
and keep the existing buffered JSON parsing/resolve behavior only for the
non-streaming branch.
In `@tests/e2e/test-claude-proxy.js`:
- Around line 4-34: The requestRaw helper lacks a timeout so stalled sockets
hang the E2E suite; update function requestRaw to set a client-side timeout on
the http request (use req.setTimeout or a manual timer) with a reasonable
default (e.g. 30_000 ms), on timeout abort/destroy the request (req.abort() or
req.destroy(new Error('timeout'))) and reject the promise with a clear timeout
error; ensure you clear the timer and remove/guard against double resolve/reject
when the response 'end' or 'error' fires (reference symbols: function
requestRaw, local variable req, response handlers res.on('data'/'end'), and
req.on('error')).
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9c226a5f-1fd2-44fc-9db6-05c86c38cb70
📒 Files selected for processing (16)
- README.md
- cli.js
- tests/e2e/run.js
- tests/e2e/test-claude-proxy.js
- tests/e2e/test-config.js
- tests/unit/claude-proxy-adapter.test.mjs
- tests/unit/config-tabs-ui.test.mjs
- tests/unit/provider-share-command.test.mjs
- tests/unit/run.mjs
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/app.js
- web-ui/modules/app.computed.dashboard.mjs
- web-ui/modules/app.methods.providers.mjs
- web-ui/modules/app.methods.session-actions.mjs
- web-ui/partials/index/panel-config-codex.html
- web-ui/partials/index/panel-settings.html
💤 Files with no reviewable changes (1)
- web-ui/modules/app.computed.dashboard.mjs
✅ Files skipped from review due to trivial changes (4)
- README.md
- web-ui/partials/index/panel-config-codex.html
- web-ui/partials/index/panel-settings.html
- tests/unit/claude-proxy-adapter.test.mjs
🚧 Files skipped from review as they are similar to previous changes (7)
- tests/e2e/run.js
- web-ui/app.js
- tests/unit/run.mjs
- tests/e2e/test-config.js
- tests/unit/config-tabs-ui.test.mjs
- web-ui/modules/app.methods.providers.mjs
- tests/unit/provider-share-command.test.mjs
📜 Review details
🧰 Additional context used
🪛 GitHub Actions: ci
tests/unit/web-ui-behavior-parity.test.mjs
[error] 465-465: Assertion failed in web UI behavior parity test: captured bundled app skeleton only exposes expected data key drift versus parity baseline. Expected keys include 'isCurrentLocalProvider', 'localProviderEntry', 'localProxyListenUrl', 'localProxyUpstreamOptions' but actual output omitted them. (deepStrictEqual)
🔇 Additional comments (5)
web-ui/modules/app.methods.session-actions.mjs (2)
156-162: Normalization and persistence behavior is clean and defensive.

Allowlisting prefix values and persisting via `localStorage` with a guarded write is solid.

Also applies to: 169-175

315-315: Message simplification is consistent with generalized share restrictions.

No functional concerns with this wording update.
cli.js (2)
7275-7292: Add explicit handling for `tool_choice: "none"` to properly disable tool invocation.

When the mapper returns `undefined`, the `tool_choice` field is omitted from the request. If tools are present in the request, the API may invoke them by default. Explicitly returning `'none'` would disable tool invocation as intended by the caller.

Suggested fix

```diff
 function mapAnthropicToolChoiceToResponses(toolChoice) {
   if (!toolChoice) return undefined;
   if (typeof toolChoice === 'string') {
     if (toolChoice === 'auto') return 'auto';
     if (toolChoice === 'any') return 'required';
+    if (toolChoice === 'none') return 'none';
     return undefined;
   }
   if (!toolChoice || typeof toolChoice !== 'object') return undefined;
   const type = typeof toolChoice.type === 'string' ? toolChoice.type.trim().toLowerCase() : '';
   if (type === 'auto') return 'auto';
   if (type === 'any') return 'required';
+  if (type === 'none') return 'none';
   if (type === 'tool' && typeof toolChoice.name === 'string' && toolChoice.name.trim()) {
     return { type: 'function', name: toolChoice.name.trim() };
   }
   return undefined;
 }
```

tests/e2e/test-claude-proxy.js (1)
205-213: Nice coverage for adapter request mapping.

These assertions validate critical Anthropic→Responses transformations (auth, instructions, token mapping, tools, and tool_choice) and provide solid regression protection.
```js
getShareCommandPrefixInvocation() {
  const prefix = this.normalizeShareCommandPrefix(this.shareCommandPrefix);
  return prefix === 'codexmate' ? 'codexmate' : 'npm start';
},
```
🧩 Analysis chain
🌐 Web query:
In npm CLI, what is the correct way to pass arguments to the start script: `npm start foo` or `npm start -- foo`? Please cite npm docs and clarify behavior for both forms.
💡 Result:
The correct way to pass arguments to the start script is `npm start -- foo`. This uses the `--` separator to pass "foo" directly to the underlying command defined in the "start" script in package.json. `npm start foo` passes "foo" as a positional argument, which works if "foo" does not start with `-` or `--` (treated as plain text passed to the script). However, if "foo" is `--flag` or `-f`, npm may interpret it as its own option instead of passing it to the script.

Official npm documentation confirms this behavior for both `npm start` (alias for `npm run start`) and `npm run-script`:

- Synopsis: npm start [-- ] and npm run-script [-- ]
- "Use `--` to pass `--`-prefixed flags and options which would otherwise be parsed by npm."
- Example: `npm run test -- --grep="pattern"` passes `--grep="pattern"` to the test script.

This has been standard since npm@2.0.0. For `npm start foo` (no leading `-`), the argument reaches the script via `process.argv`. But `--` ensures reliable passing, especially for flags.
Citations:
- 1: https://docs.npmjs.com/cli/v9/commands/npm-run-script/?v=true
- 2: https://docs.npmjs.com/cli/v9/commands/npm-start/
- 3: https://docs.npmjs.com/cli/v8/commands/npm-run-script/
- 4: https://docs.npmjs.com/cli/v8/commands/npm-start/
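The behavior above can be seen with a minimal, hypothetical `package.json` (the script body is only for illustration):

```json
{
  "name": "demo",
  "scripts": {
    "start": "node server.js"
  }
}
```

With this script, `npm start serve` forwards `serve` because it has no leading dash, while `npm start --port 8080` lets npm consume `--port` as its own option; `npm start -- --port 8080` reliably forwards both tokens to `node server.js`.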
🏁 Script executed:
```bash
cat -n web-ui/modules/app.methods.session-actions.mjs | sed -n '156,175p'
```

Repository: SakuraByteCore/codexmate
Length of output: 995
🏁 Script executed:
```bash
cat -n web-ui/modules/app.methods.session-actions.mjs | sed -n '282,310p'
```

Repository: SakuraByteCore/codexmate
Length of output: 1829
Use `npm start --` to reliably pass arguments to the start script.

Line 166 should return `npm start --` instead of `npm start`. Per npm documentation (since npm@2.0.0), the `--` separator is required to ensure arguments are reliably passed to the underlying script rather than being parsed by npm itself. Commands like `npm start switch ...`, `npm start add ...`, and `npm start claude ...` need this separator for safe, compliant argument forwarding.
Suggested fix

```diff
 getShareCommandPrefixInvocation() {
   const prefix = this.normalizeShareCommandPrefix(this.shareCommandPrefix);
-  return prefix === 'codexmate' ? 'codexmate' : 'npm start';
+  return prefix === 'codexmate' ? 'codexmate' : 'npm start --';
 },
```

Also applies to: 286, 288, 290, 305
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@web-ui/modules/app.methods.session-actions.mjs` around lines 164 - 167, The
helper getShareCommandPrefixInvocation() currently returns 'npm start' which can
drop subsequent CLI args; change its return to 'npm start --' when the
normalized prefix isn't 'codexmate' so arguments are reliably forwarded (use
normalizeShareCommandPrefix(this.shareCommandPrefix) as currently done); also
update any other places that build the share command using the same logic or
directly returning 'npm start' (i.e., other usages of
getShareCommandPrefixInvocation / normalizeShareCommandPrefix /
shareCommandPrefix) so they consistently use 'npm start --' for non-'codexmate'
prefixes.
Force-pushed: 21fa16c to 56ace94
- add a builtin Claude-compatible Responses proxy runtime and coverage
- remove local builtin provider/config special casing across CLI and Web UI
- add a persisted share command prefix setting for provider and Claude share commands
Force-pushed: 56ace94 to e616edb
Actionable comments posted: 1
♻️ Duplicate comments (1)
web-ui/modules/app.methods.session-actions.mjs (1)
164-167: ⚠️ Potential issue | 🟠 Major

Use `npm start --` for shared command invocation.

At Line 166, returning `npm start` (without `--`) makes downstream commands at Lines 286/288/290/305 less reliable for argument forwarding through npm scripts. This should be `npm start --`.

Suggested fix

```diff
 getShareCommandPrefixInvocation() {
   const prefix = this.normalizeShareCommandPrefix(this.shareCommandPrefix);
-  return prefix === 'codexmate' ? 'codexmate' : 'npm start';
+  return prefix === 'codexmate' ? 'codexmate' : 'npm start --';
 },
```

Verification query: in current npm CLI docs for `npm start` / `npm run-script`, is `npm start -- <args>` the recommended form to reliably forward arguments to the underlying start script?

Also applies to: 282-282, 286-290, 305-305
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@web-ui/modules/app.methods.session-actions.mjs` around lines 164 - 167, The getShareCommandPrefixInvocation function returns 'npm start' which prevents proper argument forwarding; update it to return 'npm start --' when prefix !== 'codexmate', and also replace other literal uses of 'npm start' related to share command construction (the occurrences around the code that build share invocations at the call sites referenced in the diff, lines near where the share command is assembled) with 'npm start --' so downstream argument forwarding works; look for getShareCommandPrefixInvocation and any direct string 'npm start' used to form share commands and change them to include the '--' separator.
🧹 Nitpick comments (5)
cli/claude-proxy.js (3)
13-15: Duplicate utility function: `isPlainObject` is redefined locally.

This function is also injected as a dependency in `cli/builtin-proxy.js`. For consistency, consider importing it from a shared utility module or receiving it via dependency injection, similar to the sibling controller.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/claude-proxy.js` around lines 13 - 15, The local duplicate utility function isPlainObject should be removed and the module should instead reuse the shared implementation used by the sibling controller; replace the local definition with an import or dependency injection of the canonical isPlainObject (the same source used by cli/builtin-proxy.js) and update any callers in claude-proxy.js to use that imported/DI-provided function to ensure consistency.
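One plausible shape for the shared helper. The module path is hypothetical, and the predicate body mirrors a common `isPlainObject` definition rather than the exact one in `cli/claude-proxy.js`:

```javascript
// Hypothetical cli/utils.js: single definition, consumed by both
// cli/claude-proxy.js and cli/builtin-proxy.js instead of local copies.
function isPlainObject(value) {
  return Boolean(value) && typeof value === 'object' && !Array.isArray(value);
}

// Consumers would then import it, e.g.:
// const { isPlainObject } = require('./utils');
```

Centralizing the predicate keeps both proxy controllers agreeing on what counts as a plain object, which matters when the same payloads flow through both.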
596-603: Promise may resolve after rejection when body exceeds size limit.

When the body exceeds `maxBytes`, the code calls `reject()` and `req.destroy()`. However, the `'end'` event listener remains attached and may still fire (especially if data was already buffered), potentially calling `resolve()` after `reject()`. While Node.js ignores subsequent promise resolutions, this is a code smell.

Consider using a flag or removing listeners:
♻️ Suggested fix
return new Promise((resolve, reject) => { const chunks = []; let total = 0; + let rejected = false; req.on('data', (chunk) => { total += chunk.length; if (total > maxBytes) { + rejected = true; reject(new Error(`request body too large (${maxBytes} bytes max)`)); try { req.destroy(); } catch (_) {} return; } chunks.push(chunk); }); req.on('error', reject); req.on('end', () => { + if (rejected) return; const raw = Buffer.concat(chunks).toString('utf-8').trim();🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/claude-proxy.js` around lines 596 - 603, When the request body exceeds maxBytes in the req.on('data') handler the code calls reject(...) and destroys the request, but the 'end' listener (attached elsewhere) can still run and call resolve; fix this by adding a local aborted flag (e.g., bodyTooLarge/aborted) or by removing the 'data'/'end' listeners immediately after reject to prevent the 'end' handler from resolving the promise; update the handlers that reference chunks/total/maxBytes (the req.on('data') and the corresponding req.on('end') handler) to check that flag (or that listeners were removed) before calling resolve or pushing chunks.
933-961: Consider extracting shared shutdown logic. This shutdown pattern is nearly identical to `stopBuiltinProxyRuntime` in `cli/builtin-proxy.js`. For DRY, consider extracting a shared `stopHttpServerRuntime(currentRuntime)` utility. This is optional given the "Chill" review mode.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/claude-proxy.js` around lines 933 - 961, The shutdown sequence in stopBuiltinClaudeProxyRuntime duplicates stopBuiltinProxyRuntime; extract a shared utility stopHttpServerRuntime(currentRuntime) that encapsulates closing currentRuntime.server with the Promise+timeout handshake, destroying sockets in currentRuntime.connections, and clearing connections; update stopBuiltinClaudeProxyRuntime and stopBuiltinProxyRuntime to set their module-level runtime to null, then call await stopHttpServerRuntime(currentRuntime) and return the same { success: true, running: false } result, ensuring the utility references the same symbols (currentRuntime.server, currentRuntime.connections) used in the diff.

cli/auth-profiles.js (1)
196-201: `Buffer.from` with base64 does not throw on invalid input. `Buffer.from(string, 'base64')` silently ignores invalid characters rather than throwing. The try-catch here won't catch malformed base64 — it will produce a garbage buffer that fails later at JSON.parse. The current behavior is still safe (the error will surface at `parseAuthProfileJson`), but the error message at line 200 is misleading. Consider removing the try-catch or validating base64 format explicitly if you want accurate error messages.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/auth-profiles.js` around lines 196 - 201, The try-catch around Buffer.from(fileBase64, 'base64') is ineffective because Buffer.from won't throw on malformed base64; update the logic around fileBase64 and buffer to either remove the try-catch and let parseAuthProfileJson handle JSON errors, or add an explicit base64 validation before decoding (e.g., validate fileBase64 with a base64 regex and length%4 check) and only then call Buffer.from; reference the fileBase64 variable, the buffer assignment, and ensure downstream error handling remains via parseAuthProfileJson so the error message accurately reflects an invalid base64 payload.

cli.js (1)
3608-3608: Use `BUILTIN_PROXY_PROVIDER_NAME` in guard/error text instead of hard-coded literals. This avoids drift if the managed provider name changes and removes duplicated strings.

♻️ Suggested cleanup

```diff
- return { error: 'codexmate-proxy 为保留名称,不可编辑' };
+ return { error: `${BUILTIN_PROXY_PROVIDER_NAME} 为保留名称,不可编辑` };
```

```diff
- const msg = 'codexmate-proxy 为保留名称,不可删除';
+ const msg = `${BUILTIN_PROXY_PROVIDER_NAME} 为保留名称,不可删除`;
```

```diff
- const msg = 'codexmate-proxy 为保留名称,不可编辑';
+ const msg = `${BUILTIN_PROXY_PROVIDER_NAME} 为保留名称,不可编辑`;
```

Also applies to: 3651-3651, 8437-8437
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli.js` at line 3608, Several places return error messages using the hard-coded literal "codexmate-proxy" (for example the return { error: 'codexmate-proxy 为保留名称,不可编辑' } instance); replace those string literals with the shared constant BUILTIN_PROXY_PROVIDER_NAME so the guard/error text reads something like `return { error: \`${BUILTIN_PROXY_PROVIDER_NAME} 为保留名称,不可编辑\` }` and update the other two occurrences (the ones flagged around the same check) to use BUILTIN_PROXY_PROVIDER_NAME instead of the duplicated literal.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/auth-profiles.js`:
- Around line 293-309: When deleting a current profile, if the auto-switch block
for registry.current (the try/catch that reads from AUTH_PROFILES_DIR, calls
parseAuthProfileJson and writeJsonAtomic) fails, ensure AUTH_FILE is also
cleared or removed so it doesn't retain stale credentials; update the catch
branch where registry.current is set to '' to additionally clear AUTH_FILE (e.g.
remove the file or write an empty/clean auth object via writeJsonAtomic) and
keep switchedTo consistent (empty) to avoid leaving an out-of-sync active auth
file.
---
Duplicate comments:
In `@web-ui/modules/app.methods.session-actions.mjs`:
- Around line 164-167: The getShareCommandPrefixInvocation function returns 'npm
start' which prevents proper argument forwarding; update it to return 'npm start
--' when prefix !== 'codexmate', and also replace other literal uses of 'npm
start' related to share command construction (the occurrences around the code
that build share invocations at the call sites referenced in the diff, lines
near where the share command is assembled) with 'npm start --' so downstream
argument forwarding works; look for getShareCommandPrefixInvocation and any
direct string 'npm start' used to form share commands and change them to include
the '--' separator.
---
Nitpick comments:
In `@cli.js`:
- Line 3608: Several places return error messages using the hard-coded literal
"codexmate-proxy" (for example the return { error: 'codexmate-proxy 为保留名称,不可编辑'
} instance); replace those string literals with the shared constant
BUILTIN_PROXY_PROVIDER_NAME so the guard/error text reads something like `return
{ error: \`${BUILTIN_PROXY_PROVIDER_NAME} 为保留名称,不可编辑\` }` and update the other
two occurrences (the ones flagged around the same check) to use
BUILTIN_PROXY_PROVIDER_NAME instead of the duplicated literal.
In `@cli/auth-profiles.js`:
- Around line 196-201: The try-catch around Buffer.from(fileBase64, 'base64') is
ineffective because Buffer.from won't throw on malformed base64; update the
logic around fileBase64 and buffer to either remove the try-catch and let
parseAuthProfileJson handle JSON errors, or add an explicit base64 validation
before decoding (e.g., validate fileBase64 with a base64 regex and length%4
check) and only then call Buffer.from; reference the fileBase64 variable, the
buffer assignment, and ensure downstream error handling remains via
parseAuthProfileJson so the error message accurately reflects an invalid base64
payload.
In `@cli/claude-proxy.js`:
- Around line 13-15: The local duplicate utility function isPlainObject should
be removed and the module should instead reuse the shared implementation used by
the sibling controller; replace the local definition with an import or
dependency injection of the canonical isPlainObject (the same source used by
cli/builtin-proxy.js) and update any callers in claude-proxy.js to use that
imported/DI-provided function to ensure consistency.
- Around line 596-603: When the request body exceeds maxBytes in the
req.on('data') handler the code calls reject(...) and destroys the request, but
the 'end' listener (attached elsewhere) can still run and call resolve; fix this
by adding a local aborted flag (e.g., bodyTooLarge/aborted) or by removing the
'data'/'end' listeners immediately after reject to prevent the 'end' handler
from resolving the promise; update the handlers that reference
chunks/total/maxBytes (the req.on('data') and the corresponding req.on('end')
handler) to check that flag (or that listeners were removed) before calling
resolve or pushing chunks.
- Around line 933-961: The shutdown sequence in stopBuiltinClaudeProxyRuntime
duplicates stopBuiltinProxyRuntime; extract a shared utility
stopHttpServerRuntime(currentRuntime) that encapsulates closing
currentRuntime.server with the Promise+timeout handshake, destroying sockets in
currentRuntime.connections, and clearing connections; update
stopBuiltinClaudeProxyRuntime and stopBuiltinProxyRuntime to set their
module-level runtime to null, then call await
stopHttpServerRuntime(currentRuntime) and return the same { success: true,
running: false } result, ensuring the utility references the same symbols
(currentRuntime.server, currentRuntime.connections) used in the diff.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: dad5e3e1-6f31-4f29-aa15-46c0f4d8cb7d
📒 Files selected for processing (19)
- README.md
- cli.js
- cli/auth-profiles.js
- cli/builtin-proxy.js
- cli/claude-proxy.js
- tests/e2e/run.js
- tests/e2e/test-claude-proxy.js
- tests/e2e/test-config.js
- tests/unit/claude-proxy-adapter.test.mjs
- tests/unit/config-tabs-ui.test.mjs
- tests/unit/provider-share-command.test.mjs
- tests/unit/run.mjs
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/app.js
- web-ui/modules/app.computed.dashboard.mjs
- web-ui/modules/app.methods.providers.mjs
- web-ui/modules/app.methods.session-actions.mjs
- web-ui/partials/index/panel-config-codex.html
- web-ui/partials/index/panel-settings.html
💤 Files with no reviewable changes (1)
- web-ui/modules/app.computed.dashboard.mjs
✅ Files skipped from review due to trivial changes (5)
- web-ui/partials/index/panel-config-codex.html
- web-ui/partials/index/panel-settings.html
- README.md
- tests/unit/provider-share-command.test.mjs
- tests/unit/claude-proxy-adapter.test.mjs
🚧 Files skipped from review as they are similar to previous changes (7)
- tests/e2e/run.js
- tests/unit/config-tabs-ui.test.mjs
- tests/e2e/test-config.js
- web-ui/modules/app.methods.providers.mjs
- tests/e2e/test-claude-proxy.js
- tests/unit/web-ui-behavior-parity.test.mjs
- web-ui/app.js
📜 Review details
🔇 Additional comments (25)
tests/unit/run.mjs (1)
44-44: Good test registration update. Line 44 follows the existing runner pattern, so `claude-proxy-adapter.test.mjs` is now included in the unit suite as intended.

web-ui/modules/app.methods.session-actions.mjs (3)
156-162: Prefix normalization is clean and safe. Lines 156–162 correctly constrain persisted values to the two supported modes and provide a safe fallback.
169-175: Persistence path looks good. Lines 169–175 correctly normalize before writing to `localStorage`, preventing invalid stored state.
315-315: Disallow-share message is now consistent with provider cleanup. Line 315's generic 不可分享 ("not shareable") text matches the removal of local-provider special casing.

cli/auth-profiles.js (4)
1-27: LGTM - Well-structured dependency injection with proper validation. The factory pattern with explicit dependency validation provides good testability and clear error messages when dependencies are missing.

28-37: LGTM - Proper filename sanitization. The function correctly handles invalid characters and length limits for filesystem-safe profile names.

78-94: LGTM - Proper credential validation. Good security practice requiring at least one valid credential field before accepting an auth profile.

317-336: LGTM - Graceful degradation for token resolution. Returning an empty string on any failure is appropriate for a fallback auth mechanism, though you may want to add debug logging in the catch block for troubleshooting.
cli/builtin-proxy.js (6)
1-65: LGTM - Proper dependency injection with comprehensive validation. The factory pattern and thorough dependency checks ensure all required functions and constants are available before proceeding.

68-98: LGTM - Reasonable port availability checking. The TOCTOU race condition (port could be taken between check and actual listen) is an inherent limitation of this approach, but acceptable since `startBuiltinProxyRuntime` will handle the actual bind failure gracefully.
408-418: LGTM - Correct header handling for streaming proxy. Deleting `host`, `connection`, and `content-length` before piping is correct. The transport layer will handle Transfer-Encoding properly. Good addition of `x-codexmate-proxy` and `x-forwarded-for` headers.
512-540: LGTM - Proper shutdown sequence. Setting `runtime = null` before closing prevents new operations. The 1-second timeout prevents hanging, and socket cleanup ensures forceful termination of lingering connections.
247-258: LGTM - Correct range-based string manipulation. Sorting ranges in reverse order (line 248) before splicing is the correct approach to avoid offset invalidation. The deduplication via the `seen` Set prevents removing overlapping ranges twice.
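The approved technique (descending-order splicing plus a dedup set) can be sketched generically; this is an illustrative reimplementation, not the project's function:

```javascript
// Remove [start, end) ranges from a string by splicing in descending start
// order, so earlier removals never shift the offsets of ranges still to come.
function removeRanges(text, ranges) {
  const seen = new Set();
  const sorted = [...ranges].sort((a, b) => b.start - a.start); // reverse order
  let out = text;
  for (const r of sorted) {
    const key = `${r.start}:${r.end}`;
    if (seen.has(key)) continue; // skip duplicate ranges
    seen.add(key);
    out = out.slice(0, r.start) + out.slice(r.end);
  }
  return out;
}

// "0123456789" minus [2,4) and [6,8) leaves "014589".
console.log(removeRanges('0123456789', [{ start: 2, end: 4 }, { start: 6, end: 8 }]));
```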
319-327: Field naming mismatch: `preferred_auth_method` stores the actual auth token, not a method indicator. Line 323 reads `provider.preferred_auth_method` as a token value. Despite its name suggesting a method/strategy (e.g., "oauth", "api_key"), this field is consistently used throughout the codebase to store the actual authentication credential (e.g., "sk-xxxx"). While the naming is unconventional, this is the established pattern for this codebase's schema and works as designed.

cli/claude-proxy.js (5)
87-134: LGTM - Correct message format translation. The buffering pattern for grouping consecutive text content and the fallback for missing `call_id` using `crypto.randomBytes` are well implemented.
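The buffering idea can be illustrated with a generic sketch (`groupConsecutiveText` is hypothetical, not the project's converter):

```javascript
// Walk mixed content parts and merge consecutive text entries into one block,
// flushing the buffer whenever a non-text part (e.g. a tool call) interrupts.
function groupConsecutiveText(parts) {
  const out = [];
  let textBuf = [];
  const flush = () => {
    if (textBuf.length > 0) {
      out.push({ type: 'text', text: textBuf.join('') });
      textBuf = [];
    }
  };
  for (const part of parts) {
    if (part.type === 'text') textBuf.push(part.text);
    else { flush(); out.push(part); }
  }
  flush(); // emit any trailing text run
  return out;
}

const grouped = groupConsecutiveText([
  { type: 'text', text: 'Hello ' },
  { type: 'text', text: 'world' },
  { type: 'tool_use', name: 'lookup' },
  { type: 'text', text: '!' },
]);
console.log(grouped.length); // 3
```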
156-219: LGTM - Proper request transformation. Good validation of required fields and sensible defaults. The tool schema conversion correctly maps `input_schema` to `parameters`.
758-862: LGTM - Proper async error propagation. Errors from `readJsonRequestBody` and `buildBuiltinClaudeResponsesRequest` propagate to the outer `.catch()` handler in `createBuiltinClaudeProxyServer`, which returns appropriate 502 errors.
541-545: LGTM - Correct API compatibility check. The Claude proxy requires a `responses` wire API upstream. This validation ensures incompatible providers are rejected early with a clear error message.
741-756: Note: This is simulated streaming, not true streaming. The implementation builds all SSE events from a fully-buffered response before writing them. This provides Anthropic streaming API compatibility but doesn't reduce time-to-first-byte. This is consistent with the non-streaming upstream request pattern used in `requestBuiltinClaudeProxyUpstream`.

cli.js (6)
6120-6166: Controller extraction is clean, but please re-verify Claude upstream compatibility filtering. Because the upstream resolution logic moved behind `createBuiltinClaudeProxyRuntimeController(...)`, please confirm it still filters to `wire_api = "responses"` candidates before selecting a provider.

```bash
#!/bin/bash
# Verify Claude proxy upstream selection still enforces responses-compatible providers.
set -euo pipefail

echo "== Locate claude proxy controller =="
fd -i "claude-proxy.js" cli
echo

echo "== Inspect upstream/provider resolution code paths =="
rg -n -C3 "resolveBuiltinProxyProviderName|normalizeWireApi|wire_api|responses|provider" cli/claude-proxy.js
echo

echo "== Inspect start/runtime entrypoints for selection flow =="
rg -n -C4 "startBuiltinClaudeProxyRuntime|resolve.*upstream|upstream.*provider|provider.*upstream" cli/claude-proxy.js
```

Expected result: upstream candidate selection should explicitly constrain providers to Responses-compatible ones (directly or via equivalent guard), not only by name existence.
547-560: Managed-provider boundary is clean after removing `local` special-casing. This keeps protection logic centralized and makes `local` behave like a normal provider name.

3185-3205: Virtual-`local` fallback sanitization is a solid compatibility guard. Good handling for legacy configs where `model_provider = "local"` exists without a concrete `model_providers.local`.
7506-7513: Managed-provider filtering is consistently applied across export/import and UI/MCP views. Nice consistency win: shared behavior now aligns across data export, data import, and read-only rendering.
Also applies to: 7638-7641, 7672-7675, 10722-10724, 12266-12268
11010-11018: Claude proxy control surface parity (Web + MCP) looks complete. Status/start/stop are exposed consistently across both interfaces.
Also applies to: 13048-13054, 13327-13344, 13354-13360
11388-11390: Shutdown now cleans both runtimes — good reliability improvement. Stopping both proxy runtimes on exit reduces orphan-process risk.
Also applies to: 13644-13647
```js
if (registry.current === profileName) {
  if (registry.items.length > 0) {
    const next = registry.items[0];
    try {
      const nextPath = path.join(AUTH_PROFILES_DIR, next.fileName || `${next.name}.json`);
      const raw = fs.readFileSync(nextPath, 'utf-8');
      const nextData = parseAuthProfileJson(raw, next.fileName || `${next.name}.json`);
      writeJsonAtomic(AUTH_FILE, nextData);
      registry.current = next.name;
      switchedTo = next.name;
    } catch (e) {
      registry.current = '';
    }
  } else {
    registry.current = '';
  }
}
```
Potential inconsistency: AUTH_FILE may remain stale when auto-switching fails.
If the deleted profile was current and loading the next profile fails (line 303-304), registry.current is cleared but AUTH_FILE is not updated or cleared. This leaves the active auth file containing credentials for a profile that's no longer registered, which could cause confusion or unexpected behavior.
Consider clearing or removing AUTH_FILE when auto-switch fails:
♻️ Suggested fix
```diff
 } catch (e) {
   registry.current = '';
+  try { fs.unlinkSync(AUTH_FILE); } catch (_) {}
 }
```
Verify each finding against the current code and only fix it if needed.
In `@cli/auth-profiles.js` around lines 293 - 309, When deleting a current
profile, if the auto-switch block for registry.current (the try/catch that reads
from AUTH_PROFILES_DIR, calls parseAuthProfileJson and writeJsonAtomic) fails,
ensure AUTH_FILE is also cleared or removed so it doesn't retain stale
credentials; update the catch branch where registry.current is set to '' to
additionally clear AUTH_FILE (e.g. remove the file or write an empty/clean auth
object via writeJsonAtomic) and keep switchedTo consistent (empty) to avoid
leaving an out-of-sync active auth file.
Actionable comments posted: 3
🧹 Nitpick comments (8)
cli/config-bootstrap.js (3)
282-285: Silent catch block may hide permission errors. If `readFileSync` fails (e.g., permission denied), `existingContent` stays empty and the config is treated as lacking the managed marker. This could trigger an unintended backup flow. Consider logging the error or propagating it.

🔧 Suggested improvement

```diff
 try {
   existingContent = fs.readFileSync(CONFIG_FILE, 'utf-8');
-} catch (e) {}
+} catch (e) {
+  console.error(`警告: 读取配置文件内容失败: ${e.message}`);
+}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/config-bootstrap.js` around lines 282 - 285, The try/catch around fs.readFileSync(CONFIG_FILE, 'utf-8') silently swallows errors so existingContent may stay empty and trigger unintended backup flows; modify the catch to either rethrow the error or log it with context (including CONFIG_FILE and the caught error) before continuing. Update the block that reads CONFIG_FILE (the existingContent variable assignment) to handle permission/IO errors explicitly—e.g., use processLogger.error or throw the caught Error—so callers of the read logic can make correct decisions based on real failures rather than an empty existingContent.
36-61: Consider validating numeric dependencies. The validation is thorough for functions and strings but omits checks for `MAX_RECENT_CONFIGS`, `DEFAULT_MODEL_CONTEXT_WINDOW`, and `DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT`. If these are `undefined` or non-numeric, it could cause subtle issues (e.g., `slice(0, undefined)` returns all elements, and template interpolation would embed `undefined`).

🔧 Proposed validation additions

```diff
 if (typeof EMPTY_CONFIG_FALLBACK_TEMPLATE !== 'string') throw new Error('createConfigBootstrapController 缺少 EMPTY_CONFIG_FALLBACK_TEMPLATE');
+if (typeof MAX_RECENT_CONFIGS !== 'number' || MAX_RECENT_CONFIGS <= 0) throw new Error('createConfigBootstrapController 缺少有效的 MAX_RECENT_CONFIGS');
+if (typeof DEFAULT_MODEL_CONTEXT_WINDOW !== 'number') throw new Error('createConfigBootstrapController 缺少有效的 DEFAULT_MODEL_CONTEXT_WINDOW');
+if (typeof DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT !== 'number') throw new Error('createConfigBootstrapController 缺少有效的 DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT');
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/config-bootstrap.js` around lines 36 - 61, Add numeric validation for MAX_RECENT_CONFIGS, DEFAULT_MODEL_CONTEXT_WINDOW, and DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT in createConfigBootstrapController: each should be checked (like the other dependency checks) to ensure typeof === 'number' and Number.isFinite(value) (and optionally >= 0 where negative doesn't make sense) and throw a descriptive Error if not; reference the constants MAX_RECENT_CONFIGS, DEFAULT_MODEL_CONTEXT_WINDOW, and DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT when adding these checks so they mirror the existing pattern used for functions and string constants.
142-169: Edge case: empty fallback when no providers remain. When `BUILTIN_PROXY_PROVIDER_NAME` is the only configured provider, after deletion `fallbackProvider` and `fallbackModel` will both be empty strings. This could leave the config in an unusable state. If this edge case is intentional (e.g., handled downstream), this is fine. Otherwise, consider logging a warning or falling back to the virtual default config.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/config-bootstrap.js` around lines 142 - 169, sanitizeRemovedBuiltinProxyProvider can leave model_provider/model empty when BUILTIN_PROXY_PROVIDER_NAME was the only provider; to fix, detect when providerNames.length === 0 and handle as an explicit fallback: if readCurrentModels() contains a 'local' entry use 'local' as fallbackProvider and set fallbackModel accordingly, otherwise add a virtual default provider entry to nextProviders (e.g., nextProviders.local = { virtual: true } or similar) and set model_provider/model to that virtual default; also emit a console.warn (or use the repo logger) indicating no providers remained so maintainers can notice the fallback action; update references to BUILTIN_PROXY_PROVIDER_NAME, model_providers, model_provider, model, nextProviders and readCurrentModels() in sanitizeRemovedBuiltinProxyProvider accordingly.

cli/openclaw-config.js (1)
90-121: Consider a lookup table for provider aliases. The chain of `if` statements works but is harder to maintain as providers are added. A static map would be more concise and easier to extend.

♻️ Suggested refactor

```diff
+const PROVIDER_ALIAS_MAP = {
+  'modelstudio': 'qwen',
+  'qwencloud': 'qwen',
+  'z.ai': 'zai',
+  'z-ai': 'zai',
+  'opencode-zen': 'opencode',
+  'opencode-go-auth': 'opencode-go',
+  'kimi': 'kimi',
+  'kimi-code': 'kimi',
+  'kimi-coding': 'kimi',
+  'bedrock': 'amazon-bedrock',
+  'aws-bedrock': 'amazon-bedrock',
+  'bytedance': 'volcengine',
+  'doubao': 'volcengine',
+  'volcengine-plan': 'volcengine',
+  'byteplus-plan': 'byteplus'
+};
 function normalizeOpenclawProviderId(provider) {
   const normalized = typeof provider === 'string' ? provider.trim().toLowerCase() : '';
   if (!normalized) return '';
-  if (normalized === 'modelstudio' || normalized === 'qwencloud') {
-    return 'qwen';
-  }
-  // ... rest of if-else chain
+  return PROVIDER_ALIAS_MAP[normalized] || normalized;
 }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/openclaw-config.js` around lines 90 - 121, Replace the long if-chain in normalizeOpenclawProviderId with a static lookup map: first compute the normalized key (trim + toLowerCase, return '' if empty), then consult an aliases object that maps all known variants ('modelstudio','qwencloud'->'qwen', 'z.ai','z-ai'->'zai', 'opencode-zen'->'opencode', 'opencode-go-auth'->'opencode-go', 'kimi','kimi-code','kimi-coding'->'kimi', 'bedrock','aws-bedrock'->'amazon-bedrock', 'bytedance','doubao','volcengine-plan'->'volcengine', 'byteplus-plan'->'byteplus', etc.), and return aliases[normalized] if present otherwise return normalized; keep function name normalizeOpenclawProviderId unchanged.

cli/archive-helpers.js (3)
270-276: Base64 validation may not catch all invalid inputs. `Buffer.from(data, 'base64')` is lenient and silently ignores invalid characters rather than throwing. The try-catch won't reliably catch malformed base64. Invalid input will produce an unexpected buffer that will fail during ZIP extraction with a less clear error. Consider validating the base64 format upfront if clearer error messages are desired:

```js
const base64Regex = /^[A-Za-z0-9+/]*={0,2}$/;
if (!base64Regex.test(base64.replace(/\s/g, ''))) {
  return { error: '备份文件内容不是有效的 base64 编码' };
}
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/archive-helpers.js` around lines 270 - 276, The writeUploadZip function currently relies on Buffer.from(..., 'base64') which is permissive; add an explicit base64 format check before creating the buffer: strip whitespace from the input, validate with a base64 regex (allow A-Za-z0-9+/ and up to two trailing =) and ensure length is a multiple of 4 (or otherwise invalid), and return the existing error object if validation fails so malformed input is rejected with a clear message; then proceed to call Buffer.from only after this validation in writeUploadZip.
420-421: Consider including backup path in error response for recovery. If `copyDirRecursive` fails after `targetDir` is deleted (line 410), the user has no easy way to locate the backup. Including `backupPath` in the error response would aid manual recovery.

💡 Suggested improvement

```diff
 } catch (e) {
-  return { error: `导入失败:${e.message}` };
+  return { error: `导入失败:${e.message}`, backupPath };
 } finally {
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/archive-helpers.js` around lines 420 - 421, The catch block currently returns only an error message, leaving out the backup location; update the error return in the catch (where copyDirRecursive may fail after targetDir is deleted) to include the backup path so users can recover manually—i.e., return an object like { error: `导入失败:${e.message}`, backupPath: backupPath } (use the existing backupPath variable used earlier around copyDirRecursive/targetDir) so callers receive the path for recovery.
343-345: Refactor shell command construction to use safer APIs. The shell command is constructed via string interpolation with `execSync`. While current callers (`prepareDirectoryDownload` and `backupDirectoryIfExists`) only use hardcoded internal paths (CLAUDE_DIR, CONFIG_DIR), this pattern is fragile. If these functions are called with untrusted input in the future, shell metacharacters could be interpreted. Consider using `spawnSync` with array arguments instead.

Safer alternative using spawnSync

```diff
-const cmd = `"${zipTool.cmd}" -0 -q -r "${zipFilePath}" "${dirPath}"`;
-execSync(cmd, { stdio: 'ignore' });
+const { spawnSync } = require('child_process');
+spawnSync(zipTool.cmd, ['-0', '-q', '-r', zipFilePath, dirPath], { stdio: 'ignore' });
```

This approach avoids shell interpretation entirely and is more robust for code reuse.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/archive-helpers.js` around lines 343 - 345, The code builds a shell string and calls execSync in the zip branch (zipTool.type === 'zip') which risks shell injection; update the zip creation to use child_process.spawnSync with an argument array instead of execSync and string interpolation: locate the branch using zipTool.type and zipTool.cmd (referenced from functions prepareDirectoryDownload and backupDirectoryIfExists), call spawnSync(zipTool.cmd, ['-0','-q','-r', zipFilePath, dirPath], { stdio: 'ignore' }), and remove the quoted/concatenated command string so paths are passed as separate args to avoid shell interpretation.

cli/zip-commands.js (1)
119-119: Dead code: `resolveUnzipTool()` result is unused. The return value is discarded and the function always returns a fixed value. Consider removing the call or adding a comment explaining why it's invoked (e.g., for future extensibility).
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@cli/zip-commands.js` at line 119, The call to resolveUnzipTool() in the top-level flow is dead because its return value is discarded and the function returns a fixed value; either remove the call entirely or use its result (e.g., store into a variable like unzipTool = resolveUnzipTool() and pass it where needed) or replace it with a short explanatory comment above the call stating it's intentionally invoked for side-effects/future extensibility — update the call site around resolveUnzipTool() accordingly and ensure no linter warnings remain.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@cli/openclaw-config.js`:
- Around line 592-594: Replace the non-atomic fs.writeFileSync call that writes
OPENCLAW_CONFIG_FILE with an atomic write to avoid corruption: after
ensureDir(OPENCLAW_DIR) and backupFileIfNeededOnce(OPENCLAW_CONFIG_FILE) use the
existing writeJsonAtomic behavior (or extract a writeFileAtomic helper from
writeJsonAtomic) to write the string in variable normalized to
OPENCLAW_CONFIG_FILE atomically; update the block around ensureDir,
backupFileIfNeededOnce and OPENCLAW_CONFIG_FILE to call that atomic helper
instead of fs.writeFileSync.
- Around line 562-573: The current try/catch uses fs.writeFileSync(filePath,
finalContent, 'utf-8') which is non-atomic; replace that direct write with the
project's atomic writer (e.g., writeJsonAtomic or a writeFileAtomic helper) to
write finalContent to a temp file and then rename to filePath to avoid partial
writes. Update the error return to preserve the same message shape on failure;
locate the write in the same block that currently references filePath,
finalContent and workspaceInfo.configError and swap the fs.writeFileSync call
for the atomic writer call (or add a small temp-file+fs.rename implementation if
a helper is not available).
In `@cli/zip-commands.js`:
- Around line 77-80: The shell command built in the zip flow is vulnerable to
injection because it interpolates user-controlled paths into a single string
passed to execSync; instead, call the zip binary directly with an argument array
(e.g., use child_process.execFileSync or spawnSync) and pass zipTool.cmd as the
executable and ["-<compressionLevel>", "-q", "-r", outputPath, absPath]
(constructed values, not a single concatenated string), ensuring
compressionLevel, outputPath and absPath are validated/normalized before use;
update the block that uses useZipCmd and execSync to use execFileSync/ spawnSync
with args and {stdio:'ignore'} to eliminate shell interpretation.
---
Nitpick comments:
In `@cli/archive-helpers.js`:
- Around line 270-276: The writeUploadZip function currently relies on
Buffer.from(..., 'base64') which is permissive; add an explicit base64 format
check before creating the buffer: strip whitespace from the input, validate with
a base64 regex (allow A-Za-z0-9+/ and up to two trailing =) and ensure length is
a multiple of 4 (or otherwise invalid), and return the existing error object if
validation fails so malformed input is rejected with a clear message; then
proceed to call Buffer.from only after this validation in writeUploadZip.
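The validation described above can be sketched as follows. This is an illustrative helper, not the repo's actual code; `isLikelyBase64` and `decodeUploadBase64` are hypothetical names:

```javascript
// Sketch: reject malformed base64 before decoding an upload payload.
// Buffer.from(..., 'base64') is permissive, so validate explicitly first.
const BASE64_RE = /^[A-Za-z0-9+/]*={0,2}$/;

function isLikelyBase64(input) {
  if (typeof input !== 'string') return false;
  const compact = input.replace(/\s+/g, '');
  // Non-empty, alphabet-only, and a multiple of 4 characters long.
  return compact.length > 0 && compact.length % 4 === 0 && BASE64_RE.test(compact);
}

function decodeUploadBase64(input) {
  const compact = String(input).replace(/\s+/g, '');
  if (!isLikelyBase64(compact)) {
    return { error: 'invalid base64 payload' };
  }
  return { buffer: Buffer.from(compact, 'base64') };
}
```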
- Around line 420-421: The catch block currently returns only an error message,
leaving out the backup location; update the error return in the catch (where
copyDirRecursive may fail after targetDir is deleted) to include the backup path
so users can recover manually—i.e., return an object like { error:
`导入失败:${e.message}`, backupPath: backupPath } (use the existing backupPath
variable used earlier around copyDirRecursive/targetDir) so callers receive the
path for recovery.
- Around line 343-345: The code builds a shell string and calls execSync in the
zip branch (zipTool.type === 'zip') which risks shell injection; update the zip
creation to use child_process.spawnSync with an argument array instead of
execSync and string interpolation: locate the branch using zipTool.type and
zipTool.cmd (referenced from functions prepareDirectoryDownload and
backupDirectoryIfExists), call spawnSync(zipTool.cmd, ['-0','-q','-r',
zipFilePath, dirPath], { stdio: 'ignore' }), and remove the quoted/concatenated
command string so paths are passed as separate args to avoid shell
interpretation.
In `@cli/config-bootstrap.js`:
- Around line 282-285: The try/catch around fs.readFileSync(CONFIG_FILE,
'utf-8') silently swallows errors so existingContent may stay empty and trigger
unintended backup flows; modify the catch to either rethrow the error or log it
with context (including CONFIG_FILE and the caught error) before continuing.
Update the block that reads CONFIG_FILE (the existingContent variable
assignment) to handle permission/IO errors explicitly—e.g., use
processLogger.error or throw the caught Error—so callers of the read logic can
make correct decisions based on real failures rather than an empty
existingContent.
- Around line 36-61: Add numeric validation for MAX_RECENT_CONFIGS,
DEFAULT_MODEL_CONTEXT_WINDOW, and DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT in
createConfigBootstrapController: each should be checked (like the other
dependency checks) to ensure typeof === 'number' and Number.isFinite(value) (and
optionally >= 0 where negative doesn't make sense) and throw a descriptive Error
if not; reference the constants MAX_RECENT_CONFIGS,
DEFAULT_MODEL_CONTEXT_WINDOW, and DEFAULT_MODEL_AUTO_COMPACT_TOKEN_LIMIT when
adding these checks so they mirror the existing pattern used for functions and
string constants.
- Around line 142-169: sanitizeRemovedBuiltinProxyProvider can leave
model_provider/model empty when BUILTIN_PROXY_PROVIDER_NAME was the only
provider; to fix, detect when providerNames.length === 0 and handle as an
explicit fallback: if readCurrentModels() contains a 'local' entry use 'local'
as fallbackProvider and set fallbackModel accordingly, otherwise add a virtual
default provider entry to nextProviders (e.g., nextProviders.local = { virtual:
true } or similar) and set model_provider/model to that virtual default; also
emit a console.warn (or use the repo logger) indicating no providers remained so
maintainers can notice the fallback action; update references to
BUILTIN_PROXY_PROVIDER_NAME, model_providers, model_provider, model,
nextProviders and readCurrentModels() in sanitizeRemovedBuiltinProxyProvider
accordingly.
In `@cli/openclaw-config.js`:
- Around line 90-121: Replace the long if-chain in normalizeOpenclawProviderId
with a static lookup map: first compute the normalized key (trim + toLowerCase,
return '' if empty), then consult an aliases object that maps all known variants
('modelstudio','qwencloud'->'qwen', 'z.ai','z-ai'->'zai',
'opencode-zen'->'opencode', 'opencode-go-auth'->'opencode-go',
'kimi','kimi-code','kimi-coding'->'kimi',
'bedrock','aws-bedrock'->'amazon-bedrock',
'bytedance','doubao','volcengine-plan'->'volcengine',
'byteplus-plan'->'byteplus', etc.), and return aliases[normalized] if present
otherwise return normalized; keep function name normalizeOpenclawProviderId
unchanged.
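The suggested lookup-map refactor could look like the sketch below. The alias table mirrors the mappings listed in the prompt above; treat the exact entries as illustrative rather than the module's actual code:

```javascript
// Sketch: replace the if-chain with a static alias lookup table.
const PROVIDER_ALIASES = {
  modelstudio: 'qwen',
  qwencloud: 'qwen',
  'z.ai': 'zai',
  'z-ai': 'zai',
  'opencode-zen': 'opencode',
  'opencode-go-auth': 'opencode-go',
  'kimi-code': 'kimi',
  'kimi-coding': 'kimi',
  bedrock: 'amazon-bedrock',
  'aws-bedrock': 'amazon-bedrock',
  bytedance: 'volcengine',
  doubao: 'volcengine',
  'volcengine-plan': 'volcengine',
  'byteplus-plan': 'byteplus',
};

function normalizeOpenclawProviderId(raw) {
  const normalized = String(raw || '').trim().toLowerCase();
  if (!normalized) return '';
  // Known variants collapse to their canonical id; unknown ids pass through.
  return PROVIDER_ALIASES[normalized] || normalized;
}
```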
In `@cli/zip-commands.js`:
- Line 119: The call to resolveUnzipTool() in the top-level flow is dead because
its return value is discarded and the function returns a fixed value; either
remove the call entirely or use its result (e.g., store into a variable like
unzipTool = resolveUnzipTool() and pass it where needed) or replace it with a
short explanatory comment above the call stating it's intentionally invoked for
side-effects/future extensibility — update the call site around
resolveUnzipTool() accordingly and ensure no linter warnings remain.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: c72d619c-b65f-4763-a5f9-a15ed23e47c2
📒 Files selected for processing (5)
- cli.js
- cli/archive-helpers.js
- cli/config-bootstrap.js
- cli/openclaw-config.js
- cli/zip-commands.js
📜 Review details
🔇 Additional comments (30)
cli/config-bootstrap.js (8)
1-34: LGTM! Clean dependency injection pattern. The factory function properly destructures all required dependencies, enabling good testability and separation of concerns.
63-114: LGTM! The recent configs management functions are well-implemented:
- Proper deduplication using composite keys
- Clean MRU (most-recently-used) ordering by prepending new entries and filtering duplicates
- Defensive validation of input types
116-140: LGTM! The config content builders are straightforward. The default OpenAI configuration with sensible defaults aligns with the PR's bootstrap behavior.
171-211: LGTM! Well-structured error handling with explicit metadata (`isVirtual`, `reason`, `detail`, `errorType`). The distinction between 'missing' and 'parse'/'read' errors with different fallback behaviors is appropriate for the bootstrap use case.
213-224: LGTM! Good practice using `process.exitCode = 1` instead of immediate exit. User-friendly error messages with actionable suggestions.
226-253: LGTM! The support files initialization is well-implemented:
- Efficient merge of default models without unnecessary writes
- Proper initialization of auth file when missing
- Clean idempotent design
338-366: LGTM! `resetConfigToDefault` correctly mirrors the bootstrap pattern with proper backup handling. `consumeInitNotice` provides a clean consume-once interface for init messages.
368-384: LGTM! Clean public API surface with appropriate exports. The factory pattern with dependency injection enables good testability.
cli/openclaw-config.js (13)
1-51: LGTM — Thorough dependency validation with testable design. The dependency injection pattern with explicit validation is good for testability and catches misconfiguration early. The validation coverage is comprehensive.
52-67: LGTM! The workspace directory resolution logic correctly handles the configuration hierarchy, path resolution, and fallback to defaults.
69-88: LGTM! Good defensive validation against path traversal attacks with layered checks: null bytes, path separators, `..` sequences, and `path.basename` comparison.
123-137: LGTM! The lookup correctly normalizes both the search key and stored keys, handling provider alias variations properly.
139-153: LGTM! Environment variable resolution with proper fallback logic. The support for `PI_CODING_AGENT_DIR` provides backward compatibility.
155-218: LGTM! The helper functions correctly handle the different credential types with appropriate fallbacks. The type ranking (OAuth > token > API key) is a sensible default preference.
220-288: LGTM! The profile selection logic with prioritization (explicit order → lastGood → type rank → alphabetical) is well-structured. The grouping and filtering correctly handles edge cases.
290-301: LGTM! Properly sanitizes sensitive data (`resolvedValue`) before sending to client, following good security practices.
303-314: LGTM! Good input validation with explicit whitelist of allowed fields (`key`, `token`, `access`), preventing arbitrary field injection.
385-428: LGTM! Config file reading with proper BOM stripping, line ending detection, and JSON5 parsing for comment/trailing comma support. Error handling is appropriate.
430-487: LGTM! Workspace info resolution and agents file operations correctly delegate to injected dependencies while propagating configuration errors for transparency.
617-629: LGTM! Clean module interface with well-encapsulated internals. The exported controller exposes a focused set of operations for config and workspace management.
346-354: No action needed. The code correctly handles credential type updates without inconsistency. The original concern assumed OAuth credentials should have an `accessRef` field like `keyRef` and `tokenRef`. However, a search of the codebase shows `accessRef` does not exist anywhere, indicating OAuth credentials do not use a ref-based source field. The code is correct: it deletes ref fields only for credential types that actually have them.
> Likely an incorrect or invalid review comment.

cli/archive-helpers.js (5)
361-383: Same command injection concern applies here. Line 374 uses the same shell command pattern as `prepareDirectoryDownload`. The fix suggested above should be applied consistently to both functions.
1-26: LGTM - Well-structured dependency injection with comprehensive validation. The factory pattern with explicit dependency validation ensures the controller fails fast with clear error messages if any required dependency is missing.
70-135: LGTM - Robust recursive copy with symlink safety. The implementation correctly handles:
- Symlink cycle detection via `visitedRealPaths` tracking
- Path escape protection using `isPathInside` checks
- Clean separation between dereference and preserve-symlink modes

The `finally` block correctly removes paths from `visitedRealPaths` to allow the same real path to be reached through different legitimate symlink chains while preventing infinite loops within a single recursion stack.
137-198: LGTM - Proper async ZIP inspection with resource cleanup. Good use of the `settled` flag pattern to handle the yauzl event-driven API safely, preventing double-resolution races between `entry`, `end`, and `error` events.
200-268: LGTM - Comprehensive upload stream handling with proper cleanup. The implementation correctly:
- Validates Content-Length upfront before streaming
- Enforces size limits during data ingestion
- Handles all error/abort/close scenarios
- Cleans up temp files on failure via the `fail()` helper

cli/zip-commands.js (4)
189-218: LGTM - Robust duplicate path resolution with bounded iteration. The 100,000 iteration limit prevents runaway loops while being generous enough for realistic scenarios. The descriptive error at line 217 aids debugging if the limit is ever reached.
251-353: LGTM - Well-implemented selective ZIP extraction. The implementation correctly:
- Uses lazy entry iteration for memory efficiency
- Normalizes path separators for cross-platform compatibility
- Cleans up partial files on extraction errors (lines 322-326)
- Properly manages yauzl's event-driven lifecycle with the `settled` flag pattern
355-448: LGTM - Solid batch extraction with safeguards. Good defensive programming:
- Pre-validates each ZIP against entry count and size limits before extraction
- Continues processing remaining ZIPs after individual failures
- Provides clear summary with failure details
- Returns non-zero exit code when any extraction fails
465-496: LGTM - Flexible argument parsing. The parser correctly handles multiple `--ext` syntaxes (`:`, `=`, and space-separated) and properly accumulates multiple extension values. The `--recursive`/`--no-recursive` toggle is intuitive.
```js
try {
  fs.writeFileSync(filePath, finalContent, 'utf-8');
  return {
    success: true,
    path: filePath,
    workspaceDir: baseDir,
    configError: workspaceInfo.configError
  };
} catch (e) {
  return { error: `写入 OpenClaw 工作区文件失败: ${e.message}` };
}
```
Non-atomic write may cause data loss on interrupted writes.
Using `fs.writeFileSync` directly risks partial writes if the process is interrupted. The codebase has `writeJsonAtomic` available (via dependency injection) which uses temp-file + rename for atomicity.
While less critical for workspace `.md` files than config/auth files, consistent use of atomic writes would improve reliability.
🛡️ Suggested atomic write pattern
If a general `writeFileAtomic` helper exists or can be added alongside `writeJsonAtomic`:

```diff
  try {
-   fs.writeFileSync(filePath, finalContent, 'utf-8');
+   writeFileAtomic(filePath, finalContent);
    return {
```

Otherwise, consider an inline implementation using the temp file + rename pattern.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/openclaw-config.js` around lines 562 - 573, The current try/catch uses
fs.writeFileSync(filePath, finalContent, 'utf-8') which is non-atomic; replace
that direct write with the project's atomic writer (e.g., writeJsonAtomic or a
writeFileAtomic helper) to write finalContent to a temp file and then rename to
filePath to avoid partial writes. Update the error return to preserve the same
message shape on failure; locate the write in the same block that currently
references filePath, finalContent and workspaceInfo.configError and swap the
fs.writeFileSync call for the atomic writer call (or add a small
temp-file+fs.rename implementation if a helper is not available).
```js
ensureDir(OPENCLAW_DIR);
const backupPath = backupFileIfNeededOnce(OPENCLAW_CONFIG_FILE);
fs.writeFileSync(OPENCLAW_CONFIG_FILE, normalized, 'utf-8');
```
Config file write should use atomic pattern for data integrity.
The config file is critical for application startup. Using `fs.writeFileSync` (line 594) risks corruption on interrupted writes, potentially rendering the app inoperable. Note that `applyOpenclawAuthProfileUpdates` correctly uses `writeJsonAtomic` (line 366) for the auth store.
Since backup is already created (line 593), the impact is mitigated, but atomic write would be more robust.
🛡️ Suggested fix using atomic write pattern

```diff
  ensureDir(OPENCLAW_DIR);
  const backupPath = backupFileIfNeededOnce(OPENCLAW_CONFIG_FILE);
- fs.writeFileSync(OPENCLAW_CONFIG_FILE, normalized, 'utf-8');
+ // Use temp-file + rename pattern for atomicity
+ const tmpPath = `${OPENCLAW_CONFIG_FILE}.tmp-${process.pid}-${Date.now()}`;
+ fs.writeFileSync(tmpPath, normalized, 'utf-8');
+ fs.renameSync(tmpPath, OPENCLAW_CONFIG_FILE);
```

Alternatively, use a generic `writeFileAtomic` utility if one exists or can be extracted from `writeJsonAtomic`.
📝 Committable suggestion
‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.
```js
ensureDir(OPENCLAW_DIR);
const backupPath = backupFileIfNeededOnce(OPENCLAW_CONFIG_FILE);
// Use temp-file + rename pattern for atomicity
const tmpPath = `${OPENCLAW_CONFIG_FILE}.tmp-${process.pid}-${Date.now()}`;
fs.writeFileSync(tmpPath, normalized, 'utf-8');
fs.renameSync(tmpPath, OPENCLAW_CONFIG_FILE);
```
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/openclaw-config.js` around lines 592 - 594, Replace the non-atomic
fs.writeFileSync call that writes OPENCLAW_CONFIG_FILE with an atomic write to
avoid corruption: after ensureDir(OPENCLAW_DIR) and
backupFileIfNeededOnce(OPENCLAW_CONFIG_FILE) use the existing writeJsonAtomic
behavior (or extract a writeFileAtomic helper from writeJsonAtomic) to write the
string in variable normalized to OPENCLAW_CONFIG_FILE atomically; update the
block around ensureDir, backupFileIfNeededOnce and OPENCLAW_CONFIG_FILE to call
that atomic helper instead of fs.writeFileSync.
```js
try {
  if (useZipCmd) {
    const cmd = `"${zipTool.cmd}" -${compressionLevel} -q -r "${outputPath}" "${absPath}"`;
    execSync(cmd, { stdio: 'ignore' });
```
Command injection risk in shell command construction.
Same issue as in `archive-helpers.js` - the shell command at line 79 interpolates user-provided paths without sanitization. A malicious path like `./foo; rm -rf /` could execute arbitrary commands.
🛡️ Safer alternative

```diff
  if (useZipCmd) {
-   const cmd = `"${zipTool.cmd}" -${compressionLevel} -q -r "${outputPath}" "${absPath}"`;
-   execSync(cmd, { stdio: 'ignore' });
+   // Use spawnSync to avoid shell interpretation
+   const { spawnSync } = require('child_process');
+   const result = spawnSync(zipTool.cmd, [`-${compressionLevel}`, '-q', '-r', outputPath, absPath], { stdio: 'ignore' });
+   if (result.status !== 0) {
+     throw new Error(`zip command failed with status ${result.status}`);
+   }
  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.
In `@cli/zip-commands.js` around lines 77 - 80, The shell command built in the zip
flow is vulnerable to injection because it interpolates user-controlled paths
into a single string passed to execSync; instead, call the zip binary directly
with an argument array (e.g., use child_process.execFileSync or spawnSync) and
pass zipTool.cmd as the executable and ["-<compressionLevel>", "-q", "-r",
outputPath, absPath] (constructed values, not a single concatenated string),
ensuring compressionLevel, outputPath and absPath are validated/normalized
before use; update the block that uses useZipCmd and execSync to use
execFileSync/ spawnSync with args and {stdio:'ignore'} to eliminate shell
interpretation.
…onses-proxy
feat: add Claude proxy, local-provider cleanup, share settings, and split CLI controllers
Summary
- add a builtin Claude-compatible proxy runtime with `/v1/messages` and `/v1/models` handling plus start/stop/status entrypoints
- remove the `local` builtin provider/config special casing so `local` behaves like a normal provider name and the Web UI no longer shows builtin-local config controls
- make the share command prefix configurable in the Web UI (`npm start`/`codexmate`) and add a session trash toggle that can switch session-browser deletes between trash-first and permanent delete
- split proxy/auth/config logic into dedicated controllers to keep `cli.js` smaller without changing behavior
- keep `res/` focused on runtime web assets by moving the unused screenshot into docs static assets and dropping the unused non-production Vue browser build from the shipped runtime assets

Testing