
feat: OpenAI-compatible provider bridge + Codex UX improvements#90

Merged
awsl233777 merged 12 commits into main from feat/codex-provider-transform on Apr 17, 2026

Conversation

@ymkiux
Collaborator

@ymkiux ymkiux commented Apr 15, 2026

Overview

This PR adds an optional OpenAI-compatible bridge mode for Codex providers.
When enabled, Codex traffic (Responses API) is routed through a local bridge,
which forwards to an OpenAI-compatible upstream (Chat Completions), while
keeping the existing default behavior unchanged.

It also includes several robustness and UX improvements discovered during
real-world provider testing (DashScope/Qwen, etc.).

Key changes

Provider bridging (optional)

  • Adds a "Use transform (OpenAI-compatible)" toggle when creating a Codex
    provider.
  • When enabled, config.toml is rewritten to point to the builtin bridge:
    • http://127.0.0.1:<port>/bridge/openai/<provider>/v1
  • The Codex provider token in config.toml is set to a fixed value:
    • codexmate
  • The upstream URL and API key are stored in:
    • ~/.codex/codexmate-openai-bridge.json
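For example, enabling the toggle for a provider named dashscope with the web UI on port 3000 would produce roughly the following (illustrative values only; the config.toml keys follow the existing provider entry, and the sidecar layout mirrors the bridge module's settings store):

    # config.toml provider entry after the rewrite (sketch)
    base_url = "http://127.0.0.1:3000/bridge/openai/dashscope/v1"
    codexmate_bridge = "openai"
    # the provider's auth key is set to the fixed value "codexmate"

    # ~/.codex/codexmate-openai-bridge.json (sketch)
    {
      "version": 1,
      "providers": {
        "dashscope": {
          "baseUrl": "https://dashscope.aliyuncs.com/compatible-mode/v1",
          "apiKey": "sk-..."
        }
      }
    }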

OpenAI bridge compatibility improvements

  • Supports per-provider custom upstream headers (stored in the bridge JSON).
    Security-sensitive headers (e.g. Authorization, Host, hop-by-hop
    headers) are ignored.
  • Prefers calling upstream POST /v1/responses when supported and falls back
    to POST /v1/chat/completions when the upstream does not implement
    /responses (404/405/501, or "not implemented" / convert_request_failed).
  • Emits a more spec-aligned Responses SSE sequence so Codex reliably consumes
    streamed output:
    • response.created
    • response.output_item.added
    • response.output_text.delta / response.output_text.done
    • response.output_item.done
    • response.completed
    • [DONE]
  • Maps Chat Completions tool_calls into Responses-style function_call items
    to keep tool chaining compatible.
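As an illustration of that mapping (a sketch only, not the bridge's actual code; the field names follow the public Chat Completions and Responses schemas):

    // Chat Completions tool_call -> Responses-style function_call output item.
    function chatToolCallToResponsesItem(toolCall) {
        return {
            type: 'function_call',
            call_id: toolCall.id,
            name: toolCall.function && toolCall.function.name,
            arguments: (toolCall.function && toolCall.function.arguments) || '{}'
        };
    }

    // Reverse direction: a Responses function_call_output item becomes a
    // Chat Completions tool-role message on the next upstream request.
    function functionCallOutputToChatMessage(item) {
        return {
            role: 'tool',
            tool_call_id: item.call_id,
            content: typeof item.output === 'string' ? item.output : JSON.stringify(item.output)
        };
    }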

Web UI UX fixes

  • Provider switching no longer auto-reverts when model list loading fails.
  • Provider switching is now single-click even when /models is slow/unavailable
    (apply first, load models in the background).
  • Config template editor now has a 2-step "Confirm → Apply" flow with a diff
    preview (mirrors the AGENTS.md editor workflow):
    • new backend action: preview-config-template-diff
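A rough sketch of the "apply first, load models in the background" shape described above (method and action names here are illustrative, not the module's actual API):

    async function switchProviderSketch(api, state, name) {
        // Apply the switch immediately so a single click takes effect right away.
        state.currentProvider = name;
        await api('switch-provider', { name });
        // Refresh the model list in the background; a slow or failing /models
        // call must not revert the provider selection.
        api('list-models', { name })
            .then((models) => { state.models = models; })
            .catch(() => { /* keep the switch; optionally surface a non-blocking notice */ });
    }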

Testing

  • npm test (all unit + e2e)
  • Added unit coverage for:
    • upstream /responses preference + fallback behavior
    • 500 "not implemented" fallback behavior
    • SSE event structure expectations
    • provider switch regression behavior
    • config template diff confirmation flow

@coderabbitai
Contributor

coderabbitai bot commented Apr 15, 2026

📝 Walkthrough


Adds an OpenAI-compatible HTTP bridge module with persistent provider settings, integrates bridge provider semantics into CLI/server/provider UI flows (add/update/share/switch), updates health-check UI/behavior, and adds unit tests covering bridge streaming and chat-completions fallback behavior.

Changes

Cohort / File(s) / Summary

  • OpenAI Bridge Core (cli/openai-bridge.js):
    New module: JSON settings store, provider upsert/resolve, path matching, auth checks, body-size limits, Responses↔ChatCompletions conversion, proxying for /models and /responses, SSE streaming, and exported handler factory.
  • CLI Integration & Server (cli.js):
    Integrates openaiBridgeHandler to short-circuit bridge requests; adds the openai-bridge-get-provider action; extends add/update flows and provider persistence to support useTransform / --bridge openai, rewrites local TOML fields for transform providers, and manages upstream mapping/resolution.
  • Frontend: Provider Add/Edit UI (web-ui/app.js, web-ui/modules/app.methods.providers.mjs, web-ui/partials/index/modals-basic.html):
    Adds useTransform to provider state and a checkbox in the add modal; includes useTransform in add/update payloads; detects transform providers in the edit modal and fetches the normalized upstream baseUrl via the new server action.
  • Provider Share / CLI Command (web-ui/modules/app.methods.session-actions.mjs, tests/unit/provider-share-command.test.mjs):
    Share payload/command handle the bridge hint; upstream baseUrl/apiKey are resolved for OpenAI bridge providers and shared with bridge: 'openai'; buildProviderShareCommand appends --bridge <value> when present; tests updated/added.
  • Health-check UI & Behavior (web-ui/modules/app.methods.codex-config.mjs, web-ui/partials/index/panel-config-codex.html, web-ui/partials/index/modal-health-check.html):
    Shortened default prompt and copy; removed the standalone selector; per-provider health-check button disabled when not current; added a locked-provider guard with an info notification; updated dialog labels and strings.
  • Unit Tests & Adjustments (tests/unit/... — providers-validation.test.mjs, agents-modal-guards.test.mjs, config-tabs-ui.test.mjs, provider-switch-regression.test.mjs, openai-bridge-upstream-responses.test.mjs, etc.):
    Fixtures and assertions updated to include useTransform; health-check and DOM assertions adjusted; new tests added to verify bridge streaming and chat/completions fallback behavior.
  • Small UI/Text Edits (web-ui/app.js, various partials):
    Minor copy tweaks (shortened health-check prompt and modal label changes).
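The CLI integration follows a short-circuit pattern: the bridge exposes a handler factory and the HTTP server gives it first refusal on each request. A rough sketch, assuming the handler returns a truthy value once it has taken ownership of a /bridge/openai/* request (the factory's option names below are illustrative):

    const http = require('http');
    // Factory exported by cli/openai-bridge.js (import style per the project layout).
    const { createOpenaiBridgeHttpHandler } = require('./cli/openai-bridge');

    const openaiBridgeHandler = createOpenaiBridgeHttpHandler({
        settingsFile: OPENAI_BRIDGE_SETTINGS_FILE, // sidecar JSON path (option name illustrative)
        expectedToken: 'codexmate'                 // fixed token written into config.toml
    });

    const server = http.createServer((req, res) => {
        // Bridge gets first refusal; it returns true when it handled the request.
        if (typeof openaiBridgeHandler === 'function' && openaiBridgeHandler(req, res)) {
            return;
        }
        // ...fall through to the normal web UI / API routing...
    });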

Sequence Diagram

sequenceDiagram
    participant Client as HTTP Client
    participant Server as App HTTP Server
    participant Handler as OpenAI Bridge Handler
    participant Settings as Bridge Settings File
    participant Upstream as Upstream OpenAI API

    Client->>Server: request /bridge/openai/:provider/(models|responses)
    Server->>Handler: openaiBridgeHandler(req,res)
    Handler->>Handler: validate Authorization (token or loopback)
    alt invalid token
        Handler-->>Client: 401 JSON
    else valid token
        Handler->>Settings: resolveOpenaiBridgeUpstream(provider)
        Settings-->>Handler: { baseUrl, apiKey } or error
        alt provider missing/invalid
            Handler-->>Client: 404
        else provider resolved
            alt GET /models
                Handler->>Upstream: GET ${baseUrl}/models (Authorization: Bearer apiKey)
                Upstream-->>Handler: JSON
                Handler-->>Client: proxied JSON
            else POST /responses
                Handler->>Handler: read & normalize body (Responses→messages)
                Handler->>Upstream: POST ${baseUrl}/responses
                alt upstream /responses OK
                    Upstream-->>Handler: JSON or SSE
                    Handler-->>Client: Responses JSON or SSE events (response.* + done)
                else upstream /responses failure
                    Handler->>Upstream: POST ${baseUrl}/chat/completions (fallback)
                    Upstream-->>Handler: response
                    Handler-->>Client: converted Responses JSON or SSE
                end
            end
        end
    end

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~60 minutes

Possibly related PRs

Suggested reviewers

  • awsl233777
  • SurviveM

Poem

🐰 I tunneled through JSON and streams,
Mapped responses into chatty dreams,
A checkbox hops and bridges play,
Tokens kept safe along the way,
Hooray — the proxy leaps today!

🚥 Pre-merge checks | ✅ 2 | ❌ 1

❌ Failed checks (1 warning)

  • Docstring Coverage ⚠️ Warning: docstring coverage is 10.26%, which is below the required threshold of 80.00%. Resolution: write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (2 passed)

  • Description Check ✅ Passed: check skipped because CodeRabbit's high-level summary is enabled.
  • Title Check ✅ Passed: the title clearly and accurately describes the main changes: addition of an OpenAI-compatible provider bridge feature and related UX improvements to Codex.



@ymkiux ymkiux force-pushed the feat/codex-provider-transform branch 2 times, most recently from 87c69ed to e6d69a7 on April 15, 2026 16:25
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

🧹 Nitpick comments (1)
tests/unit/providers-validation.test.mjs (1)

56-88: Consider adding a test case that submits with useTransform=true.

The current assertions mainly cover the default/disabled path. Add one more case with the transform enabled that explicitly asserts the add-provider payload includes useTransform: true, to guard against future regressions.
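A minimal sketch of such a case, assuming the same harness style as the neighboring tests; createProvidersTestContext, apiCalls, and the reset shape are hypothetical stand-ins for whatever providers-validation.test.mjs actually uses:

    import test from 'node:test';
    import assert from 'node:assert/strict';
    // Hypothetical helper standing in for the file's existing context setup.
    import { createProvidersTestContext } from './helpers.mjs';

    test('addProvider forwards useTransform: true in the add-provider payload', async () => {
        const context = createProvidersTestContext();
        context.newProvider = { name: ' demo ', url: ' https://api.example.com/v1 ', key: ' sk-test ', useTransform: true };
        await context.addProvider();

        const call = context.apiCalls.find((c) => c.action === 'add-provider');
        assert.ok(call, 'expected an add-provider call');
        assert.equal(call.params.useTransform, true);
        // Form state should reset to the disabled default after submit.
        assert.deepEqual(context.newProvider, { name: '', url: '', key: '', useTransform: false });
    });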

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/providers-validation.test.mjs` around lines 56 - 88, Add a new
unit test alongside "addProvider normalizes trimmed values and submits sanitized
payload" that sets context.newProvider.useTransform = true (or initialize
newProvider with useTransform: true), calls context.addProvider(), and asserts
the apiCalls contains an 'add-provider' action whose params include
useTransform: true (in addition to normalized name/url/key); also assert
context.showAddModal is false, newProvider reset to
{name:'',url:'',key:'',useTransform:false}, loadAllCalls includes 'loadAll', and
a success message is emitted — reference the test name, context.addProvider,
context.newProvider and the 'add-provider' params when adding assertions.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli.js`:
- Around line 1897-1906: The transformed-provider branch currently writes the
bridge sidecar via upsertOpenaiBridgeProvider(...) but subsequent update/delete
flows only modify config.toml (writeConfig) leaving the sidecar stale; fix by
ensuring the same sidecar operations are applied whenever a transformed provider
is edited or removed: call upsertOpenaiBridgeProvider(name, url, key, ...) on
provider updates and call the corresponding removal function (e.g.,
deleteOpenaiBridgeProvider or a new removeOpenaiBridgeProvider) when a
transformed provider is deleted, and make these sidecar operations atomic with
writeConfig (if writeConfig fails, roll back the sidecar change or only persist
the sidecar after successful writeConfig) so functions to touch are
upsertOpenaiBridgeProvider and writeConfig (and add/remove sidecar helper if
missing).
- Around line 269-275: The OpenAI bridge instantiated via
createOpenaiBridgeHttpHandler is currently protected only by a fixed public
token ('codexmate') and must be restricted to loopback clients; update the
cli.js instantiation to enforce remote-address checks (e.g., wrap or augment
createOpenaiBridgeHttpHandler to reject requests whose req.socket.remoteAddress
is not 127.0.0.1 or ::1) or add an allowedIps/allowedHosts option passed into
createOpenaiBridgeHttpHandler, and ensure the handler returns 403 for
non-loopback addresses before forwarding upstream; locate the
createOpenaiBridgeHttpHandler call and replace or wrap it so requests to
/bridge/openai/* validate loopback client addresses (and keep the token check)
to prevent external access.

In `@cli/openai-bridge.js`:
- Around line 57-65: The read-modify-write around
readOpenaiBridgeSettings(filePath) and writeJsonAtomic(filePath, next) is racy;
serialize concurrent updates by acquiring an exclusive file lock on filePath for
the entire cycle (open file, lock, readOpenaiBridgeSettings, compute next,
writeJsonAtomic, unlock/close) so two add-provider requests cannot interleave
and drop entries. Alternatively implement a CAS-style retry: read the file
contents, compute next, write to a temp file and rename only if original file
hasn't changed (retry loop) — apply this locking or CAS logic around the block
that builds next using SETTINGS_VERSION, name, baseUrl, key to ensure updates
are serialized.
- Around line 95-101: The request handler currently calls req.destroy() inside
the req.on('data') size-check branch (where aborted is set and resolve({ error:
'请求体过大' }) is returned), which closes the shared socket before the code later
writes the 413 response; remove the immediate req.destroy() call (or defer any
socket destruction until after the response is written) so the handler can send
the 413 response successfully—keep the aborted flag and resolve({ error: '请求体过大'
}) to stop further processing and only destroy the socket if/when necessary
after sending the response.
- Around line 265-305: proxyRequestJson is missing request timeout and proper
error cleanup and the top-level async IIFE is unprotected from unhandled
rejections; update proxyRequestJson to accept/derive a timeout (e.g.,
options.timeout or a sensible default), pass that into transport.request (or set
req.setTimeout(timeout)) and add a timeout handler that aborts the request and
resolves with ok: false and a timeout error; also ensure socket/error handlers
clean up timers and avoid multiple resolves. Then wrap the top-level void (async
() => { ... })() block in a try/catch that logs/returns a safe error response so
any exception inside the async IIFE cannot produce an unhandled rejection.
- Around line 125-145: The current extractChatCompletionText function only pulls
plain text from message.content and ignores message.tool_calls, which causes
tool-enabled flows to drop upstream tool_calls; update extractChatCompletionText
to also detect and return any message.tool_calls (preserve their structure,
e.g., message.tool_calls or message.tool_calls[0]) alongside text, and then
modify buildResponsesPayloadFromChatText to include the tool_calls field in the
returned Responses payload (propagate tool_calls through the same response
object where output_text is set so consumers receive both output_text and
tool_calls); reference extractChatCompletionText, message.tool_calls, and
buildResponsesPayloadFromChatText when making these changes.

---

Nitpick comments:
In `@tests/unit/providers-validation.test.mjs`:
- Around line 56-88: Add a new unit test alongside "addProvider normalizes
trimmed values and submits sanitized payload" that sets
context.newProvider.useTransform = true (or initialize newProvider with
useTransform: true), calls context.addProvider(), and asserts the apiCalls
contains an 'add-provider' action whose params include useTransform: true (in
addition to normalized name/url/key); also assert context.showAddModal is false,
newProvider reset to {name:'',url:'',key:'',useTransform:false}, loadAllCalls
includes 'loadAll', and a success message is emitted — reference the test name,
context.addProvider, context.newProvider and the 'add-provider' params when
adding assertions.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 241d9e82-3ff9-45a5-ad52-93730a26b732

📥 Commits

Reviewing files that changed from the base of the PR and between e688552 and 2bba456.

📒 Files selected for processing (6)
  • cli.js
  • cli/openai-bridge.js
  • tests/unit/providers-validation.test.mjs
  • web-ui/app.js
  • web-ui/modules/app.methods.providers.mjs
  • web-ui/partials/index/modals-basic.html
📜 Review details
🔇 Additional comments (5)
tests/unit/providers-validation.test.mjs (1)

14-14: The default newProvider shape is now aligned with the UI state. Nice.

Adding useTransform: false here keeps the test context from drifting away from the actual page state; the change is correct.

web-ui/modules/app.methods.providers.mjs (2)

144-152: Passing useTransform through only when needed is a reasonable approach.

Sending useTransform: true only when the toggle is enabled leaves the existing API semantics untouched for the disabled case.


344-347: Resetting useTransform when the modal closes is important.

It prevents the previous selection from leaking into the next add flow; the state management here is correct.

web-ui/partials/index/modals-basic.html (1)

32-40: The new toggle and its accompanying copy are clear, and the interaction wiring is correct.

v-model="newProvider.useTransform" closes the loop with the add logic, and users can understand the core effect of enabling it.

web-ui/app.js (1)

243-243: Adding useTransform to the global initial state keeps the implementation consistent.

It keeps the data model uniform across initialization, the submit flow, and the close/reset path.

Comment thread cli.js
Comment thread cli.js
Comment on lines +1897 to +1906
if (useTransform) {
const saveRes = upsertOpenaiBridgeProvider(OPENAI_BRIDGE_SETTINGS_FILE, name, normalizedUrl, key);
if (saveRes && saveRes.error) {
return { error: String(saveRes.error) };
}
const port = resolveWebPort();
baseUrlForConfig = `http://${DEFAULT_WEB_OPEN_HOST}:${port}/bridge/openai/${encodeURIComponent(name)}/v1`;
authKeyForConfig = 'codexmate';
extraLines.push(`codexmate_bridge = "openai"`);
}
Contributor


⚠️ Potential issue | 🟠 Major

Keep the bridge sidecar in sync with provider updates and deletes.

upsertOpenaiBridgeProvider(...) makes codexmate-openai-bridge.json the bridge source of truth, but the later update/delete paths in this file only touch config.toml. After a transformed provider is edited or removed, the old upstream URL/key stay resolvable by the bridge, and a failed writeConfig() here also leaves the sidecar ahead of the main config.
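One way to address the delete path (a sketch only; deleteOpenaiBridgeProvider is a hypothetical name mirroring the existing upsert helper and its read, merge, atomic-write cycle):

    // cli/openai-bridge.js (sketch): drop a provider entry from the sidecar.
    function deleteOpenaiBridgeProvider(filePath, name) {
        const settings = readOpenaiBridgeSettings(filePath);
        const providers = { ...(settings.providers || {}) };
        if (!(name in providers)) {
            return { ok: true, removed: false };
        }
        delete providers[name];
        writeJsonAtomic(filePath, { version: SETTINGS_VERSION, providers });
        return { ok: true, removed: true };
    }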

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 1897 - 1906, The transformed-provider branch currently
writes the bridge sidecar via upsertOpenaiBridgeProvider(...) but subsequent
update/delete flows only modify config.toml (writeConfig) leaving the sidecar
stale; fix by ensuring the same sidecar operations are applied whenever a
transformed provider is edited or removed: call upsertOpenaiBridgeProvider(name,
url, key, ...) on provider updates and call the corresponding removal function
(e.g., deleteOpenaiBridgeProvider or a new removeOpenaiBridgeProvider) when a
transformed provider is deleted, and make these sidecar operations atomic with
writeConfig (if writeConfig fails, roll back the sidecar change or only persist
the sidecar after successful writeConfig) so functions to touch are
upsertOpenaiBridgeProvider and writeConfig (and add/remove sidecar helper if
missing).

Comment thread cli/openai-bridge.js
Comment on lines +57 to +65
const settings = readOpenaiBridgeSettings(filePath);
const next = {
version: SETTINGS_VERSION,
providers: {
...(settings.providers || {}),
[name]: { baseUrl, apiKey: key }
}
};
writeJsonAtomic(filePath, next);
Contributor


⚠️ Potential issue | 🟠 Major

Serialize updates to the bridge settings file.

Line 57 reads the current JSON and Line 65 writes a merged copy back, but nothing makes that read-modify-write cycle atomic. Two concurrent add-provider requests can race here and drop one provider entry even though each individual write is atomic.
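A minimal sketch of one mitigation, assuming a single Node process owns the file so an in-process promise chain is enough to serialize writers (a cross-process file lock or CAS retry, as suggested below, would be needed otherwise):

    // Module-level queue (hypothetical) that serializes settings writes.
    let settingsWriteQueue = Promise.resolve();

    function upsertOpenaiBridgeProviderSerialized(filePath, name, baseUrl, key) {
        const run = settingsWriteQueue.then(() => {
            const settings = readOpenaiBridgeSettings(filePath);
            const next = {
                version: SETTINGS_VERSION,
                providers: { ...(settings.providers || {}), [name]: { baseUrl, apiKey: key } }
            };
            writeJsonAtomic(filePath, next);
        });
        settingsWriteQueue = run.catch(() => {}); // keep the chain alive after a failed write
        return run;
    }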

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/openai-bridge.js` around lines 57 - 65, The read-modify-write around
readOpenaiBridgeSettings(filePath) and writeJsonAtomic(filePath, next) is racy;
serialize concurrent updates by acquiring an exclusive file lock on filePath for
the entire cycle (open file, lock, readOpenaiBridgeSettings, compute next,
writeJsonAtomic, unlock/close) so two add-provider requests cannot interleave
and drop entries. Alternatively implement a CAS-style retry: read the file
contents, compute next, write to a temp file and rename only if original file
hasn't changed (retry loop) — apply this locking or CAS logic around the block
that builds next using SETTINGS_VERSION, name, baseUrl, key to ensure updates
are serialized.

Comment thread cli/openai-bridge.js
Comment on lines +95 to +101
req.on('data', (chunk) => {
if (aborted) return;
size += chunk.length;
if (Number.isFinite(maxBytes) && maxBytes > 0 && size > maxBytes) {
aborted = true;
try { req.destroy(); } catch (_) {}
resolve({ error: '请求体过大' });
Contributor

@coderabbitai coderabbitai bot Apr 15, 2026


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🏁 Script executed:

fd cli/openai-bridge.js

Repository: SakuraByteCore/codexmate

Length of output: 50


🏁 Script executed:

wc -l cli/openai-bridge.js && cat -n cli/openai-bridge.js | head -120 | tail -35

Repository: SakuraByteCore/codexmate

Length of output: 1360


🏁 Script executed:

cat -n cli/openai-bridge.js | sed -n '95,110p'

Repository: SakuraByteCore/codexmate

Length of output: 781


🏁 Script executed:

cat -n cli/openai-bridge.js | sed -n '395,415p'

Repository: SakuraByteCore/codexmate

Length of output: 1172


🏁 Script executed:

cat -n cli/openai-bridge.js | sed -n '402,420p'

Repository: SakuraByteCore/codexmate

Length of output: 1120


🏁 Script executed:

# Check if there's any try-catch or error handling around the res.writeHead/end calls
cat -n cli/openai-bridge.js | sed -n '395,425p'

Repository: SakuraByteCore/codexmate

Length of output: 1701


🏁 Script executed:

# Look for any listener registration on res for error events near this code
cat -n cli/openai-bridge.js | sed -n '350,410p'

Repository: SakuraByteCore/codexmate

Length of output: 3263


Don't destroy the request socket before sending the 413 response.

Line 100 calls req.destroy() immediately when the body size limit is exceeded, which closes the underlying socket shared by both the request and response. When the handler then attempts to send a 413 response on lines 404-405, the socket is already destroyed, causing the response to fail silently. Clients will see a connection reset instead of the intended error response.

Remove req.destroy() or defer destruction until after the response is sent. The promise resolution with the error is sufficient to stop further body processing.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/openai-bridge.js` around lines 95 - 101, The request handler currently
calls req.destroy() inside the req.on('data') size-check branch (where aborted
is set and resolve({ error: '请求体过大' }) is returned), which closes the shared
socket before the code later writes the 413 response; remove the immediate
req.destroy() call (or defer any socket destruction until after the response is
written) so the handler can send the 413 response successfully—keep the aborted
flag and resolve({ error: '请求体过大' }) to stop further processing and only destroy
the socket if/when necessary after sending the response.

Collaborator Author


Resolved in latest head (a72072a). Fixed: bridge now preserves tool_calls from upstream chat/completions and includes them in the Responses-shaped payload (extractChatCompletionResult + buildResponsesPayloadFromChatResult).

Contributor


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
git show a72072a2f2a3a8cf68a6b0969553f3a6402d6961 -- cli/openai-bridge.js | grep -A 10 -B 5 'destroy\|aborted\|请求体过大\|413'

Length of output: 2091


@ymkiux, thanks for the update on the tool_calls fix! However, based on the content of commit a72072a2f2a3a8cf68a6b0969553f3a6402d6961, the original req.destroy() issue appears to still be present:

// cli/openai-bridge.js — readRequestBody
if (Number.isFinite(maxBytes) && maxBytes > 0 && size > maxBytes) {
    aborted = true;
    try { req.destroy(); } catch (_) {}   // ← socket destroyed here
    resolve({ error: '请求体过大' });
    return;
}

Then the caller immediately tries to write on the now-destroyed socket:

const { body, error: bodyErr } = await readRequestBody(req, maxBodySize);
if (bodyErr) {
    res.writeHead(413, { ... });   // ← socket already gone
    res.end(JSON.stringify({ error: bodyErr }));
}

In Node.js HTTP, req and res share the same underlying TCP socket. Calling req.destroy() tears down that socket, so the subsequent res.writeHead(413) / res.end() will silently fail and the client sees a connection reset instead of a 413.

The fix is to remove the req.destroy() call from readRequestBody entirely — the aborted flag and resolve({ error: ... }) are sufficient to stop further processing:

if (Number.isFinite(maxBytes) && maxBytes > 0 && size > maxBytes) {
    aborted = true;
    // Do NOT call req.destroy() here — the shared socket is still needed to send the 413 response.
    resolve({ error: '请求体过大' });
    return;
}

If you need to abort the incoming data stream after sending the response (e.g. to free resources), you could destroy the socket after res.end() returns in the caller:

if (bodyErr) {
    res.writeHead(413, { 'Content-Type': 'application/json; charset=utf-8' });
    res.end(JSON.stringify({ error: bodyErr }), () => {
        try { req.destroy(); } catch (_) {}
    });
    return;
}

Could you take another look at this? The tool_calls improvement sounds great — that part appears to be a separate, independent change.


🧠 Learnings used
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Comment thread cli/openai-bridge.js Outdated
Comment thread cli/openai-bridge.js
@ymkiux ymkiux changed the title from "feat(codex): provider OpenAI 兼容转换开关" (provider OpenAI-compatible transform toggle) to "feat(codex): add OpenAI-compatible provider transform toggle" on Apr 15, 2026
@ymkiux ymkiux force-pushed the feat/codex-provider-transform branch 2 times, most recently from 43d443f to 4dc9740 on April 15, 2026 17:21
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 1

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@web-ui/partials/index/panel-config-codex.html`:
- Around line 180-182: The button uses displayCurrentProvider for its
disabled/title bindings while the dialog guard checks currentProvider, causing a
mismatch during provider switching; unify to a single source of truth by using
the same reactive variable in both places—either change the button bindings
(:disabled, :title, :aria-label) to use currentProvider instead of
displayCurrentProvider, or update the dialog guard to validate
displayCurrentProvider—so the button state and the dialog open check reference
the identical symbol (displayCurrentProvider or currentProvider) consistently.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 25bb2573-1044-4cde-9e58-e983755303d8

📥 Commits

Reviewing files that changed from the base of the PR and between 2bba456 and 4dc9740.

📒 Files selected for processing (10)
  • cli.js
  • cli/openai-bridge.js
  • tests/unit/agents-modal-guards.test.mjs
  • tests/unit/config-tabs-ui.test.mjs
  • tests/unit/providers-validation.test.mjs
  • web-ui/app.js
  • web-ui/modules/app.methods.codex-config.mjs
  • web-ui/modules/app.methods.providers.mjs
  • web-ui/partials/index/modals-basic.html
  • web-ui/partials/index/panel-config-codex.html
✅ Files skipped from review due to trivial changes (2)
  • tests/unit/providers-validation.test.mjs
  • web-ui/partials/index/modals-basic.html
🚧 Files skipped from review as they are similar to previous changes (4)
  • web-ui/modules/app.methods.providers.mjs
  • web-ui/app.js
  • cli.js
  • cli/openai-bridge.js
📜 Review details
🔇 Additional comments (3)
web-ui/modules/app.methods.codex-config.mjs (1)

340-345: Locked-provider guard behavior looks correct.

This early return cleanly blocks mismatched-provider health checks and avoids mutating dialog state on the rejected path.

tests/unit/config-tabs-ui.test.mjs (1)

319-322: Coverage update is aligned with the new button contract.

Good assertion update for the conditional :disabled and dynamic :title bindings.

tests/unit/agents-modal-guards.test.mjs (1)

248-267: Nice guard-path test hardening.

The new assertions correctly validate both user feedback (info message) and no unintended state mutation for the blocked path.

Comment thread web-ui/partials/index/panel-config-codex.html Outdated
@ymkiux ymkiux force-pushed the feat/codex-provider-transform branch 4 times, most recently from 7796f13 to 320e682 on April 16, 2026 03:07
Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 6

♻️ Duplicate comments (3)
web-ui/modules/app.methods.codex-config.mjs (1)

340-345: ⚠️ Potential issue | 🟡 Minor

Unify provider guard source-of-truth with UI state.

At Line 340, the guard checks currentProvider, while the provider action button enablement in web-ui/partials/index/panel-config-codex.html (Lines 180-182) is based on displayCurrentProvider. During switch transitions, this can allow clicking but still reject opening with the info toast.

Consider comparing against the same displayed provider source (or disabling while switching) to avoid this mismatch.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web-ui/modules/app.methods.codex-config.mjs` around lines 340 - 345, The
guard that blocks opening health chat compares providerName to
this.currentProvider but the UI enablement uses displayCurrentProvider, causing
a mismatch during switch transitions; update the check in the function that
contains the guarded block (the if (locked && providerName && providerName !==
String(this.currentProvider || '').trim()) branch) to compare against the same
UI state (this.displayCurrentProvider) or first ensure displayCurrentProvider is
canonicalized (trimmed) and use that for the comparison, and/or make the action
button disabled while switching so the showMessage call
(this.showMessage('请先切换到该提供商再进行健康聊天测试','info')) cannot be triggered by a stale
UI selection.
cli.js (2)

8329-8331: ⚠️ Potential issue | 🔴 Critical

Enforce loopback access for /bridge/openai/* before forwarding.

Line 8329 forwards bridge traffic without a local-client gate. With default host 0.0.0.0, remote callers can hit bridge routes if they know the fixed token.

🔒 Minimal guard in `createWebServer` dispatch
 const server = http.createServer((req, res) => {
     const requestPath = (req.url || '/').split('?')[0];
+    if (requestPath.startsWith('/bridge/openai/')) {
+        const remote = String((req.socket && req.socket.remoteAddress) || '').replace(/^::ffff:/, '');
+        const isLoopback = remote === '127.0.0.1' || remote === '::1';
+        if (!isLoopback) {
+            writeJsonResponse(res, 403, { error: 'Forbidden' });
+            return;
+        }
+    }
     if (typeof openaiBridgeHandler === 'function' && openaiBridgeHandler(req, res)) {
         return;
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 8329 - 8331, The bridge route is forwarded unguarded;
modify the dispatch so that before calling openaiBridgeHandler(req, res) you
verify the request is from loopback (e.g., check req.ip /
req.socket.remoteAddress for 127.0.0.1 and ::1 or map of allowed local
addresses) and reject non-local callers with a 403/401 response; place this
check immediately prior to the existing openaiBridgeHandler invocation (the
symbol to modify is openaiBridgeHandler) so only loopback clients can reach the
bridge even when server binds to 0.0.0.0.

1900-1913: ⚠️ Potential issue | 🟠 Major

Bridge sidecar can drift from config.toml state.

upsertOpenaiBridgeProvider(...) is persisted independently in add/update flows, but deletion path does not remove sidecar entries, and add flow can leave sidecar ahead if writeConfig(...) fails.

Also applies to: 7103-7113

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 1900 - 1913, The sidecar (upsertOpenaiBridgeProvider) is
being persisted independently of the main config write, causing drift when
writeConfig fails or on deletion; change the flow so you only call
upsertOpenaiBridgeProvider(OPENAI_BRIDGE_SETTINGS_FILE, ...) after a successful
writeConfig, or if you must persist first then implement a rollback: call
writeConfig(...) and if it fails, remove the sidecar entry via a new
removeOpenaiBridgeProvider(OPENAI_BRIDGE_SETTINGS_FILE, name) call; also ensure
the delete path invokes removeOpenaiBridgeProvider so deletions remove sidecar
entries. Reference the functions upsertOpenaiBridgeProvider, writeConfig, and
add a removeOpenaiBridgeProvider helper to keep sidecar state in sync
atomically.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli.js`:
- Around line 7087-7119: For transform providers (isTransformProvider true) we
must not copy the upstreamApiKey into persistent auth storage: when you set
preferred_auth_method to "codexmate" and codexmate_bridge to "openai" (via
replaceTomlStringField), ensure the auth.json sync path that writes provider
credentials (the code that currently persists upstreamApiKey after
upsertOpenaiBridgeProvider/resolveOpenaiBridgeUpstream) skips storing the raw
upstreamApiKey; instead leave the existing auth entry unchanged or remove/omit
the upstream apiKey and only record the transform auth method/bridge. Update the
auth.json write logic to check isTransformProvider and, if true, avoid writing
upstreamApiKey for that provider while still updating the toml fields
(preferred_auth_method, codexmate_bridge) and calling
upsertOpenaiBridgeProvider/resolution as needed.

In `@cli/openai-bridge.js`:
- Around line 372-379: The bridge currently treats a missing Authorization as
acceptable even when expectedToken is set, allowing unauthorized access; update
the authorization check in the request-handling logic that uses
extractAuthorizationToken and expectedToken so that when expectedToken is
configured (non-empty) you reject requests that either lack a token or provide a
mismatched token—i.e., if expectedToken is truthy then require token ===
expectedToken, otherwise respond with 401 Unauthorized (using the existing
res.writeHead/res.end flow).
- Around line 183-212: The loop that builds messages currently only handles
items with role/content and array content, so it silently skips tool result
items; update the parsing inside the for (const item of input) loop to first
check if item.type === 'function_call_output' and, when true, push a chat
message with role 'tool', set tool_call_id from item.call_id, and set content
from item.output (coerce non-string outputs to a string representation); keep
the existing role/content handling for other items and ensure this new branch
runs before the generic role/content logic so function_call_output items are not
dropped.

In `@web-ui/modules/app.methods.providers.mjs`:
- Around line 255-270: The edit modal can accidentally preserve the bridge
sentinel token when switching a provider from transform to direct; in the
handler that sets this.editingProvider (and in the isTransformProvider branch
that calls api('openai-bridge-get-provider')), capture and propagate the bridge
response's hasApiKey flag into the UI model (e.g.,
this.editingProvider.hasApiKey = res.hasApiKey) and explicitly set
this.editingProvider.key to an empty/undefined value if hasApiKey is false;
additionally, prevent saving updates in updateProvider() when toggling
useTransform from true to false while editingProvider.hasApiKey is true and
editingProvider.key is empty — require the user to enter a real key (or clear
the sentinel) before allowing the save, so the bridge sentinel added by
addProviderToConfig() cannot be preserved silently.
- Around line 255-262: The nonEditable flag on editingProvider is being set from
isNonDeletableProvider(provider) but updateProvider() uses nonEditable as the
save/edit guard; change the assignment to use the provider's editability flag
instead (i.e., set editingProvider.nonEditable based on provider.editable /
provider.readOnly semantics rather than isNonDeletableProvider), and optionally
fall back to isNonDeletableProvider(provider) only if the provider lacks an
explicit editability flag so existing deletability-based behavior remains
backward-compatible; update references to editingProvider.nonEditable and ensure
updateProvider() now correctly enforces the provider.editable logic.
- Around line 265-270: The async bridge lookup can overwrite a stale modal
state: after awaiting api('openai-bridge-get-provider', { name: provider.name })
you must verify the modal is still editing the same provider before assigning
this.editingProvider.url. Fix by capturing identity (e.g., const currentName =
this.editingProvider?.name or a reference to this.editingProvider) before the
await and, after the await and before calling normalizeProviderUrl, check that
this.editingProvider still matches (same name or same object) and that the modal
is open; only then assign this.editingProvider.url using
normalizeProviderUrl(res.baseUrl).

---

Duplicate comments:
In `@cli.js`:
- Around line 8329-8331: The bridge route is forwarded unguarded; modify the
dispatch so that before calling openaiBridgeHandler(req, res) you verify the
request is from loopback (e.g., check req.ip / req.socket.remoteAddress for
127.0.0.1 and ::1 or map of allowed local addresses) and reject non-local
callers with a 403/401 response; place this check immediately prior to the
existing openaiBridgeHandler invocation (the symbol to modify is
openaiBridgeHandler) so only loopback clients can reach the bridge even when
server binds to 0.0.0.0.
- Around line 1900-1913: The sidecar (upsertOpenaiBridgeProvider) is being
persisted independently of the main config write, causing drift when writeConfig
fails or on deletion; change the flow so you only call
upsertOpenaiBridgeProvider(OPENAI_BRIDGE_SETTINGS_FILE, ...) after a successful
writeConfig, or if you must persist first then implement a rollback: call
writeConfig(...) and if it fails, remove the sidecar entry via a new
removeOpenaiBridgeProvider(OPENAI_BRIDGE_SETTINGS_FILE, name) call; also ensure
the delete path invokes removeOpenaiBridgeProvider so deletions remove sidecar
entries. Reference the functions upsertOpenaiBridgeProvider, writeConfig, and
add a removeOpenaiBridgeProvider helper to keep sidecar state in sync
atomically.

In `@web-ui/modules/app.methods.codex-config.mjs`:
- Around line 340-345: The guard that blocks opening health chat compares
providerName to this.currentProvider but the UI enablement uses
displayCurrentProvider, causing a mismatch during switch transitions; update the
check in the function that contains the guarded block (the if (locked &&
providerName && providerName !== String(this.currentProvider || '').trim())
branch) to compare against the same UI state (this.displayCurrentProvider) or
first ensure displayCurrentProvider is canonicalized (trimmed) and use that for
the comparison, and/or make the action button disabled while switching so the
showMessage call (this.showMessage('请先切换到该提供商再进行健康聊天测试','info')) cannot be
triggered by a stale UI selection.

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 38015afc-148e-480f-9fd1-3ac7f4906556

📥 Commits

Reviewing files that changed from the base of the PR and between 4dc9740 and 320e682.

📒 Files selected for processing (14)
  • cli.js
  • cli/openai-bridge.js
  • tests/unit/agents-modal-guards.test.mjs
  • tests/unit/config-tabs-ui.test.mjs
  • tests/unit/provider-share-command.test.mjs
  • tests/unit/provider-switch-regression.test.mjs
  • tests/unit/providers-validation.test.mjs
  • web-ui/app.js
  • web-ui/modules/app.methods.codex-config.mjs
  • web-ui/modules/app.methods.providers.mjs
  • web-ui/modules/app.methods.session-actions.mjs
  • web-ui/partials/index/modal-health-check.html
  • web-ui/partials/index/modals-basic.html
  • web-ui/partials/index/panel-config-codex.html
✅ Files skipped from review due to trivial changes (3)
  • web-ui/partials/index/modal-health-check.html
  • tests/unit/providers-validation.test.mjs
  • web-ui/partials/index/modals-basic.html
🚧 Files skipped from review as they are similar to previous changes (2)
  • web-ui/app.js
  • tests/unit/config-tabs-ui.test.mjs
📜 Review details
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-04-04T01:36:26.964Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli.js
🔇 Additional comments (8)
web-ui/modules/app.methods.codex-config.mjs (1)

332-332: Copy and fallback message updates are consistent.

These text updates are coherent with the new "健康聊天测试" (health chat test) UX flow and keep failure/empty-state wording aligned.

Also applies to: 385-386, 405-406, 422-423, 435-436

tests/unit/agents-modal-guards.test.mjs (1)

249-252: Guard-behavior test update looks correct.

Nice update: it now verifies both the info notification and unchanged dialog selection/lock state when locked provider differs from active provider.

Also applies to: 262-267

tests/unit/provider-switch-regression.test.mjs (1)

251-251: Good regression coverage for the reset shape.

This assertion now matches closeEditModal() and protects the new useTransform default from regressing.

web-ui/modules/app.methods.session-actions.mjs (1)

297-308: Bridge flag handling looks solid.

Quoting bridge and threading it through both command shapes keeps the new share path consistent with the existing shell-escaping behavior.
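For reference, a sketch of the appended flag (illustrative helper names; not buildProviderShareCommand's actual implementation):

    // Minimal POSIX single-quote escaping: ' becomes '\''
    function shellQuoteSingle(value) {
        return `'${String(value).replace(/'/g, `'\\''`)}'`;
    }

    function appendBridgeFlag(command, bridge) {
        // Only append --bridge <value> when a bridge hint is present.
        if (!bridge) return command;
        return `${command} --bridge ${shellQuoteSingle(bridge)}`;
    }

    // appendBridgeFlag("codexmate add demo --url '...' --key '...'", 'openai')
    // -> "codexmate add demo --url '...' --key '...' --bridge 'openai'"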

tests/unit/provider-share-command.test.mjs (1)

152-183: Nice bridge-path regression coverage.

These cases exercise both sides of the feature: exporting upstream bridge config from cli.js and emitting the corresponding --bridge share command from the web UI.

Also applies to: 230-243

cli.js (2)

5965-6004: Bridge provider sharing logic is correctly upstream-aware.

Good handling here: transformed providers resolve/share upstream baseUrl/apiKey instead of leaking local bridge URL, with compatibility fallback.


8569-8583: openai-bridge-get-provider avoids key leakage correctly.

Returning { baseUrl, hasApiKey } without raw apiKey is the right privacy boundary for edit flows.

cli/openai-bridge.js (1)

19-44: Base-URL normalization looks solid.

Stripping pasted /chat/completions, /responses, and /models suffixes while preserving provider-specific path prefixes matches the transform-toggle goal well.
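For illustration, the kind of stripping this refers to (a sketch, not the module's actual implementation):

    function normalizeBridgeBaseUrlSketch(rawUrl) {
        // Trim, drop trailing slashes, then strip a pasted endpoint suffix while
        // keeping any provider-specific path prefix (e.g. .../compatible-mode/v1).
        let url = String(rawUrl || '').trim().replace(/\/+$/, '');
        url = url.replace(/\/(chat\/completions|responses|models)$/, '');
        return url;
    }

    // normalizeBridgeBaseUrlSketch('https://dashscope.aliyuncs.com/compatible-mode/v1/chat/completions')
    // -> 'https://dashscope.aliyuncs.com/compatible-mode/v1'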

Comment thread cli.js
Comment thread cli/openai-bridge.js
Comment thread cli/openai-bridge.js
Comment thread web-ui/modules/app.methods.providers.mjs
Comment thread web-ui/modules/app.methods.providers.mjs
Comment thread web-ui/modules/app.methods.providers.mjs
@ymkiux ymkiux force-pushed the feat/codex-provider-transform branch from 320e682 to ab04e04 on April 16, 2026 03:49
Add a UI toggle for providers to route through the builtin OpenAI bridge. When enabled, the provider base_url points to /bridge/openai/<provider>/v1 and the Codex auth token is fixed to 'codexmate'; the upstream base URL and API key are stored in ~/.codex/codexmate-openai-bridge.json.
@ymkiux ymkiux force-pushed the feat/codex-provider-transform branch from ab04e04 to a72072a on April 16, 2026 06:17
@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Re-reviewed and addressed all existing CodeRabbit Major findings for the OpenAI-compatible transform bridge:

  • Tool calls: Responses -> Chat -> Responses now preserves both tool_calls and function_call_output items end-to-end.
  • Bridge handler: added request timeout, top-level try/catch, and stricter auth (loopback-only when token is missing).
  • Provider edit modal: fixed nonEditable flag source and guarded async upstream URL lookup against stale modal state.

All changes are in commit a72072a on feat/codex-provider-transform; npm test passes locally and CI for this SHA is green.

@coderabbitai please re-review this PR on the latest head.

@coderabbitai
Contributor

coderabbitai bot commented Apr 16, 2026

@ymkiux Thanks for the thorough update! I've noted the fixes across all three areas:

  • Tool calls round-trip – tool_calls and function_call_output items preserved end-to-end through Responses → Chat → Responses.
  • Bridge handler hardening – request timeout, top-level try/catch, and loopback-only enforcement when no token is configured.
  • Provider edit modal – nonEditable flag sourced correctly and the async upstream URL lookup is now guarded against stale modal state.

Kicking off a full re-review of the PR against the latest head now.


🧠 Learnings used
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.
✅ Actions performed

Full review triggered.

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
web-ui/partials/index/panel-config-codex.html (1)

179-182: ⚠️ Potential issue | 🟡 Minor

Unify health-check button gating with dialog guard.

Line 180 and Line 182 use displayCurrentProvider, but openHealthCheckDialog validates against currentProvider (web-ui/modules/app.methods.codex-config.mjs, Line 340). During provider switching, the button can look enabled yet immediately reject on click.

Suggested fix
-:disabled="displayCurrentProvider !== provider.name"
+:disabled="currentProvider !== provider.name"
 ...
-:title="displayCurrentProvider === provider.name ? '健康聊天测试' : '请先切换到该提供商'"
+:title="currentProvider === provider.name ? '健康聊天测试' : '请先切换到该提供商'"
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@web-ui/partials/index/panel-config-codex.html` around lines 179 - 182, The
health-check button gating is inconsistent: the template bindings use
displayCurrentProvider while openHealthCheckDialog (in
web-ui/modules/app.methods.codex-config.mjs) validates against currentProvider,
causing a button that appears enabled but the dialog rejects clicks during
provider switches. Update the button to use currentProvider everywhere (replace
displayCurrentProvider with currentProvider for :disabled, :aria-label/:title
logic and the click guard) so the UI gating matches the dialog guard
(openHealthCheckDialog), ensuring the button is only interactive when
currentProvider equals provider.name.
cli.js (3)

7139-7143: ⚠️ Potential issue | 🟠 Major

Do not sync upstream API key into auth.json for transform providers.

Line 7142 writes apiKey directly for the active provider, which can store upstream secrets in auth.json even when transform mode is enabled.

✅ Keep auth sync aligned with transform mode
-    // 如果更新了 API Key 且该提供商是当前激活的,同步更新 auth.json
+    // 同步 auth.json:transform provider 必须保持 codexmate
     const currentProvider = config.model_provider;
-    if (apiKey !== undefined && name === currentProvider) {
-        updateAuthJson(apiKey);
+    if (name === currentProvider) {
+        if (isTransformProvider) {
+            updateAuthJson('codexmate');
+        } else if (apiKey !== undefined) {
+            updateAuthJson(apiKey);
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 7139 - 7143, The current code unconditionally writes
apiKey into auth.json for the active provider (variables: apiKey, name,
currentProvider/config.model_provider, updateAuthJson), which can leak upstream
secrets for transform-mode providers; change the guard so updateAuthJson is only
called when the provider is not a transform provider (e.g. add a predicate like
!isTransformProvider(name) or !providerIsTransform(name) / check
transformProviders list) — i.e. call updateAuthJson(apiKey) only if apiKey !==
undefined && name === currentProvider && the provider is not a transform
provider.

273-273: ⚠️ Potential issue | 🔴 Critical

Restrict OpenAI bridge access to loopback clients.

/bridge/openai/* is reachable on the same server that defaults to 0.0.0.0, and the token is a fixed public string (codexmate). This allows remote callers to invoke upstream requests if they know the token.

🔒 Minimal guard
 const server = http.createServer((req, res) => {
     const requestPath = (req.url || '/').split('?')[0];
+    if (requestPath.startsWith('/bridge/openai/')) {
+        const remote = String((req.socket && req.socket.remoteAddress) || '').replace(/^::ffff:/, '');
+        const isLoopback = remote === '127.0.0.1' || remote === '::1';
+        if (!isLoopback) {
+            writeJsonResponse(res, 403, { error: 'Forbidden' });
+            return;
+        }
+    }
     if (typeof openaiBridgeHandler === 'function' && openaiBridgeHandler(req, res)) {
         return;
     }

Also applies to: 8329-8331

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` at line 273, The current OpenAI bridge uses a fixed public token
(expectedToken: 'codexmate') and accepts requests from any host; replace this
with a proper guard: stop using the hard-coded string and read the token from a
secure env/config value (e.g., process.env.OPENAI_BRIDGE_TOKEN) and enforce that
the incoming request originates from a loopback address (check req.ip /
req.connection.remoteAddress / socket.remoteAddress for 127.0.0.1 and ::1)
before validating the token in the route handler that serves /bridge/openai/*
(and the duplicate handler referenced at lines near 8329-8331); reject
non-loopback clients with 403 and only then compare the provided token to the
configured secret.

1900-1904: ⚠️ Potential issue | 🟠 Major

Make transform sidecar writes atomic with config.toml writes.

Line 1901 and Line 7103 persist sidecar state before config persistence. If Line 1933 or Line 7137 fails, bridge settings become ahead of config.toml, leaving stale upstream credentials active.

Also applies to: 1932-1935, 7102-7106, 7137-7137

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 1900 - 1904, The upsertOpenaiBridgeProvider call
persists sidecar state before the config.toml write, which can leave bridge
settings ahead of the main config on failure; change the sequence so the
config.toml persistence and the sidecar write are committed atomically: either
(A) perform the config.toml write first and only call
upsertOpenaiBridgeProvider(name, normalizedUrl, key) after the config write
succeeds, or (B) write both artifacts to temp files and then atomically rename
both into place (ensure upsertOpenaiBridgeProvider uses a temp+rename pattern),
so that failure of the config write prevents the sidecar from becoming live.
Reference: upsertOpenaiBridgeProvider and the code path that writes config.toml;
implement one of these atomic commit strategies and add error handling to roll
back or avoid partial commits.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli.js`:
- Around line 6776-6778: The code currently computes bridgeType and
useOpenaiBridge but silently ignores unknown --bridge values; update the logic
to fail fast by validating options.bridge: after computing const bridgeType =
... and const useOpenaiBridge = ..., add a check that if options && typeof
options.bridge === 'string' and bridgeType is non-empty and bridgeType !==
'openai' then print a clear error (including the invalid bridge value) and
exit/throw (e.g., console.error(...) + process.exit(1) or throw new Error).
Apply the same validation pattern to the other occurrence that uses the same
variables (the block that defines bridgeType/useOpenaiBridge elsewhere) so typos
like --bridge opneai are rejected rather than ignored.

In `@cli/openai-bridge.js`:
- Around line 204-210: The current conditional drops tool messages when content
is empty/whitespace; change the check to only require toolCallId and always push
a message with content coerced to a string (e.g., String(content) or '' for
null/undefined) so empty outputs are preserved. Update the block around
messages.push (the tool message construction using toolCallId and content) to
remove the String(...).trim() guard and ensure content is still included even
when empty.

---

Duplicate comments:
In `@cli.js`:
- Around line 7139-7143: The current code unconditionally writes apiKey into
auth.json for the active provider (variables: apiKey, name,
currentProvider/config.model_provider, updateAuthJson), which can leak upstream
secrets for transform-mode providers; change the guard so updateAuthJson is only
called when the provider is not a transform provider (e.g. add a predicate like
!isTransformProvider(name) or !providerIsTransform(name) / check
transformProviders list) — i.e. call updateAuthJson(apiKey) only if apiKey !==
undefined && name === currentProvider && the provider is not a transform
provider.
- Line 273: The current OpenAI bridge uses a fixed public token (expectedToken:
'codexmate') and accepts requests from any host; replace this with a proper
guard: stop using the hard-coded string and read the token from a secure
env/config value (e.g., process.env.OPENAI_BRIDGE_TOKEN) and enforce that the
incoming request originates from a loopback address (check req.ip /
req.connection.remoteAddress / socket.remoteAddress for 127.0.0.1 and ::1)
before validating the token in the route handler that serves /bridge/openai/*
(and the duplicate handler referenced at lines near 8329-8331); reject
non-loopback clients with 403 and only then compare the provided token to the
configured secret.
- Around line 1900-1904: The upsertOpenaiBridgeProvider call persists sidecar
state before the config.toml write, which can leave bridge settings ahead of the
main config on failure; change the sequence so the config.toml persistence and
the sidecar write are committed atomically: either (A) perform the config.toml
write first and only call upsertOpenaiBridgeProvider(name, normalizedUrl, key)
after the config write succeeds, or (B) write both artifacts to temp files and
then atomically rename both into place (ensure upsertOpenaiBridgeProvider uses a
temp+rename pattern), so that failure of the config write prevents the sidecar
from becoming live. Reference: upsertOpenaiBridgeProvider and the code path that
writes config.toml; implement one of these atomic commit strategies and add
error handling to roll back or avoid partial commits.

In `@web-ui/partials/index/panel-config-codex.html`:
- Around line 179-182: The health-check button gating is inconsistent: the
template bindings use displayCurrentProvider while openHealthCheckDialog (in
web-ui/modules/app.methods.codex-config.mjs) validates against currentProvider,
causing a button that appears enabled but the dialog rejects clicks during
provider switches. Update the button to use currentProvider everywhere (replace
displayCurrentProvider with currentProvider for :disabled, :aria-label/:title
logic and the click guard) so the UI gating matches the dialog guard
(openHealthCheckDialog), ensuring the button is only interactive when
currentProvider equals provider.name.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: e4953ef7-f221-480c-9fe2-d8d488f0df4a

📥 Commits

Reviewing files that changed from the base of the PR and between 320e682 and a72072a.

📒 Files selected for processing (14)
  • cli.js
  • cli/openai-bridge.js
  • tests/unit/agents-modal-guards.test.mjs
  • tests/unit/config-tabs-ui.test.mjs
  • tests/unit/provider-share-command.test.mjs
  • tests/unit/provider-switch-regression.test.mjs
  • tests/unit/providers-validation.test.mjs
  • web-ui/app.js
  • web-ui/modules/app.methods.codex-config.mjs
  • web-ui/modules/app.methods.providers.mjs
  • web-ui/modules/app.methods.session-actions.mjs
  • web-ui/partials/index/modal-health-check.html
  • web-ui/partials/index/modals-basic.html
  • web-ui/partials/index/panel-config-codex.html
✅ Files skipped from review due to trivial changes (3)
  • web-ui/partials/index/modals-basic.html
  • tests/unit/provider-switch-regression.test.mjs
  • web-ui/partials/index/modal-health-check.html
🚧 Files skipped from review as they are similar to previous changes (6)
  • tests/unit/providers-validation.test.mjs
  • tests/unit/provider-share-command.test.mjs
  • web-ui/modules/app.methods.codex-config.mjs
  • web-ui/modules/app.methods.providers.mjs
  • tests/unit/config-tabs-ui.test.mjs
  • tests/unit/agents-modal-guards.test.mjs
📜 Review details
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2026-04-04T01:36:26.964Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli.js
🔇 Additional comments (15)
web-ui/app.js (2)

213-213: Prompt text update is consistent and safe.

Line 213 aligns the default health-check prompt wording without affecting logic flow.


243-243: newProvider.useTransform default is correctly initialized.

Line 243 ensures the add-provider form starts with transform disabled, matching the new toggle behavior and preventing stale state.

web-ui/modules/app.methods.session-actions.mjs (1)

297-308: Bridge flag wiring in share command looks correct.

Lines 297-308 correctly propagate payload.bridge into the generated codexmate add ... --bridge ... command for both key/no-key cases.

cli/openai-bridge.js (12)

84-92: Race condition on concurrent provider updates remains unaddressed.

The read-modify-write cycle at lines 84-92 has no synchronization. Two concurrent upsertOpenaiBridgeProvider calls can interleave, causing one provider entry to be lost when both reads happen before either write completes.


125-129: Socket destruction before 413 response remains unaddressed.

Line 127 destroys the request socket when the body size limit is exceeded, but the caller at lines 496-500 then attempts to write a 413 response on the destroyed socket. The response will fail silently and clients will see a connection reset instead of the error message.


431-444: Authorization bypass for loopback connections is a documented design decision.

The current logic allows loopback requests without a token (lines 435-438) while rejecting non-loopback requests without auth and any request with an incorrect token. The inline comment at lines 432-433 documents this as intentional for backwards compatibility with clients that don't send Authorization headers on custom base URLs.

The past review comment about requiring the token when expectedToken is configured would make the bridge stricter. The current behavior is acceptable for a local development bridge, but note that any process on the same machine can access the stored upstream API keys without authentication.


1-8: LGTM!

Imports and constants are well-organized. Using crypto.randomBytes for ID generation and versioned settings is appropriate.


19-44: LGTM!

The URL normalization logic correctly strips common endpoint suffixes (/v1/chat/completions, /v1/responses, /v1/models) back to base URLs, enabling users to paste full endpoint URLs without issues. The case-insensitive regex replacements and fallback to the original normalized value on parse errors are good defensive choices.


152-178: LGTM — Tool calls extraction has been addressed.

The function now correctly extracts both text content and tool_calls from the upstream response, returning them together. This enables proper tool-enabled workflow support.


192-212: LGTM — function_call_output handling has been addressed.

The conversion from Responses-format tool outputs to Chat Completions role: "tool" messages is now implemented, enabling multi-step tool workflows.


244-289: LGTM!

The Responses → Chat Completions conversion handles standard fields comprehensively, including tools/tool_choice passthrough. Setting stream: false for upstream is correct since the bridge implements minimal SSE simulation after receiving the complete response.
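
For readers following along, a heavily simplified sketch of this kind of Responses -> Chat Completions request conversion (not the module's actual code; it assumes a plain-string input and passes tools through unchanged):

// Sketch only: minimal Responses -> Chat Completions request conversion.
function responsesToChatCompletions(body) {
    const messages = [];
    if (typeof body.instructions === 'string' && body.instructions.trim()) {
        messages.push({ role: 'system', content: body.instructions });
    }
    if (typeof body.input === 'string') {
        messages.push({ role: 'user', content: body.input });
    }
    const out = { model: body.model, messages, stream: false }; // upstream is always called non-streaming
    if (body.tools) out.tools = body.tools;
    if (body.tool_choice) out.tool_choice = body.tool_choice;
    if (typeof body.temperature === 'number') out.temperature = body.temperature;
    return out;
}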


291-318: LGTM — Tool calls now included in Responses payload.

The payload builder correctly propagates toolCalls when present, completing the end-to-end tool support through the bridge.


337-391: LGTM — Timeout and error handling have been addressed.

The proxy function now includes configurable timeout (default 30s), uses a settled flag to prevent multiple resolutions, and properly handles timeout/error/success races.


429-571: LGTM — Top-level error handling has been addressed.

The async block is now wrapped in a try-catch that returns a 500 response on unexpected exceptions, preventing unhandled promise rejections.


580-585: LGTM!

Module exports are clean and expose only the necessary public API.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Status update (head dc0ef7e):

  • CI: all checks green
  • CodeRabbit: success
  • Review threads: 0 unresolved

This PR should be ready for maintainer review/approval + merge.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Update (head bcf0674):

  • CI: success
  • CodeRabbit: success
  • No open review threads.

PR still shows mergeable_state=blocked / review_decision=None, likely awaiting required maintainer approval.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

@coderabbitai If the current suggestion is not adopted, will it lead to disruptive changes?

@coderabbitai
Contributor

coderabbitai bot commented Apr 16, 2026

Tip

For best results, initiate chat on the files or code changes.

@ymkiux There are currently no open review threads from me on this PR — the slate is clean. So there is nothing left to "not adopt."

More broadly: all suggestions I make in this repository are intentionally scoped to correctness, safety, and quality within the existing design (no API/interface/behavior changes). Not adopting a suggestion from me would at most leave a minor correctness or safety nuance unaddressed — it would not cause disruptive or breaking changes to the codebase or its users.

With CI green, CodeRabbit showing success, and zero unresolved threads, this PR looks ready for maintainer review and merge.


🧠 Learnings used
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Added per-provider custom upstream headers for OpenAI-bridge providers (stored in ~/.codex/codexmate-openai-bridge.json under providers.<provider>.headers).

  • Headers are merged into upstream requests for /models and /chat/completions.
  • Authorization/Host/Connection/etc are blocked from being set via custom headers to avoid security/HTTP breakage.
  • Existing providers remain compatible; upsert preserves existing headers when URL/key are updated.

Head: 64fabf9
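
For illustration, the merge could be sketched like this (the blocked-header list here is illustrative and may not match the implementation exactly):

// Sketch only: merge per-provider custom headers, skipping security-sensitive ones.
const BLOCKED_HEADERS = new Set([
    'authorization', 'host', 'connection', 'content-length',
    'transfer-encoding', 'upgrade', 'proxy-authorization'
]);

function mergeUpstreamHeaders(baseHeaders, customHeaders) {
    const merged = { ...baseHeaders };
    for (const [key, value] of Object.entries(customHeaders || {})) {
        const lower = String(key).toLowerCase();
        if (BLOCKED_HEADERS.has(lower)) continue; // never allow auth/hop-by-hop overrides
        merged[lower] = String(value);
    }
    return merged;
}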

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Implemented Maxx-style Codex->OpenAI conversion details in our Responses->Chat bridge (head 5854290):

  • Map responses.instructions into a leading system message (so upstream sees the same instruction semantics as Codex/OpenClaw).
  • Support responses input items of type function_call -> assistant tool_calls (in addition to function_call_output).

This should help with upstream endpoints that route behavior based on instruction/tool semantics rather than only headers.
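
A heavily simplified sketch of those two mappings (assumptions: input items shaped like the public Responses API; the real conversion handles more item types):

// Sketch only: map Responses instructions + input items to Chat Completions messages.
function mapInputItemsToMessages(instructions, input) {
    const messages = [];
    if (instructions) messages.push({ role: 'system', content: String(instructions) });
    for (const item of Array.isArray(input) ? input : []) {
        if (item.type === 'function_call') {
            // Assistant-side tool invocation replayed into the chat history.
            messages.push({
                role: 'assistant',
                content: null,
                tool_calls: [{
                    id: item.call_id,
                    type: 'function',
                    function: { name: item.name, arguments: item.arguments || '{}' }
                }]
            });
        } else if (item.type === 'function_call_output') {
            messages.push({ role: 'tool', tool_call_id: item.call_id, content: String(item.output ?? '') });
        }
    }
    return messages;
}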

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Attempt to address 405 Method Not Allowed from some upstreams by following Maxx's Codex guidance: prefer calling upstream /v1/responses directly (non-stream) and re-wrap as SSE when the client requested streaming; fall back to /v1/chat/completions only when upstream /responses returns 404/405.

Head: 9ad6d0b
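
Roughly, the routing decision is (sketch only; uses Node's global fetch, and the real bridge converts the payload before the chat-completions fallback):

// Sketch only: try upstream /v1/responses first, fall back to /v1/chat/completions.
function shouldFallBackToChatCompletions(status) {
    return status === 404 || status === 405;
}

async function callUpstream(baseUrl, payload, headers) {
    const first = await fetch(`${baseUrl}/v1/responses`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', ...headers },
        body: JSON.stringify(payload)
    });
    if (!shouldFallBackToChatCompletions(first.status)) return first;
    return fetch(`${baseUrl}/v1/chat/completions`, {
        method: 'POST',
        headers: { 'Content-Type': 'application/json', ...headers },
        body: JSON.stringify(payload) // real code converts the Responses payload before this call
    });
}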

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

♻️ Duplicate comments (4)
cli.js (2)

1900-1904: ⚠️ Potential issue | 🟠 Major

Avoid updating the bridge sidecar before config.toml is durable.

upsertOpenaiBridgeProvider(...) runs before writeConfig(). If the TOML write fails, codexmate-openai-bridge.json is already updated while config.toml still reflects the old state, so the bridge source of truth gets ahead of the main config. Please persist the sidecar only after the config write succeeds, or roll it back on failure.

Also applies to: 1932-1935

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 1900 - 1904, The sidecar is being updated before the
main config is durably written: move the call to upsertOpenaiBridgeProvider (and
any similar calls around lines 1932-1935) so it runs only after writeConfig
successfully completes, or if you prefer to keep the current order, add a
rollback that deletes/reverts the sidecar (codexmate-openai-bridge.json /
OPENAI_BRIDGE_SETTINGS_FILE) when writeConfig fails; ensure you reference
upsertOpenaiBridgeProvider, writeConfig and the bridge settings file so the
sidecar and config remain consistent.

7145-7148: ⚠️ Potential issue | 🟠 Major

Keep auth.json pinned to codexmate for active transform providers.

When isTransformProvider is true, Line 7148 still writes the upstream API key into auth.json. That both persists the raw upstream secret locally and breaks the active transformed provider immediately, because the local bridge expects codexmate, not the upstream key.

🔒 Minimal fix
-    if (apiKey !== undefined && name === currentProvider) {
-        updateAuthJson(apiKey);
+    if (name === currentProvider) {
+        if (isTransformProvider) {
+            updateAuthJson('codexmate');
+        } else if (apiKey !== undefined) {
+            updateAuthJson(apiKey);
+        }
     }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 7145 - 7148, The code is writing the upstream API key
into auth.json for providers that are transform providers, which leaks the
upstream secret and breaks the local bridge; modify the logic around
updateAuthJson so that when isTransformProvider is true (or when name identifies
a transform provider) you do NOT write the upstream apiKey into auth.json but
instead ensure auth.json remains pinned to the local bridge value (e.g.,
"codexmate") or skip calling updateAuthJson entirely; update the conditional
that checks apiKey, name, and config.model_provider (the block referencing
currentProvider, apiKey, name and calling updateAuthJson) to either call
updateAuthJson with the pinned "codexmate" token or avoid persisting the
upstream secret when isTransformProvider is true.
cli/openai-bridge.js (2)

116-132: ⚠️ Potential issue | 🟠 Major

Serialize this settings read-modify-write.

Line 116 reads the current JSON and Line 132 writes a merged copy back, but nothing prevents two concurrent add/update requests from interleaving and dropping one provider entry. writeJsonAtomic() only makes the final write atomic; it does not protect the whole merge cycle.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/openai-bridge.js` around lines 116 - 132, The read-modify-write around
readOpenaiBridgeSettings(...) and writeJsonAtomic(...) is vulnerable to races:
wrap the merge so concurrent runs cannot clobber each other by either (a)
acquiring a file lock for filePath for the whole read-modify-write cycle (use a
lock library or flock before calling readOpenaiBridgeSettings, perform the merge
that builds next using SETTINGS_VERSION, existingHeaders/normalizeHeadersMap and
name/baseUrl/key/nextHeaders, then writeJsonAtomic and release the lock), or (b)
implement a compare-and-swap retry loop: read current settings, compute next,
write to a temp file and atomically rename only if the on-disk content/mtime
matches the value you read; if it changed, retry the read/merge/write until
success. Use the existing symbols (readOpenaiBridgeSettings, writeJsonAtomic,
normalizeHeadersMap, SETTINGS_VERSION, providers, name) to locate and protect
the critical section.
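
For illustration, option (a)'s lock-file approach could look roughly like this (sketch only; withFileLock, the lock path, and the timeout are hypothetical):

// Sketch only: crude lock file held across a read-modify-write cycle.
const fs = require('node:fs');
const { setTimeout: delay } = require('node:timers/promises');

async function withFileLock(lockPath, fn) {
    const deadline = Date.now() + 5000;
    // Acquire: create the lock file exclusively, retrying while another writer holds it.
    for (;;) {
        try {
            fs.writeFileSync(lockPath, String(process.pid), { flag: 'wx' });
            break;
        } catch (err) {
            if (err.code !== 'EEXIST' || Date.now() > deadline) throw err;
            await delay(25);
        }
    }
    try {
        return await fn(); // read settings, merge, writeJsonAtomic inside fn
    } finally {
        fs.rmSync(lockPath, { force: true });
    }
}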

165-168: ⚠️ Potential issue | 🟠 Major

Don't destroy the socket before the 413 is written.

Line 167 tears down the shared request/response socket, so the res.writeHead(413) / res.end() path at Lines 593-594 can degrade into a reset instead of returning the JSON error body.

🐛 Minimal fix
             if (Number.isFinite(maxBytes) && maxBytes > 0 && size > maxBytes) {
                 aborted = true;
-                try { req.destroy(); } catch (_) {}
                 resolve({ error: '请求体过大' });
                 return;
             }
In Node.js HTTP servers, do `IncomingMessage` (`req`) and `ServerResponse` (`res`) share the same socket, and can calling `req.destroy()` before `res.writeHead()/res.end()` prevent a 413 response from being delivered?
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli/openai-bridge.js` around lines 165 - 168, The code currently calls
req.destroy() when request body is too large, which can tear down the shared
socket before the 413 response is written; remove the req.destroy() call and
instead stop consuming the body without destroying the socket (e.g., call
req.pause() or simply set aborted = true and return resolve({ error: '请求体过大' }))
so the later res.writeHead(413)/res.end() path can send the JSON error. Update
the block around the maxBytes check (refs: variables req, res, aborted,
maxBytes, size, resolve) to avoid destroying the request socket.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@cli.js`:
- Around line 6913-6920: The current isTransformProvider detection incorrectly
treats any provider whose base_url contains "/bridge/openai/" as transformed and
causes cmdUpdate() to rewrite real upstream configs; change the fallback so
isTransformProvider returns true only when providerConfig.codexmate_bridge ===
'openai' or when base_url exactly equals the local loopback bridge pattern
(e.g., the precise local bridge host/path used by this app), rather than using
url.includes('/bridge/openai/'); update the closure that defines
isTransformProvider (and any logic in cmdUpdate()) to check for the exact string
match of base_url or rely solely on codexmate_bridge to avoid corrupting
legitimate upstream configs.

In `@cli/openai-bridge.js`:
- Around line 363-384: The payload builder currently nests toolCalls under
messageItem.tool_calls which prevents normalizeResponsesInputToChatMessages()
from seeing top-level function_call items; update the return payload
construction in openai-bridge.js so that for each tool call in toolCalls you
also push a separate top-level item with type: 'function_call' (preserving the
existing messageItem.tool_calls for backward compatibility), e.g., build and
append function_call entries to the output array alongside the messageItem
before assigning payload.output; ensure references to responseId, model, usage,
and the existing messageItem remain unchanged.

---

Duplicate comments:
In `@cli.js`:
- Around line 1900-1904: The sidecar is being updated before the main config is
durably written: move the call to upsertOpenaiBridgeProvider (and any similar
calls around lines 1932-1935) so it runs only after writeConfig successfully
completes, or if you prefer to keep the current order, add a rollback that
deletes/reverts the sidecar (codexmate-openai-bridge.json /
OPENAI_BRIDGE_SETTINGS_FILE) when writeConfig fails; ensure you reference
upsertOpenaiBridgeProvider, writeConfig and the bridge settings file so the
sidecar and config remain consistent.
- Around line 7145-7148: The code is writing the upstream API key into auth.json
for providers that are transform providers, which leaks the upstream secret and
breaks the local bridge; modify the logic around updateAuthJson so that when
isTransformProvider is true (or when name identifies a transform provider) you
do NOT write the upstream apiKey into auth.json but instead ensure auth.json
remains pinned to the local bridge value (e.g., "codexmate") or skip calling
updateAuthJson entirely; update the conditional that checks apiKey, name, and
config.model_provider (the block referencing currentProvider, apiKey, name and
calling updateAuthJson) to either call updateAuthJson with the pinned
"codexmate" token or avoid persisting the upstream secret when
isTransformProvider is true.

In `@cli/openai-bridge.js`:
- Around line 116-132: The read-modify-write around
readOpenaiBridgeSettings(...) and writeJsonAtomic(...) is vulnerable to races:
wrap the merge so concurrent runs cannot clobber each other by either (a)
acquiring a file lock for filePath for the whole read-modify-write cycle (use a
lock library or flock before calling readOpenaiBridgeSettings, perform the merge
that builds next using SETTINGS_VERSION, existingHeaders/normalizeHeadersMap and
name/baseUrl/key/nextHeaders, then writeJsonAtomic and release the lock), or (b)
implement a compare-and-swap retry loop: read current settings, compute next,
write to a temp file and atomically rename only if the on-disk content/mtime
matches the value you read; if it changed, retry the read/merge/write until
success. Use the existing symbols (readOpenaiBridgeSettings, writeJsonAtomic,
normalizeHeadersMap, SETTINGS_VERSION, providers, name) to locate and protect
the critical section.
- Around line 165-168: The code currently calls req.destroy() when request body
is too large, which can tear down the shared socket before the 413 response is
written; remove the req.destroy() call and instead stop consuming the body
without destroying the socket (e.g., call req.pause() or simply set aborted =
true and return resolve({ error: '请求体过大' })) so the later
res.writeHead(413)/res.end() path can send the JSON error. Update the block
around the maxBytes check (refs: variables req, res, aborted, maxBytes, size,
resolve) to avoid destroying the request socket.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 04f2079c-373d-4f3c-b5e9-d2e7ffde07c6

📥 Commits

Reviewing files that changed from the base of the PR and between a72072a and 9ad6d0b.

📒 Files selected for processing (2)
  • cli.js
  • cli/openai-bridge.js
📜 Review details
🧰 Additional context used
🧠 Learnings (7)
📓 Common learnings
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.
📚 Learning: 2026-03-30T04:47:15.352Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli/openai-bridge.js
  • cli.js
📚 Learning: 2026-03-31T12:37:51.023Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli/openai-bridge.js
  • cli.js
📚 Learning: 2026-04-04T01:36:26.964Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli/openai-bridge.js
  • cli.js
📚 Learning: 2026-04-03T10:42:22.368Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Applied to files:

  • cli/openai-bridge.js
  • cli.js
📚 Learning: 2026-03-29T02:42:32.785Z
Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.

Applied to files:

  • cli/openai-bridge.js
  • cli.js
📚 Learning: 2026-04-05T03:58:26.787Z
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-05T03:58:26.787Z
Learning: For SakuraByteCore/codexmate reviews, only provide non-breaking feedback (correctness, safety, quality). Avoid API, behavior, or refactor suggestions. This rule applies across PRs `#55`, `#59`, `#67`, and `#68`.

Applied to files:

  • cli/openai-bridge.js

Comment thread cli.js
Comment on lines +6913 to +6920
const isTransformProvider = (() => {
if (forceUseTransform) return true;
if (!providerConfig || typeof providerConfig !== 'object') return false;
const bridge = typeof providerConfig.codexmate_bridge === 'string' ? providerConfig.codexmate_bridge.trim() : '';
if (bridge === 'openai') return true;
const url = typeof providerConfig.base_url === 'string' ? providerConfig.base_url : '';
return url.includes('/bridge/openai/');
})();
Contributor


⚠️ Potential issue | 🟠 Major

Don't infer transform mode from any /bridge/openai/ substring.

Lines 6916-6919 will classify any provider whose upstream URL happens to contain /bridge/openai/ as a transformed provider. cmdUpdate() then rewrites that provider to the local bridge URL and forces the token to codexmate, which corrupts a legitimate upstream config. Please restrict the fallback to the exact local loopback bridge shape, or rely only on codexmate_bridge === "openai".

🔧 Minimal guard
-        const url = typeof providerConfig.base_url === 'string' ? providerConfig.base_url : '';
-        return url.includes('/bridge/openai/');
+        const url = typeof providerConfig.base_url === 'string' ? providerConfig.base_url.trim() : '';
+        try {
+            const parsed = new URL(url);
+            const expectedPath = `/bridge/openai/${encodeURIComponent(name)}/v1`;
+            const isLoopback = parsed.hostname === '127.0.0.1'
+                || parsed.hostname === '::1'
+                || parsed.hostname === 'localhost';
+            return isLoopback && parsed.pathname.replace(/\/+$/g, '') === expectedPath;
+        } catch (_) {
+            return false;
+        }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@cli.js` around lines 6913 - 6920, The current isTransformProvider detection
incorrectly treats any provider whose base_url contains "/bridge/openai/" as
transformed and causes cmdUpdate() to rewrite real upstream configs; change the
fallback so isTransformProvider returns true only when
providerConfig.codexmate_bridge === 'openai' or when base_url exactly equals the
local loopback bridge pattern (e.g., the precise local bridge host/path used by
this app), rather than using url.includes('/bridge/openai/'); update the closure
that defines isTransformProvider (and any logic in cmdUpdate()) to check for the
exact string match of base_url or rely solely on codexmate_bridge to avoid
corrupting legitimate upstream configs.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Added regression tests for the new upstream /responses-first behavior in openai-bridge:

  • When upstream supports POST /v1/responses, bridge uses it and can rewrap SSE for Codex.
  • When upstream returns 405 for /v1/responses, bridge falls back to /v1/chat/completions conversion.

Head: 191868c

Contributor

@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 2

🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.

Inline comments:
In `@tests/unit/openai-bridge-upstream-responses.test.mjs`:
- Around line 47-69: Add explicit route hit counters and order assertions to the
test "openai-bridge prefers upstream /responses and rewraps SSE when stream
requested": instrument the local test server (variable upstream) to increment
counters (e.g., responsesHits, chatCompletionsHits) and push route names to an
array (e.g., hitOrder) when each route handler for '/v1/responses' and
'/v1/chat/completions' runs, then after exercising the bridge assert that
responsesHits > 0 and that the first entry in hitOrder is '/v1/responses' (and
chatCompletionsHits > 0 only if falling back), updating the same pattern in the
other tests mentioned so the assertions prove the bridge tried /v1/responses
first and only hit /v1/chat/completions on fallback.
- Around line 108-110: Tests currently call await bridge.close() and await
upstream.close() which don't return Promises and skip cleanup if assertions
throw; create a helper like closeServer(server) that returns a Promise
resolving/rejecting inside server.close(callback) and replace direct awaits with
await closeServer(bridge) and await closeServer(upstream); also wrap each test
body in try/finally so the finally block awaits closeServer(bridge),
closeServer(upstream) and awaits rm(tmpDir, { recursive: true, force: true }) to
ensure servers are closed and temp dirs removed even on failure.
🪄 Autofix (Beta)

Fix all unresolved CodeRabbit comments on this PR:

  • Push a commit to this branch (recommended)
  • Create a new PR with the fixes

ℹ️ Review info
⚙️ Run configuration

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Run ID: 84d80e74-2e1d-4515-b1c4-2a4b8065029a

📥 Commits

Reviewing files that changed from the base of the PR and between 9ad6d0b and 191868c.

📒 Files selected for processing (1)
  • tests/unit/openai-bridge-upstream-responses.test.mjs
📜 Review details
🧰 Additional context used
🧠 Learnings (1)
📓 Common learnings
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.
Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.
🔇 Additional comments (1)
tests/unit/openai-bridge-upstream-responses.test.mjs (1)

21-45: Good reusable request helper for bridge tests.

requestText keeps transport/response capture centralized and makes both test cases concise and consistent.

Comment on lines +47 to +69
test('openai-bridge prefers upstream /responses and rewraps SSE when stream requested', async () => {
const upstream = http.createServer((req, res) => {
if (req.url === '/v1/responses' && req.method === 'POST') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({
id: 'resp_upstream',
model: 'gpt-test',
output: [{
type: 'message',
role: 'assistant',
content: [{ type: 'output_text', text: 'hello-from-upstream' }]
}]
}));
return;
}
if (req.url === '/v1/models' && req.method === 'GET') {
res.writeHead(200, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ object: 'list', data: [] }));
return;
}
res.writeHead(404, { 'Content-Type': 'application/json' });
res.end(JSON.stringify({ error: 'not found' }));
});
Contributor


⚠️ Potential issue | 🟡 Minor

Assert upstream call path explicitly for “prefer /responses then fallback”.

Current assertions verify output shape/status, but they don’t prove /v1/responses was attempted first (or that the fallback hit /v1/chat/completions only after a 404/405). Add route hit counters and order assertions to lock in the regression intent.

Also applies to: 103-107, 114-131, 165-170

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/openai-bridge-upstream-responses.test.mjs` around lines 47 - 69,
Add explicit route hit counters and order assertions to the test "openai-bridge
prefers upstream /responses and rewraps SSE when stream requested": instrument
the local test server (variable upstream) to increment counters (e.g.,
responsesHits, chatCompletionsHits) and push route names to an array (e.g.,
hitOrder) when each route handler for '/v1/responses' and '/v1/chat/completions'
runs, then after exercising the bridge assert that responsesHits > 0 and that
the first entry in hitOrder is '/v1/responses' (and chatCompletionsHits > 0 only
if falling back), updating the same pattern in the other tests mentioned so the
assertions prove the bridge tried /v1/responses first and only hit
/v1/chat/completions on fallback.
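
For illustration, the instrumentation could look roughly like this inside the test (sketch only; hitOrder is a hypothetical name and the 404 handler stands in for the existing routes):

import http from 'node:http';
import assert from 'node:assert/strict';

// Sketch only: record which upstream routes were hit, and in what order,
// so the test can assert /v1/responses is tried before any /v1/chat/completions fallback.
const hitOrder = [];
const upstream = http.createServer((req, res) => {
    hitOrder.push(`${req.method} ${(req.url || '').split('?')[0]}`);
    // ...existing per-route handling would go here...
    res.writeHead(404, { 'Content-Type': 'application/json' });
    res.end(JSON.stringify({ error: 'not found' }));
});

// After exercising the bridge against this upstream:
// assert.equal(hitOrder[0], 'POST /v1/responses');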

Comment on lines +108 to +110
await bridge.close();
await upstream.close();
await rm(tmpDir, { recursive: true, force: true });
Contributor


⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

🌐 Web query:

In Node.js current LTS, does http.Server.close() return a Promise, or does it require callback/event-based completion handling?

💡 Result:

In the current Node.js LTS version (v24.x "Krypton"), http.Server.close does not return a Promise. It is a callback-based method that accepts an optional callback function executed when the server has closed. Completion must be handled via the callback or the 'close' event. Usage example: server.close((err) => { if (err) { // handle error return; } console.log('Server closed'); }); To use it with Promises, wrap it manually: const closePromise = new Promise((resolve, reject) => { server.close((err) => { if (err) reject(err); else resolve(); }); }); Note: https.Server.close behaves the same way, explicitly documented as taking a callback and returning the behavior from http.Server.close. A separate asyncDispose symbol (added v20.4.0) exists for Promise-based disposal, but it's not the primary close method.

Citations:


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | sed -n '100,115p'

Repository: SakuraByteCore/codexmate

Length of output: 755


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | sed -n '165,180p'

Repository: SakuraByteCore/codexmate

Length of output: 472


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | sed -n '40,75p'

Repository: SakuraByteCore/codexmate

Length of output: 1771


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | sed -n '47,111p'

Repository: SakuraByteCore/codexmate

Length of output: 3007


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | sed -n '113,174p'

Repository: SakuraByteCore/codexmate

Length of output: 2867


🏁 Script executed:

cat -n tests/unit/openai-bridge-upstream-responses.test.mjs | head -40

Repository: SakuraByteCore/codexmate

Length of output: 1723


🏁 Script executed:

grep -n "beforeEach\|afterEach\|before\|after" tests/unit/openai-bridge-upstream-responses.test.mjs

Repository: SakuraByteCore/codexmate

Length of output: 50


🏁 Script executed:

cat > /tmp/test_close.js << 'EOF'
import http from 'node:http';

const server = http.createServer((req, res) => {
  res.end('test');
});

server.listen(0, '127.0.0.1', () => {
  console.log('Server listening');
  
  // This will NOT wait for actual close completion
  const result = server.close();
  console.log('Type of server.close() return value:', typeof result);
  console.log('Is it a Promise?', result instanceof Promise);
  console.log('server.close() returns the server itself:', result === server);
});

EOF
node /tmp/test_close.js

Repository: SakuraByteCore/codexmate

Length of output: 198


🏁 Script executed:

python3 << 'EOF'
# When you call server.close() in Node.js:
# 1. It returns the server instance (not a Promise)
# 2. When you await a non-Promise value, it resolves immediately to that value
# 3. The actual close happens asynchronously via callback/event

# So await server.close() is equivalent to:
# const result = await Promise.resolve(server);
# Which returns immediately, NOT waiting for the server to actually close

print("Analysis of await server.close():")
print("- server.close() returns: http.Server instance (not Promise)")
print("- await <non-Promise> resolves immediately")
print("- Therefore: await server.close() does NOT wait for actual closure")
print("- Result: sockets may still be open when next line executes")
print("\nAdditional concern:")
print("- No try/finally block means cleanup is skipped if assertions fail")
print("- This could cause test flakiness and resource leaks")
EOF

Repository: SakuraByteCore/codexmate

Length of output: 473


Wrap test cleanup in try/finally and properly await server closure with a Promise wrapper.

At lines 108–110 and 171–173, await bridge.close() and await upstream.close() do not actually wait for the servers to close. Node's http.Server.close() is callback-based and returns the server instance itself, not a Promise. Awaiting it resolves immediately without waiting for actual closure. Additionally, if any assertion before cleanup throws, the cleanup code is skipped entirely, leaving sockets open and temporary directories on disk.

Wrap each test body in try/finally and create a helper function that properly wraps server.close() in a Promise:

Suggested fix
+function closeServer(server) {
+    return new Promise((resolve, reject) => {
+        server.close((err) => (err ? reject(err) : resolve()));
+    });
+}
+
 test('openai-bridge prefers upstream /responses and rewraps SSE when stream requested', async () => {
     const upstream = http.createServer((req, res) => {
         if (req.url === '/v1/responses' && req.method === 'POST') {
@@ -70,6 +76,7 @@
     const { port: upstreamPort } = await listen(upstream);
 
     const tmpDir = await mkdtemp(path.join(os.tmpdir(), 'codexmate-bridge-test-'));
+    let bridge;
+    try {
     const settingsFile = path.join(tmpDir, 'bridge.json');
     await writeFile(settingsFile, JSON.stringify({
         version: 1,
@@ -102,9 +109,12 @@
     assert.equal(sse.status, 200);
     assert.match(sse.headers['content-type'], /text\/event-stream/i);
     assert.match(sse.text, /event: response\.completed/);
     assert.match(sse.text, /data: \[DONE\]/);
-
-    await bridge.close();
-    await upstream.close();
-    await rm(tmpDir, { recursive: true, force: true });
+    } finally {
+        if (bridge) await closeServer(bridge);
+        await closeServer(upstream);
+        await rm(tmpDir, { recursive: true, force: true });
+    }
 });

Also applies to lines 113–174 (second test).

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@tests/unit/openai-bridge-upstream-responses.test.mjs` around lines 108 - 110,
Tests currently call await bridge.close() and await upstream.close() which don't
return Promises and skip cleanup if assertions throw; create a helper like
closeServer(server) that returns a Promise resolving/rejecting inside
server.close(callback) and replace direct awaits with await closeServer(bridge)
and await closeServer(upstream); also wrap each test body in try/finally so the
finally block awaits closeServer(bridge), closeServer(upstream) and awaits
rm(tmpDir, { recursive: true, force: true }) to ensure servers are closed and
temp dirs removed even on failure.

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Fix for provider auto-reverting after selection: we no longer roll back the provider switch when the /models fetch fails.

Some OpenAI-compatible endpoints (including DashScope compatible-mode) may not support /models or may require different auth, so model-list failures should not block applying the provider. The UI now keeps the selected provider and shows an error toast asking the user to set a model or check the API key.

Head: c2a7c2e

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Handle DashScope-style upstream behavior where POST /v1/responses returns 500 {code: convert_request_failed, message: not implemented}.

The bridge now treats this as '/responses unsupported' and falls back to /v1/chat/completions conversion (same as 404/405/501). Added unit test coverage.

Head: 6c22f6d
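
Roughly, the broadened 'unsupported' check looks like this (sketch only; the exact error fields the bridge inspects are assumptions):

// Sketch only: decide whether an upstream /v1/responses reply means "endpoint unsupported".
function responsesEndpointUnsupported(status, bodyText) {
    if (status === 404 || status === 405 || status === 501) return true;
    if (status !== 500) return false;
    try {
        const body = JSON.parse(bodyText);
        const code = String(body?.error?.code ?? body?.code ?? '');
        const message = String(body?.error?.message ?? body?.message ?? '');
        return code === 'convert_request_failed' || /not implemented/i.test(message);
    } catch {
        return false;
    }
}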

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Provider-switch UX improvement for 'special' providers whose /models endpoint is slow or unavailable: switching is now single-click.

  • The provider switch no longer blocks on the models fetch; it waits briefly (250 ms) for the fast path, applies the provider immediately, then finishes loading models in the background.
  • If models arrive later and the current model is not in the list, the UI auto-selects the first model and re-applies it silently.

Head: 0c827ab
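
Roughly, the switch flow could be sketched like this (the helpers are passed in because applyProvider/fetchModels/updateModelPicker are hypothetical names, not the actual UI module functions):

// Sketch only: apply the provider right away, let the model list catch up in the background.
async function switchProvider(name, { applyProvider, fetchModels, updateModelPicker }) {
    const modelsPromise = fetchModels(name).catch(() => null); // failures never block the switch
    // Give fast endpoints up to 250 ms so the model picker can be populated immediately.
    const quick = await Promise.race([
        modelsPromise,
        new Promise((resolve) => setTimeout(() => resolve(undefined), 250))
    ]);
    await applyProvider(name); // applied regardless of /models success
    if (quick === undefined) {
        modelsPromise.then((models) => {
            if (models) updateModelPicker(name, models); // may silently re-apply a valid model
        });
    }
}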

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Fix for 'Codex client receives no output' while health-check works: our SSE stream was missing Responses-API anchoring events (response.output_item.added / output_item.done + required indices), so Codex could ignore output_text deltas.

Bridge now emits a more spec-aligned Responses SSE sequence:

  • response.created
  • response.output_item.added
  • response.output_text.delta/done (with item_id/output_index/content_index/sequence_number)
  • response.output_item.done
  • response.completed
  • [DONE]

Also converts chat.completions tool_calls into Responses-style function_call output items.

Head: f6eb578
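
For reference, a condensed sketch of emitting that sequence for a single text item (sketch only; exact payload fields beyond the names above are assumptions):

// Sketch only: write a minimal spec-shaped Responses SSE sequence for one text output.
function writeSse(res, event, data) {
    res.write(`event: ${event}\n`);
    res.write(`data: ${JSON.stringify(data)}\n\n`);
}

function streamTextAsResponses(res, responseId, itemId, text) {
    res.writeHead(200, { 'Content-Type': 'text/event-stream', 'Cache-Control': 'no-cache' });
    let seq = 0;
    writeSse(res, 'response.created', { type: 'response.created', sequence_number: seq++, response: { id: responseId } });
    writeSse(res, 'response.output_item.added', { type: 'response.output_item.added', sequence_number: seq++, output_index: 0, item: { id: itemId, type: 'message', role: 'assistant' } });
    writeSse(res, 'response.output_text.delta', { type: 'response.output_text.delta', sequence_number: seq++, item_id: itemId, output_index: 0, content_index: 0, delta: text });
    writeSse(res, 'response.output_text.done', { type: 'response.output_text.done', sequence_number: seq++, item_id: itemId, output_index: 0, content_index: 0, text });
    writeSse(res, 'response.output_item.done', { type: 'response.output_item.done', sequence_number: seq++, output_index: 0, item: { id: itemId, type: 'message', role: 'assistant' } });
    writeSse(res, 'response.completed', { type: 'response.completed', sequence_number: seq++, response: { id: responseId } });
    res.write('data: [DONE]\n\n');
    res.end();
}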

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

Added 2-step diff confirmation for the Config template editor (mirrors AGENTS.md diff workflow).

  • First click: generate and show diff preview of current ~/.codex/config.toml -> edited template.
  • Second click: apply/write the template.
  • Includes server-side preview endpoint (preview-config-template-diff) and UI state handling.

Head: bc7fa17
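
Sketched out, the two-click state machine might look like this (previewDiff/applyTemplate are hypothetical stand-ins; only the preview-config-template-diff action name comes from this PR):

// Sketch only: first click previews a diff, second click applies the template.
async function onApplyTemplateClick(state, editedTemplate, { previewDiff, applyTemplate }) {
    if (!state.confirming) {
        state.diff = await previewDiff(editedTemplate); // backed by the preview-config-template-diff action
        state.confirming = true; // show the diff and relabel the button to "Apply"
        return;
    }
    await applyTemplate(editedTemplate); // second click: write the template
    state.confirming = false;
    state.diff = null;
}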

@ymkiux
Collaborator Author

ymkiux commented Apr 16, 2026

@coderabbitai If I merge this directly, will it introduce any regressions?

@coderabbitai
Contributor

coderabbitai bot commented Apr 16, 2026

Tip

For best results, initiate chat on the files or code changes.

🧩 Analysis chain

🏁 Script executed:

#!/bin/bash
# Get latest PR status including checks and review status
gh pr view 90 --repo SakuraByteCore/codexmate --json state,mergeable,mergeStateStatus,reviewDecision,statusCheckRollup,commits | jq '.'

Length of output: 8382


@ymkiux From a technical standpoint, the risk that merging directly introduces regressions is very low. Item-by-item assessment below:


✅ CI status

All CI checks on the latest head (bc7fa17) pass: Node.js 18 / 20 / 22 are all SUCCESS, CodeRabbit status is SUCCESS, and there are no unresolved review threads.

The only reason the PR is currently BLOCKED is REVIEW_REQUIRED (maintainer approval needed), which is unrelated to code quality or tests.


📋 Regression-risk grading by change

  • New cli/openai-bridge.js: brand-new file, does not touch existing logic.
  • New bridge endpoint /bridge/openai/…: purely additive, existing paths are unaffected.
  • useTransform toggle (default false): off by default, zero behavior change for existing providers.
  • New per-provider custom headers: only take effect for bridge providers.
  • New preview-config-template-diff endpoint: new API, existing endpoints are not modified.
  • performProviderSwitch no longer rolls back on /models failure: this fixes a known bug (DashScope and other endpoints that do not support /models); the corresponding tests have been updated.
  • openHealthCheckDialog gains a locked-provider guard: purely additive guard, no effect on calls that do not pass locked; tests updated.
  • Health-check button disable logic and copy adjustments: very low risk, pure UI polish.
  • Config template editor switched to a 2-step confirmation flow: very low risk; only the interaction steps change, config read/write logic is untouched.

🔑 Conclusion

There are no known regression risks after merging. The only substantive behavior change is performProviderSwitch, and that is itself a bug fix whose new behavior is already covered by the updated test suite.

The only remaining blocker is that the maintainer (@SurviveM) needs to approve before the PR can be merged.


🧠 Learnings used
Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-30T04:47:15.352Z
Learning: In PR `#55` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-31T12:37:51.023Z
Learning: In PR `#59` for SakuraByteCore/codexmate, github-actions[bot] repeatedly flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-04T01:36:26.964Z
Learning: In PR `#68` for SakuraByteCore/codexmate, github-actions[bot] again flagged that CodeRabbit was introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`, `#68`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: github-actions[bot]
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-04-03T10:42:22.368Z
Learning: In PR `#67` for SakuraByteCore/codexmate, github-actions[bot] flagged that CodeRabbit was again introducing breaking changes. The maintainer's absolute constraint (reinforced across PRs `#55`, `#59`, `#67`) is: no breaking changes in any review suggestion; limit all feedback to correctness, safety, and quality only within the existing design, without refactoring or altering APIs, interfaces, or behavior.

Learnt from: SurviveM
Repo: SakuraByteCore/codexmate PR: 0
File: :0-0
Timestamp: 2026-03-29T02:42:32.785Z
Learning: The repository maintainer SurviveM (GitHub: SurviveM, repo: SakuraByteCore/codexmate) prefers that CodeRabbit review suggestions do not introduce breaking changes. Reviews should focus on correctness, safety, and quality without recommending refactors or changes that would break existing APIs, interfaces, or behavior.

@ymkiux ymkiux changed the title from "feat(codex): add OpenAI-compatible provider transform toggle" to "feat: OpenAI-compatible provider bridge + Codex UX improvements" Apr 16, 2026
Collaborator

@awsl233777 awsl233777 left a comment


Checks passed, approving for merge.

@awsl233777 awsl233777 merged commit eebac86 into main Apr 17, 2026
7 checks passed
@awsl233777 awsl233777 deleted the feat/codex-provider-transform branch April 17, 2026 01:43
ymkiux added a commit that referenced this pull request Apr 18, 2026
feat: OpenAI-compatible provider bridge + Codex UX improvements