
fix: Fix the built-in mcp__plugin_weixin_weixin__reply WeChat tool being invisible under the OpenAI provider (models such as gpt-5.4/gpt-5.3-codex) #359

Merged
claude-code-best merged 8 commits into claude-code-best:main from XavierWangHX:fix/openai-mcp-tool-visibility
Apr 26, 2026

Conversation

@XavierWangHX
Contributor

@XavierWangHX XavierWangHX commented Apr 25, 2026

Summary

  • Addendum: this PR fixes the problem where, when using the OpenAI provider, the model could not discover the built-in mcp__plugin_weixin_weixin__reply WeChat tool
  • Keep the full tool pool available for the OpenAI branch to search MCP tools, while sending only loaded tools to the API
  • Add an OpenAI isolated regression test covering WeChat MCP tool-name visibility

Tests

  • bun test ./src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts

Summary by CodeRabbit

  • New Features

    • Deferred tools are now announced and discoverable for OpenAI-compatible models, so tools that appear later are searchable and usable without a reload.
  • Bug Fixes

    • Injected tool-announcement content is preserved and recorded in tracing for OpenAI-bound requests.
    • Message-to-tracing conversion now retains tool identifiers and associated metadata.
  • Tests

    • Added tests validating deferred tool visibility, preservation through conversion, and tracing of the injected announcement.

@coderabbitai
Contributor

coderabbitai Bot commented Apr 25, 2026

Note

Reviews paused

It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior by changing the reviews.auto_review.auto_pause_after_reviewed_commits setting.

Use the following commands to manage reviews:

  • @coderabbitai resume to resume automatic reviews.
  • @coderabbitai review to trigger a single review.

Use the checkboxes below for quick actions:

  • ▶️ Resume reviews
  • 🔍 Trigger review

Walkthrough

Claude's OpenAI-provider path now calls queryModelOpenAI with the full tools set; queryModelOpenAI conditionally injects a <available-deferred-tools>...</available-deferred-tools> meta user message (when tool search is enabled and deferred-tools-delta is disabled) and records the converted OpenAI messages in Langfuse. Langfuse message conversion now accepts already-Langfuse/OpenAI-style inputs and preserves tool metadata.
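The conditional injection described above can be sketched roughly as follows. This is a minimal sketch: the flag names, message shape, and helper are assumptions for illustration, not the project's actual identifiers.

```typescript
// Sketch: conditionally prepend the deferred-tool announcement before
// converting messages for OpenAI. All names here are hypothetical.
interface ChatMessage {
  role: 'user' | 'assistant'
  content: string
  isMeta?: boolean
}

function injectDeferredToolList(
  messages: ChatMessage[],
  deferredToolNames: string[],
  toolSearchEnabled: boolean,
  deferredToolsDeltaEnabled: boolean,
): ChatMessage[] {
  // Inject only when tool search is on and the deferred-tools-delta
  // mechanism is off, matching the condition in the walkthrough.
  if (!toolSearchEnabled || deferredToolsDeltaEnabled || deferredToolNames.length === 0) {
    return messages
  }
  // One sorted tool name per line, wrapped in the announcement tags.
  const body = [...deferredToolNames].sort().join('\n')
  const meta: ChatMessage = {
    role: 'user',
    isMeta: true,
    content: `<available-deferred-tools>\n${body}\n</available-deferred-tools>`,
  }
  return [meta, ...messages]
}
```

The real implementation lives in src/services/api/openai/index.ts; this sketch only shows the shape of the decision and the injected message.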

Changes

Cohort / File(s) Summary
OpenAI Request Building
src/services/api/openai/index.ts
Adds preprocessing in queryModelOpenAI to build and prepend a sorted <available-deferred-tools> meta user message from tools when tool search is enabled and deferred-tools-delta is disabled; runs OpenAI conversion on the augmented messages and records openaiMessages as Langfuse input.
Claude API Integration
src/services/api/claude.ts
Calls queryModelOpenAI(messagesForAPI, systemPrompt, tools, signal, options) (full tools) instead of filteredTools, exposing deferred MCP tools to the OpenAI adapter.
OpenAI Tests
src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts
Expands mocks to control streams/conversions, adds a _toolSearchEnabled toggle, and adds a test asserting OpenAI-bound messages include <available-deferred-tools> listing deferred MCP tool names when tool search is enabled.
Langfuse Conversion & Tests
src/services/langfuse/convert.ts, src/services/langfuse/__tests__/langfuse.test.ts
convertMessagesToLangfuse now accepts LangfuseInputMessage[], detects wrapped messages, preserves roles (including tool) and propagates tool_call_id/tool_calls for string/array content; tests ensure conversion preserves content and tool metadata.

Sequence Diagram(s)

sequenceDiagram
  participant Claude as Claude Service
  participant OpenAIAdapter as queryModelOpenAI
  participant Tools as Tools Registry/MCP
  participant OpenAI as OpenAI API

  Claude->>OpenAIAdapter: call(messagesForAPI, systemPrompt, tools, signal, options)
  OpenAIAdapter->>Tools: evaluate toolSearchEnabled & deferred-tools-delta flag
  alt toolSearch enabled and delta disabled
    OpenAIAdapter->>OpenAIAdapter: build meta user message "<available-deferred-tools>...</available-deferred-tools>"
    OpenAIAdapter->>OpenAI: send messages (including injected meta message) + systemPrompt
  else
    OpenAIAdapter->>OpenAI: send original messagesForAPI + systemPrompt
  end
  OpenAI-->>OpenAIAdapter: return response
  OpenAIAdapter-->>Claude: return model response

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Possibly related PRs

Suggested reviewers

  • KonghaYao

Poem

🐇 I found the shy tools, listed them in a row,

a tiny hop, a whisper so models now know.
Names in a line, a gentle burrowed light,
deferred friends waving from shadow to sight.
Request and reply — the hollow hums with delight.

🚥 Pre-merge checks | ✅ 4 | ❌ 1

❌ Failed checks (1 warning)

Check name | Status | Explanation | Resolution
Docstring Coverage | ⚠️ Warning | Docstring coverage is 60.00%, which is insufficient; the required threshold is 80.00%. | Write docstrings for the functions missing them to satisfy the coverage threshold.
✅ Passed checks (4 passed)
Check name | Status | Explanation
Description Check | ✅ Passed | Check skipped — CodeRabbit's high-level summary is enabled.
Title Check | ✅ Passed | The title is written in Chinese and refers to fixing WeChat tool visibility under the OpenAI provider, which directly matches the changeset's focus on making deferred MCP tools visible to OpenAI models.
Linked Issues Check | ✅ Passed | Check skipped because no linked issues were found for this pull request.
Out of Scope Changes Check | ✅ Passed | Check skipped because no linked issues were found for this pull request.

✏️ Tip: You can configure your own custom pre-merge checks in the settings.


Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.


Comment @coderabbitai help to get the list of available commands and usage tips.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts (1)

325-328: ⚠️ Potential issue | 🟡 Minor

Mock path doesn't match the actual import; mock is silently inert.

src/services/api/openai/index.ts imports isDeferredTool, formatDeferredToolLine, and TOOL_SEARCH_TOOL_NAME from @claude-code-best/builtin-tools/tools/ToolSearchTool/prompt.js, but the mock registers '../../../../tools/ToolSearchTool/prompt.js', which resolves to a non-existent local path. The mock never overrides the real module.

The test "prepends available deferred MCP tools to OpenAI messages" passes only because the real isDeferredTool returns true for tools with isMcp: true (line 68 of prompt.ts). If the real implementation's behavior ever changes, this test will silently break or pass for the wrong reason.

Additionally, isDeferredTool is a pure function and should not be mocked per coding guidelines. Recommend deleting this mock and adding a comment that the test deliberately exercises the real implementation.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts` around lines
325 - 328, Remove the incorrect mock registration and stop mocking the pure
helper: delete the mock.module call that registers
'../../../../tools/ToolSearchTool/prompt.js' so the test uses the real
implementations of isDeferredTool, formatDeferredToolLine and
TOOL_SEARCH_TOOL_NAME; instead add a short comment above the test stating it
intentionally exercises the real ToolSearchTool helpers
(isDeferredTool/formatDeferredToolLine/TOOL_SEARCH_TOOL_NAME) to make the test
reliable and avoid shadowing the package import.
🧹 Nitpick comments (3)
src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts (1)

607-661: Good regression coverage; minor robustness suggestions.

The test correctly validates the fix end-to-end (request body contains the <available-deferred-tools> header listing the MCP tool name). A few small nits:

  1. _toolSearchEnabled = true is set outside the try. While there's nothing throwable between the assignment and the try, moving it inside the try block guarantees the finally cleanup runs even if a future refactor inserts code before the try.
  2. The string assertion at lines 654-656 uses '\\n' (literal backslash-n) which correctly matches JSON.stringify's output of a real newline. Worth a brief comment so a future reader doesn't "fix" it to '\n' and silently break the test.
  3. _toolSearchEnabled is module-scoped mutable state — consider resetting it in a beforeEach/afterEach at the describe level for symmetry with _lastCreateArgs/_nextEvents and to make ordering issues impossible if more tests are added.
♻️ Suggested tightening
 describe('queryModelOpenAI — deferred MCP tool visibility', () => {
+  afterEach(() => {
+    _toolSearchEnabled = false
+  })
+
   test('prepends available deferred MCP tools to OpenAI messages', async () => {
-    _toolSearchEnabled = true
     _nextEvents = [makeMessageStart(), makeMessageStop()]

     try {
+      _toolSearchEnabled = true
       const { queryModelOpenAI } = await import('../index.js')
       ...
       expect(_lastCreateArgs).not.toBeNull()
+      // NB: '\\n' is the literal 2-char sequence \n produced by JSON.stringify
+      // for embedded newlines — do not "fix" to '\n'.
       expect(JSON.stringify(_lastCreateArgs!.messages)).toContain(
         '<available-deferred-tools>\\nmcp__wechat__send_message\\n</available-deferred-tools>',
       )
     } finally {
       _toolSearchEnabled = false
     }
   })
 })
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts` around lines
607 - 661, Move the module flag setup and cleanup into structured test lifecycle
and clarify the newline assertion: set _toolSearchEnabled = true inside the
test's try block (so the finally always resets it) or better yet add a
describe-level beforeEach that sets _toolSearchEnabled = true and an afterEach
that resets it to false (alongside resetting _lastCreateArgs and _nextEvents) to
avoid ordering issues; also add a brief comment next to the expect that checks
JSON.stringify(...).toContain('<available-deferred-tools>\\nmcp__wechat__send_message\\n</available-deferred-tools>')
to explain the use of the literal '\\n' escape (so future editors don’t replace
it with '\n').
src/services/api/openai/index.ts (1)

50-73: Helper looks good; consider deduplicating with claude.ts.

The <available-deferred-tools> block is now constructed both here and in src/services/api/claude.ts (around lines 1382-1397) using the same filter → formatDeferredToolLine → sort → join → createUserMessage({ isMeta: true }) pipeline. Extracting a small shared utility (e.g., buildDeferredToolListMessage(tools, deferredToolNames) exported from src/utils/toolSearch.ts or similar) would keep the two providers in lock-step if the wire format ever changes (e.g., delimiter, header tag, or sort key).
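A shared helper along the lines the review suggests could look like this. It is a hedged sketch under simplified types: the real isDeferredTool and formatDeferredToolLine helpers are reduced to trivial stand-ins, and the message shape only mirrors createUserMessage({ isMeta: true }).

```typescript
// Hypothetical sketch of the proposed buildDeferredToolListMessage helper.
interface Tool { name: string; isMcp?: boolean }
interface ChatMessage { role: string; content: string; isMeta?: boolean }

function buildDeferredToolListMessage(tools: Tool[]): ChatMessage | null {
  const lines = tools
    .filter(t => t.isMcp === true) // stand-in for isDeferredTool
    .map(t => t.name)              // stand-in for formatDeferredToolLine
    .sort()
  if (lines.length === 0) return null
  return {
    role: 'user',
    isMeta: true, // mirrors createUserMessage({ isMeta: true })
    content: `<available-deferred-tools>\n${lines.join('\n')}\n</available-deferred-tools>`,
  }
}

// Both providers would then call the helper and prepend only when present:
function prependIfNeeded(messages: ChatMessage[], tools: Tool[]): ChatMessage[] {
  const meta = buildDeferredToolListMessage(tools)
  return meta ? [meta, ...messages] : messages
}
```

Centralizing the filter → format → sort → join pipeline this way keeps the wire format identical in both providers if the delimiter, header tag, or sort key ever changes.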

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/index.ts` around lines 50 - 73, The duplicated
construction of the <available-deferred-tools> user message should be extracted
to a shared helper so both providers stay in sync: create and export a function
like buildDeferredToolListMessage(tools, deferredToolNames) (eg. in
src/utils/toolSearch.ts) that performs the filter → formatDeferredToolLine →
sort → join steps and returns the createUserMessage({ content:
`<available-deferred-tools>…</available-deferred-tools>`, isMeta: true })
result; then replace the inline pipeline in prependDeferredToolListIfNeeded and
the analogous code in claude.ts to call this new helper and return messages
unchanged when the helper yields no message.
src/services/api/claude.ts (1)

1341-1348: LGTM — root-cause fix.

Passing the full tools pool to the OpenAI adapter is correct: it lets queryModelOpenAI rebuild its own filteredTools and, more importantly, render the full <available-deferred-tools> listing so ToolSearchTool can discover deferred MCP tools that were previously hidden by the upstream filter. The inline comment makes the intent clear.

Minor note: the message preprocessing pipeline above (normalizeMessagesForAPI, ensureToolResultPairing, stripAdvisorBlocks, stripExcessMediaItems) is applied to messagesForAPI and then re-normalized inside queryModelOpenAI (line 146 of openai/index.ts). The double normalization is idempotent today, but worth a follow-up to either skip the OpenAI-bound normalization here or skip re-normalization there to avoid drift if either side acquires non-idempotent steps.
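The suggested follow-up could be as simple as an options flag, sketched here with hypothetical names — skipNormalization and the trivial normalizer below are illustrations, not the real pipeline.

```typescript
// Sketch of a skipNormalization escape hatch. The option name and the
// pass-through normalizer are hypothetical stand-ins.
interface QueryOptions { skipNormalization?: boolean }
interface Message { role: string; content: string }

// Stand-in for the real normalization pipeline in openai/index.ts.
function normalizeMessagesForAPI(messages: Message[]): Message[] {
  return messages.map(m => ({ ...m, content: m.content.trim() }))
}

// Run normalization exactly once: callers that already normalized
// upstream (as claude.ts does) would pass skipNormalization: true.
function prepareMessages(messages: Message[], options: QueryOptions = {}): Message[] {
  return options.skipNormalization ? messages : normalizeMessagesForAPI(messages)
}
```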

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/claude.ts` around lines 1341 - 1348, The messages passed to
queryModelOpenAI are being normalized here via normalizeMessagesForAPI /
ensureToolResultPairing / stripAdvisorBlocks / stripExcessMediaItems and then
queryModelOpenAI re-normalizes them, causing redundant work and a potential
future drift; update queryModelOpenAI (in openai/index.ts) to accept an
indicator that messages are already normalized (e.g., add a skipNormalization
boolean parameter or a options flag) or remove its internal normalization, and
then call queryModelOpenAI(messagesForAPI, systemPrompt, tools, signal, {
...options, skipNormalization: true }) from this code path so only one
normalization pipeline runs; refer to the functions normalizeMessagesForAPI,
ensureToolResultPairing, stripAdvisorBlocks, stripExcessMediaItems and the
queryModelOpenAI function when making the change.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: b58913d8-0038-47c2-af87-579802904dd6

📥 Commits

Reviewing files that changed from the base of the PR and between d03af7b and 23e1257.

📒 Files selected for processing (3)
  • src/services/api/claude.ts
  • src/services/api/openai/__tests__/queryModelOpenAI.isolated.ts
  • src/services/api/openai/index.ts

@XavierWangHX
Contributor Author

Addendum: this PR fixes the problem where, when using the OpenAI provider, the model could not discover the built-in mcp__plugin_weixin_weixin__reply WeChat tool.

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/services/api/openai/index.ts (1)

399-415: ⚠️ Potential issue | 🟡 Minor

Langfuse trace input no longer reflects what was actually sent to the model.

convertMessagesToLangfuse(messagesForAPI, systemPrompt) uses the pre-prepend list, so the recorded trace input will not include the <available-deferred-tools> meta message that was actually transmitted. This makes debugging deferred-tool issues from Langfuse harder and diverges from the source of truth used in openaiMessages.

📊 Suggested fix
-      input: convertMessagesToLangfuse(messagesForAPI, systemPrompt),
+      input: convertMessagesToLangfuse(messagesWithDeferredToolList, systemPrompt),
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/api/openai/index.ts` around lines 399 - 415, Record uses
convertMessagesToLangfuse(messagesForAPI, systemPrompt) which reflects the
pre-prepend list, not the actual messages sent (openaiMessages) that include the
<available-deferred-tools> meta message; update the recordLLMObservation call to
pass the actual transmitted messages (openaiMessages) to
convertMessagesToLangfuse (or otherwise ensure the converter receives the final
openaiMessages payload) so the Langfuse trace input matches what was sent to the
model; keep other fields (systemPrompt, model, output, usage, times, tools)
unchanged and only swap the input source.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 415c7496-c52e-4486-98a7-33228c141e30

📥 Commits

Reviewing files that changed from the base of the PR and between 23e1257 and b3c9af4.

📒 Files selected for processing (1)
  • src/services/api/openai/index.ts

@XavierWangHX changed the title from "fix: Fix MCP tools being invisible under the OpenAI provider" to "fix: Fix the built-in mcp__plugin_weixin_weixin__reply WeChat tool being invisible under the OpenAI provider (models such as gpt-5.4/gpt-5.3-codex)" on Apr 25, 2026
Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/services/langfuse/convert.ts (1)

147-163: Logic is correct; minor optional cleanup on the role cast.

The wrapped-vs-direct unwrapping and the tool_call_id / tool_calls passthrough on the string-content branch correctly preserve OpenAI tool-protocol fields (e.g. {role:'tool', tool_call_id, content:'...'} and assistant messages carrying tool_calls). Behavior matches what the new test in langfuse.test.ts expects.

Optional nit on line 150: (inner.role as 'user' | 'assistant' | 'system' | 'tool' | undefined) is a widening cast that could be expressed as a small type guard or by tightening the inner type after the 'message' in msg discrimination — not blocking, just slightly more idiomatic. As per coding guidelines, "Use type guards (type narrowing) with union types instead of forced type casting".
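The suggested type guard might look like this — a sketch only, where LangfuseRole and the fallback behaviour are assumptions about the surrounding code rather than its actual definitions.

```typescript
// Sketch of a user-defined type guard replacing the widening cast on
// inner.role. Names are hypothetical.
type LangfuseRole = 'user' | 'assistant' | 'system' | 'tool'

function isLangfuseRole(value: unknown): value is LangfuseRole {
  return value === 'user' || value === 'assistant' || value === 'system' || value === 'tool'
}

// Narrow an unknown role without a forced cast, falling back when invalid.
function resolveRole(innerRole: unknown, fallback: LangfuseRole = 'user'): LangfuseRole {
  return isLangfuseRole(innerRole) ? innerRole : fallback
}
```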

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/langfuse/convert.ts` around lines 147 - 163, The current role
assignment uses a widening cast on inner.role; replace that forced cast with a
small type guard that checks whether inner.role is one of the allowed strings
('user','assistant','system','tool') and use it if valid, otherwise fall back to
('message' in msg ? toRole(msg) : 'user'); update the role variable assignment
where inner is derived (the const inner = 'message' in msg ? msg.message : msg
block) so the narrowing happens after the 'message' in msg discrimination and
avoids the (inner.role as ...) cast.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 5f83858f-e7ec-42e1-84a4-612a8156acef

📥 Commits

Reviewing files that changed from the base of the PR and between b3c9af4 and 67337d5.

📒 Files selected for processing (3)
  • src/services/api/openai/index.ts
  • src/services/langfuse/__tests__/langfuse.test.ts
  • src/services/langfuse/convert.ts

Contributor

@coderabbitai coderabbitai Bot left a comment


🧹 Nitpick comments (1)
src/services/langfuse/convert.ts (1)

159-175: Role derivation looks correct; minor downstream nit at line 199.

The new inner.role-first fallback chain is sound for OpenAI-style messages, and propagating tool_call_id / tool_calls for the string/non-array content branch is exactly what Langfuse needs to preserve OpenAI tool semantics.

One small downstream note: role can now legitimately be 'tool' or 'system' for non-wrapped inputs. If such an input ever arrives with array content, it will fall into the else branch and line 199 hardcodes role: 'user', silently dropping the original role:

result.push({ role: 'user', content: collapseContent(parts) })

In practice OpenAI tool/system messages carry string content and hit the changed branch instead, so this is an edge case rather than an active bug — but worth using the role variable there for consistency the next time this area is touched.

♻️ Optional tweak at line 199
-      if (parts.length > 0 || toolMessages.length === 0) {
-        result.push({ role: 'user', content: collapseContent(parts) })
-      }
+      if (parts.length > 0 || toolMessages.length === 0) {
+        result.push({ role: role === 'assistant' ? 'user' : role, content: collapseContent(parts) })
+      }
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/langfuse/convert.ts` around lines 159 - 175, The else-branch
that handles array content currently hardcodes role: 'user', which drops
legitimate roles computed earlier (role variable can be 'tool' or 'system' for
non-wrapped inputs); update the object pushed to result in the branch that calls
collapseContent(parts) so it uses the previously derived role variable instead
of the literal 'user', keeping preservation of tool/system roles and still
including any tool_call_id/tool_calls logic as appropriate.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: a50a394d-7db0-4b8d-90af-955d8cabfada

📥 Commits

Reviewing files that changed from the base of the PR and between 67337d5 and 3acd375.

📒 Files selected for processing (1)
  • src/services/langfuse/convert.ts

Contributor

@coderabbitai coderabbitai Bot left a comment


Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
src/services/langfuse/convert.ts (1)

179-190: ⚠️ Potential issue | 🟡 Minor

Assistant branch with array content drops inner.tool_calls from already-Langfuse/OpenAI-style messages.

Unlike the string-content branch (Lines 169-174) and the user branch (Lines 205-207), this branch only sources tool_calls from Anthropic-style tool_use blocks inside rawContent. If an already-Langfuse/OpenAI assistant message arrives with array content (e.g., [{ type: 'text', text: '…' }]) and a top-level tool_calls, those tool_calls are silently dropped from the Langfuse trace. This contradicts the AI summary's stated intent that the converter "preserves tool metadata" for already-Langfuse inputs and is asymmetric with the user/tool path.

♻️ Proposed fix: merge inner.tool_calls with extracted ones
     if (role === 'assistant') {
       // Extract tool_use → tool_calls at message level
       const { tool_calls, rest } = extractToolCalls(rawContent)
       const parts = rest
         .filter((b): b is Record<string, unknown> => b != null && typeof b === 'object')
         .map(b => toContentPart(b))
         .filter((p): p is LangfuseContentPart => p !== null)
+      const innerToolCalls =
+        'tool_calls' in inner && Array.isArray(inner.tool_calls)
+          ? (inner.tool_calls as LangfuseToolCall[])
+          : []
+      const mergedToolCalls = [...tool_calls, ...innerToolCalls]
       result.push({
         role: 'assistant',
         content: collapseContent(parts),
-        ...(tool_calls.length > 0 && { tool_calls }),
+        ...(mergedToolCalls.length > 0 && { tool_calls: mergedToolCalls }),
       })
     } else {
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed.

In `@src/services/langfuse/convert.ts` around lines 179 - 190, The assistant
branch in convert.ts only uses tool_calls returned by
extractToolCalls(rawContent) and ignores any tool_calls already present inside
array content parts, so merge inner tool_calls from parsed parts with the
extracted ones: when mapping rawContent to parts via toContentPart, collect any
part.tool_calls (or inner.tool_calls) and combine them (deduplicate if
necessary) with the extracted tool_calls before pushing the result object
returned by collapseContent; update the block around extractToolCalls,
toContentPart, collapseContent and the result.push so the final pushed object
includes the union of tool_calls from extractToolCalls(rawContent) and any
tool_calls found inside the individual parts.

ℹ️ Review info
⚙️ Run configuration

Configuration used: defaults

Review profile: CHILL

Plan: Pro

Run ID: 276565e6-934e-44c1-9087-55e54fc89339

📥 Commits

Reviewing files that changed from the base of the PR and between 3acd375 and 8f65339.

📒 Files selected for processing (2)
  • src/services/langfuse/__tests__/langfuse.test.ts
  • src/services/langfuse/convert.ts
🚧 Files skipped from review as they are similar to previous changes (1)
  • src/services/langfuse/__tests__/langfuse.test.ts

@XavierWangHX force-pushed the fix/openai-mcp-tool-visibility branch from 6a9cc62 to ecb5339 on April 25, 2026 10:09
@claude-code-best
Owner

@XavierWangHX please fix the CI issues

@XavierWangHX
Contributor Author

> @XavierWangHX please fix the CI issues

OK, boss.

@XavierWangHX force-pushed the fix/openai-mcp-tool-visibility branch from d19add2 to 267bd6a on April 25, 2026 15:18
@claude-code-best merged commit 901628b into claude-code-best:main on Apr 26, 2026
3 checks passed