> **Note: Reviews paused.** It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in the review settings. Use the commands or the checkboxes below to manage reviews.
📝 Walkthrough

Adds a hidden ai-sdk runtime mode and a shared AI SDK runtime implementation, wires conditional provider routing to the new runtime (preserving a legacy fallback), threads per-message provider options through streaming/tool calls/context reconstruction, and adds tests and docs for phased rollout and rollback gating.
Sequence Diagram(s)

```mermaid
sequenceDiagram
    participant App as Application
    participant Config as ConfigPresenter
    participant Runtime as AI SDK Runtime
    participant ProviderFactory as createAiSdkProviderContext
    participant Mapper as Message/ProviderOptions Mapper
    participant SDK as AI SDK (generate/stream/embed)
    participant Adapter as Stream Adapter
    App->>Config: resolveLlmRuntimeMode()
    Config-->>App: 'ai-sdk' | 'legacy'
    alt ai-sdk
        App->>Runtime: runAiSdkCoreStream(messages, modelId, modelConfig, tools)
        Runtime->>ProviderFactory: createAiSdkProviderContext(providerKind, params)
        ProviderFactory-->>Runtime: { languageModel, embeddingModel, endpoint, headers }
        Runtime->>Mapper: mapMessagesToModelMessages(messages) + buildProviderOptions(...)
        Mapper-->>Runtime: ModelMessage[], providerOptions
        Runtime->>SDK: streamText(model, ModelMessage[], options)
        SDK-->>Runtime: AsyncIterable<TextStreamPart>
        Runtime->>Adapter: adaptAiSdkStream(stream, {supportsNativeTools, cacheImage})
        Adapter-->>App: AsyncGenerator<LLMCoreStreamEvent> (text/reasoning/tool_call/image/usage/stop)
    else legacy
        App->>App: execute legacy provider path
    end
```
Estimated Code Review Effort: 🎯 4 (Complex) | ⏱️ ~75 minutes
🚥 Pre-merge checks: ✅ 2 passed | ❌ 1 failed (1 warning)
Actionable comments posted: 7
🧹 Nitpick comments (7)
src/main/presenter/agentRuntimePresenter/contextBuilder.ts (1)
296-299: Avoid parsing provider options twice per tool block.
`getBlockProviderOptions(block)` is invoked twice in the same object construction; caching it once is cleaner and avoids redundant JSON parsing.

♻️ Proposed refactor

```diff
 for (const block of toolCallBlocks) {
   const toolCall = block.tool_call
   if (!toolCall?.id || !toolCall.name) {
     continue
   }
+  const providerOptions = getBlockProviderOptions(block)
   toolCalls.push({
     id: toolCall.id,
     type: 'function',
     function: { name: toolCall.name, arguments: toolCall.params || '{}' },
-    ...(getBlockProviderOptions(block)
-      ? { provider_options: getBlockProviderOptions(block) }
-      : {})
+    ...(providerOptions ? { provider_options: providerOptions } : {})
   })
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/agentRuntimePresenter/contextBuilder.ts` around lines 296-299: the object construction calls `getBlockProviderOptions(block)` twice, causing redundant work; capture its result once (e.g., `const providerOptions = getBlockProviderOptions(block)`) before building the object, then use `providerOptions` in the spread conditional instead of calling `getBlockProviderOptions(block)` again, so the function (and any JSON parsing inside it) runs only once.

src/main/presenter/agentRuntimePresenter/dispatch.ts (1)
86-101: Consider extracting `parseProviderOptionsJson` to a shared utility.

This function is duplicated in `contextBuilder.ts` (lines 32-47 per relevant snippets). While the duplication is acceptable for module isolation, extracting it to a shared utility under `src/main/presenter/agentRuntimePresenter/` would improve maintainability.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/agentRuntimePresenter/dispatch.ts` around lines 86-101: `parseProviderOptionsJson` is duplicated between dispatch.ts and contextBuilder.ts; extract it into a shared utility (e.g., a new module under src/main/presenter/agentRuntimePresenter such as providerOptionsUtil.ts) that exports `parseProviderOptionsJson`, update dispatch.ts and contextBuilder.ts to import the shared function, and ensure the exported signature and behavior (returns `ChatMessageProviderOptions | undefined`) remain identical, updating any imports/exports accordingly.

test/main/presenter/llmProviderPresenter/aiSdkProviderOptionsMapper.test.ts (1)
1-122: Move this suite under `test/main/presenter/llmProviderPresenter/aiSdk/`.

The source file lives at `src/main/presenter/llmProviderPresenter/aiSdk/providerOptionsMapper.ts`, so the test should mirror that nested `aiSdk/` path as well.

Based on learnings — applies to `test/**/*.{test,spec}.ts`: Vitest test suites should mirror the source structure under `test/main/**` and `test/renderer/**`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/aiSdkProviderOptionsMapper.test.ts` around lines 1-122: move the test file so its directory structure mirrors the source module's aiSdk nesting, update imports if needed so `buildProviderOptions` (from providerOptionsMapper) resolves from the new location, and keep the suite filename and describe block unchanged so the existing assertions (`providerOptions?.anthropic`/`vertex`) continue to run.

test/main/presenter/llmProviderPresenter/aiSdkStreamAdapter.test.ts (1)
1-181: Mirror the nested `aiSdk/streamAdapter.ts` path in the test location.

This suite should live under `test/main/presenter/llmProviderPresenter/aiSdk/` so the test tree matches `src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts`.

Based on learnings — applies to `test/**/*.{test,spec}.ts`: Vitest test suites should mirror the source structure under `test/main/**` and `test/renderer/**`.

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/aiSdkStreamAdapter.test.ts` around lines 1-181: the test file is in the wrong folder; move it into a new aiSdk subfolder (test/main/presenter/llmProviderPresenter/aiSdk/) so its path mirrors src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts, and ensure the test still imports `adaptAiSdkStream` from '@/presenter/llmProviderPresenter/aiSdk/streamAdapter' (and that any helper paths like `collectEvents` remain correct after the move).

src/main/presenter/llmProviderPresenter/aiSdk/providerOptionsMapper.ts (1)
163-202: Inconsistent handling of empty config between Google and Vertex.

For `google`, the config is only added to `providerOptions` if it has keys. For `vertex` (lines 200-201), the config is always added, even when `streamFunctionCallArguments: false` and no thinking config exists. This creates an unnecessary empty-ish object in the options for Vertex when there are no tools and no thinking config.

💡 Optional: Align Vertex with Google's conditional pattern

```diff
 case 'vertex': {
   const config: Record<string, unknown> = {
-    streamFunctionCallArguments: params.tools.length > 0
   }
+  if (params.tools.length > 0) {
+    config.streamFunctionCallArguments = true
+  }
   if (params.modelConfig.thinkingBudget !== undefined || params.modelConfig.reasoningEffort) {
     config.thinkingConfig = {
       // ...
     }
   }
-  providerOptions[params.providerOptionsKey] = config
+  if (Object.keys(config).length > 0) {
+    providerOptions[params.providerOptionsKey] = config
+  }
   break
 }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/providerOptionsMapper.ts` around lines 163-202: the Vertex branch currently always adds a config object (with `streamFunctionCallArguments` possibly false) to providerOptions, causing empty-ish entries; change the 'vertex' case to mirror the 'google' case by only adding `streamFunctionCallArguments` when `params.tools.length > 0`, and only assign `providerOptions[params.providerOptionsKey] = config` when `Object.keys(config).length > 0` so no empty config is inserted.

src/main/presenter/llmProviderPresenter/aiSdk/providerFactory.ts (1)
201-225: Header sanitization may inadvertently block legitimate provider headers.

The `cleanHeaders` logic uses a hardcoded allowlist and blocks headers starting with `x-stainless-`, containing `user-agent`, or containing `openai-`. This could inadvertently block legitimate headers required by some providers (e.g., custom `x-api-key` headers or provider-specific authentication headers).

Consider documenting which providers/scenarios require `cleanHeaders: true` and whether the allowlist is sufficient.

src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts (1)
126-215: Consider handling stream termination without an explicit 'finish' event.

If the stream ends unexpectedly (e.g., network interruption) without emitting a `finish` or `error` event, any content remaining in `bufferedLegacyText` would be lost silently. While the AI SDK should always emit a terminal event, defensive handling could improve reliability.

💡 Optional: Add a finally block to flush remaining buffer

```diff
+  try {
     for await (const part of fullStream) {
       switch (part.type) {
         // ... existing cases
       }
     }
+  } finally {
+    // Flush any remaining legacy text if stream ends without finish event
+    if (!options.supportsNativeTools && bufferedLegacyText.length > 0) {
+      yield* emitLegacyTextBuffer(true)
+    }
+  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts` around lines 126-215: the for-await loop over `fullStream` can exit without a 'finish' or 'error' event and lose `bufferedLegacyText`; wrap the loop in a try/finally and, in the finally, check the buffered state and call `yield* emitLegacyTextBuffer(true)` (and if needed yield a usage event via `toUsageEvent` and a terminal `createStreamEvent.stop(...)`) to flush any remaining legacy text and emit a terminal stop event; guard with a local `finished` flag so finalization is not duplicated when a 'finish' or 'error' branch already ran.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@docs/specs/ai-sdk-runtime/tasks.md`:
- Around line 5-13: Update the checklist in docs/specs/ai-sdk-runtime/tasks.md
to reflect completed work by marking the relevant items as checked: "Add shared
AI SDK runtime modules", "Integrate OpenAI-compatible runtime path", "Integrate
OpenAI responses runtime path", "Integrate Anthropic runtime path", "Integrate
Gemini runtime path", "Integrate Vertex runtime path", "Integrate Bedrock
runtime path", "Integrate Ollama runtime path", and "Add regression tests for
runtime adapter behavior"; keep the wording the same but change each unchecked
box to a checked box and optionally add a brief parenthetical note or link
mentioning the PR that implemented them for traceability.
In `@src/main/presenter/llmProviderPresenter/aiSdk/messageMapper.ts`:
- Around line 55-71: The mapper currently assumes non-text parts have image data
and directly accesses part.image_url.url, which can throw for unexpected types;
update the mapping in messageMapper (the content.map callback used to produce
objects for resolveImageMediaType and resolveBinaryData) to first guard that
part.type === 'image' and that part.image_url and part.image_url.url are present
before calling resolveImageMediaType or resolveBinaryData, and for any
other/unknown part.type return a safe fallback (e.g., skip the item or return
null/placeholder) so you avoid runtime TypeErrors when encountering unexpected
content shapes.
In
`@src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts`:
- Around line 9-18: The branch in legacyFunctionCallMiddleware.ts that handles
non-array message.content rebuilds the message from scratch and drops any
existing provider-specific fields; instead return a message that preserves the
original message object fields while replacing only content (and keep role
fallback to 'user' if role is missing). In other words, spread the original
message (e.g. ...message), set content to the single text-item array you
construct, and ensure role is message.role ?? 'user' so metadata/options on
message are preserved.
In `@src/main/presenter/llmProviderPresenter/aiSdk/runtime.ts`:
- Around line 169-174: The prompt extraction in the
shouldUseImageGenerationRuntime branch incorrectly stringifies array/object
message.content (producing "[object Object]") — update the mapping that builds
prompt (in the block using messages.map(...).filter(...).join(...)) to normalize
message.content: if content is an array, iterate and extract text-like parts
(e.g., elements with a 'text' property or primitive strings) and join them; if
content is an object, pull its text field or toString safely; otherwise coerce
primitives. Implement this normalization as a small helper used in the map so
prompt contains actual text instead of "[object Object]".
In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts`:
- Around line 101-117: The current fallback for non-object roots returns a
mutated schema that can keep non-object fields (e.g., items) and confuse
consumers; update the branch in toolMapper where normalized, branchKey and
variants are used so that when !variants.length you return a clean object schema
containing only object-appropriate fields (at minimum { type: 'object',
properties: isObjectSchema(normalized.properties) ? normalized.properties : {}
}) and do not spread/retain non-object keys like items; rely on isObjectSchema
to validate properties and strip any other top-level keys that are not valid for
an object schema.
In `@src/main/presenter/llmProviderPresenter/baseProvider.ts`:
- Around line 782-785: The code that builds descriptionAttr using
paramMeta.description in baseProvider.ts can produce malformed XML if the
description contains characters like ", <, >, or &; update the construction of
descriptionAttr to HTML/XML-escape paramMeta.description (e.g., replace & with
&, " with ", < with <, > with >) before interpolating, ensuring
you only apply the escape when paramMeta.description is a string and reference
the existing descriptionAttr and paramMeta.description symbols to locate and
change the logic.
In `@test/setup.ts`:
- Line 3: The test setup currently forces process.env.DEEPCHAT_LLM_RUNTIME to
'legacy' which overrides the production default and forces all tests onto the
legacy runtime; update test/setup.ts to stop forcing the legacy runtime by
either removing the assignment or setting process.env.DEEPCHAT_LLM_RUNTIME to
the production default ('ai-sdk') instead, and ensure any suites that need the
legacy runtime opt in by explicitly setting process.env.DEEPCHAT_LLM_RUNTIME =
'legacy' in their own setup.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: e3dff1fb-c44e-4722-a67e-86b0f8d35191
📒 Files selected for processing (46)

- docs/specs/ai-sdk-runtime/plan.md
- docs/specs/ai-sdk-runtime/spec.md
- docs/specs/ai-sdk-runtime/tasks.md
- package.json
- src/main/presenter/agentRuntimePresenter/accumulator.ts
- src/main/presenter/agentRuntimePresenter/contextBuilder.ts
- src/main/presenter/agentRuntimePresenter/dispatch.ts
- src/main/presenter/agentRuntimePresenter/types.ts
- src/main/presenter/configPresenter/index.ts
- src/main/presenter/llmProviderPresenter/aiSdk/index.ts
- src/main/presenter/llmProviderPresenter/aiSdk/messageMapper.ts
- src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts
- src/main/presenter/llmProviderPresenter/aiSdk/middlewares/reasoningMiddleware.ts
- src/main/presenter/llmProviderPresenter/aiSdk/providerFactory.ts
- src/main/presenter/llmProviderPresenter/aiSdk/providerOptionsMapper.ts
- src/main/presenter/llmProviderPresenter/aiSdk/runtime.ts
- src/main/presenter/llmProviderPresenter/aiSdk/runtimeMode.ts
- src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts
- src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts
- src/main/presenter/llmProviderPresenter/aiSdk/toolProtocol.ts
- src/main/presenter/llmProviderPresenter/baseProvider.ts
- src/main/presenter/llmProviderPresenter/providers/aihubmixProvider.ts
- src/main/presenter/llmProviderPresenter/providers/anthropicProvider.ts
- src/main/presenter/llmProviderPresenter/providers/awsBedrockProvider.ts
- src/main/presenter/llmProviderPresenter/providers/geminiProvider.ts
- src/main/presenter/llmProviderPresenter/providers/ollamaProvider.ts
- src/main/presenter/llmProviderPresenter/providers/openAICompatibleProvider.ts
- src/main/presenter/llmProviderPresenter/providers/openAIResponsesProvider.ts
- src/main/presenter/llmProviderPresenter/providers/vertexProvider.ts
- src/main/presenter/sessionPresenter/messageFormatter.ts
- src/shared/types/core/chat-message.ts
- src/shared/types/core/chat.ts
- src/shared/types/core/llm-events.ts
- test/main/presenter/agentRuntimePresenter/contextBuilder.test.ts
- test/main/presenter/agentRuntimePresenter/dispatch.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkProviderFactory.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkProviderOptionsMapper.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkRuntimeMode.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkStreamAdapter.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkToolMapper.test.ts
- test/main/presenter/llmProviderPresenter/aihubmixProvider.test.ts
- test/main/presenter/llmProviderPresenter/anthropicProvider.test.ts
- test/main/presenter/llmProviderPresenter/baseProvider.test.ts
- test/main/presenter/llmProviderPresenter/openAICompatibleProvider.test.ts
- test/main/presenter/sessionPresenter/messageFormatter.test.ts
- test/setup.ts
src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts (previous comment resolved)
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts (1)
101-103: ⚠️ Potential issue | 🟠 Major — Object-root schemas bypass sanitization in the early return path.

When `type === 'object'`, this branch returns raw `normalized` and skips the sanitization logic used elsewhere (safe `properties`, optional `required`, `additionalProperties`, and composition-key stripping). This can leak malformed schema fields into downstream consumers.

💡 Proposed fix

```diff
-  if (normalized.type === 'object') {
-    return normalized
-  }
+  if (normalized.type === 'object') {
+    const required = Array.isArray(normalized.required)
+      ? normalized.required.filter((key): key is string => typeof key === 'string')
+      : undefined
+    const additionalProperties =
+      typeof normalized.additionalProperties === 'boolean' ||
+      isObjectSchema(normalized.additionalProperties)
+        ? normalized.additionalProperties
+        : undefined
+    const { anyOf: _anyOf, oneOf: _oneOf, allOf: _allOf, ...rest } = normalized
+
+    return {
+      ...rest,
+      type: 'object',
+      properties: isObjectSchema(normalized.properties) ? normalized.properties : {},
+      ...(required?.length ? { required } : {}),
+      ...(additionalProperties !== undefined ? { additionalProperties } : {})
+    }
+  }
```

🤖 Prompt for AI Agents

Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts` around lines 101-103: the early-return branch `if (normalized.type === 'object')` returns raw `normalized` and skips the sanitization applied elsewhere; adjust this branch so that instead of returning `normalized` directly you run the same sanitization routine used for non-object schemas — ensure `properties` is a safe object, validate `required` as an array of strings, validate `additionalProperties`, and strip composition keys like oneOf/anyOf/allOf/$ref/if/then/else before returning; reuse or extract the existing sanitization logic into a helper to avoid duplication.
🧹 Nitpick comments (3)
test/main/presenter/llmProviderPresenter/aiSdkMessageMapper.test.ts (1)
4-37: Good test for malformed content handling.

The test properly validates that malformed and unrecognized content parts are skipped without throwing, and that valid parts are correctly mapped. The assertion correctly expects a `URL` object for non-data-URL images.

Consider expanding test coverage to include:
- System and assistant message mapping
- Tool call flows (both native and legacy paths)
- The `applyLegacyFunctionCallPrompt` integration when `supportsNativeTools: false`

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/aiSdkMessageMapper.test.ts` around lines 4 - 37, Add additional unit tests covering: mapping of system and assistant messages through mapMessagesToModelMessages (include content variants similar to the existing user test), tool call flows for both native and legacy paths by passing supportsNativeTools: true and false with example tool entries to assert correct tool-call mapping, and a test exercising applyLegacyFunctionCallPrompt integration when supportsNativeTools is false to ensure legacy function-call prompts are applied; reference mapMessagesToModelMessages and applyLegacyFunctionCallPrompt when locating code to extend.test/main/presenter/llmProviderPresenter/legacyFunctionCallMiddleware.test.ts (1)
1-49: Test file location may not mirror source structure.

The source file is at `src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts`, but this test is placed directly in `test/main/presenter/llmProviderPresenter/`. Consider placing it at `test/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.test.ts` to mirror the source structure.

The test logic itself is correct and provides good coverage for the `providerMetadata` preservation behavior.

As per coding guidelines: "Vitest test suites should mirror the source structure under `test/main/**` and `test/renderer/**`."

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/main/presenter/llmProviderPresenter/legacyFunctionCallMiddleware.test.ts` around lines 1 - 49, Move this test to mirror the source tree: relocate the file to test/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.test.ts and keep the existing imports (no code changes needed inside); this ensures the test suite structure follows the source (src/main/presenter/llmProviderPresenter/aiSdk/middlewares) and preserves the coverage for applyLegacyFunctionCallPrompt and its providerMetadata behavior.src/main/presenter/llmProviderPresenter/aiSdk/messageMapper.ts (1)
273-287: Consider logging when falling back to 'unknown' tool name.

When `pending` is undefined and no tool call metadata is available, the code silently falls back to `toolName: 'unknown'`. This edge case might indicate a mismatch between assistant tool_calls and tool responses, which could be worth logging for debugging purposes.

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/messageMapper.ts` around lines 273 - 287, The code silently falls back to toolName: 'unknown' when consumeToolCall(pendingNativeToolCalls, message.tool_call_id) returns undefined; before pushing the tool-result in the mapping inside messageMapper.ts, add a debug/warn log call when pending is falsy and message has no tool metadata, logging the message.tool_call_id plus any relevant context (e.g., serialized output and message.provider_options) to aid debugging; use the file's existing logger (or inject one if none exists) and keep the log message concise and consistent with other logs.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts`:
- Around line 147-159: The accumulator uses a plain object and assigns by
untrusted tool names (in tools.reduce with acc and toolDef.function.name), which
can lead to prototype pollution; change the reducer to use a safe dictionary
(e.g., initialize acc as Object.create(null) or use a Map) and ensure
assignments into acc use that safe container, and optionally validate/filter out
dangerous keys like "__proto__", "constructor", and "prototype" before
inserting; update any downstream code that expects a plain object to handle the
new structure or convert it back to a safe plain object via
Object.assign(Object.create(null), ...) if needed.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 5f3d40f5-8414-4a90-a801-414a2fd6c84e
📒 Files selected for processing (14)
- docs/specs/ai-sdk-runtime/tasks.md
- src/main/presenter/llmProviderPresenter/aiSdk/messageMapper.ts
- src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts
- src/main/presenter/llmProviderPresenter/aiSdk/runtime.ts
- src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts
- src/main/presenter/llmProviderPresenter/baseProvider.ts
- test/main/presenter/llmProviderPresenter/aiSdkMessageMapper.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkRuntime.test.ts
- test/main/presenter/llmProviderPresenter/aiSdkToolMapper.test.ts
- test/main/presenter/llmProviderPresenter/baseProvider.test.ts
- test/main/presenter/llmProviderPresenter/doubaoProvider.test.ts
- test/main/presenter/llmProviderPresenter/legacyFunctionCallMiddleware.test.ts
- test/main/presenter/llmProviderPresenter/openAIResponsesProvider.test.ts
- test/main/presenter/llmProviderPresenter/zenmuxProvider.test.ts
✅ Files skipped from review due to trivial changes (1)
- docs/specs/ai-sdk-runtime/tasks.md
🚧 Files skipped from review as they are similar to previous changes (4)
- test/main/presenter/llmProviderPresenter/baseProvider.test.ts
- src/main/presenter/llmProviderPresenter/aiSdk/middlewares/legacyFunctionCallMiddleware.ts
- src/main/presenter/llmProviderPresenter/baseProvider.ts
- src/main/presenter/llmProviderPresenter/aiSdk/runtime.ts
Actionable comments posted: 1
♻️ Duplicate comments (1)
src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts (1)
102-104: ⚠️ Potential issue | 🟠 Major: Sanitize object-root schemas before returning.
Line 102 returns `normalized` as-is when `type === 'object'`, so invalid `properties` payloads can pass through unsanitized and reach downstream consumers.
💡 Proposed fix

```diff
-  if (normalized.type === 'object') {
-    return normalized
-  }
+  if (normalized.type === 'object') {
+    const required = Array.isArray(normalized.required)
+      ? normalized.required.filter((key): key is string => typeof key === 'string')
+      : undefined
+    const additionalProperties =
+      typeof normalized.additionalProperties === 'boolean' ||
+      isObjectSchema(normalized.additionalProperties)
+        ? normalized.additionalProperties
+        : undefined
+
+    return {
+      ...normalized,
+      properties: isObjectSchema(normalized.properties) ? normalized.properties : {},
+      ...(required?.length ? { required } : {}),
+      ...(additionalProperties !== undefined ? { additionalProperties } : {})
+    }
+  }
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts` around lines 102 - 104, The early return of object-root schemas returns `normalized` verbatim and lets invalid `properties` payloads pass; in the function containing the check (the block with `if (normalized.type === 'object') return normalized` in toolMapper.ts) validate and sanitize `normalized` before returning: ensure `normalized.properties` is an object (or set to {}), filter its entries to only allow property schemas, coerce or remove invalid `required` (ensure an array of strings), enforce `additionalProperties` to a boolean or remove it, and normalize any nested property schemas via the same sanitizer routine; replace the direct return with a call to that sanitizer and return the cleaned `normalized`.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts`:
- Around line 106-141: The code currently uses intersectRequiredKeys for all
branch types when building the merged schema, which is incorrect for allOf; for
allOf you must union required keys. Add a new helper unionRequiredKeys(variants)
that returns the union of required arrays from the variant schemas, and in the
return block of toolMapper.ts use branchKey === 'allOf' ?
unionRequiredKeys(variants) : intersectRequiredKeys(variants) instead of always
calling intersectRequiredKeys; keep existing references to branchKey, variants,
mergeVariantProperties and preserve the additionalProperties logic as-is.
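The two helpers could be sketched like this (names follow the comment; the variant shape is an assumption). For `allOf` every variant must hold simultaneously, so a key required by any variant is required overall; for `oneOf`/`anyOf` only keys required by every variant are guaranteed:

```typescript
type VariantSchema = { required?: unknown }

const requiredOf = (v: VariantSchema): string[] =>
  Array.isArray(v.required)
    ? v.required.filter((k): k is string => typeof k === 'string')
    : []

// allOf: a key is required if ANY variant requires it.
function unionRequiredKeys(variants: VariantSchema[]): string[] {
  return [...new Set(variants.flatMap(requiredOf))]
}

// oneOf/anyOf: a key is only guaranteed required if EVERY variant requires it.
function intersectRequiredKeys(variants: VariantSchema[]): string[] {
  if (variants.length === 0) return []
  return requiredOf(variants[0]).filter((key) =>
    variants.every((v) => requiredOf(v).includes(key))
  )
}
```

At the merge site this would then read `branchKey === 'allOf' ? unionRequiredKeys(variants) : intersectRequiredKeys(variants)`.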
---
Duplicate comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts`:
- Around line 102-104: The early return of object-root schemas returns
`normalized` verbatim and lets invalid `properties` payloads pass; in the
function containing the check (the block with `if (normalized.type === 'object')
return normalized` in toolMapper.ts) validate and sanitize `normalized` before
returning: ensure `normalized.properties` is an object (or set to {}), filter
its entries to only allow property schemas, coerce or remove invalid `required`
(ensure an array of strings), enforce `additionalProperties` to a boolean or
remove it, and normalize any nested property schemas via the same sanitizer
routine; replace the direct return with a call to that sanitizer and return the
cleaned `normalized`.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 133e53ae-eb56-4616-8bf1-f5c55d845669
📒 Files selected for processing (2)
- src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts
- test/main/presenter/llmProviderPresenter/aiSdkToolMapper.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/main/presenter/llmProviderPresenter/aiSdkToolMapper.test.ts
Actionable comments posted: 1
🧹 Nitpick comments (1)
src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts (1)
131-134: Consider typing `providerMetadata` access more explicitly.
The repeated `(part as any).providerMetadata` pattern (lines 132, 144, 153, 162, 171, 178) is a type escape hatch. If this property is consistently present across AI SDK stream parts, consider creating a local type augmentation or helper to avoid scattering `as any` casts throughout.
💡 Example helper approach

```typescript
// At module level
function getProviderMetadata(part: TextStreamPart<ToolSet>): unknown {
  return (part as { providerMetadata?: unknown }).providerMetadata
}

// Usage
yield createStreamEvent.text(part.text, toProviderOptions(getProviderMetadata(part)))
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts` around lines 131 - 134, Replace the repeated unsafe casts of (part as any).providerMetadata by declaring a small typed helper or local type augmentation for the stream part (e.g., a getProviderMetadata(part: TextStreamPart|OtherStreamPart) or an interface extension that includes providerMetadata) and use it wherever provider metadata is passed (references: part, providerMetadata, toProviderOptions, and createStreamEvent.text in streamAdapter.ts); update all occurrences that currently use (part as any).providerMetadata to call the helper or use the extended type so the code is strongly typed and the casts are removed.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts`:
- Around line 186-200: Check for a nullable mediaType before calling startsWith
on part.file.mediaType and skip non-string or missing mediaType values; then
wrap the await options.cacheImage(dataUrl) call in a try/catch so any rejection
is handled (e.g., log the error and fall back to the original dataUrl) before
yielding createStreamEvent.imageData({ data: ..., mimeType: mediaType }); ensure
you reference part.file.mediaType, options.cacheImage, and
createStreamEvent.imageData when applying the changes.
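A sketch of the guard and fallback described above (the part shape and `toImageEvent` helper are assumptions modeled on the comment, not the real streamAdapter.ts types):

```typescript
interface FilePart {
  file: { mediaType?: unknown; base64?: string }
}

// Sketch: skip non-string mediaType values and tolerate cacheImage rejections.
async function toImageEvent(
  part: FilePart,
  cacheImage: (dataUrl: string) => Promise<string>
): Promise<{ data: string; mimeType: string } | undefined> {
  const mediaType = part.file.mediaType
  // Guard before startsWith: mediaType may be missing or not a string.
  if (typeof mediaType !== 'string' || !mediaType.startsWith('image/')) {
    return undefined
  }
  const dataUrl = `data:${mediaType};base64,${part.file.base64 ?? ''}`
  try {
    return { data: await cacheImage(dataUrl), mimeType: mediaType }
  } catch (err) {
    // Fall back to the original data URL if caching fails.
    console.warn('[streamAdapter] cacheImage failed, using inline data URL', err)
    return { data: dataUrl, mimeType: mediaType }
  }
}
```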
---
Nitpick comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts`:
- Around line 131-134: Replace the repeated unsafe casts of (part as
any).providerMetadata by declaring a small typed helper or local type
augmentation for the stream part (e.g., a getProviderMetadata(part:
TextStreamPart|OtherStreamPart) or an interface extension that includes
providerMetadata) and use it wherever provider metadata is passed (references:
part, providerMetadata, toProviderOptions, and createStreamEvent.text in
streamAdapter.ts); update all occurrences that currently use (part as
any).providerMetadata to call the helper or use the extended type so the code is
strongly typed and the casts are removed.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 1c169477-bb73-42fa-bcf7-59335f31e0f0
📒 Files selected for processing (2)
- src/main/presenter/llmProviderPresenter/aiSdk/streamAdapter.ts
- test/main/presenter/llmProviderPresenter/aiSdkStreamAdapter.test.ts
🚧 Files skipped from review as they are similar to previous changes (1)
- test/main/presenter/llmProviderPresenter/aiSdkStreamAdapter.test.ts
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts`:
- Around line 76-81: The merge into the plain object "merged" is vulnerable to
prototype pollution when iterating "propertyMaps"; change "merged" to a
null-prototype object (e.g., created via Object.create(null)) and, inside the
loop that assigns into "merged" (the block that calls
mergePropertySchemas(merged[key], value)), skip any dangerous keys such as
"__proto__", "constructor", and "prototype" before assigning or merging; keep
using mergePropertySchemas for safe merges but ensure the key check happens
first to prevent tampering the prototype chain.
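The hardened merge could be sketched as follows (`mergePropertySchemas` is injected here as a stand-in for the real helper in toolMapper.ts):

```typescript
const DANGEROUS_KEYS = new Set(['__proto__', 'constructor', 'prototype'])

// Sketch: merge property maps onto a null-prototype object, skipping keys
// that could tamper with the prototype chain.
function mergePropertyMaps(
  propertyMaps: Array<Record<string, unknown>>,
  mergePropertySchemas: (a: unknown, b: unknown) => unknown
): Record<string, unknown> {
  // Null prototype: even an own "__proto__" key stays a plain property.
  const merged: Record<string, unknown> = Object.create(null)
  for (const map of propertyMaps) {
    for (const [key, value] of Object.entries(map)) {
      if (DANGEROUS_KEYS.has(key)) continue // check BEFORE assigning or merging
      merged[key] = key in merged ? mergePropertySchemas(merged[key], value) : value
    }
  }
  return merged
}
```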
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: 08d35373-3303-4053-9040-55ccca1c042d
📒 Files selected for processing (2)
- src/main/presenter/llmProviderPresenter/aiSdk/toolMapper.ts
- test/main/presenter/llmProviderPresenter/aiSdkToolMapper.test.ts
Summary by CodeRabbit
New Features
Behavior Changes
Documentation
Tests
Chores