
Conversation

@sawka sawka commented Nov 19, 2025

Created an interface in aiusechat for the backend providers. Use that interface throughout the usechat code. Isolate the backend implementations to only the new file usechat-backend.go.

coderabbitai bot commented Nov 19, 2025

Walkthrough

This PR introduces a UseChatBackend abstraction with concrete OpenAI and Anthropic backends and routes chat operations through that interface. It adds a public AIFunctionCallInput type and WaveChatOpts.GetWaveRequestType(), reduces fields emitted in WaveStopReason and WaveContinueResponse, converts several previously exported methods to unexported, centralizes tool-progress logic, refactors OpenAI message-to-UI conversion, and adds a new aiutil package with utilities for image/text extraction, attached-file formatting, XML-attribute parsing, and JSON request encoding.
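The abstraction described above can be pictured with a minimal sketch. The method shown is inferred from the review bullets and is illustrative only, not the exact signatures in usechat-backend.go:

```go
package main

import "fmt"

// Illustrative stand-ins for the real uctypes types.
type GenAIMessage struct{ Role, Text string }
type AIToolResult struct{ ToolCallId, Result string }

// UseChatBackend sketches the provider contract: each backend (OpenAI,
// Anthropic) implements these operations so the core usechat loop stays
// provider-agnostic.
type UseChatBackend interface {
	ConvertToolResultsToNativeChatMessage(results []AIToolResult) ([]GenAIMessage, error)
}

type openaiBackend struct{}

func (b *openaiBackend) ConvertToolResultsToNativeChatMessage(results []AIToolResult) ([]GenAIMessage, error) {
	// One native message per tool result; the real backend maps to
	// provider-specific wire types instead.
	msgs := make([]GenAIMessage, 0, len(results))
	for _, r := range results {
		msgs = append(msgs, GenAIMessage{Role: "tool", Text: r.Result})
	}
	return msgs, nil
}

func main() {
	var backend UseChatBackend = &openaiBackend{}
	msgs, _ := backend.ConvertToolResultsToNativeChatMessage([]AIToolResult{{ToolCallId: "c1", Result: "ok"}})
	fmt.Println(len(msgs), msgs[0].Role)
}
```

Callers hold only the interface value, so adding a third provider means adding one implementation plus a factory case.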

Estimated code review effort

🎯 4 (Complex) | ⏱️ ~45 minutes

  • Verify UseChatBackend surface and that both backends meet the contract (semantics of RunChatStep, conversions, function-call input).
  • Check that backend is correctly threaded through all updated call sites (runAIChatStep, RunAIChat, tool-processing paths).
  • Validate removal of WaveStopReason / WaveContinueResponse fields against serializers/consumers.
  • Inspect renamed-to-unexported methods and ensure no external callers remain.
  • Review new aiutil helpers for correctness, edge cases (data: URLs, encoding), and deterministic suffix logic.
  • Confirm Anthropic backend gaps (UpdateToolUseData/GetFunctionCallInput) are handled where used.
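The data:-URL edge case flagged for the aiutil helpers can be exercised with plain stdlib code. This is a generic sketch, not the repo's utilfn.DecodeDataURL implementation:

```go
package main

import (
	"encoding/base64"
	"fmt"
	"strings"
)

// decodeDataURL splits a data: URL into its MIME type and decoded bytes.
// Hypothetical helper for illustration only.
func decodeDataURL(u string) (mimeType string, data []byte, err error) {
	rest, ok := strings.CutPrefix(u, "data:")
	if !ok {
		return "", nil, fmt.Errorf("not a data URL")
	}
	meta, payload, ok := strings.Cut(rest, ",")
	if !ok {
		return "", nil, fmt.Errorf("missing comma separator")
	}
	if strings.HasSuffix(meta, ";base64") {
		mimeType = strings.TrimSuffix(meta, ";base64")
		data, err = base64.StdEncoding.DecodeString(payload)
		return mimeType, data, err
	}
	// Non-base64 payloads are passed through as-is.
	return meta, []byte(payload), nil
}

func main() {
	mt, data, err := decodeDataURL("data:text/plain;base64,aGVsbG8=")
	fmt.Println(mt, string(data), err) // text/plain hello <nil>
}
```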

Pre-merge checks and finishing touches

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ Warning. Docstring coverage is 54.29%, which is below the required threshold of 80.00%. Run @coderabbitai generate docstrings to improve coverage.
✅ Passed checks (2 passed)
  • Title check ✅ Passed. The pull request title accurately summarizes the main change: creating an interface for backend AI providers and using it throughout the codebase.
  • Description check ✅ Passed. The pull request description is directly related to the changeset, describing the creation of a backend interface and its integration throughout the usechat code.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sawka/usechat-interface


@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 44a0688 and c2c5002.

⛔ Files ignored due to path filters (1)
  • package-lock.json is excluded by !**/package-lock.json
📒 Files selected for processing (4)
  • pkg/aiusechat/uctypes/usechat-types.go (1 hunks)
  • pkg/aiusechat/usechat-backend.go (1 hunks)
  • pkg/aiusechat/usechat-utils.go (1 hunks)
  • pkg/aiusechat/usechat.go (12 hunks)
🧰 Additional context used
🧠 Learnings (1)
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/usechat-backend.go
  • pkg/aiusechat/usechat.go
🧬 Code graph analysis (3)
pkg/aiusechat/usechat-utils.go (1)
pkg/aiusechat/usechat-backend.go (1)
  • GetBackendByAPIType (57-66)
pkg/aiusechat/usechat-backend.go (8)
pkg/web/sse/ssehandler.go (1)
  • SSEHandlerCh (64-77)
pkg/aiusechat/uctypes/usechat-types.go (11)
  • WaveChatOpts (452-472)
  • WaveContinueResponse (210-215)
  • WaveStopReason (193-207)
  • GenAIMessage (285-288)
  • RateLimitInfo (493-500)
  • UIMessageDataToolUse (148-159)
  • AIToolResult (319-324)
  • AIMessage (296-299)
  • AIChat (240-246)
  • AIFunctionCallInput (276-281)
  • UIChat (21-27)
pkg/aiusechat/openai/openai-backend.go (2)
  • UpdateToolUseData (400-429)
  • RunOpenAIChatStep (431-557)
pkg/aiusechat/openai/openai-convertmessage.go (3)
  • GetFunctionCallInputByToolCallId (684-695)
  • ConvertToolResultsToOpenAIChatMessage (487-541)
  • ConvertAIMessageToOpenAIChatMessage (444-484)
pkg/aiusechat/usechat-utils.go (1)
  • ConvertAIChatToUIChat (69-85)
pkg/aiusechat/usechat.go (2)
  • APIType_OpenAI (36-36)
  • APIType_Anthropic (35-35)
pkg/aiusechat/anthropic/anthropic-backend.go (1)
  • RunAnthropicChatStep (389-511)
pkg/aiusechat/anthropic/anthropic-convertmessage.go (2)
  • ConvertToolResultsToAnthropicChatMessage (716-780)
  • ConvertAIMessageToAnthropicChatMessage (436-471)
pkg/aiusechat/usechat.go (3)
pkg/aiusechat/usechat-backend.go (2)
  • UseChatBackend (18-50)
  • GetBackendByAPIType (57-66)
pkg/aiusechat/openai/openai-backend.go (1)
  • UpdateToolUseData (400-429)
pkg/aiusechat/openai/openai-convertmessage.go (1)
  • GetFunctionCallInputByToolCallId (684-695)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Build for TestDriver.ai
🔇 Additional comments (13)
pkg/aiusechat/usechat-utils.go (1)

74-84: LGTM! Clean backend abstraction.

The refactor successfully replaces API-specific switch logic with dynamic backend routing. Error handling is correct, and the public API signature remains stable while enabling extensibility for future backends.

pkg/aiusechat/uctypes/usechat-types.go (1)

276-281: LGTM! Well-designed type for cross-backend function call inputs.

The new AIFunctionCallInput struct provides a clean, standardized representation of function call inputs across different AI backends. The use of an optional pointer for ToolUseData and proper JSON tags makes this suitable for the public API.

pkg/aiusechat/usechat-backend.go (3)

16-50: LGTM! Well-designed backend abstraction interface.

The UseChatBackend interface provides a clean contract for AI provider implementations. The method signatures appropriately handle multi-message returns (e.g., ConvertToolResultsToNativeChatMessage returning a slice) to accommodate provider-specific requirements. Documentation is clear and helpful.


56-66: LGTM! Clean factory implementation.

The factory function correctly routes to backend implementations and provides clear error messages for unsupported API types.
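A minimal sketch of such a factory, with illustrative names (the real GetBackendByAPIType returns the concrete backends wired to the OpenAI and Anthropic packages):

```go
package main

import "fmt"

type UseChatBackend interface{ APIType() string }

type openaiBackend struct{}
type anthropicBackend struct{}

func (openaiBackend) APIType() string    { return "openai" }
func (anthropicBackend) APIType() string { return "anthropic" }

// getBackendByAPIType routes by API type and errors clearly on anything
// unsupported, mirroring the pattern described above.
func getBackendByAPIType(apiType string) (UseChatBackend, error) {
	switch apiType {
	case "openai":
		return openaiBackend{}, nil
	case "anthropic":
		return anthropicBackend{}, nil
	default:
		return nil, fmt.Errorf("unsupported API type %q", apiType)
	}
}

func main() {
	b, err := getBackendByAPIType("openai")
	fmt.Println(b.APIType(), err)
}
```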


71-120: LGTM! Proper OpenAI backend wrapper.

The implementation correctly wraps existing OpenAI package functions and handles type conversions between provider-specific types and the generic interface types. The conversion loops are straightforward and the delegation pattern is clean.

pkg/aiusechat/usechat.go (8)

204-211: LGTM! Proper backend delegation with API validation.

The function correctly accepts and uses the backend parameter. The new Chat Completions API guard (lines 205-207) prevents unsupported older OpenAI models from being used, which is a good defensive check. Backend method call and error handling are correct.


238-242: LGTM! Backend delegation with appropriate error handling.

The function correctly delegates to the backend's UpdateToolUseData method. Error logging without propagation is intentional, as this operation is best-effort for metadata persistence and shouldn't block tool execution.


244-321: LGTM! Correct backend parameter threading.

The function properly threads the backend parameter to downstream calls (updateToolUseDataInChat). The core tool execution logic remains unchanged, maintaining consistency.


323-347: LGTM! Consistent backend parameter threading.

The function correctly passes the backend parameter to both processToolCallInternal and updateToolUseDataInChat. The refactor is consistent and complete.


349-381: LGTM! Backend-driven tool result conversion.

The function properly delegates tool result conversion to the backend (line 373), replacing hardcoded API-specific logic. The error handling correctly skips message posting if conversion fails, preventing invalid messages from being stored.


383-480: LGTM! Public API updated for backend abstraction.

The function signature correctly adds the backend parameter, threading it through to runAIChatStep and processToolCalls. This is a breaking change to the public API, but it's necessary for the backend abstraction and enables better testability and extensibility.


536-578: LGTM! Proper backend lookup and usage.

The function correctly obtains the backend via GetBackendByAPIType and uses it for message conversion and chat execution. Error handling is appropriate, and the backend abstraction is properly applied throughout the flow.


765-828: LGTM! Backend-driven function call retrieval.

The function correctly uses GetBackendByAPIType to obtain the backend and retrieves function call input via backend.GetFunctionCallInputByToolCallId. The nil check (line 777) properly handles backends that don't support this operation (e.g., Anthropic), returning an appropriate error.

Comment on lines +135 to +137
func (b *anthropicBackend) UpdateToolUseData(chatId string, toolCallId string, toolUseData *uctypes.UIMessageDataToolUse) error {
return fmt.Errorf("UpdateToolUseData not implemented for anthropic backend")
}
⚠️ Potential issue | 🟡 Minor

🧩 Analysis chain

Verify unimplemented methods are appropriately handled.

The UpdateToolUseData method returns an error for Anthropic, which is logged but not propagated (see usechat.go line 239-241). Similarly, GetFunctionCallInputByToolCallId returns nil for Anthropic (line 151-153), which is checked before use in CreateWriteTextFileDiff.

These appear to be OpenAI-specific features (function call metadata updates), so the partial implementation is reasonable. However, verify that:

  1. Tool use with Anthropic works correctly without UpdateToolUseData persistence
  2. The error message clearly indicates this is expected behavior, not a bug
  3. Any Anthropic-specific tool metadata handling is correctly implemented elsewhere

🏁 Script executed:

#!/bin/bash
# Verify UpdateToolUseData error handling and tool use flows for Anthropic

# Check all call sites of UpdateToolUseData to confirm errors are handled
echo "=== UpdateToolUseData call sites ==="
rg -n -C3 'UpdateToolUseData'

# Check for Anthropic-specific tool use handling
echo -e "\n=== Anthropic tool use implementation ==="
rg -n -C5 'anthropic.*[Tt]ool' --type=go

# Verify GetFunctionCallInputByToolCallId usage and nil checks
echo -e "\n=== GetFunctionCallInputByToolCallId usage ==="
rg -n -C3 'GetFunctionCallInputByToolCallId'

Length of output: 13011




🏁 Script executed:

#!/bin/bash
# Find tool definitions and check if they're backend-specific

echo "=== Searching for tool definitions ==="
fd -t f -e go 'tools_' | head -20

echo -e "\n=== Tool definitions in usechat ==="
rg -n 'CreateWriteTextFileDiff|ReadFileDiff|ApproveToolUse' --type=go -A 5 -B 2

echo -e "\n=== Check tool available for Anthropic ==="
rg -n 'NewAnthropicBackend|anthropicBackend.*tools|ReadFileTool|WriteTextFile' --type=go -C 2

echo -e "\n=== Check if GetFunctionCallInputByToolCallId is called from processToolCall* ==="
rg -n 'GetFunctionCallInputByToolCallId' -A 10 -B 5 --type=go | grep -E '(processToolCall|processToolUse|toolCallId)'

Length of output: 1621




🏁 Script executed:

#!/bin/bash
# Find where CreateWriteTextFileDiff is called and check for backend routing

echo "=== CreateWriteTextFileDiff call sites ==="
rg -n 'CreateWriteTextFileDiff' -B 10 -A 3

echo -e "\n=== Check for write_text_file tool in tool definitions ==="
rg -n 'write_text_file|edit_text_file' -B 3 -A 3

echo -e "\n=== Check if tools differ by backend ==="
rg -n 'tools_readfile|tools_writefile' --type=go

Length of output: 2594




🏁 Script executed:

#!/bin/bash
# Get the full CreateWriteTextFileDiff implementation

echo "=== Full CreateWriteTextFileDiff implementation ==="
rg -n 'func CreateWriteTextFileDiff' -A 80 --type=go

Length of output: 4942


Correct the misleading comment about Anthropic error handling.

The comment at usechat.go:764 states "For Anthropic, this returns an unimplemented error," but the actual code returns a generic "tool call not found" error (line 778). When Anthropic's GetFunctionCallInputByToolCallId returns nil, users receive a confusing error message that suggests the tool call was lost rather than indicating the feature is unsupported for that backend.

Update the comment to accurately reflect the behavior: the function returns a "tool call not found" error for Anthropic. Alternatively, improve the error message by adding an explicit Anthropic check before line 776 that returns a clearer "write_text_file/edit_text_file diff not supported for Anthropic backend" error.

🤖 Prompt for AI Agents
In pkg/aiusechat/usechat-backend.go around lines 135 to 137, update the
misleading comment and behavior: either change the comment that currently claims
"For Anthropic, this returns an unimplemented error" to state that the function
currently returns a "tool call not found" error for Anthropic, or add an
explicit Anthropic branch before the generic "tool call not found" return that
detects the anthropicBackend and returns a clear error like
"write_text_file/edit_text_file diffs not supported for Anthropic backend" so
callers see an accurate, actionable message.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 0

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/aiusechat/usechat.go (1)

383-471: Add handling for StopKindRateLimit to surface rate-limit conditions to clients

The issue is confirmed: StopKindRateLimit returned by backends (with err == nil) bypasses the existing error handling and falls through to break without emitting SSE error signals. Unlike StopKindPremiumRateLimit and StopKindToolUse which have explicit handling, StopKindRateLimit silently terminates and returns (metrics, nil), preventing clients from detecting the rate-limit condition.

Add a conditional check before the final break to handle StopKindRateLimit similar to the error path:

if stopReason != nil && stopReason.Kind == uctypes.StopKindRateLimit {
	metrics.HadError = true
	_ = sseHandler.AiMsgError("rate limit exceeded")
	_ = sseHandler.AiMsgFinish("", nil)
	break
}

This ensures rate-limit conditions are visible to clients via SSE and properly reflected in metrics, consistent with other error handling in the loop.

🧹 Nitpick comments (4)
pkg/aiusechat/anthropic/anthropic-backend.go (1)

838-873: Tool‑use stop reason construction matches updated WaveStopReason shape

The message_stop handler now returns WaveStopReason{Kind: StopKindToolUse, RawReason, ToolCalls} without extra fields like message/model, which is consistent with the new type and preserves all tool‑call data required by downstream processing.

You might optionally mirror OpenAI’s behavior by populating ErrorText for "max_tokens" or other refusal‑like reasons to keep UX messaging consistent across backends.

pkg/aiusechat/usechat.go (3)

238-321: Backend‑driven tool‑use persistence is correct but may spam logs for Anthropic

Delegating tool‑use updates to backend.UpdateToolUseData from updateToolUseDataInChat, processToolCallInternal, and processToolCall is a good abstraction and keeps chatstore updates provider‑specific. However, Anthropic’s backend intentionally returns an unimplemented error here, so every update will log "failed to update tool use data in chat".

Consider either:

  • Having the Anthropic backend make UpdateToolUseData a no‑op (nil error), or
  • Detecting and suppressing a well‑known “not implemented” error in updateToolUseDataInChat.

This avoids noisy logs while preserving the abstraction.


349-381: Tool‑result conversion via backend is solid; handle PostMessage errors as well

Converting toolResults with backend.ConvertToolResultsToNativeChatMessage and posting each native message is the right place for provider‑specific formatting. Right now, errors from chatstore.DefaultChatStore.PostMessage are ignored; a failing post would silently drop tool outputs even though tools ran.

It would be safer to at least log these failures, e.g.:

-        for _, msg := range toolResultMsgs {
-            chatstore.DefaultChatStore.PostMessage(chatOpts.ChatId, &chatOpts.Config, msg)
-        }
+        for _, msg := range toolResultMsgs {
+            if err := chatstore.DefaultChatStore.PostMessage(chatOpts.ChatId, &chatOpts.Config, msg); err != nil {
+                log.Printf("failed to store tool result message for chat %s: %v\n", chatOpts.ChatId, err)
+            }
+        }

754-819: CreateWriteTextFileDiff now correctly uses backend‑agnostic function‑call input

Using GetBackendByAPIType(aiChat.APIType) and then backend.GetFunctionCallInputByToolCallId decouples this helper from OpenAI internals and lets other backends opt in later. The subsequent JSON unmarshal and tool‑name checks match how AIFunctionCallInput is structured.

The doc comment still claims “For Anthropic, this returns an unimplemented error”, but the new behavior is effectively “tool call not found” via nil from the backend. You may want to update the comment to reflect this softer failure.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between c2c5002 and 69616a0.

📒 Files selected for processing (5)
  • pkg/aiusechat/anthropic/anthropic-backend.go (2 hunks)
  • pkg/aiusechat/openai/openai-backend.go (7 hunks)
  • pkg/aiusechat/openai/openai-convertmessage.go (2 hunks)
  • pkg/aiusechat/uctypes/usechat-types.go (2 hunks)
  • pkg/aiusechat/usechat.go (14 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/uctypes/usechat-types.go
  • pkg/aiusechat/openai/openai-backend.go
  • pkg/aiusechat/usechat.go
🧬 Code graph analysis (4)
pkg/aiusechat/openai/openai-convertmessage.go (2)
pkg/aiusechat/openai/openai-backend.go (1)
  • OpenAIChatMessage (46-52)
pkg/aiusechat/uctypes/usechat-types.go (1)
  • UIMessage (29-34)
pkg/aiusechat/anthropic/anthropic-backend.go (1)
pkg/aiusechat/uctypes/usechat-types.go (4)
  • StopKindPremiumRateLimit (182-182)
  • WaveStopReason (193-199)
  • StopKindRateLimit (183-183)
  • StopKindToolUse (176-176)
pkg/aiusechat/openai/openai-backend.go (3)
pkg/aiusechat/uctypes/usechat-types.go (5)
  • StopKindPremiumRateLimit (182-182)
  • WaveStopReason (193-199)
  • StopKindRateLimit (183-183)
  • ToolDefinition (80-95)
  • UIMessageDataToolProgress (166-170)
pkg/web/sse/ssehandler.go (1)
  • SSEHandlerCh (64-77)
pkg/util/utilfn/partial.go (1)
  • ParsePartialJson (167-175)
pkg/aiusechat/usechat.go (4)
pkg/aiusechat/usechat-backend.go (2)
  • UseChatBackend (18-50)
  • GetBackendByAPIType (57-66)
pkg/aiusechat/uctypes/usechat-types.go (13)
  • WaveChatOpts (442-462)
  • WaveContinueResponse (202-205)
  • WaveStopReason (193-199)
  • GenAIMessage (275-278)
  • UIMessageDataToolUse (148-159)
  • WaveToolCall (186-191)
  • ToolDefinition (80-95)
  • AIToolResult (309-314)
  • AIMetrics (246-264)
  • StopKindPremiumRateLimit (182-182)
  • PremiumOpenAIModel (15-15)
  • DefaultOpenAIModel (14-14)
  • StopKindToolUse (176-176)
pkg/aiusechat/openai/openai-backend.go (1)
  • UpdateToolUseData (400-429)
pkg/aiusechat/openai/openai-convertmessage.go (1)
  • GetFunctionCallInputByToolCallId (684-695)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (11)
pkg/aiusechat/anthropic/anthropic-backend.go (1)

479-495: Rate‑limit stop reasons now cleanly separate metadata from RateLimitInfo

The 429 handling now returns WaveStopReason with only Kind set and passes RateLimitInfo separately. This aligns with the slimmer WaveStopReason struct while still letting callers update global rate‑limit state. Looks good.

pkg/aiusechat/uctypes/usechat-types.go (2)

266-271: AIFunctionCallInput is a useful backend‑agnostic bridge for tool calls

The new AIFunctionCallInput (call id, name, JSON arguments, ToolUseData) gives CreateWriteTextFileDiff and similar helpers a clean, provider‑neutral way to inspect function calls. This matches how the OpenAI backend exposes function calls and leaves Anthropic free to return nil where unsupported.


193-205: Verification confirms the WaveStopReason restructuring is sound.

Based on the verification:

  1. RateLimitInfo is a separate struct — not a field on WaveStopReason. It's returned alongside WaveStopReason from backend functions.

  2. No removed fields detected on WaveStopReason — searches found "messageid" only on request/backend structs (not on WaveStopReason), and no "ratelimitinfo" or "model" JSON fields on WaveStopReason.

  3. Current WaveStopReason structure is correct — Contains only: Kind, RawReason, ToolCalls, ErrorType, ErrorText. The new error fields are being properly populated in anthropic-backend.go.

  4. WaveContinueResponse correctly retains Model — Used for continuation logic to adjust the model between steps.

The code changes are structurally sound. The fields mentioned in the original review verification request don't appear to have existed on WaveStopReason, so there are no external consumers to break.

Wave stop/continue types are simplified and error‑aware. Adding ErrorType/ErrorText to WaveStopReason and slimming WaveContinueResponse to Model + ContinueFromKind aligns with how backends now populate stop reasons and continuations.

pkg/aiusechat/usechat.go (3)

204-211: runAIChatStep backend wiring and legacy‑model guard look correct

Threading UseChatBackend through runAIChatStep and updating global rate‑limit info from the backend’s RateLimitInfo keeps the core loop backend‑agnostic. The pre‑check that rejects legacy Chat Completions models for OpenAI is a clear, early failure and avoids accidentally routing them through the responses API.


527-569: WaveAIPostMessageWrap now cleanly delegates provider specifics to UseChatBackend

Resolving the backend by chatOpts.Config.APIType, converting the user AIMessage via backend.ConvertAIMessageToNativeChatMessage, and then reusing the same backend for RunAIChat keeps WaveAIPostMessageWrap simple and provider‑agnostic. Metrics collection using the original AIMessage remains intact and independent of the backend.


828-859: generateBuilderAppData changes maintain behavior and improve clarity

The small refactor around staticFilesJSON initialization and marshaling preserves behavior while keeping the function easy to read. No issues here.

pkg/aiusechat/openai/openai-convertmessage.go (1)

543-651: UI conversion logic is preserved while making the helper internal

Renaming ConvertToUIMessage to the unexported convertToUIMessage and updating ConvertAIChatToUIChat to use it keeps all existing conversion behavior (attached files/directories, userfile parts, tool‑use data) while tightening the API surface so that only the backend calls this helper. This fits the new backend‑centric design.

Also applies to: 661-671

pkg/aiusechat/openai/openai-backend.go (4)

96-122: Cleaning helpers correctly strip internal fields before sending to OpenAI

OpenAIMessageContent.clean, OpenAIMessage.cleanAndCopy, and OpenAIFunctionCallInput.clean ensure PreviewUrl and ToolUseData are removed from outbound payloads while leaving stored chat messages untouched. Their use in RunOpenAIChatStep when constructing inputs is correct and keeps OpenAI wire format clean.

Also applies to: 472-492


520-541: OpenAI rate‑limit stop reasons align with new WaveStopReason contract

For HTTP 429, constructing WaveStopReason with just Kind set to StopKindPremiumRateLimit / StopKindRateLimit and returning RateLimitInfo separately matches the updated type definition and lets callers update global rate‑limit state without duplicating it in the stop reason.


581-647: Stricter handling of unexpected EOFs improves robustness

In handleOpenAIStreamingResp, treating an io.EOF before a completed response as a protocol error (emitting AiMsgError and returning StopKindError with ErrorType: "protocol") is a sensible defensive change and should make partial/broken streams easier to detect and handle upstream.


847-873: sendToolProgress centralizes tool‑progress updates and is safely guarded

Refactoring the tool‑progress logic into sendToolProgress makes the "response.function_call_arguments.delta/done" handlers much cleaner. The helper:

  • Bails out if toolDef or ToolProgressDesc are nil.
  • Uses ParsePartialJson for incremental deltas and json.Unmarshal for full arguments.
  • Silently skips on parse/descriptor errors, avoiding noisy failures.
  • Emits data-toolprogress with UIMessageDataToolProgress keyed by progress-<toolCallId>.

This is a good, low‑risk consolidation of behavior.

Also applies to: 929-953

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

🧹 Nitpick comments (1)
pkg/aiusechat/openai/openai-convertmessage.go (1)

27-67: Consider logging when input_image or input_file blocks are dropped for non-user roles.

Lines 44 and 54 silently skip creating UI parts for input_image and input_file blocks when the role is not "user". While this may be intentional, adding a debug log would help diagnose unexpected behavior if these blocks appear in assistant messages.

Example:

 case "input_image":
 	if role == "user" {
 		parts = append(parts, uctypes.UIMessagePart{
 			Type: "data-userfile",
 			Data: uctypes.UIMessageDataUserFile{
 				MimeType:   "image/*",
 				PreviewUrl: block.PreviewUrl,
 			},
 		})
+	} else {
+		log.Printf("openai: dropping input_image block for role '%s'", role)
 	}
📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 69616a0 and f1acc84.

📒 Files selected for processing (3)
  • pkg/aiusechat/aiutil/aiutil.go (1 hunks)
  • pkg/aiusechat/openai/openai-convertmessage.go (11 hunks)
  • pkg/aiusechat/uctypes/usechat-types.go (3 hunks)
🧰 Additional context used
🧠 Learnings (2)
📓 Common learnings
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/openai/openai-convertmessage.go
  • pkg/aiusechat/uctypes/usechat-types.go
🧬 Code graph analysis (2)
pkg/aiusechat/openai/openai-convertmessage.go (3)
pkg/aiusechat/openai/openai-backend.go (2)
  • OpenAIMessageContent (82-94)
  • OpenAIChatMessage (46-52)
pkg/aiusechat/uctypes/usechat-types.go (3)
  • UIMessagePart (36-69)
  • UIMessageDataUserFile (72-77)
  • UIMessage (29-34)
pkg/aiusechat/aiutil/aiutil.go (6)
  • ConvertDataUserFile (127-171)
  • JsonEncodeRequestBody (173-182)
  • ExtractImageUrl (63-77)
  • ExtractTextData (80-95)
  • FormatAttachedTextFile (98-109)
  • FormatAttachedDirectoryListing (112-122)
pkg/aiusechat/aiutil/aiutil.go (2)
pkg/util/utilfn/marshal.go (1)
  • DecodeDataURL (169-203)
pkg/aiusechat/uctypes/usechat-types.go (2)
  • UIMessagePart (36-69)
  • UIMessageDataUserFile (72-77)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (13)
pkg/aiusechat/uctypes/usechat-types.go (3)

202-205: Summary inconsistency detected.

The AI-generated summary claims that the Model field was removed from WaveContinueResponse, but the code shows Model is still present on line 203. Please verify whether this field should remain or be removed.


266-271: LGTM!

The new AIFunctionCallInput type is well-structured for representing function call inputs from AI providers. The use of Arguments as a string (likely JSON) is appropriate for handling raw provider responses.


478-484: LGTM!

The GetWaveRequestType() method correctly determines the request type based on the presence of BuilderId. The logic is clear and straightforward.
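The described dispatch is simple enough to sketch in isolation. A minimal illustration of the pattern, where the field name matches the review but the return values are assumptions, not the actual Wave Terminal constants:

```go
package main

import "fmt"

// Hypothetical mirror of the described logic: a non-empty BuilderId marks
// the request as a builder request; otherwise it is a plain chat request.
// The string values here are illustrative assumptions.
type WaveChatOpts struct {
	BuilderId string
}

func (opts *WaveChatOpts) GetWaveRequestType() string {
	if opts.BuilderId != "" {
		return "builder"
	}
	return "chat"
}

func main() {
	fmt.Println((&WaveChatOpts{BuilderId: "b1"}).GetWaveRequestType()) // builder
	fmt.Println((&WaveChatOpts{}).GetWaveRequestType())                // chat
}
```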

pkg/aiusechat/openai/openai-convertmessage.go (3)

189-298: LGTM!

The refactored buildOpenAIHTTPRequest function correctly:

  • Injects tab state, app static files, and app Go file into the last user message
  • Uses aiutil.JsonEncodeRequestBody for JSON encoding with proper HTML escaping disabled
  • Sets the request type header via chatOpts.GetWaveRequestType()

301-371: LGTM!

The refactored convertFileAIMessagePart function appropriately delegates to aiutil helpers for image URL extraction, text data extraction, and formatting. The error handling is preserved, and the logic is clearer.


475-509: All references to the old ConvertToUIMessage method have been successfully updated.

Verification confirms that the method rename from ConvertToUIMessage (exported) to convertToUIMessage (unexported) is complete:

  • The only internal caller at line 522 in openai-convertmessage.go has been updated to use the new name
  • No remaining references to the old exported method exist in the OpenAI package
  • No external callers from outside the aiusechat package reference the old method name
  • The Anthropic package's separate ConvertToUIMessage method remains unaffected

The refactoring is complete and correct.

pkg/aiusechat/aiutil/aiutil.go (7)

53-60: LGTM!

The GenerateDeterministicSuffix function correctly generates an 8-character hash using SHA-256. The collision probability with 32 bits of entropy is acceptable for generating deterministic suffixes in attachment tags.
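The behavior described above can be sketched directly. This assumes the suffix is the first 8 hex characters of a SHA-256 digest; the function name and signature are illustrative, not the committed API:

```go
package main

import (
	"crypto/sha256"
	"encoding/hex"
	"fmt"
)

// Sketch of the described behavior: hash the input with SHA-256 and keep
// the first 8 hex characters, which carry the 32 bits of entropy noted
// in the review.
func generateDeterministicSuffix(input string) string {
	sum := sha256.Sum256([]byte(input))
	return hex.EncodeToString(sum[:])[:8]
}

func main() {
	a := generateDeterministicSuffix("report.txt")
	b := generateDeterministicSuffix("report.txt")
	fmt.Println(len(a), a == b) // same input always yields the same 8-char suffix
}
```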


63-77: LGTM!

The ExtractImageUrl function properly validates URL protocols and securely handles both URL and raw data inputs. The error handling is appropriate.


80-95: LGTM!

The ExtractTextData function correctly prioritizes the data field, handles data URLs via utilfn.DecodeDataURL, and appropriately rejects HTTP(S) URLs to enforce security requirements.


112-122: Same encoding pattern as FormatAttachedTextFile.

This function uses the same encoding sequence (replace " with &quot;, then strconv.Quote()). Please verify this is intentional and matches the expected format for ExtractXmlAttribute, as discussed in the comment for FormatAttachedTextFile.



127-171: LGTM!

The ConvertDataUserFile function correctly:

  • Distinguishes between matched prefixes and successful parsing via the boolean return
  • Validates the XML-like tag structure
  • Extracts attributes using ExtractXmlAttribute
  • Creates appropriate UIMessagePart structures for files and directories

The boundary checks are safe (lines 130 and 151), as openTagEnd will be -1 or greater than the prefix length when the checks are performed.


173-182: LGTM!

The JsonEncodeRequestBody function correctly:

  • Uses SetEscapeHTML(false) to preserve special characters (important for AI content)
  • Returns both the buffer and error, allowing inspection of partial output for debugging

98-109: No issues found—encoding/decoding works correctly.

The test confirms that the filename encoding sequence (lines 103–104) works as intended. Round-trip encoding and decoding successfully preserves filenames with special characters, including quotes. The ExtractXmlAttribute function correctly handles the combination of HTML entity encoding (&quot;) and strconv.Quote escaping.
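The round trip exercised by that test can be reproduced in isolation. A sketch of the encode/decode pair as described in the review (helper names are illustrative):

```go
package main

import (
	"fmt"
	"strconv"
	"strings"
)

// encodeAttr reproduces the encoding sequence described above: HTML-escape
// double quotes to &quot;, then apply strconv.Quote.
func encodeAttr(value string) string {
	escaped := strings.ReplaceAll(value, `"`, "&quot;")
	return strconv.Quote(escaped)
}

// decodeAttr reverses it: strconv.Unquote, then restore the quotes.
func decodeAttr(quoted string) (string, error) {
	value, err := strconv.Unquote(quoted)
	if err != nil {
		return "", err
	}
	return strings.ReplaceAll(value, "&quot;", `"`), nil
}

func main() {
	name := `my "special" file.txt`
	dec, _ := decodeAttr(encodeAttr(name))
	fmt.Println(dec == name) // round trip preserves embedded quotes
}
```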

Comment on lines +23 to +50
func ExtractXmlAttribute(tag, attrName string) (string, bool) {
	attrStart := strings.Index(tag, attrName+"=")
	if attrStart == -1 {
		return "", false
	}

	pos := attrStart + len(attrName+"=")
	start := strings.Index(tag[pos:], `"`)
	if start == -1 {
		return "", false
	}
	start += pos

	end := strings.Index(tag[start+1:], `"`)
	if end == -1 {
		return "", false
	}
	end += start + 1

	quotedValue := tag[start : end+1]
	value, err := strconv.Unquote(quotedValue)
	if err != nil {
		return "", false
	}

	value = strings.ReplaceAll(value, "&quot;", `"`)
	return value, true
}
⚠️ Potential issue | 🟡 Minor

Potential issues with attribute name matching.

Lines 24 and 29: The current implementation searches for attrName+"=", which could match partial attribute names. For example, searching for "name=" would also match "filename=".

Additionally, the parser expects no whitespace around the = sign. If the XML allows attrName = "value" (with spaces), this would fail.

Consider improving the attribute matching:

-	attrStart := strings.Index(tag, attrName+"=")
+	// Use word boundary to avoid partial matches
+	pattern := attrName + "="
+	attrStart := -1
+	for i := 0; i <= len(tag)-len(pattern); i++ {
+		if strings.HasPrefix(tag[i:], pattern) {
+			// Check if preceded by whitespace or start of string
+			if i == 0 || tag[i-1] == ' ' || tag[i-1] == '\t' {
+				attrStart = i
+				break
+			}
+		}
+	}
 	if attrStart == -1 {
 		return "", false
 	}

Alternatively, verify that the XML format strictly disallows spaces around = and that attribute names don't overlap.

🤖 Prompt for AI Agents
In pkg/aiusechat/aiutil/aiutil.go around lines 23 to 50, the current search for
attrName+"=" can match substrings (e.g. "filename" when looking for "name") and
fails if there are spaces around '='; update the logic to find an attribute by
matching a proper boundary for the name (start of string or whitespace or
tag-char before the name) and allow optional spaces around the '=' (e.g. use a
compiled regex like `(?i)(?:^|\\s)attrName\\s*=\\s*"(.*?)"` with attrName
injected safely) or otherwise scan tokens to locate the attribute name followed
by optional whitespace, an '=', optional whitespace, then a quoted value;
extract the quoted group, unquote/HTML-decode as before, and return that value
and true, returning false on any parse failure.
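One way to realize the suggested fix is a regex with an explicit word boundary and optional whitespace around `=`. A hedged sketch of that approach, not the committed fix; the value semantics (strconv.Unquote, then &quot; decoding) match the original function:

```go
package main

import (
	"fmt"
	"regexp"
	"strconv"
	"strings"
)

// extractXmlAttribute requires the attribute name to sit at a boundary
// (start of tag or after whitespace), so "name" no longer matches inside
// "filename", and tolerates spaces around '='.
func extractXmlAttribute(tag, attrName string) (string, bool) {
	re := regexp.MustCompile(`(?:^|\s)` + regexp.QuoteMeta(attrName) + `\s*=\s*("(?:[^"\\]|\\.)*")`)
	m := re.FindStringSubmatch(tag)
	if m == nil {
		return "", false
	}
	value, err := strconv.Unquote(m[1])
	if err != nil {
		return "", false
	}
	return strings.ReplaceAll(value, "&quot;", `"`), true
}

func main() {
	tag := `<file filename="notes.txt" name = "alias">`
	v, ok := extractXmlAttribute(tag, "name")
	fmt.Println(v, ok) // matches "name", not the "name" inside "filename"
}
```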

Comment on lines 144 to 151
 func ConvertToolDefinitionToOpenAI(tool uctypes.ToolDefinition) OpenAIRequestTool {
-	cleanedTool := tool.Clean()
 	return OpenAIRequestTool{
-		Name:        cleanedTool.Name,
-		Description: cleanedTool.Description,
-		Parameters:  cleanedTool.InputSchema,
-		Strict:      cleanedTool.Strict,
+		Name:        tool.Name,
+		Description: tool.Description,
+		Parameters:  tool.InputSchema,
+		Strict:      tool.Strict,
 		Type:        "function",
 	}
⚠️ Potential issue | 🟠 Major

Potential issue: Internal fields may be sent to the OpenAI API.

The ToolDefinition type has DisplayName and ShortDescription fields marked as "internal field (cannot marshal to API, must be stripped)" in their comments (lines 82, 84 in usechat-types.go). However, these fields have JSON tags (displayname,omitempty and shortdescription,omitempty), so they will be marshaled.

Previously, this function likely called tool.Clean() to strip these internal fields. Without cleaning, these fields may be sent to the OpenAI API, potentially causing issues.

Apply this diff to use the cleaned tool:

 func ConvertToolDefinitionToOpenAI(tool uctypes.ToolDefinition) OpenAIRequestTool {
+	cleanedTool := tool.Clean()
 	return OpenAIRequestTool{
-		Name:        tool.Name,
-		Description: tool.Description,
-		Parameters:  tool.InputSchema,
-		Strict:      tool.Strict,
+		Name:        cleanedTool.Name,
+		Description: cleanedTool.Description,
+		Parameters:  cleanedTool.InputSchema,
+		Strict:      cleanedTool.Strict,
 		Type:        "function",
 	}
 }
🤖 Prompt for AI Agents
In pkg/aiusechat/openai/openai-convertmessage.go around lines 144 to 151, the
ConvertToolDefinitionToOpenAI function is using the raw ToolDefinition which may
include internal fields (DisplayName, ShortDescription) that must be stripped
before sending to OpenAI; call tool.Clean() (or otherwise obtain the cleaned
copy) and use that cleaned tool when building and returning the
OpenAIRequestTool so internal fields are not marshaled to the API, e.g. assign
cleaned := tool.Clean() and reference cleaned.Name, cleaned.Description,
cleaned.InputSchema, cleaned.Strict when constructing the return value.

@sawka sawka merged commit cd6389d into main Nov 19, 2025
8 checks passed
@sawka sawka deleted the sawka/usechat-interface branch November 19, 2025 19:38