
@sawka sawka commented Nov 26, 2025

No description provided.

sawka added 30 commits November 24, 2025 14:50

coderabbitai bot commented Nov 26, 2025

Walkthrough

This PR renames and expands "thinking mode" into an AI mode system, adds OpenAI Completions and OpenRouter integration paths, and centralizes tool-use and streaming logic. It introduces new frontend AIMode components and atoms, updates runtime keys from waveai:thinkingmode to waveai:mode, adds mode configuration types and default configs, and adds substantial backend changes: new OpenAI chat/completions backends, shared openaichat utilities/types, aiutil helpers for tool-use/progress, APIType and capability constants, and path/validation changes for file tools. CLI test tooling and configs were extended to support the new providers and models.

Estimated code review effort

🎯 5 (Critical) | ⏱️ ~120 minutes

Areas needing focused review:

  • pkg/aiusechat/openaichat/ (new backend, streaming, SSE lifecycle, error paths)
  • pkg/aiusechat/openaichat/* types and conversion helpers (message conversion, tool-call persistence)
  • pkg/aiusechat/openai/openai-backend.go and aiutil.* (tool-use flow moved to aiutil, SendToolProgress/CreateToolUseData semantics)
  • pkg/aiusechat/usechat.go and usechat-mode.go (AI opts resolution, getSystemPrompt, secret & endpoint handling, API routing)
  • Interface change: UseChatBackend.UpdateToolUseData (pointer → value) and all implementers (openaiResponsesBackend, anthropicBackend, openaiCompletionsBackend, callers)
  • uctypes changes (new APIType constants, capabilities, AIMode types) and AreModelsCompatible usages
  • Frontend changes (waveai-model.tsx atoms, AIModeDropdown, removal of ThinkingMode component, telemetry/runtime key renames) for consistency with backend runtime keys
  • CLI additions (cmd/testai/main-testai.go) — flags, env key checks, default model selection
  • Tool path validation changes (read/write/readfile, screenshot capability flags) for security and UX implications
  • pkg/wconfig/defaultconfig/waveai.json and settings types (AIModeConfigType and FullConfigType) — ensure config loading and schema compatibility
  • Cross-cutting: telemetry and runtime structs renamed (WaveAIMode / waveai:mode) — check serialization, persistence, and backward compatibility where applicable

Pre-merge checks and finishing touches

❌ Failed checks (1 warning, 1 inconclusive)
  • Docstring Coverage (⚠️ Warning): Docstring coverage is 32.31%, which is insufficient; the required threshold is 80.00%. You can run @coderabbitai generate docstrings to improve docstring coverage.
  • Description check (❓ Inconclusive): No pull request description was provided by the author, making it impossible to assess whether it relates to the changeset. Add a description explaining the changes, key features added, and any breaking changes or migration notes for reviewers.
✅ Passed checks (1 passed)
  • Title check (✅ Passed): The title accurately summarizes the main change: implementing OpenAI chat completions API support, which enables local model support.
✨ Finishing touches
  • 📝 Generate docstrings
🧪 Generate unit tests (beta)
  • Create PR with unit tests
  • Post copyable unit tests in a comment
  • Commit unit tests in branch sawka/waveai-openai-comps





@coderabbitai coderabbitai bot left a comment


Actionable comments posted: 5

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
pkg/aiusechat/tools_readfile.go (1)

74-79: Add explicit validation for max_bytes to avoid panics

max_bytes is defaulted when nil but never range-checked. If a negative value leaks through (e.g., from a non-schema caller), truncateData can panic due to invalid slicing with a negative index.

Consider mirroring the count validation with an explicit lower-bound check:

 if result.MaxBytes == nil {
   maxBytes := ReadFileDefaultMaxBytes
   result.MaxBytes = &maxBytes
+} else if *result.MaxBytes < 1 {
+  return nil, fmt.Errorf("max_bytes must be at least 1, got %d", *result.MaxBytes)
 }

This keeps runtime behavior safe even if external callers bypass JSON schema validation.

♻️ Duplicate comments (2)
pkg/aiusechat/tools_writefile.go (2)

341-344: Edit filename description is consistent with write/read tools

The edit_text_file filename description mirrors the write/read tools (absolute path, ~ support, no relative paths) and keeps the tool surface consistent without introducing any behavioral mismatch.


477-480: Delete filename description matches the rest of the file tools

The delete_text_file filename description is now aligned with the other file tools’ contract (absolute path, ~ support, no relative paths) and keeps the tool API coherent.

🧹 Nitpick comments (17)
pkg/aiusechat/tools_readfile.go (1)

329-332: Filename description change is clear and consistent with tool behavior

The updated description for filename is precise (absolute path, ~ support, no relative paths) and matches how wavebase.ExpandHomeDir is used here. The code still technically accepts relative paths, but documenting “absolute only” is fine if the intent is to constrain how the LLM uses the tool.

If you eventually want behavior to match the contract more strictly, you could add a filepath.IsAbs check in verifyReadTextFileInput (and the sibling tools) to reject non-absolute paths.

The description also correctly avoids mentioning approval or consent flows, which aligns with the prior guidance to keep approval semantics out of tool descriptions. Based on learnings, this is the desired pattern.
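If the contract should eventually be enforced rather than only documented, a minimal sketch of the suggested filepath.IsAbs check could look like the following. This is illustrative only: expandHome stands in for wavebase.ExpandHomeDir, and validatePath stands in for the check that would live in verifyReadTextFileInput and its sibling tools.

```go
package main

import (
	"fmt"
	"os/user"
	"path/filepath"
	"strings"
)

// expandHome is a stand-in for wavebase.ExpandHomeDir: it rewrites a
// leading "~" to the current user's home directory.
func expandHome(p string) (string, error) {
	if p == "~" || strings.HasPrefix(p, "~/") {
		u, err := user.Current()
		if err != nil {
			return "", err
		}
		return filepath.Join(u.HomeDir, strings.TrimPrefix(p, "~")), nil
	}
	return p, nil
}

// validatePath enforces the documented contract: after ~ expansion the
// path must be absolute, so relative paths are rejected up front rather
// than failing later at execution time.
func validatePath(p string) (string, error) {
	expanded, err := expandHome(p)
	if err != nil {
		return "", err
	}
	if !filepath.IsAbs(expanded) {
		return "", fmt.Errorf("path must be absolute, got %q", p)
	}
	return expanded, nil
}

func main() {
	for _, p := range []string{"/etc/hosts", "~/notes.txt", "relative/file.txt"} {
		if _, err := validatePath(p); err != nil {
			fmt.Println(p, "rejected:", err)
		} else {
			fmt.Println(p, "ok")
		}
	}
}
```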

pkg/aiusechat/tools_writefile.go (1)

185-188: Clarified write_text_file filename description looks good

The new filename description clearly specifies absolute paths, ~ expansion, and that relative paths should not be used. This aligns with how wavebase.ExpandHomeDir is used and keeps the contract consistent with the read/edit/delete tools.

As with read_text_file, if you eventually want behavior to match the “absolute only” contract, consider adding a filepath.IsAbs check in verifyWriteTextFileInput (and siblings) to reject non-absolute paths. Right now the implementation is more permissive than the description, which is safe but slightly divergent.

The description correctly avoids mentioning approval semantics, which is what we want for these tools. Based on learnings, this avoids double-approval UX.

pkg/aiusechat/tools_readdir.go (1)

119-122: Path description is clear; verify ~ handling in readDirCallback

The updated path description (absolute path, ~ support, no relative paths) is consistent with the file tools and with verifyReadDirInput using wavebase.ExpandHomeDir.

One thing to double-check: readDirCallback passes params.Path directly to fileutil.ReadDir without explicit expansion. If fileutil.ReadDir does not itself expand ~, then calls like "~" will validate successfully in verifyReadDirInput but fail at execution time, despite the description promising ~ support. If that’s the case, consider:

 func readDirCallback(input any, toolUseData *uctypes.UIMessageDataToolUse) (any, error) {
   params, err := parseReadDirInput(input)
   if err != nil {
     return nil, err
   }

-  result, err := fileutil.ReadDir(params.Path, *params.MaxEntries)
+  expandedPath, err := wavebase.ExpandHomeDir(params.Path)
+  if err != nil {
+    return nil, fmt.Errorf("failed to expand path: %w", err)
+  }
+
+  result, err := fileutil.ReadDir(expandedPath, *params.MaxEntries)
   if err != nil {
     return nil, err
   }

If fileutil.ReadDir already handles ~, then the current code is fine—just worth confirming given the strengthened documentation.

frontend/app/aipanel/aipanel.tsx (1)

248-251: Consider gating debug window.aichat* hooks or scoping them to dev mode

Assigning aichatmessages and aichatstatus onto window each render is handy for debugging but permanently expands the public surface of window in all environments.

Consider wrapping these in a dev-only guard (e.g., a feature flag or existing dev-mode check) or moving into a small helper that can be disabled in production.

pkg/wconfig/settingsconfig.go (1)

260-276: LGTM! Well-structured AI mode configuration type.

The new AIModeConfigType struct provides a comprehensive configuration model for AI modes with appropriate separation of display properties and AI-specific settings. The inclusion of APITokenSecretName for secret references is a good security practice.

One minor inconsistency: WaveAIPremium (line 275) lacks omitempty while WaveAICloud (line 274) has it. Consider adding omitempty to WaveAIPremium for consistency, unless explicitly serializing false values is intended.

-	WaveAIPremium      bool     `json:"waveai:premium"`
+	WaveAIPremium      bool     `json:"waveai:premium,omitempty"`
pkg/aiusechat/usechat-mode.go (1)

35-42: Consider documenting the mode key format.

The function expects aiMode to match keys in WaveAIModes (e.g., "waveai@quick", "waveai@balanced"). Consider adding a brief comment documenting the expected format to help future maintainers.
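For illustration, a documented parse helper could also make the format machine-checked. The "provider@mode" split below is an assumption inferred from the example keys in this review, not the codebase's actual parsing logic:

```go
package main

import (
	"fmt"
	"strings"
)

// parseModeKey splits an AI mode key of the assumed form
// "<provider>@<mode>", e.g. "waveai@balanced".
func parseModeKey(key string) (provider, mode string, err error) {
	provider, mode, ok := strings.Cut(key, "@")
	if !ok || provider == "" || mode == "" {
		return "", "", fmt.Errorf("invalid AI mode key %q: want provider@mode", key)
	}
	return provider, mode, nil
}

func main() {
	p, m, err := parseModeKey("waveai@balanced")
	fmt.Println(p, m, err)
}
```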

frontend/app/aipanel/aipanel-contextmenu.ts (1)

48-86: Consider deriving labels from config instead of hardcoding.

The model names in labels (e.g., "gpt-5-mini", "gpt-5.1") are hardcoded here but the actual model configuration lives in waveai.json. If the backend model changes, these labels will become stale. Consider fetching the display names from aiModeConfigs similar to how AIModeDropdown does it, or at minimum add a comment noting these need manual sync.

frontend/app/aipanel/aimode.tsx (1)

73-111: Consider adding keyboard accessibility.

The dropdown lacks keyboard navigation (Escape to close, arrow keys for selection, Enter to confirm). While not blocking, this would improve accessibility.

pkg/aiusechat/openaichat/openaichat-backend.go (2)

71-76: Consider adding timeouts to the HTTP client.

The http.Client{} is created without any default timeout configuration. While there's a context-based timeout set earlier (lines 40-44), the HTTP client itself has no connection or response header timeouts. This could lead to hung connections if the server accepts the connection but never responds.

-	client := &http.Client{}
+	client := &http.Client{
+		Timeout: 5 * time.Minute, // Overall request timeout as safety net
+	}

78-81: Limit error response body size to prevent memory exhaustion.

Reading the entire error response body without a size limit could lead to memory issues if the server returns an unexpectedly large error response.

 	if resp.StatusCode != http.StatusOK {
-		bodyBytes, _ := io.ReadAll(resp.Body)
+		bodyBytes, _ := io.ReadAll(io.LimitReader(resp.Body, 4096))
 		return nil, nil, nil, fmt.Errorf("API returned status %d: %s", resp.StatusCode, string(bodyBytes))
 	}
pkg/aiusechat/aiutil/aiutil.go (2)

188-195: Redundant prefix check in IsOpenAIReasoningModel.

The check for strings.HasPrefix(m, "gpt-5.1") is redundant because strings.HasPrefix(m, "gpt-5") already covers all gpt-5.1* models.

 func IsOpenAIReasoningModel(model string) bool {
 	m := strings.ToLower(model)
 	return strings.HasPrefix(m, "o1") ||
 		strings.HasPrefix(m, "o3") ||
 		strings.HasPrefix(m, "o4") ||
-		strings.HasPrefix(m, "gpt-5") ||
-		strings.HasPrefix(m, "gpt-5.1")
+		strings.HasPrefix(m, "gpt-5")
 }

243-273: SendToolProgress silently swallows all errors.

The function returns early without any indication of failure on:

  • Missing tool definition (line 247-249)
  • JSON parse errors (line 258-260)
  • Progress descriptor errors (line 263-265)
  • SSE send errors (line 272, result ignored)

While this may be intentional to avoid disrupting the main flow, consider adding debug logging for troubleshooting purposes.

+	"log"
...
 func SendToolProgress(toolCallID, toolName string, jsonData []byte, chatOpts uctypes.WaveChatOpts, sseHandler *sse.SSEHandlerCh, usePartialParse bool) {
 	toolDef := chatOpts.GetToolDefinition(toolName)
 	if toolDef == nil || toolDef.ToolProgressDesc == nil {
 		return
 	}
 	
 	var parsedJSON any
 	var err error
 	if usePartialParse {
 		parsedJSON, err = utilfn.ParsePartialJson(jsonData)
 	} else {
 		err = json.Unmarshal(jsonData, &parsedJSON)
 	}
 	if err != nil {
+		log.Printf("aiutil: failed to parse JSON for tool progress %s: %v", toolName, err)
 		return
 	}
cmd/testai/main-testai.go (1)

159-207: Consider extracting common logic to reduce duplication.

testOpenAIComp and testOpenRouter are nearly identical, differing only in environment variable name, base URL, and log messages. This is acceptable for a test CLI tool, but if more providers are added, consider extracting a shared helper.

Also applies to: 209-257

pkg/aiusechat/openaichat/openaichat-types.go (1)

151-171: Variable name shadows builtin.

The variable copy at line 155 shadows Go's builtin copy function. Consider renaming to copied or msgCopy for clarity.

 func (m *StoredChatMessage) Copy() *StoredChatMessage {
 	if m == nil {
 		return nil
 	}
-	copy := *m
+	copied := *m
 	if len(m.Message.ToolCalls) > 0 {
-		copy.Message.ToolCalls = make([]ToolCall, len(m.Message.ToolCalls))
+		copied.Message.ToolCalls = make([]ToolCall, len(m.Message.ToolCalls))
 		for i, tc := range m.Message.ToolCalls {
-			copy.Message.ToolCalls[i] = tc
+			copied.Message.ToolCalls[i] = tc
 			if tc.ToolUseData != nil {
 				toolUseDataCopy := *tc.ToolUseData
-				copy.Message.ToolCalls[i].ToolUseData = &toolUseDataCopy
+				copied.Message.ToolCalls[i].ToolUseData = &toolUseDataCopy
 			}
 		}
 	}
 	if m.Usage != nil {
 		usageCopy := *m.Usage
-		copy.Usage = &usageCopy
+		copied.Usage = &usageCopy
 	}
-	return &copy
+	return &copied
 }
pkg/aiusechat/usechat.go (2)

64-113: Verify error handling for missing BaseURL configuration.

The function properly handles secret retrieval and cloud/local base URL selection. However, at line 97, when config.WaveAICloud is false and config.BaseURL is empty, the error message references aiMode but doesn't indicate what valid configurations look like.

Also, the environment variable is read twice (lines 91-92); consider storing the result:

 	if config.WaveAICloud {
 		baseUrl = uctypes.DefaultAIEndpoint
-		if os.Getenv(WaveAIEndpointEnvName) != "" {
-			baseUrl = os.Getenv(WaveAIEndpointEnvName)
+		if envEndpoint := os.Getenv(WaveAIEndpointEnvName); envEndpoint != "" {
+			baseUrl = envEndpoint
 		}

421-428: Magic model strings should be extracted to constants.

Lines 421-424 contain hardcoded model names uctypes.PremiumOpenAIModel and uctypes.DefaultOpenAIModel. The log message at line 422 references "gpt-5.1" and "gpt-5-mini" directly. Consider keeping log messages in sync with the actual constant values to avoid confusion if constants change.
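One way to keep the log text from drifting is to interpolate the constants instead of restating their values. A minimal sketch (the constants below mirror the values mentioned in this review; the real ones live in uctypes):

```go
package main

import (
	"fmt"
	"log"
)

// Stand-ins for uctypes.PremiumOpenAIModel / uctypes.DefaultOpenAIModel.
const (
	premiumOpenAIModel = "gpt-5.1"
	defaultOpenAIModel = "gpt-5-mini"
)

// pickModel logs with the constants interpolated, so the message can
// never disagree with the values if the constants change later.
func pickModel(premium bool) string {
	if premium {
		log.Printf("using premium model %s", premiumOpenAIModel)
		return premiumOpenAIModel
	}
	log.Printf("falling back from %s to %s", premiumOpenAIModel, defaultOpenAIModel)
	return defaultOpenAIModel
}

func main() {
	fmt.Println(pickModel(false))
}
```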

pkg/aiusechat/usechat-backend.go (1)

176-178: Anthropic backend UpdateToolUseData returns error - consider documenting limitation.

The anthropic backend returns "not implemented" for UpdateToolUseData. While this is acceptable if tool use isn't supported for Anthropic in this codebase, consider adding a comment explaining why, or ensure callers handle this error gracefully.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 6211b21 and 872eee5.

⛔ Files ignored due to path filters (1)
  • go.sum is excluded by !**/*.sum
📒 Files selected for processing (32)
  • cmd/testai/main-testai.go (7 hunks)
  • frontend/app/aipanel/aimode.tsx (1 hunks)
  • frontend/app/aipanel/aipanel-contextmenu.ts (2 hunks)
  • frontend/app/aipanel/aipanel.tsx (3 hunks)
  • frontend/app/aipanel/aipanelmessages.tsx (3 hunks)
  • frontend/app/aipanel/aitypes.ts (1 hunks)
  • frontend/app/aipanel/thinkingmode.tsx (0 hunks)
  • frontend/app/aipanel/waveai-model.tsx (4 hunks)
  • frontend/types/gotypes.d.ts (4 hunks)
  • go.mod (1 hunks)
  • pkg/aiusechat/aiutil/aiutil.go (2 hunks)
  • pkg/aiusechat/anthropic/anthropic-backend.go (1 hunks)
  • pkg/aiusechat/anthropic/anthropic-convertmessage.go (2 hunks)
  • pkg/aiusechat/openai/openai-backend.go (9 hunks)
  • pkg/aiusechat/openai/openai-convertmessage.go (2 hunks)
  • pkg/aiusechat/openaichat/openaichat-backend.go (1 hunks)
  • pkg/aiusechat/openaichat/openaichat-convertmessage.go (1 hunks)
  • pkg/aiusechat/openaichat/openaichat-types.go (1 hunks)
  • pkg/aiusechat/tools_readdir.go (1 hunks)
  • pkg/aiusechat/tools_readfile.go (1 hunks)
  • pkg/aiusechat/tools_screenshot.go (1 hunks)
  • pkg/aiusechat/tools_writefile.go (3 hunks)
  • pkg/aiusechat/uctypes/uctypes.go (11 hunks)
  • pkg/aiusechat/usechat-backend.go (6 hunks)
  • pkg/aiusechat/usechat-mode.go (1 hunks)
  • pkg/aiusechat/usechat-prompts.go (1 hunks)
  • pkg/aiusechat/usechat.go (15 hunks)
  • pkg/telemetry/telemetrydata/telemetrydata.go (1 hunks)
  • pkg/waveobj/objrtinfo.go (1 hunks)
  • pkg/wconfig/defaultconfig/waveai.json (1 hunks)
  • pkg/wconfig/settingsconfig.go (2 hunks)
  • pkg/wshrpc/wshserver/wshserver.go (0 hunks)
💤 Files with no reviewable changes (2)
  • pkg/wshrpc/wshserver/wshserver.go
  • frontend/app/aipanel/thinkingmode.tsx
🧰 Additional context used
🧠 Learnings (4)
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/tools_readfile.go
  • pkg/aiusechat/aiutil/aiutil.go
  • pkg/aiusechat/tools_writefile.go
  • pkg/aiusechat/tools_screenshot.go
  • frontend/app/aipanel/aitypes.ts
  • pkg/aiusechat/tools_readdir.go
  • pkg/aiusechat/openaichat/openaichat-convertmessage.go
  • pkg/aiusechat/usechat-prompts.go
  • pkg/aiusechat/openaichat/openaichat-types.go
  • pkg/aiusechat/usechat-backend.go
  • pkg/aiusechat/uctypes/uctypes.go
  • pkg/aiusechat/openai/openai-backend.go
  • pkg/aiusechat/usechat.go
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.

Applied to files:

  • frontend/app/aipanel/aipanelmessages.tsx
  • frontend/app/aipanel/aimode.tsx
  • frontend/app/aipanel/aipanel.tsx
  • frontend/app/aipanel/aitypes.ts
📚 Learning: 2025-10-21T05:09:26.916Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2465
File: frontend/app/onboarding/onboarding-upgrade.tsx:13-21
Timestamp: 2025-10-21T05:09:26.916Z
Learning: In the waveterm codebase, clientData is loaded and awaited in wave.ts before React runs, ensuring it is always available when components mount. This means atoms.client will have data on first render.

Applied to files:

  • frontend/app/aipanel/waveai-model.tsx
📚 Learning: 2025-11-01T00:57:23.025Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2504
File: frontend/app/aipanel/aipanel-contextmenu.ts:15-16
Timestamp: 2025-11-01T00:57:23.025Z
Learning: In the waveterm codebase, types defined in custom.d.ts are globally available and do not require explicit imports. Backend types defined in gotypes.d.ts are also globally available.

Applied to files:

  • frontend/app/aipanel/aitypes.ts
  • frontend/types/gotypes.d.ts
🧬 Code graph analysis (13)
pkg/wconfig/settingsconfig.go (1)
pkg/wcloud/wcloud.go (1)
  • APIVersion (34-34)
frontend/app/aipanel/aipanelmessages.tsx (2)
frontend/app/aipanel/waveai-model.tsx (1)
  • scrollToBottom (286-288)
frontend/app/aipanel/aimode.tsx (1)
  • AIModeDropdown (10-114)
frontend/app/aipanel/aipanel-contextmenu.ts (2)
frontend/app/store/wshclientapi.ts (1)
  • RpcApi (682-682)
frontend/app/store/wshrpcutil.ts (1)
  • TabRpcClient (37-37)
pkg/aiusechat/anthropic/anthropic-convertmessage.go (1)
pkg/aiusechat/uctypes/uctypes.go (1)
  • APIType_AnthropicMessages (19-19)
pkg/aiusechat/openaichat/openaichat-backend.go (4)
pkg/web/sse/ssehandler.go (9)
  • SSEHandlerCh (64-77)
  • AiMsgStart (39-39)
  • AiMsgStartStep (50-50)
  • AiMsgError (53-53)
  • AiMsgTextStart (40-40)
  • AiMsgTextDelta (41-41)
  • AiMsgTextEnd (42-42)
  • AiMsgFinishStep (51-51)
  • AiMsgFinish (52-52)
pkg/aiusechat/uctypes/uctypes.go (4)
  • WaveChatOpts (498-519)
  • WaveContinueResponse (250-253)
  • WaveStopReason (241-247)
  • WaveToolCall (234-239)
pkg/aiusechat/openaichat/openaichat-types.go (4)
  • StoredChatMessage (118-122)
  • ChatRequestMessage (23-29)
  • ToolCall (63-68)
  • StreamChunk (84-90)
pkg/aiusechat/chatstore/chatstore.go (1)
  • DefaultChatStore (18-20)
pkg/aiusechat/openai/openai-convertmessage.go (1)
pkg/aiusechat/uctypes/uctypes.go (1)
  • APIType_OpenAIResponses (20-20)
pkg/aiusechat/anthropic/anthropic-backend.go (1)
pkg/aiusechat/uctypes/uctypes.go (1)
  • APIType_AnthropicMessages (19-19)
frontend/app/aipanel/aipanel.tsx (1)
frontend/app/aipanel/aimode.tsx (1)
  • AIModeDropdown (10-114)
pkg/aiusechat/tools_screenshot.go (1)
pkg/aiusechat/uctypes/uctypes.go (1)
  • AICapabilityImages (161-161)
frontend/types/gotypes.d.ts (1)
pkg/wconfig/settingsconfig.go (1)
  • AIModeConfigType (260-276)
pkg/aiusechat/openaichat/openaichat-types.go (1)
pkg/aiusechat/uctypes/uctypes.go (3)
  • UIMessageDataToolUse (196-207)
  • AIUsage (291-297)
  • APIType_OpenAIChat (21-21)
pkg/aiusechat/openai/openai-backend.go (2)
pkg/aiusechat/uctypes/uctypes.go (2)
  • APIType_OpenAIResponses (20-20)
  • UIMessageDataToolUse (196-207)
pkg/aiusechat/aiutil/aiutil.go (1)
  • SendToolProgress (245-273)
pkg/aiusechat/usechat.go (5)
pkg/aiusechat/uctypes/uctypes.go (5)
  • APIType_OpenAIResponses (20-20)
  • AIOptsType (256-269)
  • DefaultAIEndpoint (13-13)
  • WaveChatOpts (498-519)
  • UIMessageDataToolUse (196-207)
pkg/waveobj/objrtinfo.go (1)
  • ObjRTInfo (6-27)
pkg/secretstore/secretstore.go (1)
  • GetSecret (251-263)
pkg/aiusechat/usechat-backend.go (1)
  • UseChatBackend (19-51)
pkg/aiusechat/aiutil/aiutil.go (1)
  • CreateToolUseData (198-241)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (javascript-typescript)
  • GitHub Check: Analyze (go)
🔇 Additional comments (64)
go.mod (1)

84-84: The goid dependency is legitimate and actively used in the codebase.

The library is imported in ./tsunami/engine/globalctx.go where it tracks goroutine IDs to isolate rendering, event, and effect contexts per goroutine. No security advisories exist for version v0.3.0. The dependency aligns with runtime concurrency management rather than the OpenAI API feature specifically, but is a valid addition to the codebase.

frontend/app/aipanel/aipanelmessages.tsx (1)

7-7: AIModeDropdown integration in messages view looks correct

Importing and rendering <AIModeDropdown /> in the messages overlay is consistent with the new AI mode model and matches the welcome view usage.

Also applies to: 65-66

frontend/app/aipanel/aipanel.tsx (1)

492-506: Welcome view AIModeDropdown usage is consistent with messages panel

Showing <AIModeDropdown /> in the empty/welcome state keeps AI mode selection available regardless of chat history and matches the messages overlay behavior.

pkg/aiusechat/anthropic/anthropic-backend.go (1)

53-63: Using APIType_AnthropicMessages for usage metadata is correct

Setting AIUsage.APIType to uctypes.APIType_AnthropicMessages aligns this backend with the centralized API type constants and Anthropic messages path.

pkg/aiusechat/tools_screenshot.go (1)

52-83: Image capability requirement for screenshot tool is appropriate

Adding RequiredCapabilities: []string{uctypes.AICapabilityImages} correctly ties capture_screenshot availability to the “images” capability, and the Description remains free of approval semantics, which matches the tools_* guidance.

pkg/aiusechat/openai/openai-convertmessage.go (1)

302-303: Switch to APIType_OpenAIResponses constant looks good

Using uctypes.APIType_OpenAIResponses both for the X-Wave-APIType header and in ConvertAIChatToUIChat’s APIType check keeps the OpenAI “responses” path consistent and avoids stringly-typed mismatches.

Also applies to: 521-543

pkg/telemetry/telemetrydata/telemetrydata.go (1)

128-151: Telemetry property rename to waveai:mode is coherent with AI mode refactor

Adding WaveAIMode with json:"waveai:mode,omitempty" alongside WaveAIThinkingLevel matches the new AI mode terminology and keeps thinking level as a separate dimension.

pkg/wconfig/settingsconfig.go (1)

287-289: LGTM!

The WaveAIModes field integrates cleanly into FullConfigType with the appropriate JSON tag "waveai" that aligns with the config file structure (waveai.json).

pkg/waveobj/objrtinfo.go (1)

24-26: Verify migration for existing persisted data.

The field rename from WaveAIThinkingMode to WaveAIMode (and JSON tag from waveai:thinkingmode to waveai:mode) is a breaking change for any existing serialized runtime info. Ensure that existing user data with the old key is either migrated or gracefully handled (e.g., defaults applied when the new key is missing).
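One common pattern for this is to keep the legacy JSON tag around as a read-only fallback during decoding. A self-contained sketch (rtInfo is a trimmed stand-in for the real runtime-info struct):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// rtInfo is a trimmed stand-in for the struct in pkg/waveobj/objrtinfo.go.
type rtInfo struct {
	WaveAIMode   string `json:"waveai:mode,omitempty"`
	LegacyAIMode string `json:"waveai:thinkingmode,omitempty"` // old key, read-only
}

// decodeRTInfo prefers the new key but falls back to the legacy one so
// previously persisted data keeps working; the legacy field is cleared
// so the old key is dropped on the next write.
func decodeRTInfo(data []byte) (rtInfo, error) {
	var info rtInfo
	if err := json.Unmarshal(data, &info); err != nil {
		return rtInfo{}, err
	}
	if info.WaveAIMode == "" && info.LegacyAIMode != "" {
		info.WaveAIMode = info.LegacyAIMode
		info.LegacyAIMode = ""
	}
	return info, nil
}

func main() {
	info, _ := decodeRTInfo([]byte(`{"waveai:thinkingmode":"waveai@quick"}`))
	fmt.Println(info.WaveAIMode) // falls back to the legacy key's value
}
```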

pkg/aiusechat/anthropic/anthropic-convertmessage.go (2)

174-174: LGTM!

Good refactor replacing the hardcoded string with uctypes.APIType_AnthropicMessages constant for consistency and maintainability.


798-799: LGTM!

The validation now correctly uses the constant and the improved error message format provides clear diagnostics showing both expected and actual values.

pkg/aiusechat/usechat-mode.go (1)

13-33: LGTM!

The resolveAIMode function implements clean fallback logic:

  1. Defaults empty mode to balanced
  2. Downgrades cloud-only modes to quick for non-premium users
  3. Properly propagates errors from config lookups

The control flow is straightforward and handles the premium gating requirement well.

frontend/app/aipanel/aipanel-contextmenu.ts (2)

44-44: LGTM!

The default mode logic correctly falls back to "waveai@balanced" for premium users and "waveai@quick" for non-premium, aligning with the backend resolveAIMode behavior.


166-169: LGTM!

The menu label and submenu variable naming correctly reflect the "AI Mode" terminology.

frontend/app/aipanel/aimode.tsx (1)

10-19: LGTM!

Good use of memo for the dropdown component. The state management approach using Jotai atoms for reactive updates is consistent with the codebase patterns.

pkg/aiusechat/openaichat/openaichat-backend.go (4)

119-127: Return early on context cancellation without the SSE error message in some cases.

When the context is cancelled, this sends an SSE error and returns both a WaveStopReason and an error. This is appropriate, but note that if the SSE handler is already closed or has issues, the error message won't reach the client. The current approach is reasonable given the error is also returned.


167-189: Tool call delta aggregation logic looks correct.

The logic properly handles incremental tool call deltas by:

  1. Expanding the slice as needed based on index
  2. Accumulating ID, Type, and Function fields incrementally
  3. Appending to Arguments string

This correctly implements the OpenAI streaming tool call format.


211-227: Silent continue on JSON parse failure may hide issues.

When tool call arguments fail to parse as JSON (line 216-218), the tool call is silently skipped with only a log message. This could lead to confusing behavior where tool calls appear to be lost. Consider whether this should be surfaced to the user or at least tracked in the stop reason.

Verify this is the intended behavior - should malformed tool call arguments result in a visible error to the user, or is silent logging sufficient?


248-254: SSE finish signal handling is correct.

The code properly:

  1. Ends text streaming if it was started
  2. Sends finish-step signal
  3. Only sends the final finish signal if not continuing with tool use

This ensures proper SSE lifecycle management for multi-step tool-use flows.

frontend/app/aipanel/aitypes.ts (1)

7-14: Comment path updates correctly reflect type relocation.

The comment references have been updated from usechat-types.go to uctypes.go, which aligns with the PR's restructuring of AI-related types into the uctypes package.

frontend/types/gotypes.d.ts (3)

16-33: AIModeConfigType correctly mirrors the Go struct.

The TypeScript type accurately reflects the Go AIModeConfigType struct from pkg/wconfig/settingsconfig.go:

  • Optional fields (omitempty in Go) correctly use ? in TypeScript
  • Required field waveai:premium (no omitempty) is correctly non-optional
  • Field names match JSON tags from Go struct

772-772: New waveai configuration mapping added to FullConfigType.

The waveai field correctly maps mode names to their configurations, enabling the frontend to access AI mode settings from the full configuration.


953-953: Renamed field from waveai:thinkingmode to waveai:mode.

This aligns with the PR's terminology change from "thinking mode" to "AI mode" throughout the codebase.

frontend/app/aipanel/waveai-model.tsx (4)

60-61: New atoms for AI mode state management.

The currentAIMode atom is initialized with a sensible default of "waveai@balanced", and aiModeConfigs is declared for derived configuration access.


86-89: Derived atom safely handles missing configuration.

The aiModeConfigs atom correctly provides an empty object fallback (?? {}) when the full config or waveai section is unavailable, preventing null reference errors.


346-352: setAIMode correctly updates both local state and backend.

The method properly:

  1. Updates the local Jotai atom
  2. Sends the mode change to the backend via SetRTInfoCommand

This ensures consistency between frontend state and backend storage.


368-369: Consistent default value for AI mode.

The default value "waveai@balanced" matches the initialization on line 60, ensuring consistent behavior when no mode is stored.

pkg/aiusechat/aiutil/aiutil.go (2)

227-238: Potential latency from synchronous BlockId resolution with 2-second timeout.

The ResolveBlockIdFromPrefix call with a 2-second timeout happens synchronously during tool use data creation. If this is called frequently or the resolution is slow, it could introduce noticeable latency. Consider whether this resolution could be deferred or made optional.

Is the 2-second timeout acceptable for the user experience? Consider whether this block ID resolution is critical for the initial tool use data, or if it could be populated asynchronously.


197-241: CreateToolUseData provides robust error handling for tool lookup and argument parsing.

The function correctly:

  1. Returns error status when tool is not found
  2. Returns error status with descriptive message on JSON parse failure
  3. Safely handles optional ToolCallDesc and ToolApproval callbacks
  4. Uses type assertions with ok checks for argument map access
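That error-handling shape can be sketched with simplified stand-in types (the real UIMessageDataToolUse and tool registry differ):

```go
package main

import (
	"encoding/json"
	"fmt"
)

// toolUseData is a simplified stand-in for the real UIMessageDataToolUse.
type toolUseData struct {
	Status string
	Error  string
	Desc   string
}

// createToolUseData mirrors the pattern described above: an unknown tool
// and malformed JSON both yield an "error" status instead of failing the
// whole request.
func createToolUseData(toolName, argsJSON string, known map[string]bool) toolUseData {
	if !known[toolName] {
		return toolUseData{Status: "error", Error: fmt.Sprintf("unknown tool %q", toolName)}
	}
	var args map[string]any
	if err := json.Unmarshal([]byte(argsJSON), &args); err != nil {
		return toolUseData{Status: "error", Error: "invalid tool arguments: " + err.Error()}
	}
	// Type assertion with ok check: a missing or non-string field is
	// tolerated rather than panicking.
	if fn, ok := args["filename"].(string); ok {
		return toolUseData{Status: "pending", Desc: "reading " + fn}
	}
	return toolUseData{Status: "pending"}
}

func main() {
	known := map[string]bool{"read_text_file": true}
	fmt.Println(createToolUseData("read_text_file", `{"filename":"/tmp/a"}`, known).Desc) // reading /tmp/a
	fmt.Println(createToolUseData("nope", `{}`, known).Status)                            // error
}
```
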
cmd/testai/main-testai.go (2)

27-29: Verify model names are correct for the target API.

DefaultOpenAIModel = "gpt-5.1" doesn't match current OpenAI model naming conventions (e.g., gpt-4o, gpt-4-turbo). If this is used against OpenAI's API directly, verify this model exists. If it's for Wave's proxy service, this is fine.


349-416: LGTM!

The CLI flag handling and main flow logic correctly routes to the appropriate test functions based on provider flags. The default model assignment based on provider type is well-structured.

pkg/aiusechat/openaichat/openaichat-convertmessage.go (8)

26-34: LGTM!

The backwards iteration to find and append to the last user message is correct. Silent no-op when no user message exists is appropriate for this use case.
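A minimal sketch of that pattern, using a stand-in message type rather than the real openaichat structs:

```go
package main

import "fmt"

type chatMessage struct {
	Role    string
	Content string
}

// appendToLastUserMessage walks the slice backwards and appends text to
// the most recent user message; it is a silent no-op if none exists.
func appendToLastUserMessage(msgs []chatMessage, text string) {
	for i := len(msgs) - 1; i >= 0; i-- {
		if msgs[i].Role == "user" {
			msgs[i].Content += "\n\n" + text
			return
		}
	}
}

func main() {
	msgs := []chatMessage{
		{Role: "user", Content: "first"},
		{Role: "assistant", Content: "reply"},
		{Role: "user", Content: "second"},
	}
	appendToLastUserMessage(msgs, "injected context")
	fmt.Println(msgs[2].Content) // "second\n\ninjected context"
}
```

Because the slice header is passed by value but elements share the backing array, the mutation is visible to the caller, which is what the injected-data flow relies on.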


36-59: LGTM!

The tool definition conversion correctly filters by capabilities and maps Wave tool definitions to the OpenAI function format.


61-146: LGTM!

The HTTP request builder correctly handles system prompts, injected data, token limits (with reasoning model distinction), and tool definitions. Header configuration is comprehensive.


148-199: LGTM!

The message conversion correctly handles different part types (text, text/plain files, directory listings), validates input, and aggregates content with proper separators.


201-229: LGTM!

Tool result conversion correctly creates OpenAI tool messages with proper role, ID mapping, and error handling.


231-295: LGTM!

The UI chat conversion correctly handles text parts, tool use data, and appropriately filters out tool result messages and empty messages.


297-317: LGTM!

The function correctly searches for and retrieves function call input by tool call ID.


319-346: LGTM!

The update function correctly creates a copy of the message before modification and persists the updated tool use data back to the store.

pkg/aiusechat/uctypes/uctypes.go (5)

18-22: LGTM!

The API type constants are well-defined and provide clear differentiation between Anthropic Messages, OpenAI Responses, and OpenAI Chat APIs.


125-135: LGTM!

The HasRequiredCapabilities method correctly implements the capability check with proper nil handling and uses the standard slices.Contains for lookups.


173-193: LGTM!

The AIModeConfig struct provides comprehensive configuration for AI modes with proper JSON serialization.


279-281: LGTM!

The HasCapability method on AIOptsType provides consistent capability checking.


610-628: LGTM!

Using the APIType_OpenAIResponses constant instead of a string literal improves maintainability. The model compatibility logic for the gpt-5 family remains intact.

pkg/aiusechat/openaichat/openaichat-types.go (4)

12-50: LGTM!

The request types correctly model the OpenAI Chat Completions API. The clean() method properly strips internal fields before API transmission.


52-82: LGTM!

Tool-related types correctly model OpenAI's function calling structure. The ToolUseData field is appropriately marked as internal with a clean() method to strip it.


84-115: LGTM!

The streaming types correctly model OpenAI's SSE response format. Using *string for FinishReason properly handles the null case.


139-149: LGTM!

GetUsage() correctly maps the chat usage to the common AIUsage type with the appropriate API type constant.

pkg/aiusechat/openai/openai-backend.go (5)

147-158: LGTM!

Using uctypes.APIType_OpenAIResponses constant instead of a string literal improves type safety and consistency.


408-437: LGTM!

The signature change from pointer to value type is correct. The function properly copies the parameter and stores a pointer to the copy in the updated message.
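The copy-then-point pattern can be demonstrated with stand-in types; the point is that later mutation of the caller's value cannot leak into the stored message:

```go
package main

import "fmt"

type toolUse struct{ Status string }

type message struct{ ToolUse *toolUse }

// updateToolUseData takes the data by value, copies it, and stores a
// pointer to the copy, so the stored message owns its own instance.
func updateToolUseData(msg *message, data toolUse) {
	dataCopy := data
	msg.ToolUse = &dataCopy
}

func main() {
	var msg message
	d := toolUse{Status: "approved"}
	updateToolUseData(&msg, d)
	d.Status = "denied" // caller mutates its own copy afterwards
	fmt.Println(msg.ToolUse.Status) // approved
}
```
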


397-404: LGTM!

The streamlined state structure removes redundant tool-use data tracking in favor of centralized progress handling via aiutil.


854-877: LGTM!

The aiutil.SendToolProgress integration correctly handles both streaming (partial JSON) and completion (full JSON) scenarios with the appropriate parse mode flag.


973-984: LGTM!

The simplified function call message creation correctly removes local tool-use data attachment in favor of centralized management.

pkg/aiusechat/usechat.go (7)

51-62: LGTM - System prompt selection logic is well-structured.

The regex-based model detection for needsStrictToolAddOn correctly identifies models that may require stricter tool instructions. The builder mode returning an empty prompt array is intentional per the summary.

One minor observation: the regexp.MatchString compiles the regex on every call. For a hot path, consider pre-compiling, but this is likely called infrequently enough that it's not a concern.
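The precompilation suggestion looks like this (the pattern below is illustrative, not the actual Wave regex):

```go
package main

import (
	"fmt"
	"regexp"
)

// Compiled once at package init instead of on every call; (?i) makes a
// separate strings.ToLower unnecessary.
var strictToolModelRe = regexp.MustCompile(`(?i)^(llama|qwen|mistral)`)

func needsStrictToolAddOn(model string) bool {
	return strictToolModelRe.MatchString(model)
}

func main() {
	fmt.Println(needsStrictToolAddOn("Qwen2.5-Coder")) // true
	fmt.Println(needsStrictToolAddOn("gpt-5.1"))       // false
}
```

regexp.MustCompile also panics at startup on a bad pattern, which surfaces typos immediately rather than on the first chat request.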


161-163: Verify the error message alignment with actual API behavior.

The check at line 161 uses APIType_OpenAIResponses with shouldUseChatCompletionsAPI to return an error. The error message says "Chat completions API not available" but this logic seems inverted - it's checking if the model requires chat completions while using the responses API type.

Ensure this error message accurately describes the failure condition: the user is trying to use older models (gpt-3.5, gpt-4, o1) with the OpenAI Responses API, which doesn't support them.


194-198: Signature change from pointer to value type is consistent with interface update.

The change from *uctypes.UIMessageDataToolUse to uctypes.UIMessageDataToolUse aligns with the interface change in usechat-backend.go. All call sites (lines 234, 263, 299, 326) correctly dereference the pointer when calling this function.


305-330: Tool call processing loop has potential for deferred cleanup issue.

At lines 306-308, the defer activeToolMap.Delete(toolCall.ID) is called inside a loop. In Go, defer executes when the function returns, not when the loop iteration ends. This means all deletions happen at function end, which is likely the intended behavior, but worth noting.

The tool data creation and SSE emission logic (lines 311-330) is well-structured. The comment at line 331 correctly documents the guarantee that all ToolCalls have non-nil ToolUseData after this loop.
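The deferred-cleanup behavior is easy to demonstrate: defers registered inside a loop all run at function return, in LIFO order (a sketch, not the Wave code):

```go
package main

import "fmt"

// processAll registers a deferred cleanup per id; none of the cleanups
// run until the function returns, and they run in reverse order.
func processAll(ids []string, log *[]string) {
	for _, id := range ids {
		defer func(id string) {
			*log = append(*log, "delete "+id)
		}(id)
		*log = append(*log, "run "+id)
	}
	*log = append(*log, "loop done")
}

func main() {
	var log []string
	processAll([]string{"t1", "t2"}, &log)
	fmt.Println(log) // [run t1 run t2 loop done delete t2 delete t1]
}
```

So in processToolCalls, every activeToolMap.Delete fires after the whole batch is processed, which is the safe interpretation when later tool calls may still consult entries for earlier ones.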


356-367: Metrics initialization correctly captures new AIMode field.

The AIMetrics struct initialization properly includes the new AIMode field from chatOpts.Config.AIMode. This aligns with the telemetry changes and replaces the previous ThinkingMode tracking.


613-615: Good defensive nil check for rtInfo.

Creating a default ObjRTInfo when nil prevents nil pointer dereferences in downstream code. This is a good safety improvement.


644-644: System prompt assignment consolidated to single location.

The use of getSystemPrompt here properly centralizes prompt selection logic and correctly passes builder mode detection via chatOpts.BuilderId != "".

pkg/aiusechat/usechat-backend.go (4)

32-32: Interface signature change from pointer to value is appropriate.

Changing toolUseData *uctypes.UIMessageDataToolUse to toolUseData uctypes.UIMessageDataToolUse is a reasonable choice since the data is being passed for storage/transmission rather than in-place modification. All implementations have been updated consistently.


54-56: Compile-time interface checks are a good practice.

Adding var _ UseChatBackend = (*openaiCompletionsBackend)(nil) ensures the new backend type properly implements the interface at compile time rather than discovering issues at runtime.
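The idiom, for reference (simplified interface and backend names):

```go
package main

import "fmt"

type UseChatBackend interface {
	Name() string
}

type completionsBackend struct{}

func (b *completionsBackend) Name() string { return "openai-chat" }

// Compile-time assertion: if *completionsBackend ever stops satisfying
// UseChatBackend, this declaration fails to compile. The nil pointer is
// never used at runtime, so there is zero cost.
var _ UseChatBackend = (*completionsBackend)(nil)

func main() {
	var b UseChatBackend = &completionsBackend{}
	fmt.Println(b.Name()) // openai-chat
}
```
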


59-70: API type dispatch correctly routes to new backend.

The switch statement properly handles:

  • APIType_OpenAIResponses → openaiResponsesBackend
  • APIType_OpenAIChat → openaiCompletionsBackend (new)
  • APIType_AnthropicMessages → anthropicBackend

The default case returns a descriptive error for unsupported types.
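A sketch of that dispatch shape (stand-in constants and string results instead of the real backend constructors):

```go
package main

import (
	"errors"
	"fmt"
)

const (
	apiOpenAIResponses = "openai-responses"
	apiOpenAIChat      = "openai-chat"
	apiAnthropic       = "anthropic-messages"
)

// backendForAPIType routes an API type to a backend, returning a
// descriptive error for anything unrecognized.
func backendForAPIType(apiType string) (string, error) {
	switch apiType {
	case apiOpenAIResponses:
		return "openaiResponsesBackend", nil
	case apiOpenAIChat:
		return "openaiCompletionsBackend", nil
	case apiAnthropic:
		return "anthropicBackend", nil
	default:
		return "", errors.New("unsupported api type: " + apiType)
	}
}

func main() {
	b, _ := backendForAPIType(apiOpenAIChat)
	fmt.Println(b) // openaiCompletionsBackend
	_, err := backendForAPIType("mystery")
	fmt.Println(err != nil) // true
}
```

Using named constants in both the switch and the config keeps the dispatch exhaustive and greppable.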


126-161: Verified against the codebase: the openaiCompletionsBackend implementation correctly delegates to openaichat package functions without issues.

The implementation follows the same pattern as openaiResponsesBackend. All function signatures are correct:

  1. ConvertToolResultsToNativeChatMessage (lines 147-149): openaichat.ConvertToolResultsToNativeChatMessage already returns []uctypes.GenAIMessage directly, so no additional conversion is needed.

  2. GetFunctionCallInputByToolCallId (lines 155-157): openaichat.GetFunctionCallInputByToolCallId returns *uctypes.AIFunctionCallInput directly, matching the interface requirement.

  3. ConvertAIMessageToNativeChatMessage (lines 151-152): Returns *StoredChatMessage from openaichat.ConvertAIMessageToStoredChatMessage, which correctly implements the uctypes.GenAIMessage interface with GetMessageId(), GetRole(), and GetUsage() methods.

Comment on lines +40 to +49
let currentMode = aiMode || "waveai@balanced";
const currentConfig = aiModeConfigs[currentMode];
if (currentConfig) {
    if (!hasPremium && currentConfig["waveai:premium"]) {
        currentMode = "waveai@quick";
    }
    if (hideQuick && currentMode === "waveai@quick") {
        currentMode = "waveai@balanced";
    }
}

⚠️ Potential issue | 🟡 Minor

Premium fallback logic may cause UI inconsistency.

The currentMode normalization (lines 40-49) silently overrides the user's selected mode based on premium status without updating the persisted state. This could cause confusion where the UI shows one mode but the atom holds a different value.

Consider calling model.setAIMode(currentMode) when a fallback is applied, or ensure the backend handles this normalization so the frontend stays in sync.

🤖 Prompt for AI Agents
In frontend/app/aipanel/aimode.tsx around lines 40 to 49, currentMode is being
silently overridden when the user lacks premium or hideQuick constraints are
applied, causing the UI to display a mode different from the persisted atom;
update the persisted state when a fallback occurs by calling
model.setAIMode(currentMode) (or the appropriate setter) immediately after you
change currentMode so the atom/backend stays in sync, and ensure this call is
only made when the normalized mode differs from the incoming aiMode to avoid
unnecessary writes.

Comment on lines +242 to +246
if len(validToolCalls) > 0 {
assistantMsg.Message.ToolCalls = validToolCalls
} else {
assistantMsg.Message.Content = textBuilder.String()
}

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Web search query: can OpenAI API assistant messages include both content and tool_calls simultaneously in the same response?


<web_search_result>
Short answer: Yes — an assistant message can include content and tool_calls at the same time, but if the assistant issues tool_calls you must follow with corresponding tool-role messages (tool results) for every tool_call_id before continuing the run.

Key points and docs:

  • Example assistant message containing tool_calls (and content) in the Assistants/chat examples. [1]
  • The tool-calling flow and requirement to return tool results after a model issues a tool call. [2]
  • API message object examples and the rule that tool-call assistant messages must be followed by tool messages responding to each tool_call_id (otherwise the API returns an error). [3]

Sources:
[1] OpenAI Assistants examples showing assistant messages with tool_calls.
[2] OpenAI Function calling / Tools guide describing the tool-calling flow.
[3] OpenAI Chat/Responses API reference and example error guidance requiring tool messages for each tool_call_id.
</web_search_result>


Assistant messages should allow both content and tool_calls simultaneously.

OpenAI's API supports assistant messages with both content and tool_calls in the same response. The current if/else logic (lines 242-246) discards the text content whenever tool calls are present. If the model generates both explanatory text and tool calls, that content will be lost. Consider allowing both fields to coexist when both are present in the response.

🤖 Prompt for AI Agents
In pkg/aiusechat/openaichat/openaichat-backend.go around lines 242 to 246, the
current if/else discards textual content when tool calls exist; change the logic
to always set assistantMsg.Message.Content = textBuilder.String() and, in
addition, if len(validToolCalls) > 0 set assistantMsg.Message.ToolCalls =
validToolCalls (remove the else branch) so both content and tool_calls can
coexist; ensure ToolCalls is only assigned when non-empty to avoid adding empty
slices.
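The suggested restructuring can be sketched with simplified types (not the real openaichat structs):

```go
package main

import "fmt"

type toolCall struct{ ID string }

type assistantMessage struct {
	Content   string
	ToolCalls []toolCall
}

// buildAssistantMessage keeps any streamed text AND attaches tool calls
// when present, instead of choosing one or the other.
func buildAssistantMessage(text string, calls []toolCall) assistantMessage {
	msg := assistantMessage{Content: text}
	if len(calls) > 0 { // only assign when non-empty to avoid empty slices
		msg.ToolCalls = calls
	}
	return msg
}

func main() {
	m := buildAssistantMessage("Let me check that file.", []toolCall{{ID: "call_1"}})
	fmt.Println(m.Content, len(m.ToolCalls)) // Let me check that file. 1
}
```
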

Comment on lines +28 to +39
"waveai@deep": {
"display:name": "Deep",
"display:order": -1,
"display:icon": "lightbulb",
"display:shortdesc": "gpt-5.1, full thinking",
"display:description": "Slower but most capable\n(gpt-5.1 with full reasoning)",
"ai:apitype": "openai-responses",
"ai:model": "gpt-5.1",
"ai:thinkinglevel": "medium",
"ai:capabilities": ["tools", "images", "pdfs"],
"waveai:cloud": true,
"waveai:premium": true

⚠️ Potential issue | 🟡 Minor

Align Deep mode description with ai:thinkinglevel

"waveai@deep" advertises "full thinking" / "full reasoning" while ai:thinkinglevel is "medium". This mismatch can be confusing and under-deliver on the label.

Either:

  • Bump ai:thinkinglevel to "high" (if supported and acceptable cost-wise), or
  • Soften the display:shortdesc / display:description copy to match "medium" reasoning.
🤖 Prompt for AI Agents
In pkg/wconfig/defaultconfig/waveai.json around lines 28 to 39 there is a
mismatch: the "waveai@deep" display copy claims "full thinking" / "full
reasoning" while ai:thinkinglevel is set to "medium"; either change
ai:thinkinglevel to "high" if the model/config truly supports higher reasoning
(and you've validated cost/availability), or update display:shortdesc and
display:description to tone down claims to match "medium" (e.g., "enhanced
reasoning" or "improved thinking" and remove "full" wording) so the label and
metadata align.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 1

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
pkg/aiusechat/tools_writefile.go (1)

18-54: “Maximum file size: 100KB” is not enforced for new/overwritten writes

Right now:

  • MaxEditFileSize is enforced in validateTextFile based on the existing file’s size (fileInfo.Size()).
  • writeTextFileCallback checks HasBinaryData(contentsBytes) but does not enforce a size limit on contentsBytes.
  • For new files (or existing files that are being overwritten with larger content), the 100KB limit mentioned in the write_text_file description is therefore not actually enforced.

If the intent is to enforce that limit on the resulting file size as seen by the user, consider adding a length check on contentsBytes:

 	contentsBytes := []byte(params.Contents)
 	if utilfn.HasBinaryData(contentsBytes) {
 		return nil, fmt.Errorf("contents appear to contain binary data")
 	}
+
+	if len(contentsBytes) > MaxEditFileSize {
+		return nil, fmt.Errorf("file is too large (%d bytes, max %d bytes)", len(contentsBytes), MaxEditFileSize)
+	}

Alternatively, if larger writes are acceptable, you may want to relax or clarify the “Maximum file size: 100KB.” text in the description to avoid surprising users.

Also applies to: 148-180, 195-196

pkg/aiusechat/usechat.go (1)

194-198: ToolUseData lifecycle looks solid; the pointer pattern in processToolCalls is safe as written.

The overall flow is nice:

  • Centralizing UpdateToolUseDataInChat through updateToolUseDataInChat.
  • Creating ToolUseData up‑front with aiutil.CreateToolUseData, pushing initial data-tooluse SSE packets, and registering approvals before executing tools.
  • Re‑posting updated ToolUseData both via SSE and chat backend after input validation and after approval/denial.

One pattern in processToolCalls is worth a second look:

for i := range stopReason.ToolCalls {
    toolCall := &stopReason.ToolCalls[i]
    ...
    toolUseData := aiutil.CreateToolUseData(toolCall.ID, toolCall.Name, argsJSON, chatOpts)
    stopReason.ToolCalls[i].ToolUseData = &toolUseData
    ...
}

At first glance this resembles the classic gotcha of taking the address of a loop variable. Here, though, toolUseData is declared with := inside the loop body, so each iteration introduces a fresh variable; taking its address forces a separate heap allocation per iteration, and the stored pointers do not alias. The aliasing bug would only arise if the pointer captured the range variable itself (a hazard in Go versions before 1.22) or if toolUseData were declared once outside the loop. No change is required, though a short comment at the assignment site would make the per-iteration allocation intent explicit for future readers.

Also applies to: 233-235, 263-264, 297-300, 311-327
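As a sanity check on the aliasing question: a variable declared with := inside a loop body is a fresh variable on each iteration, so each address-of yields a distinct allocation. A sketch with a stand-in type:

```go
package main

import "fmt"

type toolUseData struct{ Status string }

// collectPtrs takes the address of a body-scoped variable on every
// iteration; each pointer refers to a distinct heap allocation.
func collectPtrs(n int) []*toolUseData {
	ptrs := make([]*toolUseData, 0, n)
	for i := 0; i < n; i++ {
		d := toolUseData{Status: fmt.Sprintf("call-%d", i)}
		ptrs = append(ptrs, &d)
	}
	return ptrs
}

func main() {
	ptrs := collectPtrs(3)
	ptrs[0].Status = "mutated"
	fmt.Println(ptrs[1].Status, ptrs[2].Status) // call-1 call-2
}
```

Mutating one entry leaves the others untouched, confirming the per-iteration allocation.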

♻️ Duplicate comments (1)
frontend/app/aipanel/aimode.tsx (1)

40-49: Normalization should update persisted state to maintain consistency.

As noted in the previous review, the currentMode normalization adjusts the displayed mode based on premium status and constraints, but does not update the persisted aiMode atom. This creates an inconsistency where the UI shows one mode while the atom retains a different value.

Every render will re-normalize the same stale atom value, and any other code reading the atom will see a mode that doesn't match what's displayed in the dropdown.

Consider updating the atom when normalization changes the mode:

 let currentMode = aiMode || "waveai@balanced";
+const originalMode = currentMode;
 const currentConfig = aiModeConfigs[currentMode];
 if (currentConfig) {
     if (!hasPremium && currentConfig["waveai:premium"]) {
         currentMode = "waveai@quick";
     }
     if (hideQuick && currentMode === "waveai@quick") {
         currentMode = "waveai@balanced";
     }
 }
+
+// Sync atom if normalization changed the mode
+if (currentMode !== originalMode) {
+    model.setAIMode(currentMode);
+}

Alternatively, move this normalization logic to the backend so the frontend always receives a valid mode.

🧹 Nitpick comments (7)
pkg/aiusechat/openaichat/openaichat-types.go (1)

12-171: Consider adding godoc comments to exported types.

While the code is clear and well-structured, adding godoc comments to exported types and methods would improve discoverability and IDE support. This is optional and can be deferred.

Example for the main request type:

// ChatRequest represents an OpenAI Chat Completions API request.
// It supports both streaming and non-streaming modes.
type ChatRequest struct {
pkg/aiusechat/tools_readfile.go (1)

244-246: Consider factoring out shared “expand + require absolute” helper

readTextFileCallback repeats the same ExpandHomeDir + absolute-path check logic as verifyReadTextFileInput (and similar logic exists in other tools). That’s fine functionally, but you could centralize it to reduce duplication and keep future tweaks in one place.

For example:

+func expandAndRequireAbs(raw string) (string, error) {
+	expanded, err := wavebase.ExpandHomeDir(raw)
+	if err != nil {
+		return "", fmt.Errorf("failed to expand path: %w", err)
+	}
+	if !filepath.IsAbs(expanded) {
+		return "", fmt.Errorf("path must be absolute, got relative path: %s", raw)
+	}
+	return expanded, nil
+}

Then:

-	expandedPath, err := wavebase.ExpandHomeDir(params.Filename)
-	if err != nil {
-		return nil, fmt.Errorf("failed to expand path: %w", err)
-	}
-
-	if !filepath.IsAbs(expandedPath) {
-		return nil, fmt.Errorf("path must be absolute, got relative path: %s", params.Filename)
-	}
+	expandedPath, err := expandAndRequireAbs(params.Filename)
+	if err != nil {
+		return nil, err
+	}

Same helper could be reused across other file tools.

pkg/aiusechat/tools_readdir.go (1)

89-98: Use expandedPath in fileutil.ReadDir for consistency with validation

You expand and validate expandedPath but then call fileutil.ReadDir with the original params.Path. That’s correct with the current checks (relatives are rejected before this point), but using expandedPath would avoid double ~ handling and guarantee you’re listing exactly what you validated.

For example:

-	expandedPath, err := wavebase.ExpandHomeDir(params.Path)
+	expandedPath, err := wavebase.ExpandHomeDir(params.Path)
 	if err != nil {
 		return nil, fmt.Errorf("failed to expand path: %w", err)
 	}

 	if !filepath.IsAbs(expandedPath) {
 		return nil, fmt.Errorf("path must be absolute, got relative path: %s", params.Path)
 	}

-	result, err := fileutil.ReadDir(params.Path, *params.MaxEntries)
+	result, err := fileutil.ReadDir(expandedPath, *params.MaxEntries)

If you still want to preserve the original string for display, you can keep params.Path solely for messages/UI.

pkg/aiusechat/tools_writefile.go (1)

144-147: Shared “expand + require absolute” logic could be centralized

writeTextFileCallback, verifyEditTextFileInput, EditTextFileDryRun, editTextFileCallback, verifyDeleteTextFileInput, and deleteTextFileCallback all duplicate the same ExpandHomeDir + filepath.IsAbs + error-message pattern. That’s fine functionally and gives good defense-in-depth, but it’s a good candidate for a small helper to keep things DRY and future-proof.

For example:

+func expandAndRequireAbs(raw string) (string, error) {
+	expanded, err := wavebase.ExpandHomeDir(raw)
+	if err != nil {
+		return "", fmt.Errorf("failed to expand path: %w", err)
+	}
+	if !filepath.IsAbs(expanded) {
+		return "", fmt.Errorf("path must be absolute, got relative path: %s", raw)
+	}
+	return expanded, nil
+}

Then each site can become:

-	expandedPath, err := wavebase.ExpandHomeDir(params.Filename)
-	if err != nil {
-		return nil, fmt.Errorf("failed to expand path: %w", err)
-	}
-
-	if !filepath.IsAbs(expandedPath) {
-		return nil, fmt.Errorf("path must be absolute, got relative path: %s", params.Filename)
-	}
+	expandedPath, err := expandAndRequireAbs(params.Filename)
+	if err != nil {
+		return nil, err
+	}

Same applies to delete/edit dry-run, etc.

Also applies to: 258-261, 284-287, 322-325, 445-448, 469-472

pkg/aiusechat/usechat.go (3)

51-62: System prompt derivation looks good; consider precompiling regex.

The new getSystemPrompt behavior (empty in builder mode, base prompt plus strict add‑on for specific open‑source models) matches the intended separation of builder vs non‑builder flows and tool strictness.

Minor nit: regexp.MatchString recompiles the pattern on every call, and you’re already lower‑casing the model while also using (?i) in the pattern. You could micro‑optimize/readability by:

  • Precompiling a package‑level var strictToolModelRe = regexp.MustCompile(...), and
  • Dropping either strings.ToLower or the (?i) flag.

Not urgent, just a small cleanup opportunity.


38-42: WaveAI settings flow and default API type: semantics look right; consider plumbing remaining config fields.

The new getWaveAISettings flow and DefaultAPI = APIType_OpenAIResponses together make sense for routing everything through responses by default, with:

  • Max tokens derived from builder vs normal vs WaveAIMaxOutputTokens.
  • AI mode resolution via resolveAIMode(rtInfo.WaveAIMode, premium).
  • Token lookup via APIToken or APITokenSecretName (with good error messages if the secret is missing/empty).
  • Cloud vs non‑cloud base URL choice, with WAVETERM_WAVEAI_ENDPOINT override for cloud.

Two follow‑ups to consider:

  1. Non‑cloud modes require a BaseURL now.
    For !config.WaveAICloud you now hard‑fail when BaseURL is empty. That’s likely correct for local/proxy providers, but it’s stricter than “empty == default” semantics. Worth double‑checking all existing AI modes set BaseURL (or WaveAICloud) so you don’t break older configs.

  2. Plumbing through additional AI config fields.
    AIOptsType supports fields like OrgID, APIVersion, ProxyURL, and TimeoutMs, but this function currently only sets APIType, Model, APIToken, MaxTokens, ThinkingLevel, AIMode, BaseURL, and Capabilities. If any AI mode configs rely on those extra fields, you may want to copy them across here to keep parity with previous behavior.

If you know they’re unused today, this can stay as‑is; otherwise, populating them now would avoid subtle config regressions.

Also applies to: 64-113, 88-99


160-167: Guard against using OpenAI Responses with chat‑only models is correct; error message could be a bit more explicit.

The check

if chatOpts.Config.APIType == uctypes.APIType_OpenAIResponses && shouldUseChatCompletionsAPI(chatOpts.Config.Model) {
    return nil, nil, fmt.Errorf("Chat completions API not available (must use newer OpenAI models)")
}

is a good early‑fail for pairing APIType_OpenAIResponses with older gpt‑3.5* / gpt‑4 / o1‑* models.

You might improve UX by including the actual model in the error (and ideally a hint that they should switch the AI mode/model to one that supports the Responses API), e.g.:

return nil, nil, fmt.Errorf("model %q requires the Chat Completions API; please select a newer OpenAI model that supports the Responses API", chatOpts.Config.Model)

Not required, but will make the failure mode clearer to users.

📜 Review details

Configuration used: CodeRabbit UI

Review profile: CHILL

Plan: Pro

📥 Commits

Reviewing files that changed from the base of the PR and between 872eee5 and 83feb92.

📒 Files selected for processing (8)
  • frontend/app/aipanel/aimode.tsx (1 hunks)
  • pkg/aiusechat/openaichat/openaichat-types.go (1 hunks)
  • pkg/aiusechat/tools_readdir.go (4 hunks)
  • pkg/aiusechat/tools_readfile.go (3 hunks)
  • pkg/aiusechat/tools_writefile.go (10 hunks)
  • pkg/aiusechat/usechat-prompts.go (1 hunks)
  • pkg/aiusechat/usechat.go (15 hunks)
  • pkg/wconfig/settingsconfig.go (2 hunks)
🚧 Files skipped from review as they are similar to previous changes (2)
  • pkg/wconfig/settingsconfig.go
  • pkg/aiusechat/usechat-prompts.go
🧰 Additional context used
🧠 Learnings (2)
📚 Learning: 2025-10-15T03:21:02.229Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2433
File: pkg/aiusechat/tools_readfile.go:197-197
Timestamp: 2025-10-15T03:21:02.229Z
Learning: In Wave Terminal's AI tool definitions (pkg/aiusechat/tools_*.go), the Description field should not mention approval requirements even when ToolApproval returns ApprovalNeedsApproval. This prevents the LLM from asking users for approval before calling the tool, avoiding redundant double-approval prompts since the runtime will enforce approval anyway.

Applied to files:

  • pkg/aiusechat/tools_readfile.go
  • frontend/app/aipanel/aimode.tsx
  • pkg/aiusechat/tools_writefile.go
  • pkg/aiusechat/openaichat/openaichat-types.go
  • pkg/aiusechat/tools_readdir.go
  • pkg/aiusechat/usechat.go
📚 Learning: 2025-10-14T06:30:54.763Z
Learnt from: sawka
Repo: wavetermdev/waveterm PR: 2430
File: frontend/app/aipanel/aimessage.tsx:137-144
Timestamp: 2025-10-14T06:30:54.763Z
Learning: In `frontend/app/aipanel/aimessage.tsx`, the `AIToolUseGroup` component splits file operation tool calls into separate batches (`fileOpsNeedApproval` and `fileOpsNoApproval`) based on their approval state before passing them to `AIToolUseBatch`. This ensures each batch has homogeneous approval states, making group-level approval handling valid.

Applied to files:

  • frontend/app/aipanel/aimode.tsx
🧬 Code graph analysis (3)
frontend/app/aipanel/aimode.tsx (3)
frontend/app/aipanel/waveai-model.tsx (1)
  • WaveAIModel (43-588)
frontend/app/store/global.ts (1)
  • atoms (816-816)
frontend/util/util.ts (1)
  • makeIconClass (505-505)
pkg/aiusechat/openaichat/openaichat-types.go (1)
pkg/aiusechat/uctypes/uctypes.go (3)
  • UIMessageDataToolUse (196-207)
  • AIUsage (291-297)
  • APIType_OpenAIChat (21-21)
pkg/aiusechat/usechat.go (4)
pkg/aiusechat/uctypes/uctypes.go (6)
  • APIType_OpenAIResponses (20-20)
  • AIOptsType (256-269)
  • DefaultAIEndpoint (13-13)
  • WaveChatOpts (498-519)
  • UIMessageDataToolUse (196-207)
  • DefaultOpenAIModel (15-15)
pkg/secretstore/secretstore.go (1)
  • GetSecret (251-263)
pkg/aiusechat/usechat-backend.go (1)
  • UseChatBackend (19-51)
pkg/aiusechat/aiutil/aiutil.go (1)
  • CreateToolUseData (198-241)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (3)
  • GitHub Check: Build for TestDriver.ai
  • GitHub Check: Analyze (go)
  • GitHub Check: Analyze (javascript-typescript)
🔇 Additional comments (23)
pkg/aiusechat/openaichat/openaichat-types.go (9)

1-8: LGTM - Clean package structure.

The package declaration and imports are minimal and appropriate.


12-21: Well-structured request type.

The use of any for ToolChoice appropriately handles the polymorphic nature of this field ("auto", "none", or structured object). The dual token limit fields accommodate both legacy and modern API versions.


23-50: Solid message type with defensive cleaning.

The clean() method correctly creates a new instance and recursively cleans nested ToolCall objects, ensuring internal metadata doesn't leak to API calls. The FindToolCallIndex() helper follows standard Go conventions (returning -1 for not found).
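The -1 convention looks like this in miniature (stand-in types, not the real ChatMessage):

```go
package main

import "fmt"

type toolCall struct{ ID string }

type chatMessage struct{ ToolCalls []toolCall }

// findToolCallIndex returns the index of the tool call with the given
// ID, or -1 when it is not present (the standard Go convention).
func (m *chatMessage) findToolCallIndex(id string) int {
	for i, tc := range m.ToolCalls {
		if tc.ID == id {
			return i
		}
	}
	return -1
}

func main() {
	msg := &chatMessage{ToolCalls: []toolCall{{ID: "call_a"}, {ID: "call_b"}}}
	fmt.Println(msg.findToolCallIndex("call_b")) // 1
	fmt.Println(msg.findToolCallIndex("call_x")) // -1
}
```
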


52-61: LGTM - Tool definitions match OpenAI spec.

The flexible Parameters field using map[string]any appropriately handles JSON schema variations.


63-77: Good defensive pattern for internal metadata.

The clean() method correctly strips the internal ToolUseData field before API transmission. The inline comment on line 67 clearly documents the requirement, which is important since Go cannot enforce this at compile-time.


79-115: LGTM - Streaming types properly structured.

The streaming response types correctly model OpenAI's incremental delta protocol. The nullable FinishReason pointer and indexed deltas allow proper streaming assembly.


117-129: LGTM - Storage types are well-defined.

The StoredChatMessage and ChatUsage types appropriately wrap API messages with persistence metadata and usage tracking.


131-149: LGTM - Accessor methods with proper nil handling.

The getter methods are straightforward. GetUsage() correctly converts to the internal uctypes.AIUsage format with appropriate nil checking and hardcodes APIType_OpenAIChat as expected for this OpenAI-specific implementation.


151-171: Excellent deep copy implementation.

The Copy() method correctly handles all reference types:

  • Defensively returns nil for nil input
  • Deep-copies the ToolCalls array and nested ToolUseData pointers
  • Deep-copies the Usage pointer
  • String fields are immutable in Go, so shallow copy is appropriate
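That checklist can be sketched with simplified stand-in types (Usage reduced to *int here; the real StoredChatMessage carries more fields):

```go
package main

import "fmt"

type ToolUseData struct{ Status string }

type ToolCall struct {
	ID          string
	ToolUseData *ToolUseData
}

type StoredChatMessage struct {
	ToolCalls []ToolCall
	Usage     *int
}

// Copy returns a deep copy: a fresh ToolCalls slice, fresh ToolUseData
// and Usage pointers. String fields are copied by value, which is safe
// because strings are immutable in Go.
func (m *StoredChatMessage) Copy() *StoredChatMessage {
	if m == nil {
		return nil // defensive nil handling, as in the reviewed code
	}
	out := &StoredChatMessage{}
	if m.ToolCalls != nil {
		out.ToolCalls = make([]ToolCall, len(m.ToolCalls))
		for i, tc := range m.ToolCalls {
			out.ToolCalls[i] = tc
			if tc.ToolUseData != nil {
				d := *tc.ToolUseData
				out.ToolCalls[i].ToolUseData = &d
			}
		}
	}
	if m.Usage != nil {
		u := *m.Usage
		out.Usage = &u
	}
	return out
}

func main() {
	u := 5
	msg := &StoredChatMessage{
		ToolCalls: []ToolCall{{ID: "a", ToolUseData: &ToolUseData{Status: "ok"}}},
		Usage:     &u,
	}
	cp := msg.Copy()
	cp.ToolCalls[0].ToolUseData.Status = "changed"
	fmt.Println(msg.ToolCalls[0].ToolUseData.Status) // ok -- original untouched
}
```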
frontend/app/aipanel/aimode.tsx (6)

1-9: LGTM: Clean imports and setup.

The imports are well-organized and all appear necessary for the component's functionality.


21-28: LGTM: Config sorting and filtering logic is sound.

The sorting by display:order (primary) and display:name (secondary) is correct, and the filtering of the quick mode when appropriate is cleanly implemented.


64-64: Past tooltip terminology issue has been resolved.

The tooltip now correctly uses "AI Mode:" terminology, consistent with the component label. The previous review comment about inconsistent "Thinking:" terminology no longer applies.


56-71: LGTM: Button rendering and interactions are well-implemented.

The toggle button has good UX with responsive label display, appropriate hover states, and correct icon rendering.


73-112: LGTM: Dropdown menu implementation is robust and accessible.

The menu rendering handles all states correctly:

  • Click-outside-to-close using standard overlay pattern
  • Proper disabled states for premium modes without access
  • Clear visual feedback for the selected mode (checkmark and bold text)
  • Semantic HTML with appropriate button elements and disabled attributes
  • Multiline descriptions supported with whiteSpace: "pre-line"

19-19: The model.inBuilder property is properly defined and will not cause runtime errors.

The verification confirms that inBuilder is a public boolean property on the WaveAIModel class (line 54 of waveai-model.tsx), initialized to false by default and set during instance construction based on the window type (lines 77–79, 129–137). The code at line 19 in aimode.tsx is safe and correct.

pkg/aiusechat/tools_readfile.go (2)

211-213: Absolute-path validation in verifyReadTextFileInput is sound

Expanding ~ first and then enforcing filepath.IsAbs cleanly enforces the “absolute or ~-based only” contract while keeping the error message in terms of the original params.Filename. This looks correct and consistent with the other tools in this PR.
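The expand-then-check contract can be sketched as below; expandHomeDir here is a hypothetical stand-in for the project's helper, and the exact error wording is illustrative:

```go
package main

import (
	"fmt"
	"os"
	"path/filepath"
	"strings"
)

// expandHomeDir rewrites a leading "~" to the current user's home
// directory; a stand-in for the helper referenced in the review.
func expandHomeDir(p string) string {
	if p == "~" || strings.HasPrefix(p, "~/") {
		if home, err := os.UserHomeDir(); err == nil {
			return filepath.Join(home, strings.TrimPrefix(p, "~"))
		}
	}
	return p
}

// verifyPath enforces the "absolute or ~-based only" contract, reporting
// the error in terms of the original user-supplied path.
func verifyPath(orig string) error {
	expanded := expandHomeDir(orig)
	if !filepath.IsAbs(expanded) {
		return fmt.Errorf("path %q must be absolute (or start with ~); relative paths are not supported", orig)
	}
	return nil
}

func main() {
	fmt.Println(verifyPath("~/notes.txt"))   // <nil> after ~ expansion
	fmt.Println(verifyPath("relative/file")) // non-nil error
}
```

Expanding before checking is the important ordering: `~/x` is not absolute as written, but its expansion is, so checking IsAbs on the raw input would wrongly reject it.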


339-340: Filename schema description correctly documents absolute-path requirement

The updated filename description now clearly states that the path must be absolute, supports ~, and that relative paths are not supported, matching the new runtime checks and avoiding any mention of approval logic. Based on learnings, this keeps Description focused on behavior without duplicating approval semantics.

pkg/aiusechat/tools_readdir.go (2)

67-69: Absolute-path requirement in verifyReadDirInput is consistent and safe

Validating that the expanded path is absolute before doing os.Stat aligns directory reads with the file tools’ “no relative paths” policy and should prevent accidental traversal into unintended working directories.


135-136: path schema description matches runtime behavior and omits approval details

The new description correctly documents the absolute-path requirement, mentions ~ support, and explicitly says relative paths are not supported, which matches the new checks and keeps approval concerns out of the Description. Based on learnings, this is the desired pattern for tool descriptions.

pkg/aiusechat/tools_writefile.go (2)

115-118: Absolute-path enforcement in verifyWriteTextFileInput matches other tools

Requiring an absolute path after ExpandHomeDir and reporting the original params.Filename in the error keeps behavior clear to users and consistent with read/edit/delete tools.


195-196: Filename schema descriptions align with behavior and avoid approval wording

All three tools now clearly describe filename as an absolute path, note ~ support, and state that relative paths are not supported, matching the new validations. None of the Description fields mention approval or consent flows, which is consistent with the guidance to keep approval concerns in ToolApproval only. Based on learnings, this is exactly the intended pattern.

Also applies to: 363-364, 507-508

pkg/aiusechat/usechat.go (2)

355-367: Metrics/usage aggregation and AIMode tagging look correct.

The changes to RunAIChat and telemetry wiring look good:

  • Initializing AIMetrics with ChatId, StepNum, and Usage seeded from chatOpts.Config.
  • Incrementing RequestCount, PremiumReqCount, and ProxyReqCount per step.
  • Aggregating usage across all returned messages with getUsage, including NativeWebSearchCount, and marking Model as "mixed" when multiple models show up.
  • Recording AIMode in both AIMetrics and telemetry (WaveAIMode), which will be useful for observing mode adoption and behavior.
  • Handling premium rate‑limit fallback only when using APIType_OpenAIResponses and the configured premium model, and continuing with DefaultOpenAIModel.

I don’t see correctness issues here; this should give you a much better picture of end‑to‑end usage while staying backward‑compatible.

Also applies to: 387-405, 415-423, 565-566


613-615: RTInfo fallback and unified system prompt: behavior change seems intentional, but worth confirming.

Two notable behavior changes in WaveAIPostMessageHandler:

  1. RTInfo fallback:
var rtInfo *waveobj.ObjRTInfo
...
if rtInfo == nil {
    rtInfo = &waveobj.ObjRTInfo{}
}
...
aiOpts, err := getWaveAISettings(premium, builderMode, *rtInfo)

Previously, a missing RTInfo (e.g., no Tab/Builder) would likely cause a failure earlier in the flow; now you fall back to a zero‑value ObjRTInfo and let resolveAIMode pick the default mode. That’s probably what you want for “plain” chats without a backing tab/builder, but it does change error semantics for mis‑wired Tab/Builder IDs (they now silently get a default mode instead of an obvious failure).

  2. Centralized system prompt:
chatOpts.SystemPrompt = getSystemPrompt(chatOpts.Config.APIType, chatOpts.Config.Model, chatOpts.BuilderId != "")

All system prompt wiring now goes through getSystemPrompt, with builder chats explicitly getting no system prompt, and non‑builder chats getting the base prompt plus strict add‑on for the configured models. That unifies prompt behavior nicely.

I’d just double‑check that:

  • You do want missing/invalid Tab/Builder RTInfo to fall back to the global default AI mode instead of erroring, and
  • The “builder mode” detection purely via BuilderId != "" matches all the contexts where you intend to suppress the system prompt.

If that’s intentional, the wiring looks consistent.

Also applies to: 620-621, 644-645

Comment on lines +30 to +38
const handleSelect = (mode: string) => {
    const config = aiModeConfigs[mode];
    if (!config) return;
    if (!hasPremium && config["waveai:premium"]) {
        return;
    }
    model.setAIMode(mode);
    setIsOpen(false);
};
⚠️ Potential issue | 🟡 Minor

Consider adding user feedback when premium mode selection is blocked.

When a user without premium access attempts to select a premium mode, the function silently returns (lines 33-35) without any notification or feedback. Consider adding a toast message or alert to inform users why the mode cannot be selected (e.g., "Premium subscription required for this mode").

Example implementation:

 const handleSelect = (mode: string) => {
     const config = aiModeConfigs[mode];
     if (!config) return;
     if (!hasPremium && config["waveai:premium"]) {
+        // Show toast or notification: "Premium subscription required"
         return;
     }
     model.setAIMode(mode);
     setIsOpen(false);
 };

Committable suggestion skipped: line range outside the PR's diff.

🤖 Prompt for AI Agents
In frontend/app/aipanel/aimode.tsx around lines 30 to 38, the handler silently
returns when a non-premium user selects a premium mode; update handleSelect so
that instead of silently returning it triggers a user-facing notification
(toast/alert) like "Premium subscription required for this mode" and then
returns, leaving the UI state unchanged; use the project's existing
notification/toast utility or a simple alert if none exists, ensure the message
is clear, do not close the modal or call model.setAIMode when blocked, and keep
the early return after emitting the notification.

@sawka sawka merged commit 73e5619 into main Nov 26, 2025
7 checks passed
@sawka sawka deleted the sawka/waveai-openai-comps branch November 26, 2025 19:43