fix(chat): display error messages when LLM requests fail #5693

niemesrw wants to merge 5 commits into openclaw:main
Conversation
```ts
// If content is empty and this is an error response, show the formatted error
if (m.stopReason === "error" && typeof m.errorMessage === "string") {
  return formatErrorMessage(m.errorMessage);
}
```
extractText falls back to showing errorMessage for any message where stopReason === "error", regardless of role. If a user message (or tool result) ever includes these fields (e.g., via transcript replay), this will display the error formatting in the wrong bubble. Consider additionally gating on role === "assistant" (or whatever roles can legitimately carry API errors) before returning the formatted error.
Path: ui/src/ui/chat/message-extract.ts
Line: 63:66
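A minimal sketch of the suggested role gate as a standalone TypeScript function — the `ChatMessage` shape and `errorTextFor` name are hypothetical stand-ins, not the actual `extractText` signature:

```typescript
type ChatMessage = {
  role: "user" | "assistant" | "toolResult";
  stopReason?: string;
  errorMessage?: string;
};

// Stand-in for the real formatter in message-extract.ts.
function formatErrorMessage(raw: string): string {
  return raw.trim() || "LLM request failed with an unknown error.";
}

// Sketch of the guard: only assistant bubbles may surface API errors,
// so replayed user messages or tool results never render error formatting.
function errorTextFor(m: ChatMessage): string | null {
  if (m.role !== "assistant") return null;
  if (m.stopReason === "error" && typeof m.errorMessage === "string") {
    return formatErrorMessage(m.errorMessage);
  }
  return null;
}
```

The role check happens before the `stopReason` check, so the error path is unreachable for non-assistant messages regardless of what fields a transcript replay carries.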
```kotlin
// Check if content is effectively empty (no text blocks)
val hasTextContent = content.any { it.type == "text" && !it.text.isNullOrBlank() }

// If content is empty and this is an error, show formatted error message
if (!hasTextContent && stopReason == "error" && !errorMessage.isNullOrBlank()) {
    val formattedError = formatErrorMessage(errorMessage)
    ChatMarkdown(text = formattedError, textColor = textColor)
    return
```
The “empty content” check only looks for non-blank type == "text" blocks. For assistant messages that contain only attachments/tool results (no text), this will treat the message as empty and, if stopReason == "error", replace the bubble with the error text. If error responses can include non-text content (or if stopReason is set on other message types), this could hide important content. Consider refining hasTextContent/the guard to ensure you only override when the message truly has no renderable content (not just no text).
Path: apps/android/app/src/main/java/ai/openclaw/android/ui/chat/ChatMessageViews.kt
Line: 82:89
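One way to tighten the guard, shown here as a TypeScript stand-in for the Kotlin check (the block shapes are assumptions): treat a message as empty only when no block would render anything, so attachments and tool results are never replaced by the error text.

```typescript
type ContentBlock = { type: string; text?: string };

// A text block renders only if non-blank; any non-text block
// (attachment, tool result, image) is assumed renderable.
function hasRenderableContent(content: ContentBlock[]): boolean {
  return content.some((b) =>
    b.type === "text" ? b.text != null && b.text.trim().length > 0 : true
  );
}

// Substitute the formatted error only when nothing else would render.
function shouldShowError(
  content: ContentBlock[],
  stopReason?: string,
  errorMessage?: string
): boolean {
  return !hasRenderableContent(content) && stopReason === "error" && !!errorMessage?.trim();
}
```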
Additional Comments (2)
Path: apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatModels.swift
Line: 226:233
Comment:
`OpenClawChatMessage.encode(to:)` encodes `toolCallId`/`toolName` using `.toolCallId`/`.toolName`, but the server-side/transcript formats you already handle include snake_case keys (`tool_call_id`, `tool_name`). This can break round-tripping / persistence for clients that expect snake_case output (and it’s inconsistent with the decode logic that falls back to snake_case). Consider encoding the snake_case keys (or both) to match the formats you support.
Also appears in this file: encode of `toolName` at `ChatModels.swift:229-230`.
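The round-trip concern can be sketched in TypeScript (a stand-in for the Swift `Codable` logic; the two key spellings are taken from the comment): decoding already tolerates both forms, so encoding should emit the snake_case keys the server and transcript formats use.

```typescript
type ToolFields = { toolCallId?: string; toolName?: string };

// Encode with snake_case keys so persisted transcripts match the server format.
function encodeToolFields(m: ToolFields): Record<string, string> {
  const out: Record<string, string> = {};
  if (m.toolCallId) out.tool_call_id = m.toolCallId;
  if (m.toolName) out.tool_name = m.toolName;
  return out;
}

// Decode falls back from camelCase to snake_case, mirroring the existing logic.
function decodeToolFields(raw: Record<string, string>): ToolFields {
  return {
    toolCallId: raw.toolCallId ?? raw.tool_call_id,
    toolName: raw.toolName ?? raw.tool_name,
  };
}
```

With snake_case on the encode side, `decodeToolFields(encodeToolFields(m))` is the identity for both fields, which is the round-trip property the review asks for.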
Path: apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatMessageViews.swift
Line: 226:237
Comment:
`primaryText` only considers `content` items where `type` is `text`/empty, then treats the message as “empty” if that joined text is empty. For assistant messages that contain only attachments/tool content (no text), this can cause the UI to substitute the formatted error message (when `stopReason == "error"`) and effectively hide the non-text content. If error responses can include attachments/tool payloads, consider tightening the condition so you only override when the message truly has no renderable content.
Pull request overview
This PR fixes issue #4418 by adding error message display functionality to webchat and native mobile apps (iOS/macOS/Android) when LLM requests fail. Previously, these interfaces showed blank assistant bubbles instead of error details.
Changes:
- Added `errorMessage` and `stopReason` field handling to display formatted error messages in webchat UI
- Extended Swift data models and views to decode and format error messages for iOS/macOS apps
- Extended Kotlin data models and views to parse and format error messages for Android app
Reviewed changes
Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| ui/src/ui/chat/message-extract.ts | Added error message extraction and formatting logic for webchat |
| apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatModels.swift | Added errorMessage field to Swift message model with encoding/decoding support |
| apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatMessageViews.swift | Added error message formatting and display logic for Swift UI |
| apps/android/app/src/main/java/ai/openclaw/android/chat/ChatModels.kt | Added errorMessage field to Kotlin message model |
| apps/android/app/src/main/java/ai/openclaw/android/ui/chat/ChatMessageViews.kt | Added error message formatting and display logic for Android UI |
| apps/android/app/src/main/java/ai/openclaw/android/chat/ChatController.kt | Added errorMessage parsing from chat events |
| CHANGELOG.md | Documented the fix |
```swift
private static func formatErrorMessage(_ raw: String) -> String {
    let trimmed = raw.trimmingCharacters(in: .whitespacesAndNewlines)
    guard !trimmed.isEmpty else {
        return "LLM request failed with an unknown error."
    }

    // Try to extract message from JSON error payload like:
    // 400 {"type":"error","error":{"type":"invalid_request_error","message":"Your credit balance..."}}
    if let jsonStart = trimmed.firstIndex(of: "{"),
       let data = String(trimmed[jsonStart...]).data(using: .utf8),
       let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any]
    {
        // Extract nested error.message or top-level message
        var message: String?
        if let error = json["error"] as? [String: Any] {
            message = error["message"] as? String
        }
        if message == nil {
            message = json["message"] as? String
        }

        if let msg = message {
            // Extract HTTP status code if present
            let httpPrefix = trimmed.prefix(while: { $0.isNumber || $0.isWhitespace })
            let httpCode = httpPrefix.trimmingCharacters(in: .whitespaces)
            if !httpCode.isEmpty, let _ = Int(httpCode) {
                return "HTTP \(httpCode): \(msg)"
            }
            return "LLM error: \(msg)"
        }
    }

    // Fallback: truncate long messages
    if trimmed.count > 600 {
        return String(trimmed.prefix(600)) + "…"
    }
    return trimmed
}
```
Similar to the TypeScript implementation, this formatErrorMessage function doesn't handle the special error cases that the backend handles (context overflow, role ordering errors, overloaded service, unknown tools, etc.). This duplicates the error formatting logic across multiple platforms but in a simplified form that provides less helpful messages to users.
Additionally, this implementation doesn't include error types and request IDs in the output like the backend does. See the comment on ui/src/ui/chat/message-extract.ts for more details and suggested solutions.
```kotlin
private fun formatErrorMessage(raw: String): String {
    val trimmed = raw.trim()
    if (trimmed.isEmpty()) {
        return "LLM request failed with an unknown error."
    }

    // Try to extract message from JSON error payload
    val jsonStart = trimmed.indexOf('{')
    if (jsonStart >= 0) {
        try {
            val jsonStr = trimmed.substring(jsonStart)
            val json = org.json.JSONObject(jsonStr)

            // Extract nested error.message or top-level message
            var message: String? = null
            if (json.has("error")) {
                val error = json.optJSONObject("error")
                message = error?.optString("message")?.takeIf { it.isNotBlank() }
            }
            if (message == null) {
                message = json.optString("message")?.takeIf { it.isNotBlank() }
            }

            if (message != null) {
                // Extract HTTP status code if present
                val httpPrefix = trimmed.substring(0, jsonStart).trim()
                val httpCode = httpPrefix.toIntOrNull()
                return if (httpCode != null) {
                    "HTTP $httpCode: $message"
                } else {
                    "LLM error: $message"
                }
            }
        } catch (_: Exception) {
            // Fall through to default handling
        }
    }

    // Fallback: truncate long messages
    return if (trimmed.length > 600) {
        trimmed.take(600) + "…"
    } else {
        trimmed
    }
}
```
Similar to the TypeScript and Swift implementations, this formatErrorMessage function doesn't handle the special error cases that the backend handles (context overflow, role ordering errors, overloaded service, unknown tools, etc.). This creates inconsistent error messaging across platforms and provides less helpful messages to Android users.
Additionally, this implementation doesn't include error types and request IDs in the output like the backend does. See the comment on ui/src/ui/chat/message-extract.ts for more details and suggested solutions.
```ts
// If content is empty and this is an error response, show the formatted error
if (m.stopReason === "error" && typeof m.errorMessage === "string") {
  return formatErrorMessage(m.errorMessage);
}

return null;
}

/**
 * Format raw API error messages for user-friendly display.
 */
function formatErrorMessage(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed) return "LLM request failed with an unknown error.";

  // Try to extract message from JSON error payload
  const jsonStart = trimmed.indexOf("{");
  if (jsonStart >= 0) {
    try {
      const jsonStr = trimmed.slice(jsonStart);
      const json = JSON.parse(jsonStr);

      // Extract nested error.message or top-level message
      let message: string | null = null;
      if (json.error && typeof json.error.message === "string") {
        message = json.error.message;
      } else if (typeof json.message === "string") {
        message = json.message;
      }

      if (message) {
        // Extract HTTP status code if present
        const httpPrefix = trimmed.slice(0, jsonStart).trim();
        const httpCode = /^\d+$/.test(httpPrefix) ? parseInt(httpPrefix, 10) : null;
        return httpCode ? `HTTP ${httpCode}: ${message}` : `LLM error: ${message}`;
      }
    } catch {
      // Fall through to default handling
    }
  }

  // Fallback: truncate long messages
  return trimmed.length > 600 ? trimmed.slice(0, 600) + "…" : trimmed;
}
```
The new error handling behavior introduced in extractText (lines 63-66) and formatErrorMessage function (lines 74-106) lacks test coverage. While the existing test file (message-extract.test.ts) has tests for extractText and extractTextCached, there are no tests verifying that:
- Error messages are correctly extracted when stopReason is "error"
- The formatErrorMessage function properly parses JSON error payloads
- HTTP status codes are correctly extracted and formatted
- Edge cases like malformed JSON or empty errorMessage are handled correctly
Consider adding tests similar to those in src/tui/tui-formatters.test.ts which verify error message formatting behavior.
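A sketch of what such tests could assert, using plain assertions against a minimal stand-in (the real suite presumably uses the project's test runner, and `formatErrorMessage` may need to be exported first):

```typescript
// Minimal stand-in mirroring the formatErrorMessage logic under review.
function formatErrorMessage(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed) return "LLM request failed with an unknown error.";
  const jsonStart = trimmed.indexOf("{");
  if (jsonStart >= 0) {
    try {
      const json = JSON.parse(trimmed.slice(jsonStart));
      let message: string | null = null;
      if (json.error && typeof json.error.message === "string") {
        message = json.error.message;
      } else if (typeof json.message === "string") {
        message = json.message;
      }
      if (message) {
        const httpPrefix = trimmed.slice(0, jsonStart).trim();
        return /^\d+$/.test(httpPrefix) ? `HTTP ${httpPrefix}: ${message}` : `LLM error: ${message}`;
      }
    } catch {
      // malformed JSON falls through to default handling
    }
  }
  return trimmed.length > 600 ? trimmed.slice(0, 600) + "…" : trimmed;
}

// Cases the review asks to cover:
console.assert(formatErrorMessage("") === "LLM request failed with an unknown error.");
console.assert(formatErrorMessage('400 {"error":{"message":"Bad request"}}') === "HTTP 400: Bad request");
console.assert(formatErrorMessage('{"message":"oops"}') === "LLM error: oops");
console.assert(formatErrorMessage("{not json") === "{not json");
console.assert(formatErrorMessage("x".repeat(700)).length === 601); // 600 chars + ellipsis
```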
_(Diff context: the same `formatErrorMessage` implementation shown above.)_
The formatErrorMessage function is a simplified version that doesn't handle the special error cases that the backend handles in src/agents/pi-embedded-helpers/errors.ts (formatRawAssistantErrorForUi and formatAssistantErrorText). This means users will see raw technical error messages instead of user-friendly ones for cases like:
- Context overflow errors (should show: "Context overflow: prompt too large for the model. Try again with less input or a larger-context model.")
- Role ordering errors (should show: "Message ordering conflict - please try again. If this persists, use /new to start a fresh session.")
- Overloaded service errors (should show: "The AI service is temporarily overloaded. Please try again in a moment.")
- Unknown tool errors (should show sandbox-specific policy messages)
Additionally, the backend includes error types and request IDs in the formatted output (e.g., "HTTP 429 rate_limit_error: This request would exceed your account's rate limit (request_id: req_123)"), but this implementation only shows "HTTP 429: message".
Consider either:
- Porting the complete error formatting logic from the backend, or
- Having the backend send pre-formatted error messages in a separate field so clients don't need to duplicate this logic
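The second option could be as small as one extra field on the chat event payload; this sketch is an illustration only, and the `errorMessageFormatted` field name is hypothetical:

```typescript
// Hypothetical event shape if the backend pre-formats errors.
type ChatEvent = {
  stopReason?: "error" | "stop";
  errorMessage?: string;          // raw provider error (existing)
  errorMessageFormatted?: string; // pre-formatted by the backend (proposed)
};

// Clients prefer the backend-formatted field and fall back to local formatting,
// so all three platforms show identical messages without duplicating the logic.
function displayError(e: ChatEvent, localFormat: (raw: string) => string): string | null {
  if (e.stopReason !== "error") return null;
  if (e.errorMessageFormatted) return e.errorMessageFormatted;
  return e.errorMessage ? localFormat(e.errorMessage) : null;
}
```

Keeping the raw field alongside the formatted one preserves backward compatibility with clients that have not yet adopted the new field.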
Suggested change — replace `formatErrorMessage` with a version that mirrors the backend's formatting:

```ts
/**
 * Format raw API error messages for user-friendly display.
 *
 * Tries to mirror the backend's formatting by:
 * - Handling known special cases (context overflow, role ordering, overload, unknown tool).
 * - Including error type and request ID when available.
 * - Falling back to a truncated raw message otherwise.
 */
function formatErrorMessage(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed) return "LLM request failed with an unknown error.";

  // Helper to normalize known error messages into stable, user-friendly text.
  function mapSpecialErrorMessage(message: string): string {
    const lower = message.toLowerCase();
    // Context overflow / token limit
    if (
      lower.includes("context length exceeded") ||
      lower.includes("maximum context length") ||
      lower.includes("token limit") ||
      lower.includes("too many tokens") ||
      lower.includes("context window") ||
      lower.includes("prompt too long")
    ) {
      return "Context overflow: prompt too large for the model. Try again with less input or a larger-context model.";
    }
    // Message / role ordering issues
    if (
      lower.includes("messages out of order") ||
      lower.includes("message ordering") ||
      lower.includes("role order") ||
      lower.includes("system message must be first") ||
      lower.includes("invalid role order")
    ) {
      return "Message ordering conflict - please try again. If this persists, use /new to start a fresh session.";
    }
    // Overloaded / temporarily unavailable service
    if (
      lower.includes("overloaded") ||
      lower.includes("temporarily unavailable") ||
      lower.includes("server is busy") ||
      lower.includes("capacity") ||
      lower.includes("try again later")
    ) {
      return "The AI service is temporarily overloaded. Please try again in a moment.";
    }
    // Unknown / disallowed tool or sandbox policy
    if (
      lower.includes("unknown tool") ||
      lower.includes("tool not found") ||
      lower.includes("unrecognized tool") ||
      lower.includes("tool is not allowed") ||
      lower.includes("disallowed tool")
    ) {
      return "This tool isn't available in this environment. Try a different approach or contact an administrator.";
    }
    return message;
  }

  // Try to extract message and metadata from JSON error payload
  const jsonStart = trimmed.indexOf("{");
  if (jsonStart >= 0) {
    try {
      const jsonStr = trimmed.slice(jsonStart);
      const json = JSON.parse(jsonStr) as any;
      const errorObj =
        json && typeof json === "object" && json.error && typeof json.error === "object"
          ? json.error
          : json;
      let message: string | null = null;
      if (errorObj && typeof errorObj.message === "string") {
        message = errorObj.message;
      } else if (typeof json.message === "string") {
        message = json.message;
      }
      const type: string | null =
        errorObj && typeof errorObj.type === "string"
          ? errorObj.type
          : typeof json.type === "string"
            ? json.type
            : null;
      const requestId: string | null =
        (errorObj && typeof errorObj.request_id === "string" && errorObj.request_id) ||
        (typeof json.request_id === "string" && json.request_id) ||
        null;
      if (message) {
        const friendlyMessage = mapSpecialErrorMessage(message);
        // Extract HTTP status code if present before the JSON payload.
        const httpPrefix = trimmed.slice(0, jsonStart).trim();
        let httpCode: number | null = null;
        if (httpPrefix) {
          const match = httpPrefix.match(/(\d{3})/);
          if (match) {
            const parsed = parseInt(match[1], 10);
            if (!isNaN(parsed)) httpCode = parsed;
          }
        }
        let prefix = "";
        if (httpCode != null) {
          prefix = `HTTP ${httpCode}`;
          if (type) {
            prefix += ` ${type}`;
          }
        } else if (type) {
          prefix = type;
        } else {
          prefix = "LLM error";
        }
        let result = `${prefix}: ${friendlyMessage}`;
        if (requestId) {
          result += ` (request_id: ${requestId})`;
        }
        return result;
      }
    } catch {
      // Fall through to default handling if JSON parsing fails.
    }
  }

  // Fallback: still try to map to a friendly message, then truncate long messages.
  const mapped = mapSpecialErrorMessage(trimmed);
  const finalText = mapped.length > 600 ? mapped.slice(0, 600) + "…" : mapped;
  return finalText;
}
```
Force-pushed: 590b96a to 095f090
This pull request has been automatically marked as stale due to inactivity.
Force-pushed: bfc1ccb to f92900f
Landed via squash merge.

Applied one lint fix (added curly braces).

Welcome to the clawtributors list, @niemesrw!
When API requests fail with errors (billing, auth, rate limits, etc), the webchat UI now displays the formatted error message instead of showing an empty assistant bubble. This helps users understand why their message didn't get a response.

- Add errorMessage field to ChatMessage model (Swift/Kotlin)
- Decode errorMessage from session transcript history
- Display formatted error in message bubble when content is empty and stopReason is 'error'
- Format raw API errors into user-friendly messages

Fixes openclaw#4418
When API requests fail with errors (billing, auth, rate limits, etc), the webchat UI now displays the formatted error message instead of showing an empty assistant bubble. Extracts error messages from the stopReason/errorMessage fields in session history.

Fixes openclaw#4418

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Encode toolCallId/toolName using snake_case keys (tool_call_id, tool_name) to match server-side format and decode fallback logic
- Tighten primaryText error display condition to only show formatted error when message has no renderable content (text or non-text), preventing accidental hiding of attachment/tool payloads

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Force-pushed: 095f090 to 2645734
Rebased onto latest; merge conflicts resolved.

Review feedback addressed.

CI note: The
Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary
- `errorMessage` and `stopReason` field decoding to native iOS/macOS Swift app
- `errorMessage` and `stopReason` field decoding to Android Kotlin app

Test plan

Fixes #4418
🤖 Generated with Claude Code
Greptile Overview
Greptile Summary
This PR improves chat UX across web and native clients by surfacing a formatted, user-friendly error message when an LLM request fails, instead of rendering an empty assistant bubble. It adds stopReason/errorMessage decoding to Android (Kotlin) and iOS/macOS (Swift) chat models, and updates the webchat text extraction logic to fall back to formatted errors when message content is empty.

The overall approach fits the codebase's "resilient transcript decoding" philosophy: clients accept additional fields and tolerate multiple message formats while keeping the rendering logic lightweight.
Confidence Score: 4/5
Context used:

- dashboard - CLAUDE.md (source)
- dashboard - AGENTS.md (source)