fix(chat): display error messages when LLM requests fail #5693

Open
niemesrw wants to merge 5 commits into openclaw:main from niemesrw:fix/webchat-error-display

Conversation


@niemesrw niemesrw commented Jan 31, 2026

Summary

  • Display formatted error messages in webchat UI instead of blank assistant bubbles when API requests fail
  • Add errorMessage and stopReason field decoding to native iOS/macOS Swift app
  • Add errorMessage and stopReason field decoding to Android Kotlin app
  • Parse JSON error payloads and extract user-friendly messages (e.g., "HTTP 401: invalid x-api-key")

Test plan

  • Verified webchat displays error message instead of blank bubble
  • Swift build passes
  • Swift tests pass (5/6, 1 pre-existing failure)
  • TypeScript tests pass (4925 tests)

Fixes #4418

🤖 Generated with Claude Code

Greptile Overview

Greptile Summary

This PR improves chat UX across web and native clients by surfacing a formatted, user-friendly error message when an LLM request fails, instead of rendering an empty assistant bubble. It adds stopReason/errorMessage decoding to Android (Kotlin) and iOS/macOS (Swift) chat models, and updates the webchat text extraction logic to fall back to formatted errors when message content is empty.

The overall approach fits the codebase’s “resilient transcript decoding” philosophy: clients accept additional fields and tolerate multiple message formats while keeping the rendering logic lightweight.

Confidence Score: 4/5

  • This PR is likely safe to merge and should improve UX, with a few edge-case rendering/serialization concerns to verify.
  • Changes are localized and additive (new fields + error formatting) and don’t alter request/transport logic. Main risk is UI logic that may override non-text content on error messages and potential mismatch between encoded key casing vs decoded formats in Swift models.
  • Files reviewed: apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatModels.swift, apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatMessageViews.swift, apps/android/app/src/main/java/ai/openclaw/android/ui/chat/ChatMessageViews.kt, ui/src/ui/chat/message-extract.ts


Context used:

  • Context from dashboard - CLAUDE.md (source)
  • Context from dashboard - AGENTS.md (source)

Copilot AI review requested due to automatic review settings January 31, 2026 21:04
@openclaw-barnacle openclaw-barnacle bot added app: android App: android app: web-ui App: web-ui labels Jan 31, 2026

@greptile-apps greptile-apps bot left a comment


4 files reviewed, 4 comments


Comment on lines +63 to +66
```typescript
    // If content is empty and this is an error response, show the formatted error
    if (m.stopReason === "error" && typeof m.errorMessage === "string") {
      return formatErrorMessage(m.errorMessage);
    }
```

extractText falls back to showing errorMessage for any message where stopReason === "error", regardless of role. If a user message (or tool result) ever includes these fields (e.g., via transcript replay), this will display the error formatting in the wrong bubble. Consider additionally gating on role === "assistant" (or whatever roles can legitimately carry API errors) before returning the formatted error.

Path: ui/src/ui/chat/message-extract.ts (lines 63–66)

Comment on lines +82 to +89
```kotlin
        // Check if content is effectively empty (no text blocks)
        val hasTextContent = content.any { it.type == "text" && !it.text.isNullOrBlank() }

        // If content is empty and this is an error, show formatted error message
        if (!hasTextContent && stopReason == "error" && !errorMessage.isNullOrBlank()) {
            val formattedError = formatErrorMessage(errorMessage)
            ChatMarkdown(text = formattedError, textColor = textColor)
            return
```

The “empty content” check only looks for non-blank type == "text" blocks. For assistant messages that contain only attachments/tool results (no text), this will treat the message as empty and, if stopReason == "error", replace the bubble with the error text. If error responses can include non-text content (or if stopReason is set on other message types), this could hide important content. Consider refining hasTextContent/the guard to ensure you only override when the message truly has no renderable content (not just no text).

Path: apps/android/app/src/main/java/ai/openclaw/android/ui/chat/ChatMessageViews.kt (lines 82–89)


greptile-apps bot commented Jan 31, 2026

Additional Comments (2)

apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatModels.swift
OpenClawChatMessage.encode(to:) encodes toolCallId/toolName using .toolCallId/.toolName, but the server-side/transcript formats you already handle include snake_case keys (tool_call_id, tool_name). This can break round-tripping / persistence for clients that expect snake_case output (and it’s inconsistent with the decode logic that falls back to snake_case). Consider encoding the snake_case keys (or both) to match the formats you support.

Also appears in this file: encode of toolName at ChatModels.swift:229-230.

Lines 226–233.

apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatMessageViews.swift
primaryText only considers content items where type is text/empty, then treats the message as “empty” if that joined text is empty. For assistant messages that contain only attachments/tool content (no text), this can cause the UI to substitute the formatted error message (when stopReason == "error") and effectively hide the non-text content. If error responses can include attachments/tool payloads, consider tightening the condition so you only override when the message truly has no renderable content.

Lines 226–237.


Copilot AI left a comment


Pull request overview

This PR fixes issue #4418 by adding error message display functionality to webchat and native mobile apps (iOS/macOS/Android) when LLM requests fail. Previously, these interfaces showed blank assistant bubbles instead of error details.

Changes:

  • Added errorMessage and stopReason field handling to display formatted error messages in webchat UI
  • Extended Swift data models and views to decode and format error messages for iOS/macOS apps
  • Extended Kotlin data models and views to parse and format error messages for Android app

Reviewed changes

Copilot reviewed 7 out of 7 changed files in this pull request and generated 4 comments.

Summary per file:

  • ui/src/ui/chat/message-extract.ts: Added error message extraction and formatting logic for webchat
  • apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatModels.swift: Added errorMessage field to Swift message model with encoding/decoding support
  • apps/shared/OpenClawKit/Sources/OpenClawChatUI/ChatMessageViews.swift: Added error message formatting and display logic for Swift UI
  • apps/android/app/src/main/java/ai/openclaw/android/chat/ChatModels.kt: Added errorMessage field to Kotlin message model
  • apps/android/app/src/main/java/ai/openclaw/android/ui/chat/ChatMessageViews.kt: Added error message formatting and display logic for Android UI
  • apps/android/app/src/main/java/ai/openclaw/android/chat/ChatController.kt: Added errorMessage parsing from chat events
  • CHANGELOG.md: Documented the fix


Comment on lines +243 to 280
```swift
private static func formatErrorMessage(_ raw: String) -> String {
    let trimmed = raw.trimmingCharacters(in: .whitespacesAndNewlines)
    guard !trimmed.isEmpty else {
        return "LLM request failed with an unknown error."
    }

    // Try to extract message from JSON error payload like:
    // 400 {"type":"error","error":{"type":"invalid_request_error","message":"Your credit balance..."}}
    if let jsonStart = trimmed.firstIndex(of: "{"),
       let data = String(trimmed[jsonStart...]).data(using: .utf8),
       let json = try? JSONSerialization.jsonObject(with: data) as? [String: Any]
    {
        // Extract nested error.message or top-level message
        var message: String?
        if let error = json["error"] as? [String: Any] {
            message = error["message"] as? String
        }
        if message == nil {
            message = json["message"] as? String
        }

        if let msg = message {
            // Extract HTTP status code if present
            let httpPrefix = trimmed.prefix(while: { $0.isNumber || $0.isWhitespace })
            let httpCode = httpPrefix.trimmingCharacters(in: .whitespaces)
            if !httpCode.isEmpty, let _ = Int(httpCode) {
                return "HTTP \(httpCode): \(msg)"
            }
            return "LLM error: \(msg)"
        }
    }

    // Fallback: truncate long messages
    if trimmed.count > 600 {
        return String(trimmed.prefix(600)) + "…"
    }
    return trimmed
}
```

Copilot AI Jan 31, 2026


Similar to the TypeScript implementation, this formatErrorMessage function doesn't handle the special error cases that the backend handles (context overflow, role ordering errors, overloaded service, unknown tools, etc.). This duplicates the error formatting logic across multiple platforms but in a simplified form that provides less helpful messages to users.

Additionally, this implementation doesn't include error types and request IDs in the output like the backend does. See the comment on ui/src/ui/chat/message-extract.ts for more details and suggested solutions.

Comment on lines +109 to +153
```kotlin
private fun formatErrorMessage(raw: String): String {
    val trimmed = raw.trim()
    if (trimmed.isEmpty()) {
        return "LLM request failed with an unknown error."
    }

    // Try to extract message from JSON error payload
    val jsonStart = trimmed.indexOf('{')
    if (jsonStart >= 0) {
        try {
            val jsonStr = trimmed.substring(jsonStart)
            val json = org.json.JSONObject(jsonStr)

            // Extract nested error.message or top-level message
            var message: String? = null
            if (json.has("error")) {
                val error = json.optJSONObject("error")
                message = error?.optString("message")?.takeIf { it.isNotBlank() }
            }
            if (message == null) {
                message = json.optString("message")?.takeIf { it.isNotBlank() }
            }

            if (message != null) {
                // Extract HTTP status code if present
                val httpPrefix = trimmed.substring(0, jsonStart).trim()
                val httpCode = httpPrefix.toIntOrNull()
                return if (httpCode != null) {
                    "HTTP $httpCode: $message"
                } else {
                    "LLM error: $message"
                }
            }
        } catch (_: Exception) {
            // Fall through to default handling
        }
    }

    // Fallback: truncate long messages
    return if (trimmed.length > 600) {
        trimmed.take(600) + "…"
    } else {
        trimmed
    }
}
```

Copilot AI Jan 31, 2026


Similar to the TypeScript and Swift implementations, this formatErrorMessage function doesn't handle the special error cases that the backend handles (context overflow, role ordering errors, overloaded service, unknown tools, etc.). This creates inconsistent error messaging across platforms and provides less helpful messages to Android users.

Additionally, this implementation doesn't include error types and request IDs in the output like the backend does. See the comment on ui/src/ui/chat/message-extract.ts for more details and suggested solutions.

Comment on lines 63 to 106
```typescript
  // If content is empty and this is an error response, show the formatted error
  if (m.stopReason === "error" && typeof m.errorMessage === "string") {
    return formatErrorMessage(m.errorMessage);
  }

  return null;
}

/**
 * Format raw API error messages for user-friendly display.
 */
function formatErrorMessage(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed) return "LLM request failed with an unknown error.";

  // Try to extract message from JSON error payload
  const jsonStart = trimmed.indexOf("{");
  if (jsonStart >= 0) {
    try {
      const jsonStr = trimmed.slice(jsonStart);
      const json = JSON.parse(jsonStr);

      // Extract nested error.message or top-level message
      let message: string | null = null;
      if (json.error && typeof json.error.message === "string") {
        message = json.error.message;
      } else if (typeof json.message === "string") {
        message = json.message;
      }

      if (message) {
        // Extract HTTP status code if present
        const httpPrefix = trimmed.slice(0, jsonStart).trim();
        const httpCode = /^\d+$/.test(httpPrefix) ? parseInt(httpPrefix, 10) : null;
        return httpCode ? `HTTP ${httpCode}: ${message}` : `LLM error: ${message}`;
      }
    } catch {
      // Fall through to default handling
    }
  }

  // Fallback: truncate long messages
  return trimmed.length > 600 ? trimmed.slice(0, 600) + "…" : trimmed;
}
```

Copilot AI Jan 31, 2026


The new error handling behavior introduced in extractText (lines 63-66) and formatErrorMessage function (lines 74-106) lacks test coverage. While the existing test file (message-extract.test.ts) has tests for extractText and extractTextCached, there are no tests verifying that:

  1. Error messages are correctly extracted when stopReason is "error"
  2. The formatErrorMessage function properly parses JSON error payloads
  3. HTTP status codes are correctly extracted and formatted
  4. Edge cases like malformed JSON or empty errorMessage are handled correctly

Consider adding tests similar to those in src/tui/tui-formatters.test.ts which verify error message formatting behavior.

Comment on lines 73 to 105
```typescript
 */
function formatErrorMessage(raw: string): string {
  const trimmed = raw.trim();
  if (!trimmed) return "LLM request failed with an unknown error.";

  // Try to extract message from JSON error payload
  const jsonStart = trimmed.indexOf("{");
  if (jsonStart >= 0) {
    try {
      const jsonStr = trimmed.slice(jsonStart);
      const json = JSON.parse(jsonStr);

      // Extract nested error.message or top-level message
      let message: string | null = null;
      if (json.error && typeof json.error.message === "string") {
        message = json.error.message;
      } else if (typeof json.message === "string") {
        message = json.message;
      }

      if (message) {
        // Extract HTTP status code if present
        const httpPrefix = trimmed.slice(0, jsonStart).trim();
        const httpCode = /^\d+$/.test(httpPrefix) ? parseInt(httpPrefix, 10) : null;
        return httpCode ? `HTTP ${httpCode}: ${message}` : `LLM error: ${message}`;
      }
    } catch {
      // Fall through to default handling
    }
  }

  // Fallback: truncate long messages
  return trimmed.length > 600 ? trimmed.slice(0, 600) + "…" : trimmed;
```

Copilot AI Jan 31, 2026


The formatErrorMessage function is a simplified version that doesn't handle the special error cases that the backend handles in src/agents/pi-embedded-helpers/errors.ts (formatRawAssistantErrorForUi and formatAssistantErrorText). This means users will see raw technical error messages instead of user-friendly ones for cases like:

  • Context overflow errors (should show: "Context overflow: prompt too large for the model. Try again with less input or a larger-context model.")
  • Role ordering errors (should show: "Message ordering conflict - please try again. If this persists, use /new to start a fresh session.")
  • Overloaded service errors (should show: "The AI service is temporarily overloaded. Please try again in a moment.")
  • Unknown tool errors (should show sandbox-specific policy messages)

Additionally, the backend includes error types and request IDs in the formatted output (e.g., "HTTP 429 rate_limit_error: This request would exceed your account's rate limit (request_id: req_123)"), but this implementation only shows "HTTP 429: message".

Consider either:

  1. Porting the complete error formatting logic from the backend, or
  2. Having the backend send pre-formatted error messages in a separate field so clients don't need to duplicate this logic
Suggested change
```diff
- */
-function formatErrorMessage(raw: string): string {
-  const trimmed = raw.trim();
-  if (!trimmed) return "LLM request failed with an unknown error.";
-  // Try to extract message from JSON error payload
-  const jsonStart = trimmed.indexOf("{");
-  if (jsonStart >= 0) {
-    try {
-      const jsonStr = trimmed.slice(jsonStart);
-      const json = JSON.parse(jsonStr);
-      // Extract nested error.message or top-level message
-      let message: string | null = null;
-      if (json.error && typeof json.error.message === "string") {
-        message = json.error.message;
-      } else if (typeof json.message === "string") {
-        message = json.message;
-      }
-      if (message) {
-        // Extract HTTP status code if present
-        const httpPrefix = trimmed.slice(0, jsonStart).trim();
-        const httpCode = /^\d+$/.test(httpPrefix) ? parseInt(httpPrefix, 10) : null;
-        return httpCode ? `HTTP ${httpCode}: ${message}` : `LLM error: ${message}`;
-      }
-    } catch {
-      // Fall through to default handling
-    }
-  }
-  // Fallback: truncate long messages
-  return trimmed.length > 600 ? trimmed.slice(0, 600) + "…" : trimmed;
+ *
+ * Tries to mirror the backend's formatting by:
+ * - Handling known special cases (context overflow, role ordering, overload, unknown tool).
+ * - Including error type and request ID when available.
+ * - Falling back to a truncated raw message otherwise.
+ */
+function formatErrorMessage(raw: string): string {
+  const trimmed = raw.trim();
+  if (!trimmed) return "LLM request failed with an unknown error.";
+  // Helper to normalize known error messages into stable, user-friendly text.
+  function mapSpecialErrorMessage(message: string): string {
+    const lower = message.toLowerCase();
+    // Context overflow / token limit
+    if (
+      lower.includes("context length exceeded") ||
+      lower.includes("maximum context length") ||
+      lower.includes("token limit") ||
+      lower.includes("too many tokens") ||
+      lower.includes("context window") ||
+      lower.includes("prompt too long")
+    ) {
+      return "Context overflow: prompt too large for the model. Try again with less input or a larger-context model.";
+    }
+    // Message / role ordering issues
+    if (
+      lower.includes("messages out of order") ||
+      lower.includes("message ordering") ||
+      lower.includes("role order") ||
+      lower.includes("system message must be first") ||
+      lower.includes("invalid role order")
+    ) {
+      return "Message ordering conflict - please try again. If this persists, use /new to start a fresh session.";
+    }
+    // Overloaded / temporarily unavailable service
+    if (
+      lower.includes("overloaded") ||
+      lower.includes("temporarily unavailable") ||
+      lower.includes("server is busy") ||
+      lower.includes("capacity") ||
+      lower.includes("try again later")
+    ) {
+      return "The AI service is temporarily overloaded. Please try again in a moment.";
+    }
+    // Unknown / disallowed tool or sandbox policy
+    if (
+      lower.includes("unknown tool") ||
+      lower.includes("tool not found") ||
+      lower.includes("unrecognized tool") ||
+      lower.includes("tool is not allowed") ||
+      lower.includes("disallowed tool")
+    ) {
+      return "This tool isn't available in this environment. Try a different approach or contact an administrator.";
+    }
+    return message;
+  }
+  // Try to extract message and metadata from JSON error payload
+  const jsonStart = trimmed.indexOf("{");
+  if (jsonStart >= 0) {
+    try {
+      const jsonStr = trimmed.slice(jsonStart);
+      const json = JSON.parse(jsonStr) as any;
+      const errorObj =
+        json && typeof json === "object" && json.error && typeof json.error === "object"
+          ? json.error
+          : json;
+      let message: string | null = null;
+      if (errorObj && typeof errorObj.message === "string") {
+        message = errorObj.message;
+      } else if (typeof json.message === "string") {
+        message = json.message;
+      }
+      const type: string | null =
+        errorObj && typeof errorObj.type === "string"
+          ? errorObj.type
+          : typeof json.type === "string"
+            ? json.type
+            : null;
+      const requestId: string | null =
+        (errorObj && typeof errorObj.request_id === "string" && errorObj.request_id) ||
+        (typeof json.request_id === "string" && json.request_id) ||
+        null;
+      if (message) {
+        const friendlyMessage = mapSpecialErrorMessage(message);
+        // Extract HTTP status code if present before the JSON payload.
+        const httpPrefix = trimmed.slice(0, jsonStart).trim();
+        let httpCode: number | null = null;
+        if (httpPrefix) {
+          const match = httpPrefix.match(/(\d{3})/);
+          if (match) {
+            const parsed = parseInt(match[1], 10);
+            if (!isNaN(parsed)) httpCode = parsed;
+          }
+        }
+        let prefix = "";
+        if (httpCode != null) {
+          prefix = `HTTP ${httpCode}`;
+          if (type) {
+            prefix += ` ${type}`;
+          }
+        } else if (type) {
+          prefix = type;
+        } else {
+          prefix = "LLM error";
+        }
+        let result = `${prefix}: ${friendlyMessage}`;
+        if (requestId) {
+          result += ` (request_id: ${requestId})`;
+        }
+        return result;
+      }
+    } catch {
+      // Fall through to default handling if JSON parsing fails.
+    }
+  }
+  // Fallback: still try to map to a friendly message, then truncate long messages.
+  const mapped = mapSpecialErrorMessage(trimmed);
+  const finalText = mapped.length > 600 ? mapped.slice(0, 600) + "…" : mapped;
+  return finalText;
```

@openclaw-barnacle openclaw-barnacle bot added docs Improvements or additions to documentation channel: slack Channel integration: slack labels Feb 1, 2026
@niemesrw niemesrw force-pushed the fix/webchat-error-display branch from 590b96a to 095f090 Compare February 1, 2026 11:50
@openclaw-barnacle openclaw-barnacle bot removed docs Improvements or additions to documentation channel: slack Channel integration: slack labels Feb 1, 2026
@openclaw-barnacle

This pull request has been automatically marked as stale due to inactivity.
Please add updates or it will be closed.

@openclaw-barnacle openclaw-barnacle bot added the stale Marked as stale due to inactivity label Feb 15, 2026
@openclaw-barnacle openclaw-barnacle bot removed the stale Marked as stale due to inactivity label Feb 16, 2026
@niemesrw
Author

Landed via squash merge on main.

Applied one lint fix (added curly braces per eslint(curly) rule in message-extract.ts). Resolved merge conflicts with latest main. Changelog entry moved to current unreleased section with PR # and contributor thanks.

Welcome to the clawtributors list, @niemesrw!

niemesrw and others added 4 commits February 19, 2026 17:31
When API requests fail with errors (billing, auth, rate limits, etc),
the webchat UI now displays the formatted error message instead of
showing an empty assistant bubble. This helps users understand why
their message didn't get a response.

- Add errorMessage field to ChatMessage model (Swift/Kotlin)
- Decode errorMessage from session transcript history
- Display formatted error in message bubble when content is empty
  and stopReason is 'error'
- Format raw API errors into user-friendly messages

Fixes openclaw#4418
When API requests fail with errors (billing, auth, rate limits, etc),
the webchat UI now displays the formatted error message instead of
showing an empty assistant bubble. Extracts error messages from the
stopReason/errorMessage fields in session history.

Fixes openclaw#4418

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
- Encode toolCallId/toolName using snake_case keys (tool_call_id,
  tool_name) to match server-side format and decode fallback logic
- Tighten primaryText error display condition to only show formatted
  error when message has no renderable content (text or non-text),
  preventing accidental hiding of attachment/tool payloads

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@niemesrw niemesrw force-pushed the fix/webchat-error-display branch from 095f090 to 2645734 Compare February 19, 2026 22:35
@niemesrw
Author

Rebased onto latest main and addressed review feedback:

Merge conflicts resolved:

  • ChatMessageViews.kt — adapted to upstream displayableContent rename while preserving error field pass-through
  • CHANGELOG.md — moved entry to current unreleased section

Review feedback addressed:

  • Snake-case encoding (Greptile) — encode(to:) now writes tool_call_id / tool_name instead of camelCase, matching the server-side format and the decode fallback logic. This fixes the round-trip mismatch.
  • Tightened error display condition (Greptile) — primaryText now checks for non-text content (attachments, tool payloads) before substituting the formatted error message. Previously it only checked if joined text was empty, which could hide non-text renderable content on error responses.

CI note: The macos-app (lint) failure is pre-existing — CI on main is also failing with 1,011 SwiftLint/SwiftFormat violations in apps/macos/Sources. This PR does not introduce any new violations.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>

Development

Successfully merging this pull request may close these issues.

Errors not displayed in TUI/webchat — shows blank or '(no output)' instead
