
Conversation


@xinquiry xinquiry commented Feb 1, 2026

Summary

  • Fix bug where thinking content appears as a separate message bubble below the agent response
  • When agent_start consumes the loading message first, thinking_start now attaches to the agent execution message instead of creating a separate message
  • Updated thinking_chunk and thinking_end handlers to also find agent messages with isThinking flag

Test plan

  • Send a message to an agent using a model with extended thinking (Claude with thinking enabled)
  • Verify thinking content appears in the "Show thinking" collapsible on the same message bubble as the agent response
  • Verify no separate empty message bubble is created
  • Refresh page and verify message still displays correctly

🤖 Generated with Claude Code

Summary by Sourcery

确保「思考内容」附加到对应的代理消息上,而不是作为单独的气泡渲染。

Bug 修复:

  • 当代理执行正在运行时,防止思考内容出现在单独的空助手消息中。

增强:

  • 在可用的情况下,将思考状态附加到当前正在运行的代理执行消息上,仅在必要时才退回到创建专门的思考消息。
  • 更新思考流式处理程序,以便在缺少原始事件 ID 的情况下,也能定位被标记为「思考中」的代理消息。
Original summary in English

Summary by Sourcery

Ensure thinking content is attached to the appropriate agent message instead of rendering as a separate bubble.

Bug Fixes:

  • Prevent thinking content from appearing in a separate empty assistant message when an agent execution is running.

Enhancements:

  • Attach thinking state to the current running agent execution message when available, falling back to creating a dedicated thinking message only when necessary.
  • Update thinking streaming handlers to locate agent messages marked as thinking, even when they lack the original event ID.
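The attach-target priority described above can be sketched as a small lookup function. This is a hypothetical illustration, not the project's actual types or code; the real handlers live in web/src/store/slices/chatSlice.ts:

```typescript
// Hypothetical sketch of the new attach-target priority for thinking_start.
// The Msg shape is illustrative, not the project's actual message type.
type Msg = {
  isLoading?: boolean;
  agentExecution?: { status: string };
};

// Search from the tail for the most recent match (findLastIndex equivalent).
function lastIndexWhere(msgs: Msg[], pred: (m: Msg) => boolean): number {
  for (let i = msgs.length - 1; i >= 0; i--) if (pred(msgs[i])) return i;
  return -1;
}

function pickThinkingTarget(msgs: Msg[]): number {
  const loading = msgs.findIndex((m) => m.isLoading);
  if (loading !== -1) return loading; // reuse the loading placeholder
  // otherwise: latest running agent execution message, or -1 (create new)
  return lastIndexWhere(msgs, (m) => m.agentExecution?.status === "running");
}
```

A return of -1 corresponds to the fallback case, where a dedicated thinking message is created.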

xinquiry and others added 11 commits January 28, 2026 22:47
…ismatch (#206)

* fix: resolve i18next missing key warnings in TierInfoModal

Replace hardcoded Chinese strings with proper i18n translation keys
in the tier selector component. This fixes console warnings about
missing translation keys when using the zh locale.

- Add speedLabels, reasoningLabels, and features keys to app.json
- Add multiplier key to tierSelector in all locales
- Add recommended key to common.json in all locales
- Refactor TierInfoModal.tsx to use translation key references

Co-Authored-By: Claude <noreply@anthropic.com>

* feat: add drag-and-drop agent reordering

   Integrate dnd-kit into existing AgentList and AgentListItem components
   to support drag-and-drop reordering in both the sidebar and spatial
   focused view.

   Backend:
   - Add sort_order field to Agent model
   - Add PATCH endpoint for bulk reordering agents
   - Add migration for sort_order column

   Frontend:
   - Add sortable prop to AgentList with DndContext/SortableContext
   - Add dragHandleProps to AgentListItem for drag behavior
   - Use plain div (not motion.div) when sortable to avoid animation conflicts
   - Use set-based comparison for state sync (only reset on add/remove)
   - Add reorderAgents action to agentSlice

* feat: auto-refresh frontend on version mismatch with backend

  Add automatic update mechanism that detects when the frontend version
  doesn't match the backend version and refreshes the page to fetch the
  latest assets. This ensures users with cached frontends always get
  updated code without manually clearing their browser cache.

  - Add useAutoUpdate hook with retry logic (max 3 attempts)
  - Add UpdateOverlay component for update feedback
  - Clear service workers and caches before reload
  - Store retry state in localStorage to prevent infinite loops

* fix: address PR review feedback for drag-and-drop agent reordering

  - Move AgentList state sync from render to useEffect to prevent render loops
  - Add isDraggingRef to prevent backend sync during active drag operations
  - Restore keyboard accessibility to CompactAgentListItem (role, tabIndex,
    aria-pressed, onKeyDown handler)
  - Guard localStorage writes with try/catch in useAutoUpdate to handle
    restricted environments

---------

Co-authored-by: Claude <noreply@anthropic.com>
…nt (#207)

* fix: move the comment to correct position

* fix: fix the test environment version

* fix: remove auto-update feature to prevent refresh loops

  The auto-update mechanism causes infinite refresh loops when frontend
  and backend versions mismatch in test environment. Remove the feature
  entirely until a more robust solution is implemented.

  - Delete useAutoUpdate hook and UpdateOverlay component
  - Remove AutoUpdateWrapper from App.tsx
…resh (#208)

* feat: add simple landing page and logout button

* fix(web): unify tool call rendering in agent timeline after refresh; add landing page

  - Render tool calls as pills with a shared details modal (args/results/errors)
  - Attach historical tool_call tool messages into agentExecution phases instead of standalone messages
  - Remove legacy ToolCallCard-based rendering path

* fix(web): address review feedback for tool call modal + typewriter

  - Move tool-call UI strings into i18n (app.chat.toolCall.*) for en/zh/ja
  - Memoize tool result parsing and image URL derivation in ToolCallDetails
  - Avoid duplicate argument headings in ToolCallDetailsModal for waiting_confirmation
  - Remove redundant typewriter cursor className conditional and fix unused state var
* feat: add message editing and deletion with truncate-regenerate flow

  - Add PATCH /messages/{id} endpoint for editing user messages
  - Add DELETE /messages/{id} endpoint for deleting any message
  - Add regenerate WebSocket handler for re-running agent after edit
  - Add edit/delete UI to ChatBubble with hover actions
  - Add i18n translations for en/zh/ja

  Includes fixes from code review:
  - Fix pre-deduction error handling to skip dispatch on any failure
  - Reset responding state before regeneration to prevent stuck UI
  - Add message ownership verification before edit/delete operations

* fix: improve the code according to sourcery review
…#211)

The previous approach used only the pyproject.toml version (e.g., 1.0.16)
which caused Kubernetes to not pull new images when multiple commits
used the same version tag.

Now uses format: {version}-{short-sha} (e.g., 1.0.16-f13e3c0)

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
* feat: add conversation interrupt/abort functionality

- Add stop button and Escape key shortcut to abort streaming generation
- Implement Redis-based signaling between API server and Celery worker
- Worker checks abort signal every 0.5s and gracefully stops streaming
- Save partial content and perform partial billing on abort
- Add visual indicator for cancelled/aborted messages
- Add timeout fallback (10s) to reset UI if backend doesn't respond
- Add i18n strings for stop/stopping/escToStop in en/zh/ja

* fix: address abort feature edge cases from code review

- Clear stale abort signals at task start to prevent race condition when
  user reconnects quickly after disconnect
- Finalize AgentRun with 'failed' status on unhandled exceptions to ensure
  consistent DB state across all exit paths
- Move time import to module level (was inline import)

* fix: address Sourcery review feedback for abort feature

- Reuse existing Redis client for abort checks instead of creating new
  connections on each tick (performance improvement)
- Fix potential Redis connection leaks in ABORT and disconnect handlers
  by using try/finally pattern
- Track and cleanup abort timeout in frontend to prevent stale timers
  from racing with subsequent abort requests
* Preserve phase text when copying streamed output

* Extract agent phase content helper

* fix: use existing PhaseExecution type and correct useMemo dependencies

- Replace inline PhaseWithStreamedContent type with Pick<PhaseExecution, 'streamedContent'>
  for type consistency across the codebase
- Fix useMemo dependency array to use agentExecution?.phases instead of agentExecution
  to ensure proper recalculation when phases array changes
…ges (#213)

* Fix message deletion for agent executions

* Address Sourcery review: stricter UUID validation and safer id assignment

- Add isValidUuid utility with canonical UUID pattern (8-4-4-4-12 format)
  to replace loose regex that accepted invalid strings like all-hyphens
- Fix streaming_start handler to only set id when eventData.id is truthy,
  preventing accidental overwrites with undefined/null
- Improve delete guard with contextual messages ("still streaming" vs
  "not saved yet") and change notification type to warning
- Add comprehensive tests for isValidUuid covering valid UUIDs, client IDs,
  invalid formats, and edge cases

Co-Authored-By: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>

---------

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
* fix: emit message_saved event after stream abort

When a user interrupts a streaming response, the message is saved to the
database but the frontend never receives the message_saved event. This
leaves the message with a temporary stream_ prefix ID, preventing deletion
until page refresh.

Now the MESSAGE_SAVED event is emitted after db.commit() in the abort
handler, before STREAM_ABORTED, so the frontend updates the message ID
to the real UUID and deletion works immediately.

Co-Authored-By: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>

* fix: always show latest topic when clicking an agent

Unify sidebar and spatial workspace to use the same logic for selecting
topics. Both now fetch from backend and always show the most recently
updated topic (by updated_at) instead of remembering previously active topics.

Co-Authored-By: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>

* feat: improve message editing UX with edit-only option and assistant editing

- Add "Edit" and "Edit & Regenerate" dropdown options for user messages
- Allow editing assistant messages (content-only, no regeneration)
- Add copy button to user messages
- Move assistant message actions to top-right for better UX
- Add auto-resizing textarea for editing long messages
- Update backend to support truncate_and_regenerate flag

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

* refactor: extract message content resolution into dedicated module

Extract scattered content resolution logic into core/chat/messageContent.ts
with two main utilities:
- resolveMessageContent(): Single source of truth for content priority
- getMessageDisplayMode(): Explicit rendering mode determination

This refactoring:
- Reduces ChatBubble.tsx complexity (60+ line IIFE → 30 line switch)
- Fixes inconsistency between copy/edit and display logic
- Makes content source priority explicit and documented
- Adds guard for empty content to avoid rendering empty divs
- Improves maintainability with testable pure functions

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

---------

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
- Use function_calling method for structured output in clarify node.
  The default json_mode doesn't work with Claude models via GPUGEEK
  provider. Claude supports tool/function calling natively but not
  OpenAI's response_format JSON mode.

- Increase recursion_limit from 25 to 50 in agent.astream() to handle
  complex research tasks with more iterations.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
When agent_start event arrives before thinking_start, it consumes the
loading message. The thinking_start handler then couldn't find a loading
message and created a separate thinking message, causing the thinking
content to appear as a separate bubble below the agent response.

Fix the thinking event handlers to also check for running agent execution
messages and attach thinking content to them instead of creating separate
messages.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>

@sourcery-ai sourcery-ai bot left a comment


Sorry @xinquiry, your pull request is larger than the review limit of 150000 diff characters

@xinquiry

xinquiry commented Feb 1, 2026

@sourcery-ai Review again

@sourcery-ai

sourcery-ai bot commented Feb 1, 2026


Reviewer's Guide

Ensure thinking content for extended-thinking agents is attached to the main agent execution message (or loading message) instead of creating a separate empty assistant bubble, and update streaming handlers to correctly locate and update these thinking-attached messages.

Sequence diagram for thinking_start attaching to agent execution or loading message

sequenceDiagram
  participant StreamHandler
  participant Channel
  participant Messages

  StreamHandler->>Channel: receive event type thinking_start(id)
  Channel->>Messages: findIndex m.isLoading
  alt loading message found
    Channel->>Messages: attach thinking to loading message
    Messages-->>Channel: message.isThinking = true
  else no loading message
    Channel->>Messages: findLastIndex m.agentExecution.status == running
    alt running agent message found
      Channel->>Messages: attach thinking to running agent message
      Messages-->>Channel: message.isThinking = true
    else no agent execution message
      Channel->>Messages: push new assistant message
      Messages-->>Channel: new message with isThinking = true
    end
  end
  Channel-->>StreamHandler: channel.responding = true

Sequence diagram for thinking_chunk and thinking_end locating thinking message

sequenceDiagram
  participant StreamHandler
  participant Channel
  participant Messages

  rect rgb(230,230,250)
    StreamHandler->>Channel: receive event type thinking_chunk(id, content)
    Channel->>Messages: findIndex m.id == id
    alt message found by id
      Channel->>Messages: append content to message.thinkingContent
    else not found by id
      Channel->>Messages: findLastIndex m.isThinking && m.agentExecution.status == running
      alt agent thinking message found
        Channel->>Messages: append content to message.thinkingContent
      else no thinking message
        Channel-->>StreamHandler: ignore chunk
      end
    end
  end

  rect rgb(230,255,230)
    StreamHandler->>Channel: receive event type thinking_end(id)
    Channel->>Messages: findIndex m.id == id
    alt message found by id
      Channel->>Messages: set message.isThinking = false
    else not found by id
      Channel->>Messages: findLastIndex m.isThinking && m.agentExecution
      alt agent thinking message found
        Channel->>Messages: set message.isThinking = false
      else no thinking message
        Channel-->>StreamHandler: ignore end
      end
    end
  end

File-Level Changes

Change Details Files
Change thinking_start handling to prefer existing loading or running agent execution messages and only create a standalone thinking message as a fallback.
  • After receiving a thinking_start event, first search channel.messages for an isLoading message and, if found, convert it into a thinking-enabled assistant message with isThinking and thinkingContent initialized.
  • If no loading message is present, search channel.messages from the end for an agentExecution message with status === "running" and attach thinking state (isThinking and thinkingContent) to that message.
  • If neither a loading nor a running agent execution message is found, create a new assistant message with isThinking: true and empty thinkingContent as a fallback.
web/src/store/slices/chatSlice.ts
Update thinking_chunk handling to locate the correct message when thinking is attached to an agent execution message.
  • On thinking_chunk events, first attempt to find the target message by id in channel.messages.
  • If no message is found by id, fall back to searching from the end for a message with isThinking && agentExecution?.status === "running" to support cases where thinking is attached to an in-progress agent execution.
  • Append incoming thinking content to the message.thinkingContent string while preserving existing content.
web/src/store/slices/chatSlice.ts
Update thinking_end handling to clear thinking state on messages that may have thinking attached, including agent execution messages.
  • On thinking_end events, first try to find the target message by id in channel.messages.
  • If no message is found by id, fall back to searching from the end for a message with isThinking && m.agentExecution to handle thinking attached to agent execution messages.
  • When the message is found, set isThinking to false to stop rendering the thinking UI for that message.
web/src/store/slices/chatSlice.ts

Tips and commands

Interacting with Sourcery

  • Trigger a new review: Comment @sourcery-ai review on the pull request.
  • Continue discussions: Reply directly to Sourcery's review comments.
  • Generate a GitHub issue from a review comment: Ask Sourcery to create an
    issue from a review comment by replying to it. You can also reply to a
    review comment with @sourcery-ai issue to create an issue from it.
  • Generate a pull request title: Write @sourcery-ai anywhere in the pull
    request title to generate a title at any time. You can also comment
    @sourcery-ai title on the pull request to (re-)generate the title at any time.
  • Generate a pull request summary: Write @sourcery-ai summary anywhere in
    the pull request body to generate a PR summary at any time exactly where you
    want it. You can also comment @sourcery-ai summary on the pull request to
    (re-)generate the summary at any time.
  • Generate reviewer's guide: Comment @sourcery-ai guide on the pull
    request to (re-)generate the reviewer's guide at any time.
  • Resolve all Sourcery comments: Comment @sourcery-ai resolve on the
    pull request to resolve all Sourcery comments. Useful if you've already
    addressed all the comments and don't want to see them anymore.
  • Dismiss all Sourcery reviews: Comment @sourcery-ai dismiss on the pull
    request to dismiss all existing Sourcery reviews. Especially useful if you
    want to start fresh with a new review - don't forget to comment
    @sourcery-ai review to trigger a new review!

Customizing Your Experience

Access your dashboard to:

  • Enable or disable review features such as the Sourcery-generated pull request
    summary, the reviewer's guide, and others.
  • Change the review language.
  • Add, remove or edit custom review instructions.
  • Adjust other review settings.



@sourcery-ai sourcery-ai bot left a comment



Hey - I've found 1 issue, and left some high level feedback:

  • In thinking_end, the fallback lookup for a thinking message uses m.isThinking && m.agentExecution, whereas thinking_chunk constrains to status === 'running'; consider aligning these conditions (or documenting why they differ) to avoid inconsistent targeting of agent messages.
  • Both thinking_chunk and thinking_end now rely on findLastIndex, which is relatively new; if this code runs in environments without native support, you may want to guard or polyfill this to prevent runtime errors.
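The findLastIndex concern can be addressed with a small compatibility helper. The name findLastIndexCompat is hypothetical, not project code:

```typescript
// Fallback for environments without Array.prototype.findLastIndex
// (pre-ES2023 runtimes / older browsers). Hypothetical helper.
function findLastIndexCompat<T>(
  arr: readonly T[],
  pred: (item: T, index: number) => boolean,
): number {
  for (let i = arr.length - 1; i >= 0; i--) {
    if (pred(arr[i], i)) return i;
  }
  return -1;
}
```

Callers would replace `messages.findLastIndex(p)` with `findLastIndexCompat(messages, p)`; alternatively, a global polyfill (e.g. core-js) keeps call sites unchanged.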
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- In `thinking_end`, the fallback lookup for a thinking message uses `m.isThinking && m.agentExecution`, whereas `thinking_chunk` constrains to `status === 'running'`; consider aligning these conditions (or documenting why they differ) to avoid inconsistent targeting of agent messages.
- Both `thinking_chunk` and `thinking_end` now rely on `findLastIndex`, which is relatively new; if this code runs in environments without native support, you may want to guard or polyfill this to prevent runtime errors.

## Individual Comments

### Comment 1
<location> `web/src/store/slices/chatSlice.ts:1343` </location>
<code_context>
+                  channel.messages[loadingIndex] = {
</code_context>

<issue_to_address>
**issue (bug_risk):** Preserve existing message metadata when converting a loading message into a thinking message.

Overwriting `channel.messages[loadingIndex]` like this drops any existing properties on the loading message (e.g., `agentExecution`, attachments, metadata). In the agent case you spread the existing message, but not here. Use something like:

```ts
channel.messages[loadingIndex] = {
  ...channel.messages[loadingIndex],
  isThinking: true,
  thinkingContent: '',
  content: '',
};
```

to preserve all previously associated data.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

@@ -1339,28 +1341,56 @@ export const createChatSlice: StateCreator<
thinkingContent: "",
content: "",
};


issue (bug_risk): Preserve existing message metadata when converting a loading message into a thinking message.

Overwriting channel.messages[loadingIndex] like this drops any existing properties on the loading message (e.g., agentExecution, attachments, metadata). In the agent case you spread the existing message, but not here. Use something like:

channel.messages[loadingIndex] = {
  ...channel.messages[loadingIndex],
  isThinking: true,
  thinkingContent: '',
  content: '',
};

to preserve all previously associated data.

Use consistent condition `m.agentExecution?.status === "running"` in both
thinking_chunk and thinking_end handlers for finding agent messages.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
@xinquiry xinquiry requested a review from Mile-Away February 1, 2026 09:25
@xinquiry xinquiry merged commit be5c6be into main Feb 1, 2026
2 checks passed
@xinquiry xinquiry deleted the fix/tool-calling-position branch February 1, 2026 09:28
Mile-Away pushed a commit that referenced this pull request Feb 1, 2026
## [1.1.1](v1.1.0...v1.1.1) (2026-02-01)

### 🐛 Bug Fixes

* attach thinking content to agent execution message ([#218](#218)) ([be5c6be](be5c6be)), closes [#206](#206)
@Mile-Away

🎉 This PR is included in version 1.1.1 🎉

The release is available on GitHub release

Your semantic-release bot 📦🚀
