fix: attach thinking content to agent execution message #218
Conversation
…ismatch (#206)

* fix: resolve i18next missing key warnings in TierInfoModal

  Replace hardcoded Chinese strings with proper i18n translation keys in the tier selector component. This fixes console warnings about missing translation keys when using the zh locale.

  - Add speedLabels, reasoningLabels, and features keys to app.json
  - Add multiplier key to tierSelector in all locales
  - Add recommended key to common.json in all locales
  - Refactor TierInfoModal.tsx to use translation key references

* feat: add drag-and-drop agent reordering

  Integrate dnd-kit into the existing AgentList and AgentListItem components to support drag-and-drop reordering in both the sidebar and the spatial focused view.

  Backend:
  - Add sort_order field to Agent model
  - Add PATCH endpoint for bulk reordering agents
  - Add migration for sort_order column

  Frontend:
  - Add sortable prop to AgentList with DndContext/SortableContext
  - Add dragHandleProps to AgentListItem for drag behavior
  - Use a plain div (not motion.div) when sortable to avoid animation conflicts
  - Use set-based comparison for state sync (only reset on add/remove)
  - Add reorderAgents action to agentSlice

* feat: auto-refresh frontend on version mismatch with backend

  Add an automatic update mechanism that detects when the frontend version doesn't match the backend version and refreshes the page to fetch the latest assets. This ensures users with cached frontends always get updated code without manually clearing their browser cache.

  - Add useAutoUpdate hook with retry logic (max 3 attempts)
  - Add UpdateOverlay component for update feedback
  - Clear service workers and caches before reload
  - Store retry state in localStorage to prevent infinite loops

* fix: address PR review feedback for drag-and-drop agent reordering

  - Move AgentList state sync from render to useEffect to prevent render loops
  - Add isDraggingRef to prevent backend sync during active drag operations
  - Restore keyboard accessibility to CompactAgentListItem (role, tabIndex, aria-pressed, onKeyDown handler)
  - Guard localStorage writes with try/catch in useAutoUpdate to handle restricted environments

Co-authored-by: Claude <noreply@anthropic.com>
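The "set-based comparison for state sync" mentioned above can be sketched as a membership check that ignores ordering, so a locally reordered list is not clobbered by a backend refresh. This is an illustrative sketch, not the repository's actual implementation; the function name is assumed.

```typescript
// Sketch: resync local agent order only when agents were added or
// removed, not when only their order differs from the backend's.
function sameMembers(localIds: string[], backendIds: string[]): boolean {
  if (localIds.length !== backendIds.length) return false;
  const localSet = new Set(localIds);
  // Every backend id must already be known locally; since lengths match
  // and ids are unique, this implies set equality.
  return backendIds.every((id) => localSet.has(id));
}
```

With this guard, a sync effect would only reset local state when `sameMembers` returns false (i.e., an agent was actually added or removed).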
…nt (#207)

* fix: move the comment to the correct position
* fix: fix the test environment version
* fix: remove auto-update feature to prevent refresh loops

  The auto-update mechanism causes infinite refresh loops when frontend and backend versions mismatch in the test environment. Remove the feature entirely until a more robust solution is implemented.

  - Delete useAutoUpdate hook and UpdateOverlay component
  - Remove AutoUpdateWrapper from App.tsx
…resh (#208)

* feat: add simple landing page and logout button
* fix(web): unify tool call rendering in agent timeline after refresh; add landing page

  - Render tool calls as pills with a shared details modal (args/results/errors)
  - Attach historical tool_call tool messages into agentExecution phases instead of standalone messages
  - Remove legacy ToolCallCard-based rendering path

* fix(web): address review feedback for tool call modal + typewriter

  - Move tool-call UI strings into i18n (app.chat.toolCall.*) for en/zh/ja
  - Memoize tool result parsing and image URL derivation in ToolCallDetails
  - Avoid duplicate argument headings in ToolCallDetailsModal for waiting_confirmation
  - Remove redundant typewriter cursor className conditional and fix unused state var
* feat: add message editing and deletion with truncate-regenerate flow
- Add PATCH /messages/{id} endpoint for editing user messages
- Add DELETE /messages/{id} endpoint for deleting any message
- Add regenerate WebSocket handler for re-running agent after edit
- Add edit/delete UI to ChatBubble with hover actions
- Add i18n translations for en/zh/ja
Includes fixes from code review:
- Fix pre-deduction error handling to skip dispatch on any failure
- Reset responding state before regeneration to prevent stuck UI
- Add message ownership verification before edit/delete operations
* fix: improve the code per Sourcery review feedback
…#211)

The previous approach used only the pyproject.toml version (e.g., 1.0.16), which caused Kubernetes to not pull new images when multiple commits used the same version tag. Now uses the format {version}-{short-sha} (e.g., 1.0.16-f13e3c0).

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
* feat: add conversation interrupt/abort functionality

  - Add stop button and Escape key shortcut to abort streaming generation
  - Implement Redis-based signaling between API server and Celery worker
  - Worker checks the abort signal every 0.5s and gracefully stops streaming
  - Save partial content and perform partial billing on abort
  - Add visual indicator for cancelled/aborted messages
  - Add timeout fallback (10s) to reset the UI if the backend doesn't respond
  - Add i18n strings for stop/stopping/escToStop in en/zh/ja

* fix: address abort feature edge cases from code review

  - Clear stale abort signals at task start to prevent a race condition when the user reconnects quickly after a disconnect
  - Finalize AgentRun with 'failed' status on unhandled exceptions to ensure consistent DB state across all exit paths
  - Move time import to module level (was an inline import)

* fix: address Sourcery review feedback for abort feature

  - Reuse the existing Redis client for abort checks instead of creating new connections on each tick (performance improvement)
  - Fix potential Redis connection leaks in ABORT and disconnect handlers by using a try/finally pattern
  - Track and clean up the abort timeout in the frontend to prevent stale timers from racing with subsequent abort requests
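The frontend side of the abort flow described above (a 10s fallback timer that must be cleared when a newer abort request or a backend acknowledgement arrives) can be sketched as below. All names are illustrative, and the timer functions are injected so the bookkeeping can be exercised without real timers; the repository's actual code lives elsewhere and may differ.

```typescript
// Sketch: track the pending abort-timeout so a stale timer from an
// earlier abort request can never race with a later one.
type TimerId = number;

interface Scheduler {
  set: (fn: () => void, ms: number) => TimerId;
  clear: (id: TimerId) => void;
}

function createAbortController(scheduler: Scheduler, timeoutMs = 10_000) {
  let pendingTimeout: TimerId | null = null;
  return {
    // Called when the user hits Stop or Escape.
    requestAbort(sendAbort: () => void, resetUi: () => void): void {
      // Clear any timer left over from a previous abort request.
      if (pendingTimeout !== null) scheduler.clear(pendingTimeout);
      sendAbort();
      // Fallback: force-reset the UI if the backend never acknowledges.
      pendingTimeout = scheduler.set(() => {
        resetUi();
        pendingTimeout = null;
      }, timeoutMs);
    },
    // Called when the backend confirms the abort (e.g. STREAM_ABORTED).
    acknowledge(): void {
      if (pendingTimeout !== null) {
        scheduler.clear(pendingTimeout);
        pendingTimeout = null;
      }
    },
  };
}
```

In production the scheduler would simply wrap `setTimeout`/`clearTimeout`; the indirection exists purely to make the clear-before-reschedule behavior testable.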
* Preserve phase text when copying streamed output
* Extract agent phase content helper
* fix: use existing PhaseExecution type and correct useMemo dependencies

  - Replace the inline PhaseWithStreamedContent type with Pick<PhaseExecution, 'streamedContent'> for type consistency across the codebase
  - Fix the useMemo dependency array to use agentExecution?.phases instead of agentExecution to ensure proper recalculation when the phases array changes
…ges (#213)

* Fix message deletion for agent executions
* Address Sourcery review: stricter UUID validation and safer id assignment

  - Add isValidUuid utility with a canonical UUID pattern (8-4-4-4-12 format) to replace a loose regex that accepted invalid strings like all-hyphens
  - Fix streaming_start handler to only set id when eventData.id is truthy, preventing accidental overwrites with undefined/null
  - Improve the delete guard with contextual messages ("still streaming" vs "not saved yet") and change the notification type to warning
  - Add comprehensive tests for isValidUuid covering valid UUIDs, client IDs, invalid formats, and edge cases

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
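A validator matching the 8-4-4-4-12 shape described above could look like the following. This is a hedged sketch: the actual `isValidUuid` in the repository is not shown in this thread, so the regex and signature here are assumptions consistent with the review notes.

```typescript
// Sketch: canonical UUID check (8-4-4-4-12 hex groups).
// Unlike a loose hyphen-count regex, this rejects strings such as
// 36 hyphens or client-side ids like "stream_abc".
const UUID_PATTERN =
  /^[0-9a-f]{8}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{4}-[0-9a-f]{12}$/i;

function isValidUuid(value: string): boolean {
  return UUID_PATTERN.test(value);
}
```

The `i` flag accepts uppercase hex digits, which some backends emit; drop it if only lowercase canonical form should pass.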
* fix: emit message_saved event after stream abort

  When a user interrupts a streaming response, the message is saved to the database but the frontend never receives the message_saved event. This leaves the message with a temporary stream_ prefix ID, preventing deletion until page refresh. Now the MESSAGE_SAVED event is emitted after db.commit() in the abort handler, before STREAM_ABORTED, so the frontend updates the message ID to the real UUID and deletion works immediately.

* fix: always show latest topic when clicking an agent

  Unify the sidebar and spatial workspace to use the same logic for selecting topics. Both now fetch from the backend and always show the most recently updated topic (by updated_at) instead of remembering previously active topics.

* feat: improve message editing UX with edit-only option and assistant editing

  - Add "Edit" and "Edit & Regenerate" dropdown options for user messages
  - Allow editing assistant messages (content-only, no regeneration)
  - Add copy button to user messages
  - Move assistant message actions to top-right for better UX
  - Add auto-resizing textarea for editing long messages
  - Update backend to support truncate_and_regenerate flag

* refactor: extract message content resolution into dedicated module

  Extract scattered content resolution logic into core/chat/messageContent.ts with two main utilities:
  - resolveMessageContent(): single source of truth for content priority
  - getMessageDisplayMode(): explicit rendering mode determination

  This refactoring:
  - Reduces ChatBubble.tsx complexity (60+ line IIFE → 30-line switch)
  - Fixes inconsistency between copy/edit and display logic
  - Makes content source priority explicit and documented
  - Adds a guard for empty content to avoid rendering empty divs
  - Improves maintainability with testable pure functions

Co-authored-by: Claude (Vendor2/Claude-4.5-Opus) <noreply@anthropic.com>
- Use the function_calling method for structured output in the clarify node. The default json_mode doesn't work with Claude models via the GPUGEEK provider; Claude supports tool/function calling natively but not OpenAI's response_format JSON mode.
- Increase recursion_limit from 25 to 50 in agent.astream() to handle complex research tasks with more iterations.

Co-authored-by: Claude Opus 4.5 <noreply@anthropic.com>
When agent_start event arrives before thinking_start, it consumes the loading message. The thinking_start handler then couldn't find a loading message and created a separate thinking message, causing the thinking content to appear as a separate bubble below the agent response. Fix the thinking event handlers to also check for running agent execution messages and attach thinking content to them instead of creating separate messages. Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
Sorry @xinquiry, your pull request is larger than the review limit of 150000 diff characters
@sourcery-ai Review again
Reviewer's Guide

Ensure thinking content for extended-thinking agents is attached to the main agent execution message (or loading message) instead of creating a separate empty assistant bubble, and update streaming handlers to correctly locate and update these thinking-attached messages.

Sequence diagram for thinking_start attaching to agent execution or loading message:

```mermaid
sequenceDiagram
    participant StreamHandler
    participant Channel
    participant Messages
    StreamHandler->>Channel: receive event type thinking_start(id)
    Channel->>Messages: findIndex m.isLoading
    alt loading message found
        Channel->>Messages: attach thinking to loading message
        Messages-->>Channel: message.isThinking = true
    else no loading message
        Channel->>Messages: findLastIndex m.agentExecution.status == running
        alt running agent message found
            Channel->>Messages: attach thinking to running agent message
            Messages-->>Channel: message.isThinking = true
        else no agent execution message
            Channel->>Messages: push new assistant message
            Messages-->>Channel: new message with isThinking = true
        end
    end
    Channel-->>StreamHandler: channel.responding = true
```
Sequence diagram for thinking_chunk and thinking_end locating the thinking message:

```mermaid
sequenceDiagram
    participant StreamHandler
    participant Channel
    participant Messages
    rect rgb(230,230,250)
        StreamHandler->>Channel: receive event type thinking_chunk(id, content)
        Channel->>Messages: findIndex m.id == id
        alt message found by id
            Channel->>Messages: append content to message.thinkingContent
        else not found by id
            Channel->>Messages: findLastIndex m.isThinking && m.agentExecution.status == running
            alt agent thinking message found
                Channel->>Messages: append content to message.thinkingContent
            else no thinking message
                Channel-->>StreamHandler: ignore chunk
            end
        end
    end
    rect rgb(230,255,230)
        StreamHandler->>Channel: receive event type thinking_end(id)
        Channel->>Messages: findIndex m.id == id
        alt message found by id
            Channel->>Messages: set message.isThinking = false
        else not found by id
            Channel->>Messages: findLastIndex m.isThinking && m.agentExecution
            alt agent thinking message found
                Channel->>Messages: set message.isThinking = false
            else no thinking message
                Channel-->>StreamHandler: ignore end
            end
        end
    end
```
Hey - I've found 1 issue, and left some high level feedback:

- In `thinking_end`, the fallback lookup for a thinking message uses `m.isThinking && m.agentExecution`, whereas `thinking_chunk` constrains to `status === 'running'`; consider aligning these conditions (or documenting why they differ) to avoid inconsistent targeting of agent messages.
- Both `thinking_chunk` and `thinking_end` now rely on `findLastIndex`, which is relatively new; if this code runs in environments without native support, you may want to guard or polyfill this to prevent runtime errors.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- In `thinking_end`, the fallback lookup for a thinking message uses `m.isThinking && m.agentExecution`, whereas `thinking_chunk` constrains to `status === 'running'`; consider aligning these conditions (or documenting why they differ) to avoid inconsistent targeting of agent messages.
- Both `thinking_chunk` and `thinking_end` now rely on `findLastIndex`, which is relatively new; if this code runs in environments without native support, you may want to guard or polyfill this to prevent runtime errors.
## Individual Comments
### Comment 1
<location> `web/src/store/slices/chatSlice.ts:1343` </location>
<code_context>
+ channel.messages[loadingIndex] = {
</code_context>
<issue_to_address>
**issue (bug_risk):** Preserve existing message metadata when converting a loading message into a thinking message.
Overwriting `channel.messages[loadingIndex]` like this drops any existing properties on the loading message (e.g., `agentExecution`, attachments, metadata). In the agent case you spread the existing message, but not here. Use something like:
```ts
channel.messages[loadingIndex] = {
...channel.messages[loadingIndex],
isThinking: true,
thinkingContent: '',
content: '',
};
```
to preserve all previously associated data.
</issue_to_address>
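The `findLastIndex` concern above is easy to sidestep with a small helper. This is one way to implement the reviewer's suggestion, not the fix the PR actually shipped; the name `findLastIndexSafe` is made up for illustration.

```typescript
// Guarded alternative to Array.prototype.findLastIndex for runtimes
// that predate its availability (Node < 18, older browsers).
function findLastIndexSafe<T>(
  arr: readonly T[],
  predicate: (item: T, index: number) => boolean,
): number {
  // Walk backwards so the *last* matching element wins.
  for (let i = arr.length - 1; i >= 0; i -= 1) {
    if (predicate(arr[i], i)) return i;
  }
  return -1;
}
```

An alternative is a feature check (`typeof Array.prototype.findLastIndex === "function"`) that delegates to the native method when present; the manual loop above keeps behavior identical everywhere.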
```diff
@@ -1339,28 +1341,56 @@ export const createChatSlice: StateCreator<
       thinkingContent: "",
       content: "",
     };
```
issue (bug_risk): Preserve existing message metadata when converting a loading message into a thinking message.

Overwriting `channel.messages[loadingIndex]` like this drops any existing properties on the loading message (e.g., `agentExecution`, attachments, metadata). In the agent case you spread the existing message, but not here. Use something like:

```ts
channel.messages[loadingIndex] = {
  ...channel.messages[loadingIndex],
  isThinking: true,
  thinkingContent: '',
  content: '',
};
```

to preserve all previously associated data.
Use the consistent condition `m.agentExecution?.status === "running"` in both the `thinking_chunk` and `thinking_end` handlers for finding agent messages.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
🎉 This PR is included in version 1.1.1 🎉

The release is available on GitHub release.

Your semantic-release bot 📦🚀
Summary

- When `agent_start` consumes the loading message first, `thinking_start` now attaches to the agent execution message instead of creating a separate message
- Update the `thinking_chunk` and `thinking_end` handlers to also find agent messages with the `isThinking` flag

Test plan
🤖 Generated with Claude Code
Summary by Sourcery
Ensure thinking content is attached to the appropriate agent message instead of rendering as a separate bubble.
Bug Fixes:
Enhancements: