fix: deep research structured output and recursion limit #216
Conversation
- Use the function_calling method for structured output in the clarify node. The default json_mode doesn't work with Claude models via the GPUGEEK provider. Claude supports tool/function calling natively but not OpenAI's response_format JSON mode.
- Increase recursion_limit from 25 to 50 in agent.astream() to handle complex research tasks with more iterations.

Co-Authored-By: Claude Opus 4.5 <noreply@anthropic.com>
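A minimal before/after sketch of the structured-output change; `llm` and the `ClarifyWithUser` schema are the clarify node's own names, while the exact call site is assumed:

```python
# Before: the default method resolved to json_mode, which relies on
# OpenAI's response_format and fails for Claude served via GPUGEEK.
structured_llm = llm.with_structured_output(ClarifyWithUser)

# After: function_calling drives structured output through native tool
# calls, which Claude supports.
structured_llm = llm.with_structured_output(
    ClarifyWithUser, method="function_calling"
)
```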
Reviewer's Guide
Adjusts the deep research clarify node to use function-calling-based structured output for better provider compatibility and raises the agent recursion limit to handle more complex tasks before terminating.

Sequence diagram for clarify_node structured output using function_calling
sequenceDiagram
participant ClarifyNode
participant LLMFactory
participant LLM
participant StructuredLLM
ClarifyNode->>LLMFactory: llm_factory(temperature=0.3)
LLMFactory-->>ClarifyNode: llm
ClarifyNode->>ClarifyNode: messages_str = get_buffer_string(state.messages)
ClarifyNode->>ClarifyNode: date_str = get_today_str()
ClarifyNode->>ClarifyNode: prompt = CLARIFY_WITH_USER_PROMPT.format(messages, date)
ClarifyNode->>LLM: with_structured_output(ClarifyWithUser, method=function_calling)
LLM-->>ClarifyNode: llm_with_struct
ClarifyNode->>StructuredLLM: ainvoke([HumanMessage(content=prompt)])
StructuredLLM-->>ClarifyNode: response
ClarifyNode->>ClarifyNode: result = ClarifyWithUser.model_validate(response)
ClarifyNode-->>ClarifyNode: use result for clarification state
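Read as code, the sequence above corresponds roughly to the sketch below; `llm_factory`, `CLARIFY_WITH_USER_PROMPT`, and `get_today_str` are the repo helpers named in the diagram, and the schema fields are placeholders, not the actual implementation:

```python
from langchain_core.messages import HumanMessage, get_buffer_string
from pydantic import BaseModel


class ClarifyWithUser(BaseModel):
    # Illustrative fields; the real schema is defined in the repo.
    need_clarification: bool
    question: str


async def clarify_node(state):
    llm = llm_factory(temperature=0.3)
    messages_str = get_buffer_string(state.messages)
    prompt = CLARIFY_WITH_USER_PROMPT.format(
        messages=messages_str, date=get_today_str()
    )
    # function_calling uses native tool calls (supported by Claude) instead
    # of OpenAI's response_format JSON mode, which Claude models served via
    # GPUGEEK reject.
    llm_with_struct = llm.with_structured_output(
        ClarifyWithUser, method="function_calling"
    )
    response = await llm_with_struct.ainvoke([HumanMessage(content=prompt)])
    result = ClarifyWithUser.model_validate(response)
    return result  # feeds the clarification state
```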
Sequence diagram for agent.astream with increased recursion_limit
sequenceDiagram
participant ChatProcessor
participant Agent
ChatProcessor->>ChatProcessor: prepare history_messages
ChatProcessor->>Agent: astream({messages: history_messages}, stream_mode=[updates, messages], config={recursion_limit: 50})
loop for each chunk
Agent-->>ChatProcessor: (mode, data)
ChatProcessor->>ChatProcessor: chunk_count += 1
ChatProcessor->>ChatProcessor: handle mode and data
end
Agent-->>ChatProcessor: stream completed
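A sketch of the streaming loop above, assuming a LangGraph agent; handler bodies are elided:

```python
async def _process_agent_stream(agent, history_messages):
    chunk_count = 0
    async for mode, data in agent.astream(
        {"messages": history_messages},
        stream_mode=["updates", "messages"],
        # LangGraph's default recursion limit is 25; the PR raises it to 50
        # so long deep-research runs don't stop with GraphRecursionError.
        config={"recursion_limit": 50},
    ):
        chunk_count += 1
        if mode == "updates":
            ...  # per-node state updates
        elif mode == "messages":
            ...  # streamed LLM tokens
```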
Class diagram for deep research clarify and agent streaming changes
classDiagram
class ClarifyState {
}
class ClarifyWithUser {
}
class LLMFactory {
+llm_factory(temperature)
}
class LLM {
+with_structured_output(output_type, method)
}
class StructuredLLM {
+ainvoke(messages)
}
class ClarifyNode {
+clarify_node(state)
}
class Agent {
+astream(input, stream_mode, config)
}
class ChatProcessor {
+_process_agent_stream(agent, history_messages)
}
ClarifyNode --> ClarifyState
ClarifyNode --> ClarifyWithUser
ClarifyNode --> LLMFactory
LLMFactory --> LLM
LLM --> StructuredLLM
ClarifyNode --> StructuredLLM
ChatProcessor --> Agent
ChatProcessor --> ClarifyNode
Agent ..> ClarifyWithUser
Agent ..> ClarifyState
StructuredLLM ..> ClarifyWithUser
ClarifyWithUser <.. ChatProcessor
Hey - I've left some high level feedback:
- The hard-coded `recursion_limit=50` in `_process_agent_stream` may be better expressed as a configurable setting or shared constant so it can be tuned per environment without code changes.
- In `clarify_node`, consider instantiating `llm.with_structured_output(ClarifyWithUser, method="function_calling")` once per LLM instance (or at a higher level) if this pattern is reused elsewhere, to avoid scattered configuration and keep provider-specific behavior centralized.
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The hard-coded `recursion_limit=50` in `_process_agent_stream` may be better expressed as a configurable setting or shared constant so it can be tuned per environment without code changes.
- In `clarify_node`, consider instantiating `llm.with_structured_output(ClarifyWithUser, method="function_calling")` once per LLM instance (or at a higher level) if this pattern is reused elsewhere, to avoid scattered configuration and keep provider-specific behavior centralized.

Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.
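One way to act on the first comment, sketched with pydantic-settings; `AgentSettings` and the `AGENT_RECURSION_LIMIT` variable are illustrative, not repo code:

```python
from pydantic_settings import BaseSettings


class AgentSettings(BaseSettings):
    # Tunable per environment via the AGENT_RECURSION_LIMIT env var;
    # defaults to the value the PR hard-codes.
    agent_recursion_limit: int = 50


settings = AgentSettings()

# In _process_agent_stream, the literal then becomes:
# config={"recursion_limit": settings.agent_recursion_limit}
```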
* feat: add drag-and-drop agent reordering and auto-update on version mismatch (#206)
  * fix: resolve i18next missing key warnings in TierInfoModal
    Replace hardcoded Chinese strings with proper i18n translation keys in the tier selector component. This fixes console warnings about missing translation keys when using the zh locale.
    - Add speedLabels, reasoningLabels, and features keys to app.json
    - Add multiplier key to tierSelector in all locales
    - Add recommended key to common.json in all locales
    - Refactor TierInfoModal.tsx to use translation key references
  * feat: add drag-and-drop agent reordering
    Integrate dnd-kit into the existing AgentList and AgentListItem components to support drag-and-drop reordering in both the sidebar and the spatial focused view.
    Backend:
    - Add sort_order field to Agent model
    - Add PATCH endpoint for bulk reordering agents
    - Add migration for sort_order column
    Frontend:
    - Add sortable prop to AgentList with DndContext/SortableContext
    - Add dragHandleProps to AgentListItem for drag behavior
    - Use plain div (not motion.div) when sortable to avoid animation conflicts
    - Use set-based comparison for state sync (only reset on add/remove)
    - Add reorderAgents action to agentSlice
  * feat: auto-refresh frontend on version mismatch with backend
    Add an automatic update mechanism that detects when the frontend version doesn't match the backend version and refreshes the page to fetch the latest assets. This ensures users with cached frontends always get updated code without manually clearing their browser cache.
    - Add useAutoUpdate hook with retry logic (max 3 attempts)
    - Add UpdateOverlay component for update feedback
    - Clear service workers and caches before reload
    - Store retry state in localStorage to prevent infinite loops
  * fix: address PR review feedback for drag-and-drop agent reordering
    - Move AgentList state sync from render to useEffect to prevent render loops
    - Add isDraggingRef to prevent backend sync during active drag operations
    - Restore keyboard accessibility to CompactAgentListItem (role, tabIndex, aria-pressed, onKeyDown handler)
    - Guard localStorage writes with try/catch in useAutoUpdate to handle restricted environments
* fix: resolve version mismatch causing refresh loops in test environment (#207)
  * fix: move the comment to the correct position
  * fix: fix the test environment version
  * fix: remove auto-update feature to prevent refresh loops
    The auto-update mechanism causes infinite refresh loops when frontend and backend versions mismatch in the test environment. Remove the feature entirely until a more robust solution is implemented.
    - Delete useAutoUpdate hook and UpdateOverlay component
    - Remove AutoUpdateWrapper from App.tsx
* Fix: Unify tool call UI (pills in agent timeline) for streaming + refresh (#208)
  * feat: add simple landing page and logout button
  * fix(web): unify tool call rendering in agent timeline after refresh; add landing page
    - Render tool calls as pills with a shared details modal (args/results/errors)
    - Attach historical tool_call tool messages into agentExecution phases instead of standalone messages
    - Remove legacy ToolCallCard-based rendering path
  * fix(web): address review feedback for tool call modal + typewriter
    - Move tool-call UI strings into i18n (app.chat.toolCall.*) for en/zh/ja
    - Memoize tool result parsing and image URL derivation in ToolCallDetails
    - Avoid duplicate argument headings in ToolCallDetailsModal for waiting_confirmation
    - Remove redundant typewriter cursor className conditional and fix unused state var
* feat: message editing and deletion (#209)
  * feat: add message editing and deletion with truncate-regenerate flow
    - Add PATCH /messages/{id} endpoint for editing user messages
    - Add DELETE /messages/{id} endpoint for deleting any message
    - Add regenerate WebSocket handler for re-running the agent after an edit
    - Add edit/delete UI to ChatBubble with hover actions
    - Add i18n translations for en/zh/ja
    Includes fixes from code review:
    - Fix pre-deduction error handling to skip dispatch on any failure
    - Reset responding state before regeneration to prevent stuck UI
    - Add message ownership verification before edit/delete operations
  * fix: improve the code according to sourcery review
* fix: use version+SHA for beta image tags to ensure unique deployments (#211)
  The previous approach used only the pyproject.toml version (e.g., 1.0.16), which caused Kubernetes to not pull new images when multiple commits used the same version tag. Now uses the format {version}-{short-sha} (e.g., 1.0.16-f13e3c0).
* feat: conversation interrupt/abort functionality (#212)
  * feat: add conversation interrupt/abort functionality
    - Add stop button and Escape key shortcut to abort streaming generation
    - Implement Redis-based signaling between the API server and the Celery worker
    - Worker checks the abort signal every 0.5s and gracefully stops streaming
    - Save partial content and perform partial billing on abort
    - Add visual indicator for cancelled/aborted messages
    - Add timeout fallback (10s) to reset the UI if the backend doesn't respond
    - Add i18n strings for stop/stopping/escToStop in en/zh/ja
  * fix: address abort feature edge cases from code review
    - Clear stale abort signals at task start to prevent a race condition when the user reconnects quickly after a disconnect
    - Finalize AgentRun with 'failed' status on unhandled exceptions to ensure consistent DB state across all exit paths
    - Move time import to module level (was an inline import)
  * fix: address Sourcery review feedback for abort feature
    - Reuse the existing Redis client for abort checks instead of creating new connections on each tick (performance improvement)
    - Fix potential Redis connection leaks in ABORT and disconnect handlers by using a try/finally pattern
    - Track and clean up the abort timeout in the frontend to prevent stale timers from racing with subsequent abort requests
* Preserve phase text when copying streamed output (#210)
  * Preserve phase text when copying streamed output
  * Extract agent phase content helper
  * fix: use existing PhaseExecution type and correct useMemo dependencies
    - Replace the inline PhaseWithStreamedContent type with Pick<PhaseExecution, 'streamedContent'> for type consistency across the codebase
    - Fix the useMemo dependency array to use agentExecution?.phases instead of agentExecution to ensure proper recalculation when the phases array changes
* Fix streaming agent messages and prevent deleting non-persisted messages (#213)
  * Fix message deletion for agent executions
  * Address Sourcery review: stricter UUID validation and safer id assignment
    - Add isValidUuid utility with the canonical UUID pattern (8-4-4-4-12 format) to replace a loose regex that accepted invalid strings like all-hyphens
    - Fix the streaming_start handler to only set id when eventData.id is truthy, preventing accidental overwrites with undefined/null
    - Improve the delete guard with contextual messages ("still streaming" vs "not saved yet") and change the notification type to warning
    - Add comprehensive tests for isValidUuid covering valid UUIDs, client IDs, invalid formats, and edge cases
* fix: emit message_saved event after stream abort (#215)
  When a user interrupts a streaming response, the message is saved to the database but the frontend never receives the message_saved event. This leaves the message with a temporary stream_ prefix ID, preventing deletion until page refresh. Now the MESSAGE_SAVED event is emitted after db.commit() in the abort handler, before STREAM_ABORTED, so the frontend updates the message ID to the real UUID and deletion works immediately.
* fix: always show latest topic when clicking an agent
  Unify the sidebar and the spatial workspace to use the same logic for selecting topics. Both now fetch from the backend and always show the most recently updated topic (by updated_at) instead of remembering previously active topics.
* feat: improve message editing UX with edit-only option and assistant editing
  - Add "Edit" and "Edit & Regenerate" dropdown options for user messages
  - Allow editing assistant messages (content-only, no regeneration)
  - Add copy button to user messages
  - Move assistant message actions to top-right for better UX
  - Add auto-resizing textarea for editing long messages
  - Update backend to support truncate_and_regenerate flag
* refactor: extract message content resolution into dedicated module
  Extract scattered content resolution logic into core/chat/messageContent.ts with two main utilities:
  - resolveMessageContent(): single source of truth for content priority
  - getMessageDisplayMode(): explicit rendering mode determination
  This refactoring:
  - Reduces ChatBubble.tsx complexity (60+ line IIFE → 30 line switch)
  - Fixes the inconsistency between copy/edit and display logic
  - Makes content source priority explicit and documented
  - Adds a guard for empty content to avoid rendering empty divs
  - Improves maintainability with testable pure functions
* fix: deep research structured output and recursion limit (#216)
  - Use the function_calling method for structured output in the clarify node. The default json_mode doesn't work with Claude models via the GPUGEEK provider. Claude supports tool/function calling natively but not OpenAI's response_format JSON mode.
  - Increase recursion_limit from 25 to 50 in agent.astream() to handle complex research tasks with more iterations.
* fix: attach thinking content to agent execution message
  When the agent_start event arrives before thinking_start, it consumes the loading message. The thinking_start handler then couldn't find a loading message and created a separate thinking message, causing the thinking content to appear as a separate bubble below the agent response. Fix the thinking event handlers to also check for running agent execution messages and attach thinking content to them instead of creating separate messages.
* fix: align thinking_end condition with thinking_chunk
  Use the consistent condition m.agentExecution?.status === "running" in both the thinking_chunk and thinking_end handlers for finding agent messages.
Summary
- Use `method="function_calling"` for structured output in the deep research clarify node

Problem
- `json_mode` for `with_structured_output()` doesn't work with Claude models via the GPUGEEK provider
- Claude supports tool/function calling natively, but not OpenAI's `response_format` JSON mode

Solution
- Use `method="function_calling"`, which works across all providers (OpenAI, Gemini, Claude)
- Increase `recursion_limit` to 50 in the `agent.astream()` config

Test plan
🤖 Generated with Claude Code
Summary by Sourcery
Adjust deep research clarification to use provider-agnostic structured output and increase agent recursion capacity for complex tasks.
Bug Fixes:
- Use the function_calling method for structured output in the clarify node so Claude models served via the GPUGEEK provider work again.

Enhancements:
- Raise the agent recursion limit from 25 to 50 to allow more iterations on complex research tasks.