fix(provider): preserve reasoning_content for DeepSeek thinking mode #7799
Hola-Gracias wants to merge 14 commits into AstrBotDevs:master from
Conversation
Add a reasoning_content field to Message, with validator and serializer support
Pass reasoning_content at the 3 sites that create assistant Messages
Only extract an override from ThinkPart when the message itself has no reasoning_content
Add test to ensure reasoning-only assistant messages are preserved in queries.
Hey - I've found 3 issues and left some high-level feedback:
- The logic that determines when an assistant message is considered 'empty' is now split between `_query` (checking `content`, `tool_calls`, `reasoning_content`) and `Message.check_content_required`; consider centralizing this check or adding a small helper so the conditions stay consistent as more fields are added in the future.
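One way to centralize that check is a single predicate both call sites share. A minimal sketch, with hypothetical names (this is not the codebase's actual helper):

```python
# Hypothetical helper: one place to decide whether an assistant message
# counts as "empty", so _query and Message.check_content_required cannot
# drift apart as more fields are added.
def is_empty_assistant_message(content, tool_calls, reasoning_content) -> bool:
    return (
        not content
        and tool_calls is None
        and reasoning_content is None  # explicit None check, not truthiness
    )
```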
Prompt for AI Agents
Please address the comments from this code review:
## Overall Comments
- The logic that determines when an assistant message is considered 'empty' is now split between `_query` (checking `content`, `tool_calls`, `reasoning_content`) and `Message.check_content_required`; consider centralizing this check or adding a small helper so the conditions stay consistent as more fields are added in the future.
## Individual Comments
### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="201" />
<code_context>
+ Message(
+ role="assistant",
+ content=parts,
+ reasoning_content=llm_resp.reasoning_content or None,
+ )
+ )
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using `or None` may unintentionally discard empty-string reasoning content.
At this and other call sites, `reasoning_content=llm_resp.reasoning_content or None` will turn an empty string into `None`, losing a potentially meaningful value. If the goal is only to normalize an undefined/sentinel value, consider passing `llm_resp.reasoning_content` directly or explicitly checking for that sentinel instead of using truthiness.
Suggested implementation:
```python
self.run_context.messages.append(
Message(
role="assistant",
content=parts,
reasoning_content=llm_resp.reasoning_content,
)
)
```
```python
tool_calls_info=AssistantMessageSegment(
tool_calls=llm_resp.to_openai_to_calls_model(),
content=parts,
reasoning_content=llm_resp.reasoning_content,
),
tool_calls_result=tool_call_result_blocks,
```
If other parts of the codebase also use `reasoning_content=llm_resp.reasoning_content or None` (or similar truthy checks on `reasoning_content`), they should be updated in the same way to preserve empty-string reasoning content consistently. If there is a specific sentinel value used to represent “no reasoning content” (e.g., a special object or constant), those call sites should explicitly check for that sentinel instead of using truthiness.
</issue_to_address>
### Comment 2
<location path="astrbot/core/agent/message.py" line_range="218-219" />
<code_context>
if self.role == "assistant" and self.tool_calls is not None:
return self
+ # assistant + reasoning_content is not None: allow content to be None
+ if self.role == "assistant" and self.reasoning_content:
+ return self
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Truthiness check on `reasoning_content` can mis-handle empty strings.
This treats `reasoning_content == ""` the same as `None`, so `check_content_required` will still enforce non-null `content` and may raise if providers send empty-string reasoning. If any non-`None` reasoning (including empty) should satisfy the requirement, use an explicit `self.reasoning_content is not None` check instead of a truthiness check.
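The difference between the two conditions can be sketched standalone (a simplified illustration, not the actual validator code):

```python
# Minimal sketch of the two candidate conditions in the validator:
def satisfied_by_truthiness(reasoning_content) -> bool:
    return bool(reasoning_content)        # "" behaves like None

def satisfied_by_explicit_check(reasoning_content) -> bool:
    return reasoning_content is not None  # "" counts as present
```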
</issue_to_address>
### Comment 3
<location path="astrbot/core/provider/sources/openai_source.py" line_range="981-984" />
<code_context>
# Some providers (Grok, etc.) reject empty content lists.
# When all parts were think blocks, fall back to None.
message["content"] = new_content or None
- if reasoning_content:
+ if reasoning_content and not message.get("reasoning_content"):
message["reasoning_content"] = reasoning_content
</code_context>
<issue_to_address>
**issue (bug_risk):** Conditional on `reasoning_content` may skip valid but empty reasoning payloads.
This mirrors `Message.check_content_required` in that `reasoning_content=''` will be treated as falsy and skipped. If an empty string is a valid value from the provider and should be preserved, this condition will drop it. To only skip when `reasoning_content` is `None`, use `if reasoning_content is not None and not message.get("reasoning_content"):`.
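A sketch of the suggested condition, with the message simplified to a plain dict (not the provider's actual conversion code):

```python
# Only skip when reasoning_content is genuinely absent (None), and never
# overwrite a top-level reasoning_content already on the message.
def merge_reasoning(message: dict, reasoning_content) -> dict:
    if reasoning_content is not None and not message.get("reasoning_content"):
        message["reasoning_content"] = reasoning_content
    return message
```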
</issue_to_address>
Code Review
This pull request introduces support for `reasoning_content` in assistant messages, enabling the handling of thinking-mode output from providers like DeepSeek. Changes include updates to the `Message` model, agent runners, and the OpenAI provider source to ensure reasoning content is correctly captured, serialized, and preserved during API queries. A new test case verifies that assistant messages containing only reasoning content are not filtered out. Feedback suggests using an explicit `is not None` check for `reasoning_content` in the validation logic to maintain consistency with existing checks and the accompanying comments.
Add tests to verify handling of empty reasoning content in assistant messages.
Add a method to sanitize assistant messages before sending requests to prevent API errors due to empty content.
Ensure proper closure of the client connection.
#7823 fixed with the right solution
Fixes #7798
Fixes a failure of DeepSeek's thinking mode in multi-turn conversations: assistant history messages containing reasoning_content were not saved and passed back to the API, causing the API to return 400 Bad Request: The reasoning_content in the thinking mode must be passed back to the API.
Modifications / 改动点
- Added a reasoning_content field to the internal Message model so persisted assistant history can retain thinking-mode reasoning metadata.
- The tool-loop runner now saves reasoning_content when appending regular assistant messages, tool-call messages, and interrupted assistant messages.
- Updated the OpenAI-compatible payload conversion to keep an existing top-level reasoning_content and only extract from ThinkPart when that field is missing.
- Updated the assistant-message sanitation logic so a message with only reasoning_content is treated as valid instead of being dropped as empty.
- Added a regression test verifying that assistant messages containing only reasoning_content are correctly preserved in OpenAI-compatible payloads.
This is NOT a breaking change. / 这不是一个破坏性变更。
Screenshots or Test Results / 运行截图或测试结果
Local manual verification confirmed that persisted history messages still retain reasoning_content after message binding and OpenAI payload conversion.
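The preserved-message behavior described above can be sketched as a standalone check (a hypothetical filter for illustration, not the PR's actual code or test):

```python
def drop_empty_assistant_messages(messages):
    """Keep any assistant message that still carries content, tool calls,
    or reasoning_content; drop only genuinely empty ones."""
    return [
        m for m in messages
        if not (
            m.get("role") == "assistant"
            and not m.get("content")
            and m.get("tool_calls") is None
            and m.get("reasoning_content") is None
        )
    ]

history = [
    {"role": "user", "content": "hi"},
    {"role": "assistant", "content": None, "tool_calls": None,
     "reasoning_content": "thinking-mode trace that DeepSeek needs back"},
]
kept = drop_empty_assistant_messages(history)
```

Under this sketch the reasoning-only assistant message survives the filter, which is exactly what the 400 error requires.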
Checklist / 检查清单
😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.
/ 如果 PR 中有新加入的功能,已经通过 Issue / 邮件等方式和作者讨论过。
👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.
/ 我的更改经过了良好的测试,并已在上方提供了“验证步骤”和“运行截图”。
🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.
/ 我确保没有引入新依赖库,或者引入了新依赖库的同时将其添加到 requirements.txt 和 pyproject.toml 文件相应位置。
😮 My changes do not introduce malicious code.
/ 我的更改没有引入恶意代码。
Summary by Sourcery
Preserve DeepSeek-style reasoning metadata across assistant messages so thinking-mode conversations remain valid for OpenAI-compatible providers.
Bug Fixes:
Enhancements:
Tests: