fix(provider): preserve reasoning_content for DeepSeek thinking mode #7799

Closed

Hola-Gracias wants to merge 14 commits into AstrBotDevs:master from Hola-Gracias:master

Conversation


@Hola-Gracias Hola-Gracias commented Apr 25, 2026

Fixes #7798
Fixes the failure of DeepSeek thinking mode in multi-turn conversations: assistant history messages containing reasoning_content were not saved and passed back to the API, so the API returned 400 Bad Request: The reasoning_content in the thinking mode must be passed back to the API.

Modifications

  • Added a reasoning_content field to the internal Message model so that persisted assistant history retains thinking-mode reasoning metadata.

  • When the tool loop runner appends plain assistant messages, tool-call messages, and aborted assistant messages, it now stores reasoning_content alongside them.

  • Updated the OpenAI-compatible payload conversion to preserve an existing top-level reasoning_content, extracting it from ThinkPart only when the field is absent.

  • Updated the assistant-message sanitization logic so that assistant messages carrying only reasoning_content are treated as valid instead of being dropped as empty.

  • Added a regression test verifying that assistant messages containing only reasoning_content are preserved in the OpenAI-compatible payload.

  • This is NOT a breaking change.

Screenshots or Test Results

Local manual verification confirmed that persisted history messages retain reasoning_content after message binding and after OpenAI payload conversion:

{'role': 'assistant', 'content': 'final', 'reasoning_content': 'deepseek reasoning'}
{'role': 'assistant', 'content': 'final', 'reasoning_content': 'deepseek reasoning'}

Checklist / 检查清单

  • 😊 If there are new features added in the PR, I have discussed it with the authors through issues/emails, etc.

  • 👀 My changes have been well-tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Preserve DeepSeek-style reasoning metadata across assistant messages so thinking-mode conversations remain valid for OpenAI-compatible providers.

Bug Fixes:

  • Ensure assistant history messages with only reasoning_content are not filtered out when building OpenAI-compatible payloads, preventing 400 errors from thinking-mode providers.

Enhancements:

  • Extend the internal Message model and tool loop runner to store and propagate reasoning_content for assistant messages, including tool call and aborted responses.

Tests:

  • Add a regression test confirming reasoning-only assistant messages are preserved when querying via the OpenAI source.

Add a reasoning_content field to Message, with validator and serializer support
Pass reasoning_content at the three sites where assistant Message objects are created
Fall back to extracting reasoning_content from ThinkPart only when the message itself has none
Add test to ensure reasoning-only assistant messages are preserved in queries.
@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 25, 2026
Contributor

@sourcery-ai sourcery-ai Bot left a comment

Hey - I've found 3 issues and left some high-level feedback:

  • The logic that determines when an assistant message is considered 'empty' is now split between _query (checking content, tool_calls, reasoning_content) and Message.check_content_required; consider centralizing this check or adding a small helper so the conditions stay consistent as more fields are added in the future.
Prompt for AI Agents
Please address the comments from this code review:

## Overall Comments
- The logic that determines when an assistant message is considered 'empty' is now split between `_query` (checking `content`, `tool_calls`, `reasoning_content`) and `Message.check_content_required`; consider centralizing this check or adding a small helper so the conditions stay consistent as more fields are added in the future.

## Individual Comments

### Comment 1
<location path="astrbot/core/agent/runners/tool_loop_agent_runner.py" line_range="201" />
<code_context>
+            Message(
+                role="assistant",
+                content=parts,
+                reasoning_content=llm_resp.reasoning_content or None,
+            )
+        )
</code_context>
<issue_to_address>
**suggestion (bug_risk):** Using `or None` may unintentionally discard empty-string reasoning content.

At this and other call sites, `reasoning_content=llm_resp.reasoning_content or None` will turn an empty string into `None`, losing a potentially meaningful value. If the goal is only to normalize an undefined/sentinel value, consider passing `llm_resp.reasoning_content` directly or explicitly checking for that sentinel instead of using truthiness.

Suggested implementation:

```python
        self.run_context.messages.append(
            Message(
                role="assistant",
                content=parts,
                reasoning_content=llm_resp.reasoning_content,
            )
        )

```

```python
                tool_calls_info=AssistantMessageSegment(
                    tool_calls=llm_resp.to_openai_to_calls_model(),
                    content=parts,
                    reasoning_content=llm_resp.reasoning_content,
                ),
                tool_calls_result=tool_call_result_blocks,

```

If other parts of the codebase also use `reasoning_content=llm_resp.reasoning_content or None` (or similar truthy checks on `reasoning_content`), they should be updated in the same way to preserve empty-string reasoning content consistently. If there is a specific sentinel value used to represent “no reasoning content” (e.g., a special object or constant), those call sites should explicitly check for that sentinel instead of using truthiness.
</issue_to_address>
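The empty-string pitfall called out above can be reproduced in isolation, in plain Python independent of the AstrBot types:

```python
reasoning = ""

# `or None` collapses any falsy value, including the empty string, to None.
normalized_truthy = reasoning or None
print(normalized_truthy)  # None: the "" reasoning is silently discarded

# An explicit identity check keeps "" and only maps a true None through.
normalized_explicit = reasoning if reasoning is not None else None
print(repr(normalized_explicit))  # ''
```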

### Comment 2
<location path="astrbot/core/agent/message.py" line_range="218-219" />
<code_context>
         if self.role == "assistant" and self.tool_calls is not None:
             return self

+        # assistant + reasoning_content is not None: allow content to be None
+        if self.role == "assistant" and self.reasoning_content:
+            return self
+
</code_context>
<issue_to_address>
**issue (bug_risk):** Truthiness check on `reasoning_content` can mis-handle empty strings.

This treats `reasoning_content == ""` the same as `None`, so `check_content_required` will still enforce non-null `content` and may raise if providers send empty-string reasoning. If any non-`None` reasoning (including empty) should satisfy the requirement, use an explicit `self.reasoning_content is not None` check instead of a truthiness check.
</issue_to_address>

### Comment 3
<location path="astrbot/core/provider/sources/openai_source.py" line_range="981-984" />
<code_context>
                 # Some providers (Grok, etc.) reject empty content lists.
                 # When all parts were think blocks, fall back to None.
                 message["content"] = new_content or None
-                if reasoning_content:
+                if reasoning_content and not message.get("reasoning_content"):
                     message["reasoning_content"] = reasoning_content

</code_context>
<issue_to_address>
**issue (bug_risk):** Conditional on `reasoning_content` may skip valid but empty reasoning payloads.

This mirrors `Message.check_content_required` in that `reasoning_content=''` will be treated as falsy and skipped. If an empty string is a valid value from the provider and should be preserved, this condition will drop it. To only skip when `reasoning_content` is `None`, use `if reasoning_content is not None and not message.get("reasoning_content"):`.
</issue_to_address>

Sourcery is free for open source - if you like our reviews please consider sharing them ✨
Help me be more useful! Please click 👍 or 👎 on each comment and I'll use the feedback to improve your reviews.

Comment thread astrbot/core/agent/runners/tool_loop_agent_runner.py Outdated
Comment thread astrbot/core/agent/message.py Outdated
Comment thread astrbot/core/provider/sources/openai_source.py
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment

Code Review

This pull request introduces support for reasoning_content in assistant messages, enabling the handling of thinking mode output from providers like DeepSeek. Changes include updates to the Message model, agent runners, and the OpenAI provider source to ensure reasoning content is correctly captured, serialized, and preserved during API queries. A new test case verifies that assistant messages containing only reasoning content are not filtered out. Feedback suggests using an explicit is not None check for reasoning_content in the validation logic to maintain consistency with existing checks and the accompanying comments.

Comment thread astrbot/core/agent/message.py Outdated
Add a method to sanitize assistant messages before sending requests to prevent API errors due to empty content.
@dosubot dosubot Bot added size:L This PR changes 100-499 lines, ignoring generated files. and removed size:M This PR changes 30-99 lines, ignoring generated files. labels Apr 26, 2026
@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. and removed size:L This PR changes 100-499 lines, ignoring generated files. labels Apr 26, 2026
Member

Soulter commented Apr 27, 2026

#7823 fixed this with the right solution

@Soulter Soulter closed this Apr 27, 2026

Labels

area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. size:M This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Bug] DeepSeek model (thinking mode): API requests fail with 400 because the reasoning_content field is not saved and passed back

2 participants