fix: update reasoning_content handling to support empty string values#7830

Merged
Soulter merged 2 commits into master from fix/reasoning-content
Apr 27, 2026
Conversation

@Soulter
Member

@Soulter Soulter commented Apr 27, 2026

fixes: #7823
fixes: #7782 #7826

Modifications

  • This is NOT a breaking change.

Screenshots or Test Results


Checklist

  • 😊 If the PR adds new features, I have discussed them with the authors through issues, email, etc.

  • 👀 My changes have been well tested, and "Verification Steps" and "Screenshots" have been provided above.

  • 🤓 I have ensured that no new dependencies are introduced, OR, if new dependencies are introduced, they have been added to the appropriate locations in requirements.txt and pyproject.toml.

  • 😮 My changes do not introduce malicious code.

Summary by Sourcery

Adjust reasoning content handling across providers and agent runner to distinguish between missing and empty reasoning, and to avoid treating absent reasoning as an error.

Bug Fixes:

  • Fix reasoning content extraction from OpenAI responses to correctly handle None vs empty string and avoid attribute access on missing fields.
  • Prevent errors when checking for reasoning output by safely handling None reasoning_content for OpenAI, Anthropic, and Gemini providers.
  • Ensure tool loop agent runner emits think parts only when reasoning content is present or signed, defaulting to empty string when needed.

Enhancements:

  • Allow messages with only think parts to carry reasoning_content while keeping content lists compatible with providers that reject empty content arrays.
  • Update LLMResponse to represent reasoning_content as an optional field, clarifying the difference between absent and empty reasoning.

@dosubot dosubot Bot added size:M This PR changes 30-99 lines, ignoring generated files. area:provider The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner. labels Apr 27, 2026
Contributor

@gemini-code-assist gemini-code-assist Bot left a comment


Code Review

This pull request changes the default value of reasoning_content in the LLMResponse class from an empty string to None. Corresponding updates were made across the codebase, including the tool loop agent runner and various provider sources (Anthropic, Gemini, and OpenAI), to ensure proper handling of null values. Additionally, the reasoning content extraction logic in the OpenAI source was refactored for better robustness, and specific Deepseek v4 handling in payload conversion was replaced with a more generalized approach. I have no feedback to provide.

Contributor

@sourcery-ai sourcery-ai Bot left a comment


Hey - I've found 1 issue, and left some high level feedback:

  • In _finally_convert_payload you’ve removed the DeepSeek v4 special-casing for reasoning_content (forcing a non-empty string of 'none'); please double-check whether this is still required by that API, and if so either reintroduce it or explain the new behavior in terms of provider expectations.
  • Now that reasoning_content is nullable, it would be good to run a quick pass over the codebase to ensure all remaining call sites treat it as str | None (e.g., avoid direct .strip() calls without the (llm_response.reasoning_content or "") pattern you’ve introduced here).
## Individual Comments

### Comment 1
<location path="astrbot/core/provider/sources/openai_source.py" line_range="985-994" />
<code_context>
-        deepseek_reasoning_models = {"deepseek-v4-pro", "deepseek-v4-flash"}
</code_context>
<issue_to_address>
**issue (bug_risk):** Consider the impact of dropping Deepseek v4-specific handling for empty reasoning content.

Previously we had a Deepseek v4-specific fallback that forced `message["reasoning_content"] = "none"` when the value was effectively empty, to satisfy their API. With that removed, we may now send empty/whitespace-only reasoning content whenever a `think` part is present, which could reintroduce server-side errors for those models.

Please either retain a minimal Deepseek v4 fallback for empty reasoning content or ensure that when `reasoning_content_present` is true but `reasoning_content.strip()` is empty, we set a known-safe placeholder. Otherwise Deepseek v4 users may see regressions in these cases.
</issue_to_address>
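A minimal fallback of the kind the reviewer suggests could look like the sketch below. The message shape and the `"none"` placeholder follow the review comment; the helper name and flag are hypothetical, and whether DeepSeek v4 still requires this is exactly the open question raised above:

```python
# Hypothetical fallback: if a think part is present but the reasoning
# text is missing, empty, or whitespace-only, substitute a known-safe
# placeholder ("none", per the old DeepSeek v4 special-casing) instead
# of sending an empty string the API might reject.
def apply_reasoning_fallback(message: dict, reasoning_present: bool) -> dict:
    if reasoning_present:
        text = (message.get("reasoning_content") or "").strip()
        message["reasoning_content"] = text if text else "none"
    return message


msg = apply_reasoning_fallback({"role": "assistant", "reasoning_content": "  "}, True)
assert msg["reasoning_content"] == "none"

# When no think part is present, the message is left untouched:
untouched = apply_reasoning_fallback({"role": "assistant"}, False)
assert "reasoning_content" not in untouched
```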


@Soulter Soulter merged commit 415da21 into master Apr 27, 2026
21 checks passed


Development

Successfully merging this pull request may close these issues.

[Bug] deepseekv4 seems to have changed something last night and started throwing errors

1 participant