
Propagate reasoning_content through LLM pipeline (fixes #563)#565

Merged
eanzhao merged 4 commits into feature/lark-bot from fix/2026-05-04_deepseek-reasoning-content
May 4, 2026

Conversation

Contributor

@eanzhao eanzhao commented May 4, 2026

Summary

  • Adds ReasoningContent field to ChatMessage, LLMResponse, and LLMStreamChunk to carry thinking-mode reasoning content through the pipeline
  • Propagates reasoning content through ChatRuntime streaming rounds and appends it to conversation history for multi-turn conversations
  • Implements ExtractReasoningContent in MEAILLMProvider for both streaming and non-streaming paths
  • Fixes DeepSeek v4-pro HTTP 400: "The reasoning_content in the thinking mode must be passed back to the API"

Impact

  • src/Aevatar.AI.Abstractions/LLMProviders/LLMRequest.cs — ChatMessage.ReasoningContent
  • src/Aevatar.AI.Abstractions/LLMProviders/LLMResponse.cs — LLMResponse.ReasoningContent, LLMStreamChunk.DeltaReasoningContent
  • src/Aevatar.AI.Core/Chat/ChatRuntime.cs — streaming round propagation + history append
  • src/Aevatar.AI.LLMProviders.MEAI/MEAILLMProvider.cs — ExtractReasoningContent + ConvertResponse wiring
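The model changes above amount to optional reasoning fields on three message types. A minimal sketch (property names are taken from the Impact list; the surrounding class shapes are assumptions, not the actual source):

```csharp
// Sketch only — the real types live in Aevatar.AI.Abstractions; shapes assumed.
public class ChatMessage
{
    public string Role { get; set; } = "user";
    public string? Content { get; set; }

    // Thinking-mode reasoning from a prior assistant turn; providers such as
    // DeepSeek v4-pro require this to be echoed back on follow-up requests.
    public string? ReasoningContent { get; set; }
}

public class LLMResponse
{
    public string? Content { get; set; }
    public string? ReasoningContent { get; set; }
}

public class LLMStreamChunk
{
    public string? DeltaContent { get; set; }

    // Incremental reasoning tokens, accumulated per streaming round by ChatRuntime.
    public string? DeltaReasoningContent { get; set; }
}
```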

Verification

  • dotnet build — 0 errors
  • dotnet test test/Aevatar.AI.Tests — 500 passed, 0 failed
  • bash tools/ci/architecture_guards.sh — all passed

Closes #563

DeepSeek v4-pro with thinking mode rejects requests when reasoning_content
from prior assistant turns is not echoed back. This change:

- Adds ReasoningContent to ChatMessage, LLMResponse, and LLMStreamChunk
- Propagates reasoning content through ChatRuntime streaming rounds
- Appends reasoning_content to conversation history for multi-turn
- Implements ExtractReasoningContent in MEAILLMProvider
- Wires reasoning content into non-streaming ConvertResponse

Closes #563

@chatgpt-codex-connector chatgpt-codex-connector Bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: b8128cb977


// ChatResponse.Messages contains all reply messages
var lastMessage = response.Messages.LastOrDefault();
var content = ExtractMessageText(lastMessage);
var reasoningContent = ExtractReasoningContent(lastMessage);

P1 Badge Include reasoning content in outbound MEAI chat messages

This change extracts and stores ReasoningContent from responses, but it does not add the corresponding outbound serialization in ConvertMessages (which still only emits text/content parts/tool calls). In multi-turn thinking-mode sessions, assistant turns are sent back without reasoning_content, so providers that require replaying reasoning (the DeepSeek 400 case described in this commit) will still fail on follow-up requests.


Contributor Author


Fixed in 2690a61: ConvertMessages() now emits TextReasoningContent for assistant messages with ReasoningContent, including the tool-calls branch.
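For context, the fix described here plausibly takes the following shape inside ConvertMessages() (a sketch only; the variable names msg/meaiMsg and the branch placement are assumptions — TextReasoningContent is the Microsoft.Extensions.AI content type):

```csharp
// Sketch of the outbound branch, not the actual implementation.
// msg is the pipeline's ChatMessage; meaiMsg is the Microsoft.Extensions.AI message.
if (msg.Role == "assistant" && !string.IsNullOrEmpty(msg.ReasoningContent))
{
    // Replay prior-turn reasoning so thinking-mode providers
    // (e.g. DeepSeek v4-pro) accept the follow-up request.
    meaiMsg.Contents.Add(new TextReasoningContent(msg.ReasoningContent));
}
```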

Comment thread src/Aevatar.AI.Core/Chat/ChatRuntime.cs Outdated
}

-AppendAssistantMessage(messages, pendingHistoryMessages, parsed.CleanedContent, toolCalls: null);
+AppendAssistantMessage(messages, pendingHistoryMessages, parsed.CleanedContent, reasoningContent: null, toolCalls: null);

P1 Badge Preserve reasoning in parsed tool-call fallback history

In the DSML/XML fallback path, assistant history is appended with reasoningContent: null even when the streamed round produced reasoning. That drops the reasoning trace before the next LLM round, so conversations that hit this fallback can still violate providers that require prior reasoning_content to be echoed back, causing the same follow-up request failures this patch is trying to prevent.


Contributor Author


Fixed in 2690a61: DSML fallback path now passes roundResult.ReasoningContent instead of null to AppendAssistantMessage.

Contributor Author

@eanzhao eanzhao left a comment


Review

🔴 Blocking: ConvertMessages() does not send ReasoningContent back to DeepSeek

The PR correctly captures ReasoningContent and stores it in ChatMessage, but MEAILLMProvider.ConvertMessages() (MEAILLMProvider.cs:229) never reads msg.ReasoningContent when building the outgoing request.

  • Inbound: DeepSeek returns reasoning → captured ✓ → stored ✓ → appended to history ✓
  • Outbound: next request → ConvertMessages() builds the request → ReasoningContent not included → never sent back to DeepSeek

The error quoted in issue #563 is reasoning_content must be passed back to the API. Without outbound replay, the HTTP 400 will still occur.

Suggested fix: in ConvertMessages(), when msg.Role == "assistant" and msg.ReasoningContent is non-empty, append the reasoning to meaiMsg.Contents as a TextReasoningContent.

🟡 Other issues to fix

  1. ChatHistory.Export/Import + SerializableMessage — serialization omits ReasoningContent, so persisted history loses reasoning across sessions/reloads.
  2. ChatRuntime.BuildSyntheticChunks() (L843) — converting a non-streaming response into synthetic chunks never emits DeltaReasoningContent, so reasoning from the Terminate/fallback paths never reaches streaming consumers.
  3. ToolCallLoop.cs — multiple ChatMessage.Assistant(...) callsites (L114/147/153/171/189/225/244/248) do not pass reasoning, so reasoning is lost in multi-round tool-call conversations.

✅ Correct parts

  • The ChatMessage.ReasoningContent and LLMResponse.ReasoningContent model changes are correct
  • StreamLlmRoundCoreAsync streaming accumulation and AppendAssistantMessage propagation are complete
  • MEAILLMProvider.ConvertResponse non-streaming extraction via ExtractReasoningContent is correct


codecov Bot commented May 4, 2026

Codecov Report

❌ Patch coverage is 75.00000% with 14 lines in your changes missing coverage. Please review.
✅ Project coverage is 72.08%. Comparing base (00fa7c7) to head (e757a3b).
⚠️ Report is 8 commits behind head on feature/lark-bot.

Files with missing lines Patch % Lines
src/Aevatar.AI.Core/Chat/ChatRuntime.cs 75.00% 4 Missing and 2 partials ⚠️
...rc/Aevatar.AI.LLMProviders.MEAI/MEAILLMProvider.cs 72.22% 3 Missing and 2 partials ⚠️
src/Aevatar.AI.Core/Tools/ToolCallLoop.cs 62.50% 0 Missing and 3 partials ⚠️
@@                 Coverage Diff                  @@
##           feature/lark-bot     #565      +/-   ##
====================================================
+ Coverage             72.01%   72.08%   +0.07%     
====================================================
  Files                  1255     1255              
  Lines                 90723    90759      +36     
  Branches              11877    11891      +14     
====================================================
+ Hits                  65331    65428      +97     
+ Misses                20706    20646      -60     
+ Partials               4686     4685       -1     
Flag Coverage Δ
ci 72.08% <75.00%> (+0.07%) ⬆️

Flags with carried forward coverage won't be shown.

Files with missing lines Coverage Δ
...Aevatar.AI.Abstractions/LLMProviders/LLMRequest.cs 71.92% <100.00%> (+0.50%) ⬆️
...evatar.AI.Abstractions/LLMProviders/LLMResponse.cs 100.00% <100.00%> (ø)
src/Aevatar.AI.Core/Chat/ChatHistory.cs 81.91% <100.00%> (+0.59%) ⬆️
src/Aevatar.AI.Core/Tools/ToolCallLoop.cs 89.03% <62.50%> (+13.26%) ⬆️
...rc/Aevatar.AI.LLMProviders.MEAI/MEAILLMProvider.cs 46.94% <72.22%> (+1.03%) ⬆️
src/Aevatar.AI.Core/Chat/ChatRuntime.cs 87.00% <75.00%> (+0.21%) ⬆️

... and 5 files with indirect coverage changes


Contributor Author

eanzhao commented May 4, 2026

Fix Check Verdict — 2/2 inline comments resolved

| # | Comment | Verdict | Detail |
|---|---------|---------|--------|
| 1 | MEAILLMProvider.cs — outbound serialization missing | resolved | ConvertMessages now emits TextReasoningContent for assistant messages, including the tool-call branch |
| 2 | ChatRuntime.cs — DSML/XML fallback drops reasoning | resolved | Both cited AppendAssistantMessage callsites now pass roundResult.ReasoningContent |

Remaining gap (not covered by original comments)

The finalParsed fallback path in ChatRuntime.cs (~line 488) still passes reasoningContent: null:

AppendAssistantMessage(messages, pendingHistoryMessages, finalParsed.CleanedContent, reasoningContent: null, toolCalls: null);

If this path is reachable in thinking-mode sessions, it could cause the same DeepSeek 400 error. Worth a follow-up.
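A follow-up fix would presumably mirror the already-fixed callsites (a sketch; it assumes roundResult is in scope on this path as it is in the fixed branches):

```csharp
// Sketch: pass the round's accumulated reasoning instead of null.
AppendAssistantMessage(messages, pendingHistoryMessages, finalParsed.CleanedContent,
    reasoningContent: roundResult.ReasoningContent, toolCalls: null);
```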


Evaluated by 6 reviewers (deepseek-v4-pro, glm-5.1, mimo-v2.5-pro, kimi, gemini, codex) via /opencode-pr-fix-check

@eanzhao eanzhao merged commit a016c19 into feature/lark-bot May 4, 2026
13 checks passed