
[Bug] DeepSeek models in thinking mode fail API requests (400) because the reasoning_content field is not saved and passed back #7798

@Hola-Gracias

Description


What happened

When using a DeepSeek model that supports thinking mode (e.g., deepseek-v4-flash), AstrBot's multi-turn conversation requests receive a 400 Bad Request from the API with the message:
The 'reasoning_content' in the thinking mode must be passed back to the API.

Analysis shows the cause: when an assistant message contains a reasoning_content field (part of the model's output), DeepSeek requires that field to be passed back verbatim when the message history is built for subsequent requests. AstrBot neither saves nor passes the field back, so the API's validation fails.
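A minimal sketch of the expected fix, assuming the OpenAI-compatible message format described above (`build_assistant_message` is a hypothetical helper for illustration, not AstrBot's actual code): when converting a model response into a history message, preserve reasoning_content alongside content and tool_calls.

```python
def build_assistant_message(response_message: dict) -> dict:
    """Rebuild an assistant turn for the conversation history.

    Hypothetical helper: field names follow the OpenAI-compatible chat
    format (content, tool_calls, reasoning_content).
    """
    msg = {"role": "assistant", "content": response_message.get("content", "")}
    if response_message.get("tool_calls"):
        msg["tool_calls"] = response_message["tool_calls"]
    # DeepSeek's thinking mode requires this field to be echoed back
    # verbatim on the next request; dropping it triggers the 400 above.
    if response_message.get("reasoning_content") is not None:
        msg["reasoning_content"] = response_message["reasoning_content"]
    return msg
```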

Reproduce

  1. Configure a DeepSeek model in AstrBot with thinking mode enabled.

  2. Hold a multi-turn conversation with the bot; after at least one turn, the assistant returns a response containing reasoning_content.

  3. Continue to the next turn. AstrBot sends the previous assistant message (containing only content and possibly tool_calls) to the DeepSeek API.

  4. An HTTP 400 error is returned, stating that reasoning_content must be passed back.
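The mismatch in step 3 can be illustrated as follows (field names taken from the OpenAI-compatible chat format and the error message; the reasoning text is a placeholder):

```python
# Assistant message that AstrBot currently rebuilds into the history:
sent = {"role": "assistant", "content": "final answer"}

# Assistant message DeepSeek's thinking mode expects on the follow-up
# request, with the reasoning from the previous turn echoed back verbatim:
expected = {
    "role": "assistant",
    "content": "final answer",
    "reasoning_content": "<reasoning from the previous response>",
}

# The missing key is exactly what the 400 error complains about.
missing = sorted(set(expected) - set(sent))
print(missing)  # ['reasoning_content']
```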

AstrBot version, deployment method (e.g., Windows Docker Desktop deployment), provider used, and messaging platform used

AstrBot version: v4.23.5

Deployment method: running from source

Python version: 3.12

Model provider: DeepSeek

Model name: deepseek-v4-flash

Messaging platform adapter: Telegram

OS

Linux

Logs

[2026-04-25 19:29:38.352] [Core] [WARN] [v4.23.5] [runners.tool_loop_agent_runner:555]: Chat Model deepseek/deepseek-v4-flash request error: Error code: 400 - {'error': {'message': 'The reasoning_content in the thinking mode must be passed back to the API.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}
Traceback (most recent call last):
File "/home/hola/AstrBot/astrbot/core/agent/runners/tool_loop_agent_runner.py", line 510, in _iter_llm_responses_with_fallback
async for attempt in retrying:
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 170, in __anext__
do = await self.iter(retry_state=self._retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/tenacity/asyncio/__init__.py", line 157, in iter
result = await action(retry_state)
^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/tenacity/_utils.py", line 111, in inner
return call(*args, **kwargs)
^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/tenacity/__init__.py", line 393, in <lambda>
self._add_action_func(lambda rs: rs.outcome.result())
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 449, in result
return self.__get_result()
^^^^^^^^^^^^^^^^^^^
File "/usr/lib/python3.12/concurrent/futures/_base.py", line 401, in __get_result
raise self._exception
File "/home/hola/AstrBot/astrbot/core/agent/runners/tool_loop_agent_runner.py", line 514, in _iter_llm_responses_with_fallback
async for resp in self._iter_llm_responses(
File "/home/hola/AstrBot/astrbot/core/agent/runners/tool_loop_agent_runner.py", line 477, in _iter_llm_responses
yield await self.provider.text_chat(**payload)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1165, in text_chat
) = await self._handle_api_error(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1111, in _handle_api_error
raise e
File "/home/hola/AstrBot/astrbot/core/provider/sources/openai_source.py", line 1153, in text_chat
llm_response = await self._query(payloads, func_tool)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/astrbot/core/provider/sources/openai_source.py", line 572, in _query
completion = await self.client.chat.completions.create(
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2714, in create
return await self._post(
^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1884, in post
return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
File "/home/hola/AstrBot/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1669, in request
raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': 'The reasoning_content in the thinking mode must be passed back to the API.', 'type': 'invalid_request_error', 'param': None, 'code': 'invalid_request_error'}}

Are you willing to submit a PR?

  • Yes!

Metadata

    Labels

    area:provider: The bug / feature is about AI Provider, Models, LLM Agent, LLM Agent Runner.
    bug: Something isn't working
