
Microsoft.Extensions.AI.OpenAI DeepSeek reasoning model throws HTTP 400 when using tool calls with Tool #7405

@oudi

Description


When using an IChatClient with ChatOptions.Tools against a DeepSeek reasoning model (e.g. deepseek-reasoner) via GetStreamingResponseAsync, an HTTP 400 error is thrown as soon as a tool call is involved:

```
System.ClientModel.ClientResultException: HTTP 400 (invalid_request_error: invalid_request_error)
Missing reasoning_content field in the assistant message at message index 2. For more information, please refer to https://***.com/***/***
   at OpenAI.ClientPipelineExtensions.ProcessMessageAsync(ClientPipeline pipeline, PipelineMessage message, RequestOptions options)
   at OpenAI.Chat.ChatClient.CompleteChatAsync(BinaryContent content, RequestOptions options)
   at OpenAI.Chat.ChatClient.<>c__DisplayClass20_0.<<CompleteChatStreamingAsync>b__0>d.MoveNext()
--- End of stack trace from previous location ---
   at OpenAI.AsyncSseUpdateCollection`1.GetRawPagesAsync()+MoveNext()
   at OpenAI.AsyncSseUpdateCollection`1.GetRawPagesAsync()+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at System.ClientModel.AsyncCollectionResult`1.GetAsyncEnumerator(CancellationToken cancellationToken)+MoveNext()
   at System.ClientModel.AsyncCollectionResult`1.GetAsyncEnumerator(CancellationToken cancellationToken)+MoveNext()
   at System.ClientModel.AsyncCollectionResult`1.GetAsyncEnumerator(CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at Microsoft.Extensions.AI.OpenAIChatClient.FromOpenAIStreamingChatCompletionAsync(IAsyncEnumerable`1 updates, ChatCompletionOptions options, CancellationToken cancellationToken)+MoveNext()
   at Microsoft.Extensions.AI.OpenAIChatClient.FromOpenAIStreamingChatCompletionAsync(IAsyncEnumerable`1 updates, ChatCompletionOptions options, CancellationToken cancellationToken)+MoveNext()
   at Microsoft.Extensions.AI.OpenAIChatClient.FromOpenAIStreamingChatCompletionAsync(IAsyncEnumerable`1 updates, ChatCompletionOptions options, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
   at Microsoft.Extensions.AI.FunctionInvokingChatClient.GetStreamingResponseAsync(IEnumerable`1 messages, ChatOptions options, CancellationToken cancellationToken)+MoveNext()
   at Microsoft.Extensions.AI.FunctionInvokingChatClient.GetStreamingResponseAsync(IEnumerable`1 messages, ChatOptions options, CancellationToken cancellationToken)+MoveNext()
   at Microsoft.Extensions.AI.FunctionInvokingChatClient.GetStreamingResponseAsync(IEnumerable`1 messages, ChatOptions options, CancellationToken cancellationToken)+System.Threading.Tasks.Sources.IValueTaskSource<System.Boolean>.GetResult()
```

DeepSeek reasoning models require that any assistant message originally generated with a reasoning_content field include that field again when the conversation history is sent back in subsequent requests.
FunctionInvokingChatClient internally appends the assistant's response to the message history after a tool call is detected, but during this reassembly the reasoning_content field (a DeepSeek-specific extension delivered via AdditionalProperties in the streaming response) is silently dropped. As a result, the second request sent to DeepSeek contains an assistant message without reasoning_content, which DeepSeek rejects with HTTP 400.
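Until this is addressed upstream, one possible mitigation is to skip FunctionInvokingChatClient's reassembly and re-attach the field yourself before resending the history. The sketch below is an assumption-laden illustration, not a confirmed fix: the `reasoning_content` key is taken from the error message above, and the `ToChatResponse` / `AdditionalProperties` usage is based on the current Microsoft.Extensions.AI surface and may need adjusting for your package version.

```csharp
// Sketch: collect the streamed updates ourselves so DeepSeek-specific
// extras are not lost when the assistant message is rebuilt.
List<ChatResponseUpdate> updates = new();
await foreach (ChatResponseUpdate update in client.GetStreamingResponseAsync(messages, options))
{
    updates.Add(update);
}

// Reassemble the assistant message from the updates.
ChatResponse response = updates.ToChatResponse();

// Find reasoning_content on any update and copy it back onto the
// assistant message(s) before they are appended to the history.
object? reasoning = updates
    .Select(u => u.AdditionalProperties?.GetValueOrDefault("reasoning_content"))
    .FirstOrDefault(v => v is not null);

if (reasoning is not null)
{
    foreach (ChatMessage message in response.Messages)
    {
        (message.AdditionalProperties ??= new())["reasoning_content"] = reasoning;
    }
}

messages.AddRange(response.Messages);
```

Whether the underlying OpenAI-compatible transport actually serializes `AdditionalProperties` back into the request body is itself part of what this issue is about, so this sketch mainly shows where the field is being lost.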

Reproduction Steps

1. Use the DeepSeek-R1 reasoning model (model id: deepseek-reasoner).
2. Call GetStreamingResponseAsync with a tool added to ChatOptions.Tools.
3. Let the model invoke the tool; the error reproduces on the follow-up request that carries the assistant message back.
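The steps above can be sketched as follows. This is a minimal repro sketch, not verified code: the endpoint URL, the hypothetical `get_weather` tool, and the exact `AsIChatClient` / `UseFunctionInvocation` names are written from memory of the Microsoft.Extensions.AI.OpenAI surface and may differ between package versions.

```csharp
using System.ClientModel;
using Microsoft.Extensions.AI;
using OpenAI;

// DeepSeek exposes an OpenAI-compatible endpoint, so the OpenAI client
// can be pointed at it and adapted to IChatClient.
IChatClient client = new OpenAIClient(
        new ApiKeyCredential(Environment.GetEnvironmentVariable("DEEPSEEK_API_KEY")!),
        new OpenAIClientOptions { Endpoint = new Uri("https://api.deepseek.com") })
    .GetChatClient("deepseek-reasoner")
    .AsIChatClient();

// FunctionInvokingChatClient is the layer that reassembles the assistant
// message after a tool call -- the step where reasoning_content is dropped.
client = new ChatClientBuilder(client)
    .UseFunctionInvocation()
    .Build();

ChatOptions options = new()
{
    // Hypothetical tool, just to make the model emit a tool call.
    Tools = [AIFunctionFactory.Create(() => "sunny", "get_weather", "Gets the current weather.")]
};

// The first response streams fine; the follow-up request that includes the
// tool result is the one DeepSeek rejects with HTTP 400.
await foreach (var update in client.GetStreamingResponseAsync("What's the weather?", options))
{
    Console.Write(update);
}
```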

Metadata


    Labels

    area-ai (Microsoft.Extensions.AI libraries) · bug (This issue describes a behavior which is not expected - a bug.) · untriaged
