
Fix malformed Responses API input items for custom models#313502

Open
DrHazemAli wants to merge 5 commits into microsoft:main from DrHazemAli:main

Conversation

@DrHazemAli

Summary

This PR fixes a Responses API payload issue where custom model requests can include an input[] item with an empty type, causing the API to fail with:

Invalid value: ''. param: input[1]

Root cause

The Responses API requires every input[] item to have a valid type.

In some custom model flows, especially with Responses API based models such as GPT-5.4 Pro and similar reasoning models, a message-shaped item can be added to the payload with an empty or missing type. The request reaches the API, but fails validation before the model can process it.

This is a client-side payload serialization issue, not a deployment, authentication, or model availability issue.
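
For illustration, a request body that triggers this error might look like the sketch below. The field names follow the Responses API input item shape, but the model name and content values are hypothetical; only the empty `type` on `input[1]` matters.

```typescript
// Hypothetical Responses API request body reproducing the failure.
// The second input item is message-shaped but carries an empty `type`,
// so the API rejects it with: Invalid value: ''. param: input[1]
const body = {
  model: "my-custom-deployment", // placeholder deployment name
  input: [
    { type: "message", role: "user", content: "Hello" },
    { type: "", role: "assistant", content: "..." }, // malformed item
  ],
};

// The payload reaches the API but fails validation before the model runs.
const malformed = body.input.filter((item) => item.type === "");
console.log(malformed.length); // one malformed item
```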

Affected models

This can affect custom models that rely on the Responses API instead of the classic Chat Completions API, including GPT-5.4 Pro style deployments and similar reasoning/model-inference deployments. However, Chat Completions-compatible models, such as Kimi K2.6 in the tested setup, are not affected by this specific payload shape issue.

Related issues

  • Issue #312086
  • Feedback from the community

Changes

  • Preserve all input items with a non-empty type.
  • Normalize message-shaped items with empty/missing type to type: "message".
  • Drop empty placeholder items with no useful content.
  • Avoid hardcoded allowlists so MCP, tool calls, reasoning items, and future Responses API item types are not affected.
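
The rules above can be sketched as a single pass over the input items. This is a minimal illustration, not the PR's actual code: the function name, type name, and item shapes are assumptions.

```typescript
// Hypothetical item shape; the index signature lets provider-specific
// fields (e.g. reasoning_content) pass through untouched.
interface ResponseInputItem {
  type?: string;
  role?: string;
  content?: unknown;
  [key: string]: unknown;
}

// Sketch of the normalization described in the Changes list.
function normalizeInputItems(items: ResponseInputItem[]): ResponseInputItem[] {
  const result: ResponseInputItem[] = [];
  for (const item of items) {
    if (item.type) {
      // Non-empty type (message, tool call, reasoning, MCP, future
      // Responses API types): keep as-is — no hardcoded allowlist.
      result.push(item);
    } else if (item.role !== undefined || item.content !== undefined) {
      // Message-shaped item with an empty/missing type: normalize it.
      result.push({ ...item, type: "message" });
    }
    // Otherwise: an empty placeholder with no useful content — drop it.
  }
  return result;
}

const out = normalizeInputItems([
  { type: "message", role: "user", content: "hi" },
  { type: "", role: "assistant", content: "x", reasoning_content: "cot" },
  { type: "" }, // empty placeholder, dropped
]);
```

Note that the second item keeps its `reasoning_content` field: normalization only touches `type`, so unknown fields survive serialization.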

Best,
Hazem

Added normalization functions to handle response input items, ensuring no empty or missing type fields.
@raffaeler

Please consider that providers like DeepSeek need the reasoning_content field back (see here). This additional field is not used by OpenAI, since they do not emit the full CoT.

I solved the issue by creating a proxy, but it would definitely be better to support reasoning_content management directly in the generic OpenAI client.

HTH

@DrHazemAli
Author

Hi @raffaeler

The current normalization logic only fixes empty or missing type fields; it does not block or strip unknown or provider-specific fields. So fields like reasoning_content are preserved and passed through as-is.

@stasyu2009-ux

Someone needs to clone the code, then everything will work correctly; I don't have the rights for that.

@raffaeler

Hi @DrHazemAli

The current normalization logic only fixes empty or missing type fields; it does not block or strip unknown or provider-specific fields. So fields like reasoning_content are preserved and passed through as-is.

I understand that. Take my comment as a suggestion. I believe that more and more people will try to use open reasoning models which need that.

@stasyu2009-ux

Then everything will work correctly.

@stasyu2009-ux

Okay. But the problem is not in that, it's in the code itself, so a clone needs to be made.

@DrHazemAli
Author

Hi @raffaeler

I understand that. Take my comment as a suggestion. I believe that more and more people will try to use open reasoning models which need that.

Thanks for clarifying.
Yeah, I agree this would be valuable to support as an enhancement.
I’ll look into it as a follow-up enhancement after this fix.

@dmitrivMS dmitrivMS removed their assignment Apr 30, 2026
@stasyu2009-ux

The filtering and the code migration were done incorrectly, which is a fatal error, so it will be punished.

@stasyu2009-ux

That is, punished.

6 participants