
Conversation

@ihower ihower commented Oct 11, 2025

Fixes a bug in OpenAIConversationsSession.get_items() where GPT-5 reasoning items contained a status=None field, causing the Responses API to reject the request.

Problem

When using GPT-5, the first turn works normally, but on the second turn, reasoning items include status=None. When this field is passed back to the Responses API, the request fails with the following error:

Error getting response: Error code: 400 - {'error': {'message': "Unknown parameter: 'input[1].status'.", 'type': 'invalid_request_error', 'param': 'input[1].status', 'code': 'unknown_parameter'}}. (request_id: req_xxxxx)

This occurs because get_items() used model_dump() without excluding fields that were never explicitly set (such as default None values).

Solution

Changed:

item.model_dump()

to:

item.model_dump(exclude_unset=True)

This aligns with the existing pattern used in ModelResponse.to_input_items() (see items.py:235, https://github.com/openai/openai-agents-python/blob/main/src/agents/items.py#L235).
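The effect of the change can be sketched with a toy Pydantic v2 model (FakeReasoningItem is a hypothetical stand-in for the SDK's ResponseReasoningItem, with illustrative field names):

```python
from typing import Optional
from pydantic import BaseModel


class FakeReasoningItem(BaseModel):
    # Hypothetical stand-in for ResponseReasoningItem.
    id: str
    type: str = "reasoning"
    status: Optional[str] = None  # defaults to None when the API omits it


item = FakeReasoningItem(id="rs_123")

# Plain model_dump() emits the unset default, which the Responses API
# rejects as an unknown parameter when sent back as input:
assert item.model_dump() == {"id": "rs_123", "type": "reasoning", "status": None}

# exclude_unset=True drops fields that were never explicitly assigned:
assert item.model_dump(exclude_unset=True) == {"id": "rs_123"}
```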


Related Issue

Fixes #1882

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.


Comment on lines 52 to 53:

  # calling model_dump() to make this serializable
- all_items.append(item.model_dump())
+ all_items.append(item.model_dump(exclude_unset=True))


P1: status=None still serialized from conversation items

Switching to item.model_dump(exclude_unset=True) does not remove keys whose values are explicitly None. GPT‑5 reasoning items returned from the Conversations API include status set to null, so this call still emits {"status": None} and the Responses API continues to reject input[1].status. To avoid the error the code needs to exclude None values (e.g. exclude_none=True, optionally combined with exclude_unset). As written, the bug described in the commit message remains reproducible for second‑turn requests.


@ihower (Contributor, author) replied:

The status=None field comes from the ResponseReasoningItem model, where it has a default value of None. So when we call model_dump(exclude_unset=True), it removes this field as expected.

@seratch seratch added bug Something isn't working feature:sessions labels Oct 14, 2025
@seratch seratch merged commit 1b49f0e into openai:main Oct 14, 2025
5 checks passed


Successfully merging this pull request may close these issues.

Using OpenAIConversationsSession causes a bug.
