
Add helper to reuse response.output as follow-up input #3031

Closed
raashish1601 wants to merge 1 commit into openai:main from raashish1601:contributor-22/openai-python-3008

Conversation

@raashish1601

Summary

  • add Response.output_as_input() to serialize response.output into follow-up input items with API field names and None values stripped
  • keep model_dump() behavior unchanged while giving manual conversation-state users a safe round-trip helper
  • add a regression test covering the reasoning item shape that currently fabricates None fields under model_dump()
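
The intended round-trip behavior can be sketched as a standalone function; this is an illustrative version only, not the actual SDK code (the real helper operates on the SDK's pydantic models and their API field names, and the item shapes below are assumptions):

```python
from typing import Any, Dict, List


def output_as_input(output: List[Dict[str, Any]]) -> List[Dict[str, Any]]:
    """Turn output items into follow-up input items, dropping None values."""

    def strip_none(value: Any) -> Any:
        # Recursively remove None-valued keys so fields fabricated by
        # model_dump()-style serialization do not leak into the next request.
        if isinstance(value, dict):
            return {k: strip_none(v) for k, v in value.items() if v is not None}
        if isinstance(value, list):
            return [strip_none(v) for v in value]
        return value

    return [strip_none(item) for item in output]
```

For example, a reasoning item serialized with a fabricated `content: None` field would round-trip as `[{"type": "reasoning", "id": "rs_1"}]` rather than carrying the None key into the follow-up request.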

Testing

  • python -m ruff check src/openai/types/responses/response.py tests/lib/responses/test_responses.py
  • $env:PYTHONPATH='src'; python -m pytest -o addopts="" tests/lib/responses/test_responses.py -q -k output_as_input

Closes #3008.

@raashish1601 raashish1601 requested a review from a team as a code owner March 28, 2026 13:26

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: d70effa5c4

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".


return cast(
    List[ResponseInputItemParam],
    [item.to_dict(mode="json", exclude_none=True) for item in self.output],
)

P1: Strip output-only fields before returning follow-up input

output_as_input() forwards each output item via item.to_dict(...) without removing fields that are present only on output models (for example created_by on ResponseFunctionShellToolCall/ResponseApplyPatchToolCall and related output items). Those keys are not part of ResponseInputItemParam, and request transformation keeps unknown TypedDict keys unchanged, so follow-up responses.create(input=...) calls can send unsupported fields and fail for responses that include these tool items.
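
The suggested fix could be sketched as filtering out keys that exist only on output models before the cast; field names like `created_by` come from the comment above, but the drop-list itself is an assumption for illustration:

```python
from typing import Any, Dict, Set

# Assumption: keys present on output models (e.g. on shell / apply-patch tool
# call items) that are not part of the input schema, per the review comment.
OUTPUT_ONLY_KEYS: Set[str] = {"created_by"}


def strip_output_only_fields(item: Dict[str, Any]) -> Dict[str, Any]:
    """Drop output-only keys so the dict matches an input-item shape."""
    return {k: v for k, v in item.items() if k not in OUTPUT_ONLY_KEYS}
```

Because request transformation keeps unknown TypedDict keys unchanged, this filtering has to happen in the helper itself rather than relying on the client to discard the extra fields.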

Useful? React with 👍 / 👎.


return cast(
    List[ResponseInputItemParam],
    [item.to_dict(mode="json", exclude_none=True) for item in self.output],
)

P2: Preserve required nullable fields when serializing output items

Using exclude_none=True here drops keys that are required-but-nullable in input schemas, notably ImageGenerationCall.result (Required[Optional[str]]). If an output image generation item has result=None (e.g., generating/failed states), output_as_input() removes result, producing an invalid follow-up input shape that can be rejected on the next responses.create() call.
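
One way to address this, sketched under the assumption that image generation call items carry a required-but-nullable `result` key, is to exempt such per-type keys from None-stripping:

```python
from typing import Any, Dict, Set

# Assumption: per item type, keys that are required but nullable in the input
# schema, so they must survive even when their value is None.
REQUIRED_NULLABLE: Dict[str, Set[str]] = {"image_generation_call": {"result"}}


def strip_none_keeping_required(item: Dict[str, Any]) -> Dict[str, Any]:
    """Drop None-valued keys except those the input schema requires."""
    keep = REQUIRED_NULLABLE.get(item.get("type", ""), set())
    return {k: v for k, v in item.items() if v is not None or k in keep}
```

With this, an in-progress image generation item keeps `result: None` (required nullable) while other None-valued keys such as an optional status field are still dropped.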

@raashish1601 raashish1601 deleted the contributor-22/openai-python-3008 branch March 28, 2026 14:12

Development

Successfully merging this pull request may close these issues.

Responses API: multi-turn conversations 400 on turn 2 when passing response.output back as input
