Add helper to reuse response.output as follow-up input #3031
raashish1601 wants to merge 1 commit into openai:main from
Conversation
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: d70effa5c4
```python
return cast(
    List[ResponseInputItemParam],
    [item.to_dict(mode="json", exclude_none=True) for item in self.output],
```
Strip output-only fields before returning follow-up input
output_as_input() forwards each output item via item.to_dict(...) without removing fields that are present only on output models (for example created_by on ResponseFunctionShellToolCall/ResponseApplyPatchToolCall and related output items). Those keys are not part of ResponseInputItemParam, and request transformation keeps unknown TypedDict keys unchanged. As a result, follow-up responses.create(input=...) calls can send unsupported fields and fail for responses that include these tool items.
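A minimal sketch of the suggested fix, assuming a hand-maintained set of output-only keys. The key name `created_by` comes from the comment above; the full set is illustrative and would need auditing against the API schema:

```python
from typing import Any, Dict

# Keys that appear on output models but are not valid input fields.
# This set is illustrative; the real list must be audited per item type.
OUTPUT_ONLY_KEYS = {"created_by"}


def strip_output_only_fields(item: Dict[str, Any]) -> Dict[str, Any]:
    """Return a copy of a serialized output item without output-only keys."""
    return {key: value for key, value in item.items() if key not in OUTPUT_ONLY_KEYS}
```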
```python
return cast(
    List[ResponseInputItemParam],
    [item.to_dict(mode="json", exclude_none=True) for item in self.output],
```
Preserve required nullable fields when serializing output items
Using exclude_none=True here drops keys that are required-but-nullable in input schemas, notably ImageGenerationCall.result (Required[Optional[str]]). If an output image generation item has result=None (e.g., generating/failed states), output_as_input() removes result, producing an invalid follow-up input shape that can be rejected on the next responses.create() call.
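One way to address this, sketched with a hypothetical allow-list of required-but-nullable (type, key) pairs instead of a blanket `exclude_none=True`. The single entry below comes from the comment above; any other required-nullable fields would need to be added:

```python
from typing import Any, Dict

# (item type, key) pairs whose None values must survive serialization.
# Illustrative: only the case flagged in the review comment is listed.
REQUIRED_NULLABLE = {("image_generation_call", "result")}


def prune_none(item: Dict[str, Any]) -> Dict[str, Any]:
    """Drop None values except for keys that are required-but-nullable."""
    item_type = item.get("type")
    return {
        key: value
        for key, value in item.items()
        if value is not None or (item_type, key) in REQUIRED_NULLABLE
    }
```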
Summary
- Add `Response.output_as_input()` to serialize `response.output` into follow-up `input` items with API field names and `None` values stripped
- `model_dump()` behavior unchanged (including `None` fields) while giving manual conversation-state users a safe round-trip helper

Testing
- `python -m ruff check src/openai/types/responses/response.py tests/lib/responses/test_responses.py`
- `$env:PYTHONPATH='src'; python -m pytest -o addopts="" tests/lib/responses/test_responses.py -q -k output_as_input`

Closes #3008.
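The round-trip described in the summary can be sketched with stand-in dataclasses. The types below are illustrative placeholders, not the SDK's real models, and the dict comprehension mimics `item.to_dict(mode="json", exclude_none=True)` from the diff:

```python
from dataclasses import asdict, dataclass, field
from typing import Any, Dict, List, Optional


@dataclass
class FakeOutputItem:
    type: str
    text: Optional[str] = None


@dataclass
class FakeResponse:
    output: List[FakeOutputItem] = field(default_factory=list)

    def output_as_input(self) -> List[Dict[str, Any]]:
        # Serialize each output item and drop None values, so the list
        # can be passed back as follow-up `input`.
        return [
            {k: v for k, v in asdict(item).items() if v is not None}
            for item in self.output
        ]
```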