
fix(types): use in_memory for prompt_cache_retention #3108

Open

MukundaKatta wants to merge 4 commits into openai:main from MukundaKatta:codex/fix-prompt-cache-retention-literal

Conversation

@MukundaKatta

Summary

  • update prompt_cache_retention literals from in-memory to in_memory across the Python SDK type surfaces and resource method signatures (illustrated in the sketch below)
  • update responses and chat completions request tests to use the underscore form accepted by the API
  • keep the exposed SDK surface consistent so typed usage no longer suggests a value the API rejects

Fixes #2883
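
As a rough illustration of the literal change, here is a minimal sketch; the `ResponseCreateParams` name and the `24h` companion value are assumptions for illustration, not a verbatim copy of the SDK source:

```python
from typing import Optional

from typing_extensions import Literal, TypedDict


class ResponseCreateParams(TypedDict, total=False):
    # Before this PR the type advertised "in-memory", which the API rejects;
    # the underscore form below matches what the API accepts.
    # ("24h" is an assumed companion value, shown only for illustration.)
    prompt_cache_retention: Optional[Literal["in_memory", "24h"]]
```

With the underscore literal, typed callers are steered toward prompt_cache_retention="in_memory" rather than the hyphenated value the server rejects.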

Validation

  • scanned src and tests to confirm there are no remaining prompt_cache_retention literals using in-memory
  • compiled the touched source and test files successfully with PYTHONPYCACHEPREFIX=.pycache and PYTHONPATH=src (a reproduction sketch follows this list)
  • attempted to run the targeted pytest files, but the local environment lacks the repo's mock-server and async-extra setup needed for those suites to run reliably here
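
For reference, the compile check can be approximated from the repo root with only the standard library; the module path below is a placeholder, not the actual list of touched files:

```python
import compileall
import sys

# Mirror PYTHONPYCACHEPREFIX=.pycache by redirecting bytecode output,
# then byte-compile one file; quiet=1 suppresses the per-file listing.
sys.pycache_prefix = ".pycache"
compileall.compile_file("src/openai/_types.py", quiet=1)  # placeholder path
```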

@MukundaKatta requested a review from a team as a code owner on April 21, 2026 15:11

@chatgpt-codex-connector (Bot) left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: bbdee6d0a5


"""The annotations of the text output."""

text: str
text: Optional[str] = None

P1: Keep streamed output text non-null before delta appends

Changing ResponseOutputText.text to optional allows response.content_part.added snapshots to carry text=None, but streaming accumulation still assumes a string and does content.text += event.delta in src/openai/lib/streaming/responses/_responses.py (lines 349-354). In streams where a content part starts as null and then receives response.output_text.delta events, this will raise a TypeError and terminate parsing; please preserve a non-null invariant for streamed output_text content (or normalize null to "" before appending).
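
A minimal sketch of the normalization the review asks for; the dataclasses below are simplified stand-ins for the SDK's models and the real accumulator in _responses.py, not its actual code:

```python
from dataclasses import dataclass
from typing import Optional


@dataclass
class OutputTextContent:
    # May arrive as None in a response.content_part.added snapshot.
    text: Optional[str] = None


@dataclass
class OutputTextDelta:
    delta: str


def append_delta(content: OutputTextContent, event: OutputTextDelta) -> None:
    # Normalize None to "" before the first append so `+=` never sees a
    # NoneType; from here on, text is always a string.
    if content.text is None:
        content.text = ""
    content.text += event.delta
```

For example, append_delta(OutputTextContent(text=None), OutputTextDelta(delta="Hi")) leaves text == "Hi" instead of raising TypeError.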


Per @chatgpt-codex-connector review: making ResponseOutputText.text
Optional allows response.content_part.added snapshots to carry text=None.
Streaming accumulation then hits 'content.text += event.delta' with a
NoneType and raises TypeError, terminating parsing.

Normalize text to '' before the first delta append so the existing
streaming invariant ('text is a string') holds regardless of the
initial content_part.added shape.
@MukundaKatta
Author

Addressed the @chatgpt-codex-connector review: normalized content.text to "" before the += event.delta append, so response.content_part.added snapshots carrying text=None no longer raise TypeError on the first delta. The streaming invariant (text is always a string once accumulation starts) is preserved.


Merging this pull request may close #2883: prompt_cache_retention type declares "in-memory" but API expects "in_memory"