
fix(types): correct prompt_cache_retention literal from in-memory to in_memory#2991

Open
NIK-TIGER-BILL wants to merge 1 commit into openai:main from NIK-TIGER-BILL:fix/prompt-cache-retention-underscore

Conversation

@NIK-TIGER-BILL

Fixes #2883

Problem

The SDK declares prompt_cache_retention as Literal["in-memory", "24h"] (hyphen), but the OpenAI API rejects "in-memory" with a 400 error and accepts only "in_memory" (underscore).

Fix

Replaced "in-memory" with "in_memory" in all affected type declarations.
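For illustration, here is the corrected declaration in context. This is a minimal sketch: the class name and TypedDict shape are shortened stand-ins, not the SDK's exact definitions.

```python
from typing import Literal, Optional, TypedDict

class ResponseCreateParams(TypedDict, total=False):
    # After this PR: underscore form, matching what the API accepts.
    prompt_cache_retention: Optional[Literal["in_memory", "24h"]]

# A typed caller can now pass the API-accepted value without a type error.
params: ResponseCreateParams = {"prompt_cache_retention": "in_memory"}
print(params["prompt_cache_retention"])  # in_memory
```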

Testing

Typed callers that pass the declared literal now send the API-accepted "in_memory" value instead of a value the API rejects.

@NIK-TIGER-BILL NIK-TIGER-BILL requested a review from a team as a code owner March 19, 2026 07:02

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 33fdffc119

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

- prompt_cache_retention: Optional[Literal["in-memory", "24h"]]
+ prompt_cache_retention: Optional[Literal["in_memory", "24h"]]


P1: Update public Responses overloads to accept in_memory

This only fixes the generated model/param types; the public Responses.create signatures still declare Literal["in-memory", "24h"] in src/openai/resources/responses/responses.py (for example line 130, and the same literal is repeated in the other overloads in that file). In a typed codebase, client.responses.create(prompt_cache_retention="in_memory") still fails pyright/mypy, and forwarding a ResponseCreateParamsBase["prompt_cache_retention"] value into Responses.create is now an incompatible call even though "in_memory" is the API-accepted value.
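The mismatch the review describes can be reduced to a minimal sketch. The names StaleRetention, FixedRetention, and the create stub below are illustrative stand-ins, not the SDK's actual code:

```python
from typing import Literal, Optional, get_args

# Corrected param type vs. the literal still declared in the public overloads.
FixedRetention = Literal["in_memory", "24h"]
StaleRetention = Literal["in-memory", "24h"]

def create(prompt_cache_retention: Optional[StaleRetention] = None) -> Optional[str]:
    # Stub standing in for Responses.create; just echoes the value.
    return prompt_cache_retention

value: FixedRetention = "in_memory"
# pyright/mypy flags this call: "in_memory" is not a member of StaleRetention,
# even though it is the only value the API accepts at runtime.
print(create(value))  # type: ignore[arg-type]
print(get_args(StaleRetention))
```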


- prompt_cache_retention: Optional[Literal["in-memory", "24h"]]
+ prompt_cache_retention: Optional[Literal["in_memory", "24h"]]


P1: Keep chat completion method signatures in sync with this literal

CompletionCreateParamsBase now exposes "in_memory", but the public chat-completions entry points still use Literal["in-memory", "24h"] in src/openai/resources/chat/completions/completions.py (for example lines 112 and 267, with the same mismatch repeated across the overloads). That means typed callers of client.chat.completions.create(...)/.parse(...) are still blocked from passing the corrected value, and code that forwards CompletionCreateParamsBase["prompt_cache_retention"] into those methods no longer type-checks.




Development

Successfully merging this pull request may close these issues.

prompt_cache_retention type declares "in-memory" but API expects "in_memory"
