
fix(types): use in_memory prompt cache retention literal #3109

Closed
MukundaKatta wants to merge 1 commit into openai:main from MukundaKatta:codex/openai-python-prompt-cache-retention-literal-clean
Conversation

@MukundaKatta

Summary

  • Replace the incorrect prompt_cache_retention literal value in the generated Python types
  • Update the request/response model surfaces to use in_memory instead of in-memory
  • Refresh the request expectation tests for chat completions and responses
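To illustrate the change, here is a minimal sketch of the kind of literal fix the bullets describe. The type alias and TypedDict names below are assumptions for illustration, not the SDK's actual generated code; the real surface may also allow additional retention values:

```python
from typing import Literal, Optional, TypedDict, get_args

# Previously the generated literal used a hyphen ("in-memory"), which the
# API rejects; the fix switches it to the underscore form the API expects.
PromptCacheRetention = Literal["in_memory"]

# Hypothetical request-params surface carrying the corrected literal.
class CompletionCreateParams(TypedDict, total=False):
    model: str
    prompt_cache_retention: Optional[PromptCacheRetention]

# Static type checkers now flag the old hyphenated spelling:
#   params: CompletionCreateParams = {"prompt_cache_retention": "in-memory"}
# fails type checking, while "in_memory" is accepted.
```

Because these are generated types, the actual fix lands in the code-generation source rather than hand-edited files, which is why the diff touches multiple model surfaces at once.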

Testing

  • Unable to run the test suite locally in this checkout because pytest_asyncio is not installed

Closes #2883

@MukundaKatta MukundaKatta requested a review from a team as a code owner April 21, 2026 15:25
@MukundaKatta
Author

Superseded by #3108 which covers the same in_memory prompt_cache_retention literal fix with additional test + SDK-surface updates (+170/-48 vs this +41/-41). Closing the narrower copy.



Development

Successfully merging this pull request may close these issues.

prompt_cache_retention type declares "in-memory" but API expects "in_memory"
