fix(realtime): expose max_output_tokens on RealtimeSessionModelSettings #3223
Merged
seratch merged 1 commit into openai:main on May 8, 2026
Conversation
The realtime model already forwards `max_output_tokens` from the settings dict to the underlying `RealtimeSessionCreateRequest`, but the field was missing from the public TypedDict. Type-checked callers had to cast to `Any` to set a per-response token cap. Add the field so it can be passed through cleanly with both an integer cap and the `"inf"` sentinel.
Member commented:

> @codex review

Codex Review: Didn't find any major issues. 👍
seratch approved these changes on May 8, 2026
Summary
`OpenAIRealtimeWebSocketModel._get_session_config` already reads `max_output_tokens` from the settings dict and forwards it to `RealtimeSessionCreateRequest.max_output_tokens` (see `src/agents/realtime/openai_realtime.py:1474`), but the field was never declared on `RealtimeSessionModelSettings`. Type-checked callers had to cast through `Any` (or use `# type: ignore`) just to limit per-response output tokens, even though the OpenAI Realtime API supports the field natively. This patch adds `max_output_tokens: NotRequired[int | Literal["inf"]]` to the TypedDict so callers can pass either an integer cap or the `"inf"` sentinel directly. No runtime change is required: the existing forwarding logic already handles both shapes.

Test plan

- `test_session_config_passes_max_output_tokens` covers integer caps, the `"inf"` sentinel, and the unset (server-default) case via `_get_session_config`.
- `pytest tests/realtime/` -> 233 passed.
- `ruff check` / `ruff format --check` clean on touched files.
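The forwarding behavior the test plan exercises can be illustrated with a small standalone sketch. The function and dict keys here are illustrative stand-ins for `_get_session_config` and the real request object, not the SDK's actual code:

```python
def get_session_config(settings: dict) -> dict:
    """Illustrative forwarding: copy max_output_tokens only when the caller set it."""
    config: dict = {}
    if "max_output_tokens" in settings:
        # Both an integer cap and the "inf" sentinel pass through unchanged;
        # no value coercion happens at this layer.
        config["max_output_tokens"] = settings["max_output_tokens"]
    return config


# The three cases the PR's test covers:
assert get_session_config({"max_output_tokens": 100}) == {"max_output_tokens": 100}
assert get_session_config({"max_output_tokens": "inf"}) == {"max_output_tokens": "inf"}
assert get_session_config({}) == {}  # unset -> server default applies
```

The key point is the membership check: an absent key is distinguishable from an explicit value, which is why no runtime change was needed when the field was added to the TypedDict.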