What happened?
The context-related settings in config.toml do not appear to take effect. I tried configuring Codex from:
C:\Users\12768\.codex\config.toml
Relevant settings:

```toml
model = "gpt-5.5"
model_reasoning_effort = "high"
model_context_window = 960000
model_auto_compact_token_limit = 800000
```
After setting a large context window (e.g. 960000, roughly 1M tokens), Codex still reports and uses about 258k. This makes it impossible to control the effective context window from the config file.
Expected behavior
model_context_window and model_auto_compact_token_limit should be honored, or Codex should report a clear validation/error message if the configured value exceeds the supported model/client limit.
Actual behavior
The configured value is silently reduced or ignored. For example, configuring roughly 1M tokens results in Codex reporting and using about 258k, with no warning or error.
Environment
- OS: Windows
- Config path:
C:\Users\12768\.codex\config.toml
- Model configured:
gpt-5.5
Why this matters
Users cannot reliably control context behavior through config.toml, and the silent conversion makes it difficult to understand whether the setting is unsupported, capped by the model, capped by the client, or being parsed incorrectly.