
config.toml context window settings are not respected #19185

@kkellyoffical

Description

What happened?

The context-related settings in config.toml do not appear to take effect. I tried configuring Codex from:

C:\Users\12768\.codex\config.toml

Relevant settings:

model = "gpt-5.5"
model_reasoning_effort = "high"
model_context_window = 960000
model_auto_compact_token_limit = 800000

After setting a large context window (for example, around 1M tokens), Codex still reports and uses only about 258k. This makes it impossible to control the effective context window from the config file.

Expected behavior

model_context_window and model_auto_compact_token_limit should be honored, or Codex should report a clear validation/error message if the configured value exceeds the supported model/client limit.
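The requested fail-loud behavior could look like the following sketch (hypothetical: the function name, limit constant, and error wording are illustrative, not part of Codex):

```python
# Assumed model/client limit; illustrative only.
ASSUMED_MODEL_MAX = 258_000

def validated_context_window(configured: int, model_max: int = ASSUMED_MODEL_MAX) -> int:
    """Honor the configured value, or raise a clear error instead of silently reducing it."""
    if configured > model_max:
        raise ValueError(
            f"model_context_window={configured} exceeds the supported "
            f"limit of {model_max} for the configured model"
        )
    return configured
```

Either honoring the value or rejecting it with a message like the one above would tell the user why their configuration is not taking effect.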

Actual behavior

The configured value is silently reduced or ignored. For example, setting approximately 1M results in Codex using about 258k instead.

Environment

  • OS: Windows
  • Config path: C:\Users\12768\.codex\config.toml
  • Model configured: gpt-5.5

Why this matters

Users cannot reliably control context behavior through config.toml, and the silent conversion makes it hard to tell whether the setting is unsupported, capped by the model, capped by the client, or parsed incorrectly.

Metadata

    Labels

    • bug: Something isn't working
    • config: Issues involving config.toml, config keys, config merging, or config updates
    • context: Issues related to context management (including compaction)
