What version of Codex is running?
Codex Desktop / app-backed session
cli_version: 0.108.0-alpha.12
What subscription do you have?
Pro
Which model were you using?
gpt-5.4
What platform is your computer?
macOS (Desktop app-backed session)
What issue are you seeing?
I hit:
Codex ran out of room in the model's context window. Start a new thread or clear earlier history before retrying.
But this does not appear to be the same failure mode as the existing remote compaction failures.
In my case, the turn appears to overflow before compaction runs at all:
- the local session JSONL contains zero `type:"compaction"` events
- immediately after the failure, the UI/status showed `100% left` (`0 used / 760K`)
- the turn then ended with `task_complete` and `last_agent_message: null`
So from the user side it looks like Codex both:
- says it ran out of context, and
- simultaneously says there is effectively a fresh / empty context window.
That combination is misleading and makes this look different from:
- remote compaction failing with `context_length_exceeded`
- compaction loops
- post-compaction percentage display bugs
Why I think this is a distinct issue
Related issues I found:
Those are close, but this report is specifically about:
- no compaction attempt being recorded at all
- a silent task termination
- a reset/misleading status readout after failure (`100% left`, `0 used / 760K`)
- likely a single oversized in-flight turn/tool payload that overflows before auto-compaction can trigger
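The hypothesis in the last bullet can be sketched as a minimal ordering check. All names, the threshold, and the trigger logic below are my assumptions about the failure order, not Codex internals:

```python
# Hypothetical sketch: if the hard overflow check runs before the
# auto-compaction check, an oversized in-flight payload overflows
# without any compaction event ever being recorded.
CONTEXT_WINDOW = 760_000          # from model_context_window in the session
AUTO_COMPACT_THRESHOLD = 0.9      # assumed: compaction triggers near-full

def send_turn(history_tokens: int, payload_tokens: int) -> str:
    total = history_tokens + payload_tokens
    if total > CONTEXT_WINDOW:
        # Overflow raised immediately; the compaction branch is never reached.
        return "context_overflow"
    if total > AUTO_COMPACT_THRESHOLD * CONTEXT_WINDOW:
        return "compact_then_retry"
    return "ok"

# A fresh thread (tiny history) carrying one oversized tool payload still
# overflows without ever reaching the compaction branch:
print(send_turn(history_tokens=5_000, payload_tokens=800_000))  # context_overflow
```

If the real control flow looks anything like this, compacting history cannot help here anyway, which is why a distinct "request too large to compact" error would be more honest than the generic out-of-room message.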
What steps can reproduce the bug?
I do not have a minimal repro yet, but this happened in a fresh thread while the agent was doing web/doc lookups.
The pattern seems to be:
- Start a fresh thread in Codex Desktop
- Ask Codex to inspect/open multiple large docs/web pages and keep working in the same turn
- The turn eventually errors with the normal "ran out of room" message
- Instead of showing a compaction attempt/failure, the session:
  - records no compaction events
  - resets the visible status to `100% left (0 used / 760K)`
  - ends with no last agent message
What is the expected behavior?
One of these should happen instead:
- auto-compaction should run before the turn is terminated, or
- Codex should surface a clear pre-compaction overflow error saying the current in-flight request was too large to compact, or
- the UI should not reset to `100% left / 0 used` immediately after a context overflow if that is not actually the usable state
Additional information
Affected session ID: `019cc407-9cbf-7f40-9586-1c5f12f5a7bf`
Local session inspection found:
- `compaction_events = 0`
- final token/status events showing `model_context_window = 760000`
- then a reset-like status event with `0 used / 760K`
- then `task_complete` with `last_agent_message = null`
This looks like: pre-compaction turn overflow + misleading post-failure context status + silent task completion.
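For reference, the inspection above can be reproduced with a short script over the session JSONL. The event field names (`type`, `model_context_window`, `last_agent_message`) are assumptions based on the values quoted in this report, not a documented schema:

```python
import json

def inspect_session(path):
    """Count compaction events and pull the final-status fields from a
    session JSONL file (field names assumed from the values quoted above)."""
    compaction_events = 0
    context_window = None
    last_agent_message = None
    for line in open(path):
        line = line.strip()
        if not line:
            continue
        event = json.loads(line)
        if event.get("type") == "compaction":
            compaction_events += 1
        if "model_context_window" in event:
            context_window = event["model_context_window"]
        if event.get("type") == "task_complete":
            last_agent_message = event.get("last_agent_message")
    return compaction_events, context_window, last_agent_message

# Demo on a synthetic session shaped like the failing one above
# ("token_count" is a placeholder event type, not a confirmed name).
import os, tempfile
events = [
    {"type": "token_count", "model_context_window": 760000},
    {"type": "task_complete", "last_agent_message": None},
]
with tempfile.NamedTemporaryFile("w", suffix=".jsonl", delete=False) as f:
    f.write("\n".join(json.dumps(e) for e in events))
    demo_path = f.name
print(inspect_session(demo_path))  # (0, 760000, None)
os.unlink(demo_path)
```

Pointing this at the affected session's JSONL is how the `compaction_events = 0` count above was obtained.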