
Context compact error #21343

@JialinLiu-codedance

Description


What version of the Codex App are you using (From “About Codex” dialog)?

codex app: 26.429.61741 (2429)
codex cli: OpenAI Codex (v0.128.0)

What subscription do you have?

Pro $200

What platform is your computer?

Darwin 24.6.0 arm64 arm

What issue are you seeing?

Error message:
Error running remote compact task: stream disconnected before completion: error sending request for url (https://chatgpt.com/backend-api/codex/responses/compact)

Ever since the release of GPT 5.5, my Codex has frequently encountered this error. The issue persists regardless of the context window size, whether it's the 256k context in GPT 5.4 or GPT 5.5, or even the 1M context in GPT 5.4. It occurs very frequently: roughly 80% of all "compact" operations fail.

What steps can reproduce the bug?

Uploaded thread: 019dfb31-0531-73f1-a22d-240b5499cbb1

What is the expected behavior?

Compaction should complete successfully so the conversation can continue. Instead, this issue is severely impacting my workflow: many conversations get stuck right when the context is about to be compacted, and restarting the app does not recover them. It has become extremely frustrating, and I hope this problem can be fixed as soon as possible.

At the moment, this effectively makes it impossible for me to reliably use Codex for long-running tasks or autonomous development sessions. Once a conversation enters a broken compaction state, the entire workflow is often lost, including task progress, planning context, and tool state. This significantly reduces the practicality of using Codex for complex engineering work that requires sustained context over extended periods of time.

What concerns me most is that the problem does not appear to correlate with the actual context window size. I have encountered the same failure pattern across 256k and 1M contexts, which suggests the issue may be related to the compaction/runtime pipeline itself rather than raw token limits. Given how frequently this occurs, it would be very helpful to have:

- more robust recovery mechanisms after compaction failure,
- better visibility into which stage failed,
- and ideally a way to manually recover or resume corrupted sessions instead of losing the entire conversation state.
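To illustrate the first request, here is a minimal sketch of the kind of retry behavior that would help: wrapping the remote compact call in exponential backoff with jitter, so a transient stream disconnect does not immediately break the session. This is not Codex's actual implementation; `retry_with_backoff` and `flaky_compact` are hypothetical names, and the compact call is simulated with a stand-in that fails twice before succeeding.

```python
import random
import time

def retry_with_backoff(op, max_attempts=5, base_delay=1.0, max_delay=30.0):
    """Retry a flaky operation (e.g. a remote compact request) with
    exponential backoff and jitter instead of failing the whole session."""
    for attempt in range(1, max_attempts + 1):
        try:
            return op()
        except ConnectionError:
            if attempt == max_attempts:
                raise  # surface the error only after exhausting retries
            # Exponential backoff capped at max_delay, with random jitter
            delay = min(max_delay, base_delay * 2 ** (attempt - 1))
            time.sleep(delay * random.uniform(0.5, 1.0))

# Stand-in for the compact call: drops the stream twice, then succeeds.
calls = {"n": 0}
def flaky_compact():
    calls["n"] += 1
    if calls["n"] < 3:
        raise ConnectionError("stream disconnected before completion")
    return "compacted"

print(retry_with_backoff(flaky_compact, base_delay=0.01))  # → compacted
```

Even three retries with short delays would likely absorb most transient disconnects, and only after they are exhausted would the session need to fall back to a manual recovery path.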

Right now, the instability around compaction is the single biggest blocker preventing me from depending on Codex for serious long-context workflows.

Additional information

No response

Metadata


Assignees

No one assigned

Labels

- app: Issues related to the Codex desktop app
- bug: Something isn't working
- connectivity: Issues involving networking or endpoint connectivity problems (disconnections)
- context: Issues related to context management (including compaction)
