
Conversation

@ethanndickson (Member) commented on Dec 2, 2025:

Background

Auto-compaction runs after a response completes when context usage exceeds a user-configured threshold (default 70%). It summarizes the conversation to free up context space while preserving important information.

Force-compaction is a safety mechanism that triggers during streaming when context usage gets dangerously high. Unlike auto-compaction which waits for a natural break point, force-compaction interrupts the current response to prevent hitting the context window limit.

The Problem

Previously, force-compaction triggered based on a fixed token buffer from the context window limit. This meant force-compaction timing was disconnected from the user's configured auto-compaction threshold—changing your threshold didn't affect when force-compaction would kick in.

The Change

Force-compaction now triggers at threshold + 5%. With a 70% threshold, force-compaction happens at 75%.

This gives users a predictable buffer zone between when auto-compaction would run (after the response) and when force-compaction will run (during streaming). If a response slightly overshoots the threshold, the chat has some leeway—it won't immediately force-compact just because usage landed a bit over. This avoids unnecessary force-compactions when the stream ends and usage settles back within acceptable bounds.

The buffer is intentionally small (5%) to balance user control with safety margins as context approaches capacity.
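A rough sketch of the two triggers, using integer percentages (the function and constant names here are illustrative, not mux's actual identifiers):

```typescript
// Illustrative sketch of the compaction triggers described above.
// Percentages are integers (70 means 70% of the context window).
const FORCE_COMPACTION_BUFFER = 5; // fixed 5% buffer above the user threshold

// Auto-compaction: checked after a response completes.
function shouldAutoCompact(usagePercent: number, thresholdPercent: number): boolean {
  return usagePercent >= thresholdPercent;
}

// Force-compaction: checked during streaming, at threshold + 5%.
function shouldForceCompact(usagePercent: number, thresholdPercent: number): boolean {
  return usagePercent >= thresholdPercent + FORCE_COMPACTION_BUFFER;
}
```

With the default 70% threshold, a response that streams to 72% finishes uninterrupted and auto-compacts afterwards; only crossing 75% mid-stream interrupts it.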

UI Change

Also adds a "Force-compacting in N%" countdown during streaming when usage is in the buffer zone, so users know how much room remains before force-compaction triggers.
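The countdown label could be derived along these lines (the helper name is hypothetical; only the "Force-compacting in N%" format comes from this PR):

```typescript
// Hypothetical helper sketching the countdown described above.
const FORCE_COMPACTION_BUFFER = 5;

// Returns the countdown label while usage sits in the buffer zone
// (at or above the auto threshold, below the force point), else null.
function forceCompactionCountdown(usagePercent: number, thresholdPercent: number): string | null {
  const forceAt = thresholdPercent + FORCE_COMPACTION_BUFFER;
  if (usagePercent < thresholdPercent || usagePercent >= forceAt) return null;
  return `Force-compacting in ${forceAt - usagePercent}%`;
}
```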


Generated with mux

@ethanndickson force-pushed the force-compaction-threshold-buffer branch from 9b4ae34 to 8c70240 on December 3, 2025 at 00:53
@ethanndickson force-pushed the force-compaction-threshold-buffer branch from 8c70240 to 0f6c5ae on December 3, 2025 at 05:11
@ethanndickson (Member Author):
@codex review

@chatgpt-codex-connector (bot) left a comment:

💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you:

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

…ase)

Previously `shouldShowWarning` only used `lastUsagePercentage`, so the warning
never appeared during the first streaming response. It now uses the max of the
last and live usage, ensuring the countdown renders when it should.
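A minimal sketch of the fixed check, assuming usage values may be absent before any response has streamed (the signature is illustrative):

```typescript
// Sketch of the fix: consider both the last completed response's usage and
// the live (streaming) usage, so the warning can appear during the very
// first response. Parameter names follow the commit message; the exact
// signature is an assumption.
function shouldShowWarning(
  lastUsagePercentage: number | null,
  liveUsagePercentage: number | null,
  thresholdPercent: number,
): boolean {
  const effective = Math.max(lastUsagePercentage ?? 0, liveUsagePercentage ?? 0);
  return effective >= thresholdPercent;
}
```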
@ethanndickson (Member Author):
@codex review

@chatgpt-codex-connector (bot):

Codex Review: Didn't find any major issues. You're on a roll.


@ethanndickson added this pull request to the merge queue Dec 3, 2025
Merged via the queue into main with commit 5e80c97 Dec 3, 2025
13 checks passed
@ethanndickson deleted the force-compaction-threshold-buffer branch December 3, 2025 06:00