@ethanndickson ethanndickson commented Nov 20, 2025

Stack

  1. 🤖 feat: add auto-compaction configuration UI #685
  2. 🤖 feat: add auto-compaction with progressive warnings #683 ⬅ This PR
  3. 🤖 refactor: move compaction logic to backend #670
  4. 🤖 refactor: use message queue for compact continue messages #650 (base)

Summary

Adds automatic context compaction that triggers at 70% usage, with progressive countdown warnings starting at 60%.

Relates to #651.

Key Changes

Auto-Compaction:

  • Triggers automatically when the current context usage reaches 70% of the model's context window
  • Queues user's message to send after compaction completes
  • Includes image parts in continue messages

Progressive Warnings:

  • Shows countdown at 60-69% usage: "Context left until Auto-Compact: X% remaining"
  • Shows urgent message at 70%+: "⚠️ Approaching context limit. Next message will trigger auto-compaction."

Implementation:

  • New shouldAutoCompact() utility centralizes threshold logic with configurable constants
  • Returns { shouldShowWarning, usagePercentage, thresholdPercentage }
  • Uses last usage entry (current context size) to match UI token meter display
  • Excludes historical usage from threshold check to prevent infinite compaction loops
  • ContinueMessage type now includes optional imageParts
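The threshold logic above can be sketched roughly as follows. This is an illustrative sketch only; the constant names and exact shape of the real `shouldAutoCompact()` utility in the PR may differ, though the return fields match the summary above.

```typescript
// Configurable constants centralizing the thresholds (names illustrative).
const AUTO_COMPACT_THRESHOLD_PCT = 70; // compaction triggers at 70% usage
const WARNING_THRESHOLD_PCT = 60; // countdown warnings start at 60%

interface AutoCompactCheck {
  shouldShowWarning: boolean;
  usagePercentage: number;
  thresholdPercentage: number;
}

function shouldAutoCompact(
  currentContextTokens: number,
  contextWindowTokens: number
): AutoCompactCheck {
  const usagePercentage = (currentContextTokens / contextWindowTokens) * 100;
  return {
    shouldShowWarning: usagePercentage >= WARNING_THRESHOLD_PCT,
    usagePercentage,
    thresholdPercentage: AUTO_COMPACT_THRESHOLD_PCT,
  };
}

// Callers compare usagePercentage against thresholdPercentage to decide
// whether the next message should trigger auto-compaction.
```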

Technical Details

Usage Calculation:
The auto-compaction check uses the most recent usage entry from usageHistory to calculate the current context size. This matches the percentage displayed in the UI token meter and correctly handles post-compaction scenarios:

  • Before compaction: Last entry represents full context → triggers at 70% correctly
  • After compaction: Last entry excludes historical usage → resets to actual context size
  • Historical usage preserved: Remains in usage history for cost tracking, but not used for threshold calculations

This prevents the infinite loop where post-compaction workspaces would continuously re-compact because historical usage tokens were being included in the threshold check.
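The distinction can be sketched as below. The `UsageEntry` shape is hypothetical, for illustration only; the point is that only the last entry is read, never the sum of the history.

```typescript
// Hypothetical usage-entry shape; the actual fields in usageHistory may differ.
interface UsageEntry {
  inputTokens: number;
  outputTokens: number;
}

// Only the most recent entry reflects the live context size. Summing the
// whole history would re-count pre-compaction tokens and immediately
// re-trigger compaction after every compaction.
function currentContextTokens(usageHistory: UsageEntry[]): number {
  const last = usageHistory[usageHistory.length - 1];
  return last ? last.inputTokens + last.outputTokens : 0;
}
```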

Future Work

Future PRs will add user settings to configure auto-compaction (enable/disable, custom threshold).

Generated with mux

@ethanndickson ethanndickson changed the title 🤖 feat: add progressive compaction warnings 🤖 feat: add auto-compaction with progressive warnings Nov 20, 2025
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

@ethanndickson

@codex review


@ethanndickson

@codex review

chatgpt-codex-connector[bot]

This comment was marked as resolved.

github-merge-queue bot pushed a commit that referenced this pull request Nov 21, 2025
## Stack
1. #685 
1. #683 
1. #670
1. #650 ⬅ This PR (base)

## Problem

Compact continue messages were handled by a frontend hook that watched
workspace states and manually sent continue messages after compaction.
This approach was complex, introduced potential race conditions, and had
poor separation of concerns.

Relates to #651.

## Solution

Use the existing message queue system:
- Backend queues continue message when compaction starts
- Queue auto-sends when stream ends (existing behavior)
- Clear queue on error paths

**Benefits:** Simpler (-134 lines), more reliable, better UX (continue
message visible in queue).
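
The three bullets above can be sketched as a minimal queue; class and method names here are illustrative, not the actual mux APIs.

```typescript
interface ContinueMessage {
  text: string;
  imageParts?: string[];
}

class CompactionQueue {
  private queued: ContinueMessage | null = null;

  // Backend queues the continue message when compaction starts.
  enqueue(msg: ContinueMessage): void {
    this.queued = msg;
  }

  // Existing queue behavior: auto-send when the stream ends.
  onStreamEnd(send: (msg: ContinueMessage) => void): void {
    if (this.queued !== null) {
      send(this.queued);
      this.queued = null;
    }
  }

  // Error paths clear the queue so a stale continue message is never sent.
  onError(): void {
    this.queued = null;
  }
}
```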

_Generated with `mux`_
@ethanndickson ethanndickson force-pushed the fix-compaction-message-race branch from e72e057 to f93f212 Compare November 21, 2025 00:46
github-merge-queue bot pushed a commit that referenced this pull request Nov 21, 2025
_Generated with `mux`_

## Stack
1. #685 
1. #683 
1. #670  ⬅ This PR
1. #650 (base)


## Summary
Moves history compaction handling from WorkspaceStore (frontend) to
agentSession (backend) to centralize server-side operations and fix race
conditions.

Relates to #651.

## Changes

### Backend (agentSession.ts)
- Added `handleCompactionCompletion()` - detects compaction stream-end,
extracts summary from event.parts, performs history replacement
- Added `handleCompactionAbort()` - handles Ctrl+A (accept early with
`[truncated]`) and Ctrl+C (cancel) flows
- Added `performCompaction()` - atomically replaces chat history with
summary message including cumulative usage
- Implemented `abandonPartial` flag flow from IPC through to
StreamAbortEvent
- Extracts truncated message content from history instead of
partialService

### Frontend (WorkspaceStore.ts)
- Removed `handleCompactionCompletion()` and `handleCompactionAbort()`
methods
- Removed `performCompaction()` method
- Removed `processedCompactionRequestIds` Set
- Simplified `cancelCompaction()` - just calls `interruptStream` with
`abandonPartial: true`
- Fixed Ctrl+A keybind to pass `abandonPartial: false` for early accept

### Shared
- Updated `StreamAbortEvent` to include `abandonPartial?: boolean`
- `historyService.clearHistory()` now returns deleted sequence numbers
- Created `calculateCumulativeUsage()` utility in `displayUsage.ts` to
extract and sum usage from messages
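
A rough sketch of that utility is below; the actual message and usage shapes in `displayUsage.ts` may differ.

```typescript
// Hypothetical shapes for illustration.
interface TokenUsage {
  inputTokens: number;
  outputTokens: number;
}

interface ChatMessage {
  usage?: TokenUsage;
}

// Sums per-message usage so the compaction summary message can carry the
// cumulative usage of everything it replaced.
function calculateCumulativeUsage(messages: ChatMessage[]): TokenUsage {
  return messages.reduce<TokenUsage>(
    (total, msg) => ({
      inputTokens: total.inputTokens + (msg.usage?.inputTokens ?? 0),
      outputTokens: total.outputTokens + (msg.usage?.outputTokens ?? 0),
    }),
    { inputTokens: 0, outputTokens: 0 }
  );
}
```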

## Testing
- [x] Manual: `/compact` completes successfully
- [x] Manual: Ctrl+A during compaction accepts early with `[truncated]`
- [x] Manual: Ctrl+C during compaction cancels and enters edit mode
- [x] Verify cumulative usage preserved across multiple compactions
Base automatically changed from fix-compaction-message-race to main November 21, 2025 01:15
@ethanndickson ethanndickson force-pushed the frontend-auto-compaction-queue branch from 2438689 to 087e85f Compare November 21, 2025 01:56
