🤖 Fix auto-compact-continue race by tracking message IDs #337
Problem
After running /compact -c "continue message", the continue message is sometimes sent multiple times in a row instead of just once.

Root Cause
The previous workspace-level guard had a race condition:
- replaceChatHistory emits a delete event, then a new message event
- Each event calls bump(), which notifies all subscribers
- Multiple checkAutoCompact() calls can therefore be in-flight simultaneously:
  - bump() → subscriber fires → checkAutoCompact() starts
  - bump() → subscriber fires → checkAutoCompact() starts

The double-check guard in PR #334 helped but didn't fully solve it, because the checks and sets are separate operations.
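The window looks roughly like the sketch below. This is a hypothetical illustration, not the actual cmux code: when the add happens only after an await, two subscriber callbacks for the same compaction can both pass the check first.

```typescript
// Hypothetical illustration of the old pattern (names simplified): the has()
// check and the add() are separated by an await, so two callbacks triggered by
// the same compaction can both pass the check before either records the workspace.
const firedForWorkspace = new Set<string>();

async function checkAutoCompactOld(workspaceId: string): Promise<void> {
  if (firedForWorkspace.has(workspaceId)) {
    return; // already fired for this workspace
  }
  // Stand-in for the async continue-message send; both in-flight calls reach this point.
  await new Promise((resolve) => setTimeout(resolve, 0));
  firedForWorkspace.add(workspaceId); // recorded too late to stop the second call
}
```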
The real issue: Workspace-level tracking is the wrong granularity. We need to prevent processing the same compaction MESSAGE multiple times, not the same workspace multiple times. A new compaction creates a new message with a new ID.
Solution
Track processed message IDs instead of workspace IDs.
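A minimal sketch of that guard, assuming a string message ID and a hypothetical sendContinueMessage helper; the actual cmux implementation differs in detail:

```typescript
// Sketch only: the guard is keyed on the compaction result message ID, and the
// check and the add happen synchronously, before any await.
const processedMessageIds = new Set<string>();

async function sendContinueMessage(text: string): Promise<void> {
  // Hypothetical stub; in cmux the continue message goes through the chat backend.
}

async function checkAutoCompact(messageId: string, continueText: string): Promise<void> {
  if (processedMessageIds.has(messageId)) {
    return; // a continue message was already sent for this compaction result
  }
  processedMessageIds.add(messageId); // record before awaiting, closing the race window
  await sendContinueMessage(continueText);
}
```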
Why This Is Obviously Correct
The correctness is self-evident: "Have we sent a continue message for THIS compaction result message? Yes/No."
How It Fixes The Race
Before (workspace-level):
- Call 1: firedForWorkspace.has(workspaceId) → false
- Call 2: firedForWorkspace.has(workspaceId) → false (still!), so both calls send the continue message

After (message-level):

- Call 1: processedMessageIds.has(messageId) → false, so it records the ID and sends the continue message
- Call 2: processedMessageIds.has(messageId) → true (same message), so it skips

Both calls are checking the same unique identifier (the message ID), so the guard works correctly even with concurrent execution.
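As a concrete illustration using the sketch above (the message ID here is made up): two concurrent notifications for the same compaction result deduplicate to a single send.

```typescript
// Both calls race on the same (hypothetical) message ID; only the first one
// records the ID and sends, the second returns early.
void Promise.all([
  checkAutoCompact("compaction-result-123", "continue message"),
  checkAutoCompact("compaction-result-123", "continue message"),
]);
```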
Testing
Manual testing needed:
/compact -c "continue message"multiple timesGenerated with
cmux