Feature Request: Enhanced Intelligent Condensing with Guaranteed Minimum Token Output
Problem Description
The current intelligent condensing functionality in Roo-Code has a critical limitation: it doesn't guarantee that the condensed output will meet a minimum token requirement. Users have reported cases where large contexts (e.g., 210,000 tokens) are condensed into much smaller outputs (e.g., 12,000 tokens) when they need a minimum of 50,000 tokens to maintain sufficient context for their tasks.
Current Implementation Analysis
The existing condensing system:
- Uses a single LLM summarization call with a detailed prompt structure (SUMMARY_PROMPT)
- Triggers condensing based on context window percentage thresholds (5-100%)
- Validates that the new context size is smaller than the previous context
- Has no mechanism to enforce minimum output token requirements
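The threshold-based trigger described above can be sketched as follows; the function name and parameters are hypothetical, not the actual Roo-Code implementation:

```typescript
// Hypothetical sketch of the current trigger logic (names assumed).
// Condensing fires when context usage crosses a user-set percentage
// of the model's context window; there is no check on output size.
function shouldCondense(
	contextTokens: number,
	contextWindow: number,
	thresholdPercent: number, // user setting, 5-100
): boolean {
	const usagePercent = (contextTokens / contextWindow) * 100
	return usagePercent >= thresholdPercent
}
```

Note that this decides only *when* to condense, which is why a 210,000-token context can legally shrink to 12,000 tokens: nothing downstream constrains the result.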
Proposed Solution
Implement a multi-request intelligent condensing system that:
Core Features
- Programmatic Token Checking: After each API request, check the actual token count of the condensed output
- Iterative Refinement: If output is below user-defined minimum, make additional API requests to append more context until minimum threshold is met
- Configuration via Settings: Allow users to set minimum token requirements in application settings (rather than pop-ups)
- Smart Context Selection: Use the first API call to determine structure and subsequent calls to append relevant context
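The four features above could combine into a loop like the following. This is a minimal sketch, assuming hypothetical summarize and appendContext calls that return condensed text plus its measured token count; none of these names exist in the codebase:

```typescript
// Sketch of the proposed multi-request condensing loop (all names hypothetical).
interface CondenseResult {
	text: string
	tokens: number // programmatically measured after each API request
}

async function condenseWithMinimum(
	context: string,
	minimumTokens: number, // from the proposed setting
	summarize: (input: string) => Promise<CondenseResult>,
	appendContext: (current: CondenseResult) => Promise<CondenseResult>,
	maxRequests = 5, // cap retries to bound cost and latency
): Promise<CondenseResult> {
	// First call determines the summary structure.
	let result = await summarize(context)
	let requests = 1
	// Subsequent calls append relevant context until the minimum is met
	// or the request budget is exhausted.
	while (result.tokens < minimumTokens && requests < maxRequests) {
		result = await appendContext(result)
		requests++
	}
	return result
}
```

The maxRequests cap is one way to keep the "performance impact minimized" criterion: the loop degrades gracefully instead of retrying indefinitely when the model cannot produce enough output.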
Technical Implementation
The enhancement should be implemented in:
- src/core/condense/index.ts - Main summarization logic
- src/core/sliding-window/index.ts - Context truncation logic
- src/core/task/Task.ts - Task integration points
- packages/types/src/global-settings.ts - Add new configuration option
Configuration Requirements
Add new setting in global configuration:
```typescript
// In global-settings.ts
minimumCondenseTokens: z.number().optional(), // Minimum tokens for condensed output
```
Acceptance Criteria
- Users can configure minimum token requirements via settings
- Condensing process automatically makes multiple API requests if needed
- System validates that final output meets minimum token requirements
- Backward compatibility maintained for existing use cases
- Performance impact minimized through smart retry logic
- Clear user feedback when condensation meets or fails to meet requirements
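The validation and feedback criteria could be checked in one place. The sketch below is a hypothetical helper (the function name and message strings are assumptions), combining the existing smaller-than-previous check with the new minimum check:

```typescript
// Hypothetical final validation step with user feedback (names assumed).
function validateCondenseResult(
	finalTokens: number,
	previousTokens: number,
	minimumTokens?: number, // optional, matching the proposed setting
): { ok: boolean; message: string } {
	// Existing rule: the condensed context must be smaller than the original.
	if (finalTokens >= previousTokens) {
		return { ok: false, message: "Condensing did not reduce context size" }
	}
	// New rule: the result must meet the configured minimum, if one is set.
	// Leaving minimumTokens undefined preserves backward compatibility.
	if (minimumTokens !== undefined && finalTokens < minimumTokens) {
		return {
			ok: false,
			message: `Condensed output (${finalTokens} tokens) is below the configured minimum (${minimumTokens})`,
		}
	}
	return { ok: true, message: `Condensed to ${finalTokens} tokens` }
}
```

Because the minimum is optional, existing users who never set it see exactly the current behavior.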
Impact
This enhancement will ensure that users always receive condensed contexts that meet their minimum token requirements, preventing loss of important contextual information that could affect task completion quality.
Benefits
- Guaranteed Context Preservation: Users will always get sufficient context for their tasks
- Improved Task Completion: Better chance of successful task execution with adequate context
- User Control: Explicit control over minimum token requirements
- Backward Compatibility: Existing functionality remains unchanged