Merged
310 changes: 275 additions & 35 deletions .roomodes

Large diffs are not rendered by default.

29 changes: 29 additions & 0 deletions docs/features/experimental/codebase-indexing.mdx
@@ -44,6 +44,35 @@ Choose one of these options for generating embeddings:
- Supports any Ollama-compatible embedding model
- Requires Ollama base URL configuration

### Setting Up Ollama for Embeddings

1. **Install Ollama**
- **macOS**: `brew install ollama` or download from [ollama.com](https://ollama.com)
- **Linux**: `curl -fsSL https://ollama.com/install.sh | sh`
- **Windows**: Download installer from [ollama.com](https://ollama.com)

2. **Start Ollama Service**
```bash
ollama serve
```
This starts Ollama on `http://localhost:11434` (default port)

3. **Install Embedding Model**
```bash
ollama pull nomic-embed-text
```
This downloads the recommended embedding model (~274MB)

4. **Verify Installation**
```bash
ollama list
```
You should see `nomic-embed-text` in the list

5. **Configure in Roo Code**
- Set Ollama Base URL: `http://localhost:11434`
- Select Model: `nomic-embed-text`
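Once configured, you can optionally confirm end-to-end that the model serves embeddings. A quick sanity check (a sketch assuming the default port, using Ollama's `/api/embeddings` endpoint):

```shell
# Request a test embedding from the local Ollama service.
# Assumes Ollama is running on the default port 11434.
curl -s http://localhost:11434/api/embeddings \
  -d '{"model": "nomic-embed-text", "prompt": "hello world"}'
```

A JSON response containing an `embedding` array of floats means the model is ready for Roo Code to use.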

### Vector Database

**Qdrant** is required for storing and searching embeddings:
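If you don't already have a Qdrant instance, one common way to run it locally is via Docker (a sketch, not the only option; adjust the port and storage path to your setup):

```shell
# Start a local Qdrant instance with persistent storage.
docker run -d --name qdrant \
  -p 6333:6333 \
  -v "$(pwd)/qdrant_storage:/qdrant/storage" \
  qdrant/qdrant
```

Qdrant's REST API then listens on `http://localhost:6333`.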
1 change: 0 additions & 1 deletion docs/features/experimental/experimental-features.md
@@ -17,7 +17,6 @@ To enable or disable experimental features:
The following experimental features are currently available:

- [Codebase Indexing](/features/experimental/codebase-indexing) - Semantic search through AI-powered codebase indexing
- [Intelligently Condense the Context Window](/features/experimental/intelligent-context-condensing)
- [Power Steering](/features/experimental/power-steering)

## Providing Feedback
@@ -3,22 +3,27 @@ sidebar_label: 'Intelligent Context Condensing'
---
import Codicon from '@site/src/components/Codicon';

# Intelligent Context Condensing (Experimental)

The Intelligent Context Condensing feature helps manage long conversations by summarizing earlier parts of the dialogue. This prevents important information from being lost when the context window nears its limit. This is an **experimental feature** and is **disabled by default**.

<div style={{width: '50%', margin: 'auto'}}>
<div style={{position: 'relative', paddingBottom: '177.77%', height: 0, overflow: 'hidden'}}>
<iframe
src="https://www.youtube.com/embed/o5xgO9N8vVU"
title="YouTube Short"
frameBorder="0"
allow="accelerometer; autoplay; clipboard-write; encrypted-media; gyroscope; picture-in-picture; web-share"
allowFullScreen
style={{position: 'absolute', top: 0, left: 0, width: '100%', height: '100%'}}
></iframe>
</div>
# Intelligent Context Condensing

The Intelligent Context Condensing feature helps manage long conversations by summarizing earlier parts of the dialogue. This prevents important information from being lost when the context window nears its limit. This feature is **enabled by default**.

<div style={{ position: 'relative', paddingBottom: '56.25%', height: 0, overflow: 'hidden' }}>
<iframe
src="https://www.youtube.com/embed/9k8OAXlszak"
style={{
position: 'absolute',
top: 0,
left: 0,
width: '100%',
height: '100%',
}}
frameBorder="0"
allow="autoplay; encrypted-media"
allowFullScreen
></iframe>
</div>

<br />
## How It Works

As your conversation with Roo Code grows, it might approach the context window limit of the underlying AI model. When this happens, older messages would typically be removed to make space. Intelligent Context Condensing aims to prevent this abrupt loss by:
@@ -31,29 +36,32 @@
* **Summarization Impact:** While original messages are preserved if you use [Checkpoints](/features/checkpoints) to rewind, the summarized version is what's used in ongoing LLM calls to keep the context manageable.
* **Cost:** The AI call to perform the summarization incurs a cost. This cost is included in the context condensing metrics displayed in the UI.

## Enabling This Feature
## Configuration

As an experimental feature, Intelligent Context Condensing is **disabled by default**.
Intelligent Context Condensing is **enabled by default** and offers several configuration options:

1. Open Roo Code settings (<Codicon name="gear" /> icon in the top right corner of the Roo Code panel).
2. Navigate to the "Experimental" section.
3. Toggle the "Automatically trigger intelligent context condensing" (`autoCondenseContext`) option to enable it.
4. Optionally, adjust the "Threshold to trigger intelligent context condensing" (`autoCondenseContextPercent`) slider to control the trigger point for automatic context condensing.
5. Save your changes.

<img src="/img/intelligent-context-condensation/intelligent-context-condensation-1.png" alt="Settings for Intelligent Context Condensing" width="600" />
*The image above shows settings for Intelligent Context Condensing: the toggle to "Automatically trigger intelligent context condensing" and the "Threshold to trigger intelligent context condensing" slider.*

2. Navigate to the "Context" settings section.
3. Configure the available options:
- **Automatically trigger intelligent context condensing**: Enabled by default, this controls whether condensing happens automatically
- **Threshold to trigger intelligent context condensing**: A percentage slider (default 100%) that determines when condensing activates based on context window usage
- **API Configuration for Context Condensing**: Choose which API configuration to use for condensing operations (defaults to your current active configuration)
- **Custom Context Condensing Prompt**: Customize the system prompt used for context condensing operations

<img src="/img/intelligent-context-condensing/intelligent-context-condensing.png" alt="Settings for Intelligent Context Condensing" width="600" />
*Intelligent Context Condensing configuration options: automatic triggering toggle, threshold slider, API configuration selection, and custom prompt customization.*
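The threshold behaves like a simple percentage comparison. A hypothetical sketch of the trigger condition (illustrative numbers only; the real check happens inside Roo Code):

```shell
THRESHOLD=80       # "Threshold to trigger" setting, in percent
USED=165000        # tokens currently in the conversation
WINDOW=200000      # model's context window size
PCT=$((USED * 100 / WINDOW))   # current usage: 82%
if [ "$PCT" -ge "$THRESHOLD" ]; then
  echo "condense"  # usage has crossed the threshold
fi
```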
## Controlling and Understanding Context Condensing

Roo Code provides several ways to control and understand the Intelligent Context Condensing feature:

### Controlling Context Condensing
* **Automatic Threshold:** In Roo Code Settings (<Codicon name="gear" />) > "Experimental," the `autoCondenseContextPercent` setting allows you to define a percentage (e.g., 80%). Roo Code will attempt to condense the context automatically when the conversation reaches this level of the context window's capacity.
* **Manual Trigger:** A **Condense Context** button (<Codicon name="fold" /> icon) is available when a task is expanded, typically located at the bottom of the task view, next to other task action icons like the trash can. This allows you to initiate the context condensing process at any time.
* **Automatic Threshold:** The threshold slider in "Context" settings allows you to define a percentage (e.g., 80%) of context window usage. Roo Code will attempt to condense the context automatically when the conversation reaches this level of capacity.
* **API Configuration:** Select which API configuration to use for context condensing operations. This allows you to use a different provider or model specifically for condensing if desired.
* **Custom Prompts:** Modify the system prompt used for condensing to better suit your workflow or to emphasize certain aspects of conversation summarization.
* **Manual Trigger:** A **Condense Context** button is available at the top of the task, positioned to the right of the context bar. This allows you to initiate the context condensing process at any time.

<img src="/img/intelligent-context-condensation/intelligent-context-condensation-2.png" alt="Manual Condense Context button in expanded task view" width="600" />
*The Manual Condense Context button (highlighted with a yellow arrow) appears in the expanded task view.*
<img src="/img/intelligent-context-condensing/intelligent-context-condensing-1.png" alt="Manual Condense Context button in expanded task view" width="600" />
*The Manual Condense Context button (highlighted with a yellow arrow) is easily accessible for manual control.*

### Understanding Context Condensing Activity
* **Context Condensing Metrics:** When context condensing occurs, Roo Code displays:
@@ -72,13 +80,7 @@ Roo Code provides several ways to control and understand the Intelligent Context

* The task header also displays the current context condensing status.
* The `ContextWindowProgress` bar offers a visual representation of token distribution, including current usage, space reserved for the AI's output, available space, and raw token numbers.
* **Interface Clarity:** The "Condense Context" button includes a tooltip explaining its function, available in all supported languages. The icon for context condensing-related actions is `codicon-compress`.

### Accurate Token Information
* Roo Code employs accurate token counting methods, with some AI providers utilizing their native token counting endpoints. This ensures that context size and associated costs are calculated reliably.

### Internationalization
* All user interface elements for this feature, such as button labels, tooltips, status messages, and settings descriptions, are available in multiple supported languages.
* **Interface Clarity:** The "Condense Context" button includes a tooltip explaining its function, available in all supported languages.

## Technical Implementation

@@ -96,24 +98,3 @@ Roo Code uses a sophisticated token counting system that:
- This reservation can be overridden by model-specific settings
- The system automatically calculates available space while maintaining this reservation
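As a worked illustration of the reservation logic (hypothetical numbers; the actual reservation comes from your settings or model defaults):

```shell
CONTEXT_WINDOW=200000                            # model's total context window (tokens)
RESERVED_OUTPUT=$((CONTEXT_WINDOW * 20 / 100))   # e.g. a 20% reservation for the AI's output
CURRENT_USAGE=150000                             # tokens already used by the conversation
AVAILABLE=$((CONTEXT_WINDOW - RESERVED_OUTPUT - CURRENT_USAGE))
echo "$AVAILABLE tokens available"               # 10000 tokens available
```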

## Performance Considerations

### Optimization
- The system optimizes token counting to minimize performance impact
- Token calculations are cached where possible
- Background processing prevents UI blocking during context condensing

### Resource Usage
- Context condensing operations are performed asynchronously
- The UI remains responsive during the process
- System resources are managed to prevent excessive memory usage

## Feedback

Your experience with experimental features is valuable. When reporting issues, please include:
- The current threshold setting
- The token counts before and after context condensing
- Any error messages displayed
- Steps to reproduce the issue

Please report any issues or suggestions regarding Intelligent Context Condensing on the [Roo Code GitHub Issues page](https://github.com/RooCodeInc/Roo-Code/issues).
6 changes: 6 additions & 0 deletions docs/update-notes/index.md
@@ -2,8 +2,14 @@

This section contains notes about recent updates to Roo Code, listed by version number.

## Version 3.19

* [3.19.0](/update-notes/v3.19.0) (2025-05-30)
* [3.19](/update-notes/v3.19) (2025-05-30)

## Version 3.18

* [3.18.5](/update-notes/v3.18.5) (2025-05-27)
* [3.18.4](/update-notes/v3.18.4) (2025-05-25)
* [3.18.3](/update-notes/v3.18.3) (2025-05-24)
* [3.18.2](/update-notes/v3.18.2) (2025-05-23)
2 changes: 1 addition & 1 deletion docs/update-notes/v3.17.0.md
@@ -35,7 +35,7 @@ Here's how it works:

<img src="/img/intelligent-context-condensation/intelligent-context-condensation.png" alt="Settings for Intelligent Context Condensation" width="600" />

For more details on this experimental feature, including how to enable it, please see the [Intelligent Context Condensing documentation](/features/experimental/intelligent-context-condensing).
For more details on this feature, please see the [Intelligent Context Condensing documentation](/features/intelligent-context-condensing).
Contributor review comment:

There is a slight inconsistency in the feature name: the image alt text uses "Intelligent Context Condensation" while the link text here says "Intelligent Context Condensing". It might be worth reviewing the naming for consistency.

Suggested change:

- For more details on this feature, please see the [Intelligent Context Condensing documentation](/features/intelligent-context-condensing).
+ For more details on this feature, please see the [Intelligent Context Condensation documentation](/features/intelligent-context-condensing).
## Smoother Chat and Fewer Interruptions! (thanks Cline!)

2 changes: 1 addition & 1 deletion docs/update-notes/v3.17.md
@@ -35,7 +35,7 @@ Here's how it works:

<img src="/img/intelligent-context-condensation/intelligent-context-condensation.png" alt="Settings for Intelligent Context Condensation" width="600" />

For more details on this experimental feature, including how to enable it, please see the [Intelligent Context Condensing documentation](/features/experimental/intelligent-context-condensing).
For more details on this feature, please see the [Intelligent Context Condensing documentation](/features/intelligent-context-condensing).

## Smoother Chat and Fewer Interruptions! (thanks Cline!)

12 changes: 6 additions & 6 deletions docs/update-notes/v3.18.0.mdx
@@ -2,8 +2,8 @@

This release introduces comprehensive context condensing improvements, YAML support for custom modes, new AI model integrations, and numerous quality-of-life improvements and bug fixes.

## Context Condensing Upgrades (Experimental)
Our **experimental** Intelligent Context Condensing feature sees significant enhancements for better control and clarity. Remember, these are **disabled by default** (enable in Settings (⚙️) > "Experimental").
## Context Condensing Upgrades
Our Intelligent Context Condensing feature sees significant enhancements for better control and clarity. **Note**: As of version 3.19.0, this feature is enabled by default.

Watch a quick overview:
<div style={{width: '50%', margin: 'auto'}}>
@@ -19,11 +19,11 @@ Watch a quick overview:
</div>
</div>
Key updates:
* **Adjustable Condensing Threshold & Manual Control**: Fine-tune automatic condensing or trigger it manually. [Learn more](/features/experimental/intelligent-context-condensing#controlling-context-condensing).
* **Clear UI Indicators**: Better visual feedback during condensing. [Details](/features/experimental/intelligent-context-condensing#understanding-context-condensing-activity).
* **Accurate Token Counting**: Improved accuracy for context and cost calculations. [More info](/features/experimental/intelligent-context-condensing#accurate-token-information).
* **Adjustable Condensing Threshold & Manual Control**: Fine-tune automatic condensing or trigger it manually. [Learn more](/features/intelligent-context-condensing#controlling-context-condensing).
* **Clear UI Indicators**: Better visual feedback during condensing. [Details](/features/intelligent-context-condensing#understanding-context-condensing-activity).
* **Accurate Token Counting**: Improved accuracy for context and cost calculations. [More info](/features/intelligent-context-condensing#accurate-token-information).

For full details, see the main [Intelligent Context Condensing documentation](/features/experimental/intelligent-context-condensing).
For full details, see the main [Intelligent Context Condensing documentation](/features/intelligent-context-condensing).


## Custom Modes: YAML Support
2 changes: 1 addition & 1 deletion docs/update-notes/v3.18.2.mdx
@@ -4,7 +4,7 @@ This release introduces context condensing enhancements and several important bu

## Context Condensing Enhancements

Enhanced the experimental context condensing feature with new settings and improved functionality:
Enhanced the context condensing feature with new settings and improved functionality:

* **Advanced Controls**: New experimental settings for fine-tuning conversation compression behavior
* **Improved Compression**: Better conversation summarization while preserving important context
18 changes: 18 additions & 0 deletions docs/update-notes/v3.18.5.mdx
@@ -0,0 +1,18 @@
# Roo Code 3.18.5 Release Notes (2025-05-27)

This release brings enhanced provider support, improved localization, and telemetry re-enablement.

## Provider Updates

* **Requesty Thinking Controls**: Add thinking controls for [Requesty provider](/providers/requesty) (thanks dtrugman!)
* **LiteLLM Metadata**: Improve model metadata for [LiteLLM provider](/providers/litellm)

## QOL Improvements

* **Traditional Chinese Locale**: Improve zh-TW Traditional Chinese locale (thanks PeterDaveHello and chrarnoldus!)

## Misc Improvements

* **Telemetry**: Re-enable telemetry

Thank you to all our contributors for making Roo Code better with each release!