Feature hasn't been suggested before.
Describe the enhancement you want to request
Problem / Motivation
The current Context panel only shows a coarse breakdown (User, Assistant, Other), making it difficult to understand what's actually consuming input context. Since context composition directly impacts model performance and cost, this information should be more visible and actionable.
Users currently can't easily answer questions like:
- How much context comes from provider/system instructions?
- How much comes from a loaded skill or MCP server?
- Which content types are consuming the most tokens?
- Which skills were actually loaded in this session?
Proposed Solution
Add a Context Sources breakdown module to the session context panel. This would:
- Backend: Calculate and expose per-source token distribution via a new or extended endpoint.
- Frontend: Add a lightweight UI component to the session view that dynamically visualizes this distribution (e.g. grouped by source type).
- Interactivity: Allow sorting and filtering of context entries to help users quickly identify the largest contributors.
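As a rough illustration of the data the endpoint could return and how the sorting interaction might work, here is a minimal TypeScript sketch. The type name, source labels, and function are all hypothetical, not an existing API:

```typescript
// Hypothetical per-source token entry returned by the proposed endpoint.
// Source labels ("system", "skill:…", "mcp:…") are illustrative only.
interface ContextSource {
  source: string; // e.g. "system", "skill:web-search", "mcp:filesystem", "user"
  tokens: number; // tokens this source contributes to the input context
}

// Sort descending by token count so the largest contributors surface first.
function largestContributors(
  sources: ContextSource[],
  limit = 3
): ContextSource[] {
  return [...sources].sort((a, b) => b.tokens - a.tokens).slice(0, limit);
}

const sample: ContextSource[] = [
  { source: "system", tokens: 1200 },
  { source: "skill:web-search", tokens: 4800 },
  { source: "user", tokens: 900 },
  { source: "mcp:filesystem", tokens: 2100 },
];

console.log(largestContributors(sample).map((s) => s.source));
// → ["skill:web-search", "mcp:filesystem", "system"]
```

The same shape would also support filtering by source type (e.g. show only skills) on the frontend without another round trip.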
Benefits
Improves transparency by showing exactly what gets sent to the model.
Helps users optimize token usage and structure prompts and skills more effectively.
Reduces debugging friction when context limits are hit unexpectedly.
A preview would look like this:
