Tracking issue for re-enabling the Copilot-powered AI summary feature that was temporarily removed in 0.11.0 (PR #23, in response to #20). All mitigations below MUST be in place before any Copilot call is reintroduced.
Mitigation requirements (all must ship before re-enable)
- User MUST explicitly select a model. No auto-pick. No silent default. With no saved model, the extension does not contact `vscode.lm` at all. The picker that selects the model is the only entry point for arming the feature.
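A minimal sketch of that arming rule, as a pure predicate (the type and function names are illustrative, not the extension's real API): with no saved model the feature is simply disarmed, with no fallback path that could reach `vscode.lm`.

```typescript
// Hypothetical arming gate: vscode.lm may only be contacted when the user
// has explicitly picked a model via the picker.
type ArmState = { armed: true; modelId: string } | { armed: false };

function armState(savedModelId: string | undefined): ArmState {
  // Empty or missing setting means disarmed: no auto-pick, no silent default.
  if (!savedModelId || savedModelId.trim() === "") {
    return { armed: false };
  }
  return { armed: true, modelId: savedModelId };
}
```

Keeping this as a single function makes the "only entry point" property easy to enforce: every code path that could issue a request checks `armState` first.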
- Premium / metered models require a separate explicit confirmation step. The picker labels each model with its tier; selecting an Opus-class or other premium model requires confirming a second prompt that names the model and warns about per-request cost.
- Cheaper models recommended at the top of the picker. A *-mini-class (or equivalent low-cost) model is presented as the recommended default. Premium models are visually flagged.
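The picker ordering and the premium second prompt from the two bullets above can be sketched as follows; the tier metadata and label wording are assumptions, not the extension's real data model.

```typescript
// Illustrative picker model: tier info is assumed to be known per model.
interface PickerModel { id: string; premium: boolean; }

// Cheap models float to the top (recommended default first); premium models
// sink below and carry a visible warning in their label.
function pickerItems(models: PickerModel[]): string[] {
  return [...models]
    .sort((a, b) => Number(a.premium) - Number(b.premium))
    .map(m => (m.premium ? `${m.id} (premium, metered; confirm required)` : m.id));
}

// Selecting a premium model triggers a second confirmation that names it;
// null means no extra prompt is needed.
function premiumPrompt(m: PickerModel): string | null {
  return m.premium
    ? `${m.id} is a premium, per-request metered model. Use it anyway?`
    : null;
}
```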
- Up-front usage warning before any bulk run. Before processing more than a small threshold of commands, a modal names the chosen model and the exact request count and requires explicit confirmation. Cancellation closes the run with zero requests sent.
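A sketch of the up-front gate. The threshold value here is an assumption (the issue only says "a small threshold"); the point is that the modal text names the model and the exact request count, and a small run skips the modal entirely.

```typescript
// Assumed threshold; the issue leaves the exact value open.
const BULK_THRESHOLD = 10;

// Returns the modal text to show, or null when the run is small enough
// to proceed without confirmation. Cancelling the modal means zero
// requests are sent.
function bulkConfirmation(modelId: string, requestCount: number): string | null {
  if (requestCount <= BULK_THRESHOLD) return null;
  return `About to send ${requestCount} requests to ${modelId}. Continue?`;
}
```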
- Batched consent during the run. Process summaries in a small initial batch (~5–10), pause, and ask whether to continue. Each continuation prompt re-shows the running total of requests sent so far. Cancellation is one click and propagates to in-flight requests.
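The batching scheme above can be sketched as a pure planning step (the batch size default is illustrative): the run is split into slices, and the caller prompts between slices using each slice's `sentSoFar` running total.

```typescript
// Plan a run as consent-sized slices. The caller shows "Sent <sentSoFar>
// requests so far; continue?" before each slice after the first, and a
// cancel also aborts any in-flight requests.
function planBatches(total: number, batchSize = 5): { count: number; sentSoFar: number }[] {
  const slices: { count: number; sentSoFar: number }[] = [];
  let sent = 0;
  while (sent < total) {
    const n = Math.min(batchSize, total - sent);
    slices.push({ count: n, sentSoFar: sent });
    sent += n;
  }
  return slices;
}
```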
- No silent re-runs from file watchers. Watcher-triggered re-summarisation must respect the same consent gates as the initial run — never silently fan out a fresh batch on save.
- No activation-time Copilot call. Activation never contacts `vscode.lm`, regardless of any setting. The user must invoke an explicit command (e.g. "Generate AI Summaries") to start a run.
- One-click disable. A status-bar item or palette command disables AI summary behaviour immediately and persists across sessions.
Follow-up candidates (post-re-enable)
- Per-run and per-day request caps as user settings
- Estimated-cost line in the consent modal
- Local audit log of request counts visible to the user
- CHANGELOG / README disclosure of the previous default and the fix on the version that re-enables the feature
Investigation — what went wrong (from #20)
Activation issued a bulk Copilot run with no consent gate, and fell through to a premium model when none was configured.
Code paths that produced the regression (all removed in PR #23):
- `runBackgroundStartup` → `initAiSummaries` → `runSummarisation` in automatic mode.
- One Copilot request per discovered command whose content hash was new or changed. Real repos easily hit hundreds.
- `resolveModelAutomatically` with no saved model selected the first concrete model returned by `vscode.lm.selectChatModels`. The reported case landed on Claude Opus 4.7 (premium, metered).
- File watchers re-triggered the same pipeline on edits. A single save fanned out another bulk run.
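The core hazard in the removed `resolveModelAutomatically` path can be reconstructed from the description above as a one-line sketch (the model list here is made up): taking the first model the API returns couples cost to whatever ordering the provider happens to use.

```typescript
interface ChatModel { id: string; premium: boolean; }

// Sketch of the removed logic, reconstructed from the issue description:
// no tier check, no consent, just the first element of an externally
// ordered list.
function resolveModelAutomatically(models: ChatModel[]): ChatModel | undefined {
  return models[0];
}

// In the reported case a premium model happened to be first.
const reported: ChatModel[] = [
  { id: "claude-opus", premium: true },
  { id: "gpt-4o-mini", premium: false },
];
```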
Compounding defaults:
- `commandtree.enableAiSummaries` defaulted to `true`.
- `commandtree.aiModel` defaulted to `""`, triggering the auto-pick above.
- No per-run cap, no consent modal, no estimated-usage preview, no rate limit, no progress UI with cancel.
Net effect: a single "Allow Copilot access" click authorised hundreds of premium-model requests with no further confirmation.
Close criteria — regression tests required before this issue is closed
- With no saved model, the extension does not call `vscode.lm` on activation, on file change, or on any other implicit trigger — only an explicit user command may initiate model selection.
- A run above the small-batch threshold cannot proceed without an explicit user confirmation that names the chosen model and the exact request count.
- Selecting a premium / Opus-class model requires a second explicit confirmation distinct from the model picker itself.
- Watcher-triggered runs respect the same consent gates as the initial run — no silent re-summarisation.
- The disable command takes effect immediately and is honoured by every code path that could otherwise call `vscode.lm`.
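The close criteria above reduce to a single testable predicate if every `vscode.lm`-reaching code path consults one gate. This is a self-contained sketch; the trigger taxonomy and all names are assumptions for illustration.

```typescript
type Trigger = "activation" | "fileChange" | "userCommand";

// One gate for every code path that could reach vscode.lm.
function maySendRequests(opts: {
  disabled: boolean;      // one-click disable flag, persisted across sessions
  savedModelId?: string;  // armed only when the user explicitly picked a model
  trigger: Trigger;
  consented: boolean;     // bulk/continuation confirmation already given
}): boolean {
  if (opts.disabled) return false;                 // disable wins everywhere
  if (!opts.savedModelId) return false;            // no model: never touch vscode.lm
  if (opts.trigger === "activation") return false; // no activation-time call, ever
  return opts.consented;                           // watcher and command runs need consent
}
```

Regression tests then enumerate the trigger and flag combinations rather than probing individual code paths.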
References
- #20: original report of the uncontrolled activation-time Copilot usage and premium-model fallback
- PR #23: temporary removal of the AI summary feature in 0.11.0