Conversation
Pull request overview
Adds an opt-in chat.autoReply setting to automatically answer chat question carousels using the currently selected language model, and centralizes stream-to-text extraction logic for reuse (including terminal monitoring).
Changes:
- Introduce `chat.autoReply` configuration and wire it into question carousel rendering with an opt-in warning dialog.
- Add a shared `getTextResponseFromStream` helper in `chat/common/languageModels.ts` and adopt it in terminal output monitoring.
- Refactor terminal monitoring code to import the shared helper instead of a local implementation.
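
For context, registering a boolean setting like `chat.autoReply` typically looks something like the sketch below; the configuration node id, default, and description wording here are assumptions, not the PR's actual registration:

```ts
// Hypothetical sketch of the setting registration; node id, default,
// and description text are assumptions, not the PR's actual code.
import { Registry } from '../../../../platform/registry/common/platform.js';
import { Extensions, IConfigurationRegistry } from '../../../../platform/configuration/common/configurationRegistry.js';
import { ChatConfiguration } from '../common/constants.js';

Registry.as<IConfigurationRegistry>(Extensions.Configuration).registerConfiguration({
	id: 'chat',
	properties: {
		[ChatConfiguration.AutoReply]: {
			type: 'boolean',
			default: false,
			description: 'Automatically answer chat question carousels using the currently selected language model.',
		},
	},
});
```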
Reviewed changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.
| File | Description |
|---|---|
| `src/vs/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/utils.ts` | Replaces the local helper implementation with a re-export of the shared stream parsing utility. |
| `src/vs/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/outputMonitor.ts` | Switches terminal monitoring to use the shared `getTextResponseFromStream`. |
| `src/vs/workbench/contrib/chat/common/languageModels.ts` | Adds the shared `getTextResponseFromStream` helper for extracting concatenated text from LM streaming responses. |
| `src/vs/workbench/contrib/chat/common/constants.ts` | Adds the `ChatConfiguration.AutoReply` constant for the new setting. |
| `src/vs/workbench/contrib/chat/browser/widget/chatListRenderer.ts` | Implements question-carousel auto-reply behavior gated by `chat.autoReply` and an opt-in dialog; adds model selection and response parsing for generated answers. |
| `src/vs/workbench/contrib/chat/browser/chat.contribution.ts` | Registers the `chat.autoReply` setting in the configuration schema. |
Comments suppressed due to low confidence (1)
`src/vs/workbench/contrib/chat/common/languageModels.ts:248`
- The new shared helper `getTextResponseFromStream` is added in a common module but isn't covered by unit tests. Since `src/vs/workbench/contrib/chat/test/common/languageModels.test.ts` already exists, please add a focused test that verifies it concatenates streamed text parts (including array parts) and handles failures/cancellation as intended. (A minimal sketch follows the quoted helper below.)
```ts
export async function getTextResponseFromStream(response: ILanguageModelChatResponse): Promise<string> {
	let responseText = '';
	// Drain the streamed parts, concatenating any text parts (single or array-batched).
	const streaming = (async () => {
		if (!response?.stream) {
			return;
		}
		for await (const part of response.stream) {
			if (Array.isArray(part)) {
				for (const item of part) {
					if (item.type === 'text') {
						responseText += item.value;
					}
				}
			} else if (part.type === 'text') {
				responseText += part.value;
			}
		}
	})();
	try {
		// Wait for both the overall result and the stream drain to settle.
		await Promise.all([response.result, streaming]);
		return responseText;
	} catch (err) {
		return 'Error occurred ' + err;
	}
}
```
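
For reference, a minimal test sketch along the lines the comment requests, assuming `ILanguageModelChatResponse` only needs the `stream` and `result` members exercised by the helper (the import path and test scaffolding are assumptions):

```ts
// A minimal sketch of the requested test; the import path is an assumption.
import * as assert from 'assert';
import { getTextResponseFromStream, ILanguageModelChatResponse } from '../../common/languageModels.js';

suite('getTextResponseFromStream', () => {
	// Helper to build an async-iterable stream from literal parts.
	async function* parts(...items: unknown[]) {
		for (const item of items) {
			yield item;
		}
	}

	test('concatenates plain and array-batched text parts', async () => {
		const response = {
			stream: parts(
				{ type: 'text', value: 'Hello ' },
				[{ type: 'text', value: 'wor' }, { type: 'text', value: 'ld' }]
			),
			result: Promise.resolve(undefined),
		} as unknown as ILanguageModelChatResponse;
		assert.strictEqual(await getTextResponseFromStream(response), 'Hello world');
	});

	test('returns an error string when the result rejects', async () => {
		const response = {
			stream: parts(),
			result: Promise.reject(new Error('boom')),
		} as unknown as ILanguageModelChatResponse;
		assert.ok((await getTextResponseFromStream(response)).startsWith('Error occurred'));
	});
});
```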
Resolved review comments:
- `src/vs/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/utils.ts`
- `...s/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/outputMonitor.ts`
rwoll left a comment:
This appears to still be triggering a confirmation. The prompt I'm using is: "use ask_questions tool to see if I want to run with sleep 30s or sleep 60s, then run the sleep command."
While it does auto-accept, `workbench.action.chat.open#blockOnResponse` is returning a confirmation status instead of waiting for the turn to finish (i.e. the sleep, etc.).
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@meganrogge - here's a minimal repro: https://github.com/microsoft/vscode/compare/merogge/auto-reply-chat...rwoll/wip-repro-chat-confirmation?expand=1.
We instead expect it to have an LLM response, etc.
Commit: …ating while waiting for an autoReply (#294733)
- rwoll/wip-repro-chat-confirmation
- add more info in the response and wait for confirmation sometimes
- remove test code
- remove dead code
rwoll left a comment:
LGTM (and I tested that this fixes the eval issue), but I'm less familiar with this section of code, so I suggest a review from @karthiknadig, @bpasero, or someone else who's worked on askQuestion.
I disabled auto-merge due to my unfamiliarity, as well as the CCR comments.
…d::resolveId so the same logical carousel is recognized across re-renders, preventing duplicate auto-replies and notifications. Marks the key before the async opt-in check and rolls back on decline to close the race window where concurrent re-renders could trigger multiple prompts.
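
A minimal sketch of that mark-then-roll-back pattern, with hypothetical names (`autoReplied`, `confirmOptIn`) rather than the actual implementation:

```ts
// Hypothetical sketch of the dedupe described above; names are illustrative.
const autoReplied = new Set<string>();

async function maybeAutoReply(carouselKey: string, confirmOptIn: () => Promise<boolean>): Promise<boolean> {
	if (autoReplied.has(carouselKey)) {
		return false; // a concurrent re-render already claimed this carousel
	}
	// Mark before the async opt-in check so re-renders racing with the dialog bail out.
	autoReplied.add(carouselKey);
	if (!(await confirmOptIn())) {
		// Roll back on decline so a later opt-in can still answer this carousel.
		autoReplied.delete(carouselKey);
		return false;
	}
	return true; // caller proceeds to generate and submit the auto-reply
}
```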
fixes #294714
Enables questions to be responded to when in YOLO mode. Previously, we just skipped those so that evals worked.
Now, for evals (or users), we have an auto-reply feature, which only runs if `chat.autoReply` is enabled and the user has opted in via a dialog (stored in application storage).
- Model selection: uses the current widget model name to select a concrete model id (exact id first, then Copilot family match).
- Prompting: builds a JSON-only prompt with question metadata and optional original request text, then asks the model; if parsing fails, retries with strict JSON instructions.
- Parsing: per question, resolves text/singleSelect/multiSelect answers, matching options by index, id, label, or partial label; invalid or empty values are dropped (see the matching sketch after this list).
- Merging: any question with an explicit default keeps that default; otherwise model answers are used; remaining gaps use deterministic fallbacks (first option or freeform "OK"/request text).
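
A minimal sketch of the option matching and merge order described above; `QuestionOption`, `matchOption`, and `mergeAnswer` are hypothetical names, not the PR's actual code:

```ts
// Hypothetical sketch of the matching/merging rules; not the actual implementation.
interface QuestionOption { id: string; label: string; }

// Match a model-provided value by index, exact id, exact label, then partial label.
function matchOption(options: QuestionOption[], value: string): QuestionOption | undefined {
	if (!value) {
		return undefined; // empty values are dropped
	}
	const index = Number(value);
	if (Number.isInteger(index) && index >= 0 && index < options.length) {
		return options[index];
	}
	const needle = value.toLowerCase();
	return options.find(o => o.id === value)
		?? options.find(o => o.label.toLowerCase() === needle)
		?? options.find(o => o.label.toLowerCase().includes(needle));
}

// Explicit defaults win, then the model's answer, then deterministic fallbacks.
function mergeAnswer(explicitDefault: string | undefined, modelAnswer: string | undefined, options: QuestionOption[]): string {
	return explicitDefault ?? modelAnswer ?? options[0]?.label ?? 'OK';
}
```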
questions.mov (demo video attachment)