
add chat.autoReply #294715

Merged

meganrogge merged 7 commits into main from merogge/auto-reply-chat on Feb 12, 2026

Conversation

@meganrogge (Collaborator) commented Feb 11, 2026

fixes #294714

Enables chat questions to be answered automatically when in YOLO mode. Previously, we simply skipped those questions so that evals would work.

Now, for evals (or users), we have an auto reply feature, which only runs if chat.autoReply is enabled and the user has opted in via a dialog (stored in application storage).

Model selection: uses the current widget model name to select a concrete model id (exact id first, then Copilot family match).
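The selection order described above could be sketched roughly as follows (a minimal sketch; `ModelInfo` and `selectModelId` are simplified, hypothetical names, not the actual VS Code types in this PR):

```typescript
// Hypothetical, simplified model descriptor; the real metadata
// type in languageModels.ts has more fields.
interface ModelInfo {
	id: string;
	family: string;
	vendor: string;
}

// Resolve the widget's current model name to a concrete model id:
// prefer an exact id match, then fall back to a Copilot model with
// a matching family; return undefined if neither exists.
function selectModelId(widgetModelName: string, models: ModelInfo[]): string | undefined {
	const exact = models.find(m => m.id === widgetModelName);
	if (exact) {
		return exact.id;
	}
	const familyMatch = models.find(m => m.vendor === 'copilot' && m.family === widgetModelName);
	return familyMatch?.id;
}
```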

Prompting: builds a JSON-only prompt with question metadata and optional original request text, then asks the model; if parsing fails, retries with strict JSON instructions.
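The parse-then-retry flow might look like this minimal sketch (`askModel` is a hypothetical stand-in for the actual language-model request, and the strict-retry prompt wording here is invented):

```typescript
// Ask the model for JSON answers; if the first reply does not
// parse, retry once with stricter JSON-only instructions.
async function getParsedAnswers(
	askModel: (prompt: string) => Promise<string>,
	basePrompt: string
): Promise<unknown | undefined> {
	const tryParse = (text: string): unknown | undefined => {
		try {
			return JSON.parse(text);
		} catch {
			return undefined;
		}
	};
	let parsed = tryParse(await askModel(basePrompt));
	if (parsed === undefined) {
		// Retry with strict instructions when the first reply was not valid JSON.
		const strictPrompt = basePrompt + '\nRespond with ONLY valid JSON. No prose, no code fences.';
		parsed = tryParse(await askModel(strictPrompt));
	}
	return parsed;
}
```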

Parsing: per question, resolves text/singleSelect/multiSelect answers, matching options by index, id, label, or partial label; invalid or empty values are dropped.
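The option-matching order could be sketched like this (`QuestionOption` and `matchOption` are illustrative names; the real resolution logic may differ in details):

```typescript
interface QuestionOption {
	id: string;
	label: string;
}

// Resolve a model-provided value to a concrete option, trying in
// order: numeric index, option id, exact label, then partial
// (case-insensitive) label match; undefined means "drop the answer".
function matchOption(value: string, options: QuestionOption[]): QuestionOption | undefined {
	const index = Number.parseInt(value, 10);
	if (!Number.isNaN(index) && index >= 0 && index < options.length) {
		return options[index];
	}
	return options.find(o => o.id === value)
		?? options.find(o => o.label === value)
		?? options.find(o => o.label.toLowerCase().includes(value.toLowerCase()));
}
```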

Merging: any question with an explicit default keeps that default; otherwise model answers are used; remaining gaps use deterministic fallbacks (first option or freeform “OK”/request text).
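That precedence (explicit default, then model answer, then deterministic fallback) could be sketched as follows (the `Question` shape here is a simplified assumption, not the actual type):

```typescript
interface Question {
	id: string;
	defaultValue?: string;
	options?: { label: string }[];
}

// Merge answers per question: an explicit default always wins,
// then the model's answer, then a deterministic fallback (first
// option for selects, request text or 'OK' for freeform).
function mergeAnswers(
	questions: Question[],
	modelAnswers: Map<string, string>,
	requestText?: string
): Map<string, string> {
	const merged = new Map<string, string>();
	for (const q of questions) {
		const modelAnswer = modelAnswers.get(q.id);
		if (q.defaultValue !== undefined) {
			merged.set(q.id, q.defaultValue);
		} else if (modelAnswer !== undefined) {
			merged.set(q.id, modelAnswer);
		} else if (q.options?.length) {
			merged.set(q.id, q.options[0].label);
		} else {
			merged.set(q.id, requestText ?? 'OK');
		}
	}
	return merged;
}
```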

(Two screenshots from Feb 11 and a demo video, questions.mov, are attached to the PR.)

Copilot AI review requested due to automatic review settings February 11, 2026 22:51
@meganrogge meganrogge self-assigned this Feb 11, 2026
@vs-code-engineering vs-code-engineering bot added this to the February 2026 milestone Feb 11, 2026
@meganrogge meganrogge enabled auto-merge (squash) February 11, 2026 22:53
Copilot AI (Contributor) left a comment


Pull request overview

Adds an opt-in chat.autoReply setting to automatically answer chat question carousels using the currently selected language model, and centralizes stream-to-text extraction logic for reuse (including terminal monitoring).

Changes:

  • Introduce chat.autoReply configuration and wire it into question carousel rendering with an opt-in warning dialog.
  • Add shared getTextResponseFromStream helper in chat/common/languageModels.ts and adopt it in terminal output monitoring.
  • Refactor terminal monitoring code to import the shared helper instead of a local implementation.

Reviewed changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 4 comments.

Summary per file:

  • src/vs/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/utils.ts: Replaces the local helper implementation with a re-export of the shared stream parsing utility.
  • src/vs/workbench/contrib/terminalContrib/chatAgentTools/browser/tools/monitoring/outputMonitor.ts: Switches terminal monitoring to use the shared getTextResponseFromStream.
  • src/vs/workbench/contrib/chat/common/languageModels.ts: Adds the shared getTextResponseFromStream helper for extracting concatenated text from LM streaming responses.
  • src/vs/workbench/contrib/chat/common/constants.ts: Adds the ChatConfiguration.AutoReply constant for the new setting.
  • src/vs/workbench/contrib/chat/browser/widget/chatListRenderer.ts: Implements question-carousel auto-reply behavior gated by chat.autoReply and an opt-in dialog; adds model selection and response parsing for generated answers.
  • src/vs/workbench/contrib/chat/browser/chat.contribution.ts: Registers the chat.autoReply setting in the configuration schema.
Comments suppressed due to low confidence (1)

src/vs/workbench/contrib/chat/common/languageModels.ts:248

  • New shared helper getTextResponseFromStream is added in a common module but isn't covered by unit tests. Since src/vs/workbench/contrib/chat/test/common/languageModels.test.ts already exists, please add a focused test that verifies it concatenates streamed text parts (including array parts) and handles failures/cancellation as intended.
export async function getTextResponseFromStream(response: ILanguageModelChatResponse): Promise<string> {
	let responseText = '';
	const streaming = (async () => {
		if (!response?.stream) {
			return;
		}
		for await (const part of response.stream) {
			if (Array.isArray(part)) {
				for (const item of part) {
					if (item.type === 'text') {
						responseText += item.value;
					}
				}
			} else if (part.type === 'text') {
				responseText += part.value;
			}
		}
	})();

	try {
		await Promise.all([response.result, streaming]);
		return responseText;
	} catch (err) {
		return 'Error occurred ' + err;
	}
}

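Per the review suggestion above, such a test is straightforward; a minimal, self-contained sketch (with the helper inlined here for illustration, and FakeResponse standing in for ILanguageModelChatResponse) might look like:

```typescript
type TextPart = { type: 'text'; value: string };

// Minimal stand-in for ILanguageModelChatResponse.
interface FakeResponse {
	stream: AsyncIterable<TextPart | TextPart[]>;
	result: Promise<unknown>;
}

// Inlined copy of the helper under test (same logic as above).
async function getTextResponseFromStream(response: FakeResponse): Promise<string> {
	let responseText = '';
	const streaming = (async () => {
		for await (const part of response.stream) {
			if (Array.isArray(part)) {
				for (const item of part) {
					if (item.type === 'text') {
						responseText += item.value;
					}
				}
			} else if (part.type === 'text') {
				responseText += part.value;
			}
		}
	})();
	try {
		await Promise.all([response.result, streaming]);
		return responseText;
	} catch (err) {
		return 'Error occurred ' + err;
	}
}

// A stream mixing single parts and array parts.
async function* parts(): AsyncGenerator<TextPart | TextPart[]> {
	yield { type: 'text', value: 'Hello ' };
	yield [{ type: 'text', value: 'wor' }, { type: 'text', value: 'ld' }];
}

// Concatenates both single and array text parts.
getTextResponseFromStream({ stream: parts(), result: Promise.resolve(undefined) })
	.then(text => console.assert(text === 'Hello world'));
```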
@rwoll (Member) left a comment


This appears to still be triggering a confirmation. The prompt I'm using is use ask_questions tool to see if I want to run with sleep 30s or sleep 60s, then run the sleep command.

While it does auto-accept, workbench.action.chat.open#blockOnResponse is returning a confirmation status instead of waiting for the turn to finish (i.e. sleep, etc).

Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
@rwoll (Member) commented Feb 11, 2026

> This appears to still be triggering a confirmation. The prompt I'm using is use ask_questions tool to see if I want to run with sleep 30s or sleep 60s, then run the sleep command.
>
> While it does auto-accept, workbench.action.chat.open#blockOnResponse is returning a confirmation status instead of waiting for the turn to finish (i.e. sleep, etc).

@meganrogge - here's a minimal repro: https://github.com/microsoft/vscode/compare/merogge/auto-reply-chat...rwoll/wip-repro-chat-confirmation?expand=1.

  1. Check out my branch.
  2. Start the debugger.
  3. Set chat.autoReply to true.
  4. Trigger Open Chat (Auto Reply Test) in the command palette.
  5. Observe that the editor opens with { "type": "confirmation" }.

We instead expect it to have an LLM response, etc.

…ating while waiting for an autoReply (#294733)

* rwoll/wip-repro-chat-confirmation

* add more info in the response and wait for confirmation sometimes

* remove test code

* remove dead code
rwoll previously approved these changes Feb 12, 2026
@rwoll (Member) left a comment


LGTM (and I tested this fixed the eval issue), but I'm less familiar with this section of code so I suggest a review from @karthiknadig, @bpasero, or someone else who's worked on askQuestion.

@rwoll rwoll disabled auto-merge February 12, 2026 00:55
@rwoll (Member) commented Feb 12, 2026

> LGTM (and I tested this fixed the eval issue), but I'm less familiar with this section of code so I suggest a review from @karthiknadig, @bpasero, or someone else who's worked on askQuestion.

I disabled auto-merge due to my unfamiliarity as well as the CCR comments.

…d::resolveId so the same logical carousel is recognized across re-renders, preventing duplicate auto-replies and notifications. Marks the key before the async opt-in check and rolls back on decline to close the race window where concurrent re-renders could trigger multiple prompts.
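The commit above describes an optimistic-marking pattern for deduplication; a minimal sketch of the idea (`handled`, `confirmOptIn`, and the function names here are hypothetical, not the actual implementation):

```typescript
// Keys of carousels already answered or currently in flight.
const handled = new Set<string>();

// Auto-reply at most once per logical carousel, even if it is
// re-rendered concurrently: claim the key before the async opt-in
// check, and roll the claim back if the user declines.
async function maybeAutoReply(
	carouselKey: string,
	confirmOptIn: () => Promise<boolean>,
	reply: () => Promise<void>
): Promise<void> {
	if (handled.has(carouselKey)) {
		return; // a concurrent re-render already claimed this carousel
	}
	handled.add(carouselKey); // claim synchronously, before any await
	if (!(await confirmOptIn())) {
		handled.delete(carouselKey); // roll back on decline
		return;
	}
	await reply();
}
```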
@meganrogge meganrogge disabled auto-merge February 12, 2026 01:08
karthiknadig previously approved these changes Feb 12, 2026
@meganrogge meganrogge enabled auto-merge (squash) February 12, 2026 01:18
@meganrogge meganrogge merged commit 2354a3c into main Feb 12, 2026
18 checks passed
@meganrogge meganrogge deleted the merogge/auto-reply-chat branch February 12, 2026 01:33

Linked issue: Add chat.autoReply to question carousel feature/setting

3 participants