Simplify chat.autoReply: skip questions instead of LLM-answering them#300926
Merged
meganrogge merged 2 commits into main, Mar 12, 2026
Conversation
When `chat.autoReply` is enabled, the questions tool now returns the same 'user is not available, use your best judgment' response used in autopilot mode — instead of sending questions to a separate LLM call for answer resolution. This removes ~450 lines of LLM prompt engineering, JSON parsing with retry, fuzzy option matching, fallback answer generation, and opt-in dialog management.
Contributor
Pull request overview
This PR simplifies the chat.autoReply feature by removing the separate LLM-based auto-answering flow for question carousels and instead returning the existing “user not available, use best judgment” autopilot-style response directly from the askQuestions tool when auto-reply is enabled.
Changes:
- Update `AskQuestionsTool` to treat `chat.autoReply` similarly to Autopilot: immediately append a completed carousel and return the autopilot response without waiting for user input.
- Remove UI-side auto-reply orchestration from `ChatListItemRenderer` (and delete the dedicated auto-reply implementation).
- Update the configuration description and adjust unit tests for the new `AskQuestionsTool` constructor.
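The tool-side change described above can be illustrated with a minimal sketch. The real `AskQuestionsTool` and `IConfigurationService` live in the VS Code codebase; the interfaces, class name, and return shape below are simplified stand-ins for illustration only.

```typescript
// Stand-in for VS Code's IConfigurationService (assumption: only getValue is needed here).
interface IConfigurationService {
    getValue(key: string): unknown;
}

// The canned autopilot response quoted in the PR description.
const AUTOPILOT_RESPONSE =
    'The user is not available to respond and will review your work later. ' +
    'Work autonomously and make good decisions.';

// Hypothetical simplification of the tool's invoke path.
class AskQuestionsToolSketch {
    constructor(private readonly configurationService: IConfigurationService) {}

    invoke(questions: string[]): { content: string; skippedCarousel: boolean } {
        // When chat.autoReply is enabled, behave like Autopilot: append a
        // completed carousel (represented here by a flag) and return the
        // canned response instead of waiting for input or calling an LLM.
        if (this.configurationService.getValue('chat.autoReply') === true) {
            return { content: AUTOPILOT_RESPONSE, skippedCarousel: true };
        }
        // Otherwise fall through to the normal interactive carousel flow
        // (omitted in this sketch).
        return { content: `asked ${questions.length} question(s)`, skippedCarousel: false };
    }
}
```

The key design point is that no second model request is made: the agent that asked the question resumes with the canned response and resolves the question itself, using the full conversation context it already has.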
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| src/vs/workbench/contrib/chat/common/tools/builtinTools/askQuestionsTool.ts | Adds IConfigurationService and auto-responds when chat.autoReply is enabled. |
| src/vs/workbench/contrib/chat/browser/widget/chatQuestionCarouselAutoReply.ts | Deletes the previous LLM-based auto-reply implementation. |
| src/vs/workbench/contrib/chat/browser/widget/chatListRenderer.ts | Removes auto-reply orchestration logic from the renderer. |
| src/vs/workbench/contrib/chat/browser/chat.contribution.ts | Updates chat.autoReply setting description to reflect “skip” behavior. |
| src/vs/workbench/contrib/chat/test/common/tools/builtinTools/askQuestionsTool.test.ts | Updates test construction to pass a configuration service. |
Listen for `chat.autoReply` config changes in `chatListRenderer` and skip all pending question carousels when the setting becomes enabled. This handles the edge case where a carousel is already awaiting user input and the user enables auto-reply or switches to autopilot afterward.
Contributor
Author
Good catch! Addressed in a76b7f7.
meganrogge
approved these changes
Mar 12, 2026
Summary
When `chat.autoReply` is enabled, the questions tool now returns the same "user is not available, use your best judgment" response used in autopilot mode, instead of sending questions to a separate LLM call for answer resolution. Removes ~450 lines of complexity (prompt engineering, JSON parsing with retry, fuzzy option matching, fallback answer generation, opt-in dialog, and storage management).
Motivation
The previous auto-reply system sent each question carousel to a separate LLM call to try to pick answers. This has a few problems:
Lower quality answers — The auto-reply model sees far less context than the agent that asked the question. The agent already has the full conversation, tool results, and user intent; the auto-reply model just gets the carousel questions and a snippet of the original request. Returning to the agent with "use your best judgment" lets it leverage all that context.
Extra cost and latency — Each carousel triggered an additional model request (with retry on parse failure), adding wait time and token cost for an answer that was often worse than what the agent would choose on its own.
Complexity — ~450 lines of prompt engineering, JSON parsing, fuzzy option matching, fallback generation, opt-in dialog management, and storage state — all to approximate what the agent can do natively.
The simpler approach: when auto-reply is on, the tool immediately returns the autopilot response ("The user is not available to respond and will review your work later. Work autonomously and make good decisions.") and appends a completed carousel in the UI so users can see what was skipped.
Changes
- `askQuestionsTool.ts`: returns the autopilot response when `chat.autoReply` is enabled
- `chatQuestionCarouselAutoReply.ts`: deleted
- `chatListRenderer.ts`: removed `maybeAutoReplyToQuestionCarousel()`, `getRequestMessageText()`, `_isAutopilotForContext()`, and unused imports
- `chat.contribution.ts`: updated the setting description
- `askQuestionsTool.test.ts`: passes `IConfigurationService` to the new constructor
Testing
- `askQuestionsTool` unit tests pass