From cadd1d0f174a2543116bffc29986a00f51e83cc5 Mon Sep 17 00:00:00 2001
From: Nick Trogh
Date: Thu, 13 Nov 2025 10:19:33 +0100
Subject: [PATCH] Revert deletion of content

---
 release-notes/v1_103.md | 38 ++++++++++++++++++++++++++++++++++++++
 1 file changed, 38 insertions(+)

diff --git a/release-notes/v1_103.md b/release-notes/v1_103.md
index d315c7e4fd..b852f04b10 100644
--- a/release-notes/v1_103.md
+++ b/release-notes/v1_103.md
@@ -151,6 +151,44 @@ Early terminal auto-approve settings were introduced last month. This release, t
 - The auto approve reasoning is now logged to the Terminal Output channel. We plan to [surface this in the UI soon](https://github.com/microsoft/vscode/issues/256780).
+
+### Input request detection for terminals and tasks
+
+When you run a task or terminal command in agent mode, the agent now detects when the process requests user input. You will be prompted to respond in chat, with the default or first option surfaced as the primary action and the other options available in a dropdown. This works for scripts and commands that require multiple confirmations, across all supported shells, and in both foreground and background terminals. If you type in the terminal while a prompt is present, the prompt hides automatically. When options and descriptions are provided (such as `[Y] Yes [N] No`), these are surfaced in the confirmation prompt.
+
+In the example below, the agent runs a script that prompts for user input multiple times. Confirmation prompts appear in chat and, once accepted, the script runs to completion and the agent summarizes what happened.
+
+<video src="images/1_103/prompt-input-demo.mp4" title="Example of input being detected and responded to" autoplay loop controls muted></video>
+
+### Improved error detection for tasks with problem matchers
+
+For tasks that use problem matchers, the agent now collects and surfaces errors based on the problem matcher results, rather than relying on the language model to evaluate the output. Problems are presented in a dropdown within the chat progress message, allowing you to navigate directly to the problem location. This ensures that errors are reported only when they are relevant to the current task execution.
+
+### Compound task support in agent mode
+
+Agent mode now supports running compound tasks. When you run a compound task, the agent shows progress and output for each dependent task, including any prompts for user input. This enables more complex workflows and better visibility into multi-step task execution.
+
+In the example below, the `VS Code - Build` task is run. Output is assessed for each dependent task, and a problem is surfaced to the user in the response and in the progress message dropdown.
+
+<video src="images/1_103/build-task.mp4" title="Example of agent running the VS Code - Build task" autoplay loop controls muted></video>
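+
+As a concrete reference for the two features above, the sketch below shows roughly what a compound task wired up with problem matchers can look like in `tasks.json`. The labels and commands are illustrative placeholders, not VS Code's actual build setup: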
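+
+```jsonc
+{
+  "version": "2.0.0",
+  "tasks": [
+    {
+      // Dependent task: compiles with tsc. $tsc is a built-in problem matcher,
+      // so the agent can surface compile errors directly from its results.
+      "label": "Core - Build",
+      "type": "shell",
+      "command": "tsc -p ./src",
+      "problemMatcher": "$tsc"
+    },
+    {
+      // A second dependent task, also covered by a built-in problem matcher.
+      "label": "Extensions - Build",
+      "type": "shell",
+      "command": "tsc -p ./extensions",
+      "problemMatcher": "$tsc"
+    },
+    {
+      // Compound task: running this runs both dependencies in sequence.
+      "label": "Example - Build",
+      "dependsOn": ["Core - Build", "Extensions - Build"],
+      "dependsOrder": "sequence",
+      "problemMatcher": []
+    }
+  ]
+}
+```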
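+
+With a definition along these lines, running `Example - Build` in agent mode reports progress for `Core - Build` and `Extensions - Build` individually, and any errors that the `$tsc` matchers catch show up in the progress message dropdown.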
+
+### Accessibility: Focus chat confirmation action
+
+We've added a command, **Focus Chat Confirmation** (`kb(workbench.action.chat.focusConfirmation)`), which focuses the confirmation dialog, if one is present, or announces to screen reader users that no confirmation is required.
+
+### Track progress with task lists (Experimental)
+
+**Setting**: `setting(chat.todoListTool.enabled)`
+
+The great thing about agent mode is that you can give it a high-level task and have it implement it. But as the agent plans the work and breaks it down into smaller tasks, it can be overwhelming to track the progress of all these individual tasks.
+
+This milestone, we are introducing todo lists in chat to help you see which tasks are completed and which ones are still pending. You can view the task list at the top of the Chat view, so you always have visibility into the progress being made. As the agent progresses through its work, it updates the task list.
+
+Get started by giving the agent a high-level task and asking it to track its work in a todo list!
+
+This feature is still experimental, and you can enable it with the `setting(chat.todoListTool.enabled)` setting.
 
 ### Improved model management experience
 
 This iteration, we've revamped the chat provider API, which is responsible for language model access. Users are now able to select which models appear in their model picker, creating a more personalized and focused experience.