## Problem
Extension developers contributing tools via the `languageModelTools` contribution point have no way to ensure their tool is actually called by the LLM. The tool is registered and available, but the LLM (Copilot) decides autonomously whether to invoke it, often skipping the tool entirely even when instructed to call it.
This is problematic for tools that are required for a workflow to function correctly. Academic research suggests LLM instruction compliance in agent scenarios can fall below 30%, making even strongly worded `modelDescription` text insufficient.
## Current State

VS Code already has `LanguageModelChatToolMode.Required` in the API:

```typescript
export enum LanguageModelChatToolMode {
    Auto = 1,
    Required = 2
}
```
However, this enum is only accessible to callers making LLM requests (extensions using `lm.sendChatRequest()`). Tool providers registering via `languageModelTools` or `lm.registerTool()` have no mechanism to influence how Copilot invokes the LLM with their tool.
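To make the asymmetry concrete, here is a minimal self-contained sketch that mirrors the *shape* of the caller-side API (the types below are illustrative stand-ins, not the real `vscode` module): only the code assembling the request can set the mode, and the default leaves the model free to skip every tool.

```typescript
// Illustrative stand-ins mirroring the shape of VS Code's caller-side API;
// this is NOT the real `vscode` module.
enum LanguageModelChatToolMode {
  Auto = 1,     // the model decides whether to call any tool
  Required = 2, // the model must call one of the provided tools
}

interface ChatRequestOptions {
  tools?: { name: string; description: string }[];
  toolMode?: LanguageModelChatToolMode;
}

// Only the caller building the request picks the mode; a tool provider
// registering via `languageModelTools` never sees these options.
function describeRequest(options: ChatRequestOptions): string {
  const mode = options.toolMode ?? LanguageModelChatToolMode.Auto;
  return mode === LanguageModelChatToolMode.Required
    ? "model MUST invoke one of the listed tools"
    : "model MAY skip every listed tool";
}
```

With no explicit `toolMode` the request falls back to `Auto`, which is exactly the situation tool contributors are stuck with today.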
## Proposed Solutions

### Option A: Required Invocation Mode

Allow tool contributors to declare required invocation behavior:

```json
"languageModelTools": [
    {
        "name": "my_tool",
        "modelDescription": "...",
        "invocationMode": "required",
        "inputSchema": { ... }
    }
]
```
When a tool declares `"invocationMode": "required"`, Copilot would use `LanguageModelChatToolMode.Required` when sending requests that include this tool.
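The mapping could be a single resolution step in the host (a sketch under the assumption that one required tool escalates the whole request; `resolveToolMode` and the field names are hypothetical, not existing Copilot behavior):

```typescript
// Hypothetical sketch of Option A: derive the request-level tool mode from
// contributed tool metadata.
enum ToolMode { Auto = 1, Required = 2 }

interface ContributedTool {
  name: string;
  invocationMode?: "auto" | "required"; // proposed contribution field
}

function resolveToolMode(tools: ContributedTool[]): ToolMode {
  // If any tool in scope declares itself required, escalate the whole
  // request; otherwise leave the model free to choose.
  return tools.some((t) => t.invocationMode === "required")
    ? ToolMode.Required
    : ToolMode.Auto;
}
```

One caveat worth spelling out in the proposal: `Required` in the existing API only forces the model to call *some* tool from the list, not a specific one, so a request mixing required and optional tools would need additional per-tool targeting.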
### Option B: Priority Hints (Lighter Touch)
If hard requirements are too strong, a softer "priority" system could help:
```jsonc
"languageModelTools": [
    {
        "name": "my_tool",
        "modelDescription": "...",
        "priority": "high", // or "normal", "low"
        "inputSchema": { ... }
    }
]
```
Copilot could use priority hints to:
- Boost the tool's prominence in the system prompt
- Add internal instructions emphasizing the tool's importance
- Weight tool selection when multiple tools are available
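The first of these could be as simple as ordering tools by priority before they are serialized into the prompt. A minimal sketch (the `orderForPrompt` helper and its ranking are assumptions, not existing behavior):

```typescript
// Hypothetical sketch of Option B: surface high-priority tools first so the
// model sees them more prominently in the prompt.
type Priority = "high" | "normal" | "low";

interface PrioritizedTool {
  name: string;
  priority?: Priority; // proposed contribution field; defaults to "normal"
}

const RANK: Record<Priority, number> = { high: 0, normal: 1, low: 2 };

function orderForPrompt(tools: PrioritizedTool[]): PrioritizedTool[] {
  // Array.prototype.sort is stable, so tools with equal priority keep
  // their registration order.
  return [...tools].sort(
    (a, b) => RANK[a.priority ?? "normal"] - RANK[b.priority ?? "normal"]
  );
}
```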
### Option C: Both
Support both options: `priority` for soft hints and `invocationMode: "required"` for strict workflows where the extension knows the tool must be called.
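Combined, the host's request-planning step might look like this (a sketch; `planRequest` and both fields are hypothetical):

```typescript
// Hypothetical sketch of Option C: priority reorders the tool list, while any
// "required" declaration escalates the request mode. Not existing behavior.
interface DeclaredTool {
  name: string;
  priority?: "high" | "normal" | "low";
  invocationMode?: "auto" | "required";
}

function planRequest(tools: DeclaredTool[]): {
  ordered: DeclaredTool[];
  mode: "Auto" | "Required";
} {
  const rank = { high: 0, normal: 1, low: 2 } as const;
  // Soft hint: put high-priority tools first in the serialized tool list.
  const ordered = [...tools].sort(
    (a, b) => rank[a.priority ?? "normal"] - rank[b.priority ?? "normal"]
  );
  // Hard requirement: one required tool escalates the whole request.
  const mode = tools.some((t) => t.invocationMode === "required")
    ? "Required"
    : "Auto";
  return { ordered, mode };
}
```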
## Alternatives Considered

- Stronger `modelDescription`: Already tried with maximum-strength wording; models still skip the tool frequently.
- Custom agent modes via `.agent.md`: Adds instructional text to the system prompt but cannot override Copilot's LLM invocation settings.
- Injecting instructions via tool results: Adding reminder instructions to tool output; still not reliable.
## Use Cases
- Workflow continuation tools: Tools that must be called to proceed to the next step.
- Approval gates: Tools requiring explicit confirmation before proceeding.
- Mandatory checkpoints: Tools that enforce safety checks or logging.
- Notification tools: Tools that must report results or status.
## Environment
- VS Code Version: 1.112.0-insider
- OS: macOS
## Related

- `LanguageModelChatToolMode` enum exists but is caller-side only
- `languageModelTools` contribution point lacks invocation control
- Similar pattern to MCP's proposed `tool_choice` parameter