Enable thinking and thinking effort picker for OpenRouter models #312675
atirutw wants to merge 3 commits into microsoft:main
Conversation
More specifically, it enables thinking for models that support toggling it or setting the effort, and enables the effort picker for models that actually support the effort parameter.
@microsoft-github-policy-service agree
Pull request overview
This PR updates the BYOK OpenRouter integration to surface “thinking”/reasoning support and expose a “Thinking Effort” configuration picker when the model advertises support for the reasoning_effort parameter.
Changes:
- Extend OpenRouter model capability detection to mark models as supporting thinking/adaptive thinking and (optionally) reasoning effort levels.
- Switch OpenAI-compatible BYOK providers to return model info enriched with an effort configuration schema (when supported).
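The effort configuration schema attached to the model info could look roughly like the sketch below. This is an illustrative assumption, not the actual `byokKnownModelsToAPIInfoWithEffort` implementation; `buildEffortSchema`, `EffortConfigSchema`, and the field names are hypothetical.

```typescript
// Hypothetical sketch: attach a "Thinking Effort" configuration schema
// only when the model advertises the reasoning_effort parameter.
// All names here are placeholders for illustration.

type ThinkingEffort = 'none' | 'low' | 'medium' | 'high';

interface EffortConfigSchema {
	label: string;
	options: ThinkingEffort[];
	default: ThinkingEffort;
}

function buildEffortSchema(supportedEfforts: ThinkingEffort[] | undefined): EffortConfigSchema | undefined {
	// No schema means no picker in the UI.
	if (!supportedEfforts || supportedEfforts.length === 0) {
		return undefined;
	}
	return {
		label: 'Thinking Effort',
		options: supportedEfforts,
		default: 'medium',
	};
}
```

The key design point is that the picker is gated on the capability being present: models without `reasoning_effort` get `undefined` and no UI.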
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 3 comments.
| File | Description |
|---|---|
| extensions/copilot/src/extension/byok/vscode-node/openRouterProvider.ts | Adds OpenRouter capability detection for thinking and effort support based on supported_parameters. |
| extensions/copilot/src/extension/byok/vscode-node/abstractLanguageModelChatProvider.ts | Uses byokKnownModelsToAPIInfoWithEffort so models can include a “Thinking Effort” configuration schema when supported. |
```ts
protected override resolveModelCapabilities(modelData: unknown): BYOKModelCapabilities | undefined {
	const openRouterModelData = modelData as OpenRouterModelData;
	return {
		name: openRouterModelData.name,
		toolCalling: openRouterModelData.supported_parameters?.includes('tools') ?? false,
		vision: openRouterModelData.architecture?.input_modalities?.includes('image') ?? false,
		maxInputTokens: openRouterModelData.top_provider.context_length - 16000,
		maxOutputTokens: 16000,
		thinking: (openRouterModelData.supported_parameters?.includes('reasoning') || openRouterModelData.supported_parameters?.includes('reasoning_effort')) ?? false,
		adaptiveThinking: openRouterModelData.supported_parameters?.includes('reasoning_effort') ?? false,
		supportsReasoningEffort: openRouterModelData.supported_parameters?.includes('reasoning_effort') ? ['none', 'low', 'medium', 'high'] : undefined
	};
}
```
New capability detection for OpenRouter models (thinking/reasoning effort) is currently untested. There are unit tests for other BYOK providers under extensions/copilot/src/extension/byok/vscode-node/test/, so it would be good to add coverage for resolveModelCapabilities to ensure the thinking flags and supportsReasoningEffort are set correctly for representative supported_parameters payloads.
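A test along these lines could exercise the detection logic. The helper below reimplements the capability mapping standalone so the example is self-contained; `resolveCapabilities` and the `Capabilities` interface are placeholders mirroring the diff, not the actual provider class from `openRouterProvider.ts`.

```typescript
// Standalone sketch of a unit test for the new capability detection.
// resolveCapabilities mirrors the logic in the diff above; names are placeholders.

interface Capabilities {
	thinking: boolean;
	adaptiveThinking: boolean;
	supportsReasoningEffort?: string[];
}

function resolveCapabilities(supportedParameters?: string[]): Capabilities {
	const has = (p: string) => supportedParameters?.includes(p) ?? false;
	return {
		thinking: has('reasoning') || has('reasoning_effort'),
		adaptiveThinking: has('reasoning_effort'),
		supportsReasoningEffort: has('reasoning_effort') ? ['none', 'low', 'medium', 'high'] : undefined,
	};
}

// Representative supported_parameters payloads:
const reasoningOnly = resolveCapabilities(['tools', 'reasoning']);
console.assert(reasoningOnly.thinking && !reasoningOnly.adaptiveThinking, 'reasoning-only model');

const withEffort = resolveCapabilities(['reasoning', 'reasoning_effort']);
console.assert(withEffort.adaptiveThinking && withEffort.supportsReasoningEffort?.length === 4, 'effort model');

const neither = resolveCapabilities(['tools']);
console.assert(!neither.thinking && neither.supportsReasoningEffort === undefined, 'plain model');
```

The three payloads cover the cases from the PR description: reasoning without the picker, reasoning with the full effort list, and neither.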
Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com>
…lInformation.capabilities.supports.reasoning_effort`
For example, MiniMax M2.7 can use reasoning (although it is not shown; a problem which is out of scope) but doesn't offer the picker, since `reasoning_effort` is not in its supported parameters list, while MoonshotAI's Kimi K2.6 shows None, Low, Medium, and High. Do note that Minimal and Xhigh have been excluded because they would require more advanced schema configurations to properly show the descriptions.