Description
When using thinking models (e.g. moonshotai/kimi-k2-thinking via NVIDIA NIM), the reasoning blocks wrapped in `<think>` tags are rendered as raw plaintext in the response output instead of being collapsed or hidden.
Steps to Reproduce
- Configure a thinking model (e.g. moonshotai/kimi-k2-thinking via NVIDIA NIM provider)
- Send any prompt (e.g. 'what is today's date?')
- Observe that the response includes the raw `<think>` tag content as plaintext
Expected Behavior
The content between `<think>` tags should be:
- Collapsed/hidden by default, OR
- Rendered in a toggleable reasoning section (similar to how other AI coding clients handle thinking model output)
Actual Behavior
The raw `<think>` tags and all reasoning content are displayed as plaintext in the response, making the output noisy and hard to read. The model's internal reasoning (e.g. 'The user is asking a simple non-code question about today's date...') is shown directly in the chat.
Environment
- OpenCode version: latest
- Provider: NVIDIA NIM
- Model: moonshotai/kimi-k2-thinking
- OS: Windows 11
Context
Many thinking/reasoning models emit `<think>` blocks (DeepSeek R1, Kimi K2-Thinking, QwQ, etc.). Most AI coding clients (Cursor, VS Code Copilot, etc.) handle these tags by collapsing or hiding the reasoning content. Adding this support would improve the experience for all thinking models.
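As a rough illustration of what the fix could look like, here is a minimal sketch that separates `<think>` reasoning from the visible answer so the client can render the reasoning in a collapsed section. The function name and shape are my own assumption, not OpenCode's actual code; a real implementation would also need to handle a still-open `<think>` tag while streaming.

```typescript
// Hypothetical helper: extract <think>...</think> blocks from a model
// response so the UI can show them in a collapsible section instead of
// inline plaintext.
function splitThinking(text: string): { reasoning: string; answer: string } {
  const reasoning: string[] = [];
  // [\s\S]*? matches across newlines; lazy so each block closes at the
  // nearest </think>. Matched blocks are removed from the visible answer.
  const answer = text.replace(/<think>([\s\S]*?)<\/think>/g, (_match, body) => {
    reasoning.push(body.trim());
    return "";
  });
  return { reasoning: reasoning.join("\n\n"), answer: answer.trim() };
}
```

The UI could then render `reasoning` behind a "Show thinking" toggle and display only `answer` by default.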