Thinking models: raw <think> tag content displayed instead of being collapsed #15380

@erwinh22

Description

When using thinking models (e.g. moonshotai/kimi-k2-thinking via NVIDIA NIM), the reasoning blocks wrapped in <think> tags are rendered as raw plaintext in the response output instead of being collapsed or hidden.

Steps to Reproduce

  1. Configure a thinking model (e.g. moonshotai/kimi-k2-thinking via the NVIDIA NIM provider)
  2. Send any prompt (e.g. 'what is today's date?')
  3. Observe that the response includes the raw <think> tag content as plaintext

Expected Behavior

The content between <think> tags should be:

  • Collapsed/hidden by default, OR
  • Rendered in a toggleable reasoning section (similar to how other AI coding clients handle thinking model output)

Actual Behavior

The raw <think> tags and all reasoning content are displayed as plaintext in the response, making the output noisy and hard to read. The model's internal reasoning (e.g. 'The user is asking a simple non-code question about today's date...') is shown directly in the chat.

Environment

  • OpenCode version: latest
  • Provider: NVIDIA NIM
  • Model: moonshotai/kimi-k2-thinking
  • OS: Windows 11

Context

Many thinking/reasoning models emit <think> blocks (DeepSeek R1, Kimi K2-Thinking, QwQ, etc.), and most AI coding clients (Cursor, VS Code Copilot, etc.) handle these tags by collapsing or hiding the reasoning content. Adding this support would improve the experience for all thinking models.
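As a rough sketch of what such handling could look like, the response text could be split into reasoning and answer parts before rendering, so the reasoning can go into a collapsed section. The function and type names below are hypothetical illustrations, not OpenCode's actual API:

```typescript
// Hypothetical sketch: separate <think>...</think> reasoning from the
// visible answer. Names (ParsedResponse, splitThinkBlocks) are assumptions.
interface ParsedResponse {
  reasoning: string; // content to render collapsed / toggleable
  answer: string;    // content to render normally
}

function splitThinkBlocks(raw: string): ParsedResponse {
  const reasoningParts: string[] = [];
  // Capture every <think>...</think> block (non-greedy, spanning newlines),
  // collect the inner text, and strip the block from the visible answer.
  const answer = raw
    .replace(/<think>([\s\S]*?)<\/think>/g, (_match, inner: string) => {
      reasoningParts.push(inner.trim());
      return "";
    })
    .trim();
  return { reasoning: reasoningParts.join("\n\n"), answer };
}
```

A real implementation would also need to handle streamed output, where an opening <think> tag may arrive before its closing tag, but the same split applies once the block is complete.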

Metadata

Labels

  • core — Anything pertaining to core functionality of the application (opencode server stuff)
