fix: strip reasoning parts when switching to non-interleaved models #11572

Open

okossa wants to merge 1 commit into anomalyco:dev from okossa:fix/strip-reasoning-cross-model

Conversation


okossa commented Feb 1, 2026

Summary

Fixes cross-model switching errors when moving from a thinking model (e.g., Claude Opus) to a non-thinking model (e.g., GPT 5.2, Claude Sonnet).

  • Strip reasoning parts from message history at the start of normalizeMessages() for models with interleaved: false
  • Prevents API errors like messages.1.content.0.thinking: each thinking block must contain thinking

Fixes #11571

What Changed

packages/opencode/src/provider/transform.ts:

  • Moved the reasoning-stripping logic to the beginning of normalizeMessages() so it runs before any model-specific early returns (Claude, Mistral, etc.); see the sketch below
  • When all of a message's content is reasoning, inserts placeholder text so the message is never sent empty
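
Roughly what the moved logic looks like, as a sketch only: the ModelInfo/Msg types, the "reasoning" part shape, and the placeholder string below are illustrative stand-ins, not the exact code in transform.ts.

```ts
// Sketch, not the real implementation: type names, part shapes, and the
// placeholder text are assumptions for illustration.
interface ModelInfo {
  interleaved: boolean // whether the model accepts reasoning/thinking blocks in history
}

type Part = { type: "reasoning"; text: string } | { type: "text"; text: string }
type Msg = { role: "user" | "assistant"; content: Part[] }

export function normalizeMessages(msgs: Msg[], model: ModelInfo): Msg[] {
  // Runs before any provider-specific early returns (Claude, Mistral, ...),
  // so every downstream branch sees a history without foreign reasoning parts.
  if (!model.interleaved) {
    msgs = msgs.map((msg) => {
      if (msg.role !== "assistant") return msg
      const content = msg.content.filter((part) => part.type !== "reasoning")
      // If the whole turn was reasoning, keep a placeholder so the message
      // is never sent with empty content.
      if (content.length === 0) content.push({ type: "text", text: "[reasoning omitted]" })
      return { ...msg, content }
    })
  }
  // ...existing provider-specific normalization continues below...
  return msgs
}
```

Running this before those early returns is the point of the move: branches that return early for Claude, Mistral, etc. no longer bypass the stripping.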

packages/opencode/test/provider/transform.test.ts:

  • Added 3 new tests for the cross-model reasoning-stripping behavior (a flavor is sketched below)
  • Updated existing tests to reflect the new behavior
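
A flavor of the new tests, sketched with bun:test. The import path and the (msgs, model) signature follow the sketch above, and the assertions are illustrative; the actual tests live in transform.test.ts.

```ts
import { describe, expect, test } from "bun:test"
// Assumed path/signature matching the sketch above, not necessarily the real API.
import { normalizeMessages } from "../../src/provider/transform"

describe("cross-model reasoning stripping", () => {
  test("drops reasoning parts for models with interleaved: false", () => {
    const history = [
      {
        role: "assistant" as const,
        content: [
          { type: "reasoning" as const, text: "internal chain of thought" },
          { type: "text" as const, text: "final answer" },
        ],
      },
    ]
    const out = normalizeMessages(history, { interleaved: false })
    // Only the visible text should survive.
    expect(out[0].content).toEqual([{ type: "text", text: "final answer" }])
  })

  test("keeps a placeholder when a turn contained only reasoning", () => {
    const history = [
      {
        role: "assistant" as const,
        content: [{ type: "reasoning" as const, text: "only thinking" }],
      },
    ]
    const out = normalizeMessages(history, { interleaved: false })
    expect(out[0].content).toHaveLength(1)
    expect(out[0].content[0].type).toBe("text")
  })
})
```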

How I Verified It Works

  1. Started OpenCode with Claude Opus (extended thinking enabled)
  2. Got a response with thinking blocks
  3. Switched to GPT 5.2 mid-session using Ctrl+M
  4. Sent another message
  5. ✅ No error - reasoning parts were stripped from history

Also ran the full test suite: bun test test/provider/transform.test.ts → 97 pass, 0 fail

Strip reasoning parts from message history before any model-specific
transformations to prevent cross-model errors when switching from a
model with thinking (e.g., Claude Opus) to one without (e.g., GPT 5.2).

The fix moves the interleaved check to the start of normalizeMessages()
so it runs before any early returns for Claude, Mistral, etc.

Fixes anomalyco#11571

github-actions bot commented Feb 1, 2026

The following comment was made by an LLM; it may be inaccurate:

Based on the search results, here are the potentially related PRs (excluding the current PR #11572):

  1. #6748 - fix: strip incompatible thinking blocks when switching to Anthropic models

  2. #8958 - fix: strip incompatible thinking blocks when switching to Claude

  3. #10474 - fix(anthropic): ensure reasoning blocks precede tool_use in assistant messages

These PRs appear to be related efforts to handle reasoning/thinking blocks across different models and contexts. PR #11572 seems to be a more comprehensive fix that handles stripping reasoning for all non-interleaved models, while the earlier PRs addressed specific provider cases.



Development

Successfully merging this pull request may close these issues.

[BUG]: Error switching from thinking model to non-thinking model mid-session
