fix: strip reasoning parts when switching to non-interleaved models #11572
okossa wants to merge 1 commit into anomalyco:dev from
Conversation
Strip reasoning parts from message history before any model-specific transformations to prevent cross-model errors when switching from a model with thinking (e.g., Claude Opus) to one without (e.g., GPT 5.2). The fix moves the interleaved check to the start of normalizeMessages() so it runs before any early returns for Claude, Mistral, etc. Fixes anomalyco#11571
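For readers scanning this PR, here is a minimal sketch of the reordering described above. It assumes message parts carry a `type` field, that the model capability is exposed as an `interleaved` flag under `reasoning`, and that `normalizeMessages()` receives the history plus model info; the actual types and signature in `transform.ts` may well differ.

```typescript
// Hypothetical shapes for illustration; the real types live in
// packages/opencode/src/provider/transform.ts and may differ.
type Part = { type: "text" | "reasoning" | "tool-call"; [key: string]: unknown }
type Message = { role: "user" | "assistant"; parts: Part[] }
type ModelInfo = { id: string; reasoning?: { interleaved?: boolean } }

export function normalizeMessages(messages: Message[], model: ModelInfo): Message[] {
  // Strip reasoning parts first, before any model-specific early returns,
  // so a history produced by a thinking model never carries stale thinking
  // blocks into a model that does not accept interleaved reasoning.
  if (!model.reasoning?.interleaved) {
    messages = messages.map((msg) => ({
      ...msg,
      parts: msg.parts.filter((part) => part.type !== "reasoning"),
    }))
  }

  // ...model-specific transformations (Claude, Mistral, etc.) follow here,
  // each of which may return early without revisiting reasoning parts.
  return messages
}
```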
The following comment was made by an LLM and may be inaccurate: Based on the search results, here are the potentially related PRs (excluding the current PR #11572):
These PRs appear to be related efforts to handle reasoning/thinking blocks across different models and contexts. PR #11572 seems to be a more comprehensive fix that handles stripping reasoning for all non-interleaved models, while the earlier PRs addressed specific provider cases.
Summary
Fixes cross-model switching errors when moving from a thinking model (e.g., Claude Opus) to a non-thinking model (e.g., GPT 5.2, Claude Sonnet).
- Strips `reasoning` parts from message history at the start of `normalizeMessages()` for models with `interleaved: false` (illustrated below)
- Prevents the provider error `messages.1.content.0.thinking: each thinking block must contain thinking`
- Fixes #11571
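As a concrete illustration (the part shape and field names are assumptions for this sketch, not the project's actual types), an assistant turn produced by a thinking model and its stripped form for an `interleaved: false` model might look like this:

```typescript
// Assistant turn as produced by a thinking model (hypothetical part shape).
const before = {
  role: "assistant",
  parts: [
    { type: "reasoning", text: "Let me work through the user's request..." },
    { type: "text", text: "Here is the answer." },
  ],
}

// After normalization for a model with `interleaved: false`,
// only the non-reasoning parts remain.
const after = {
  role: "assistant",
  parts: [{ type: "text", text: "Here is the answer." }],
}
```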
What Changed
- `packages/opencode/src/provider/transform.ts`: moved the interleaved check to the start of `normalizeMessages()` so it runs before any model-specific early returns (Claude, Mistral, etc.)
- `packages/opencode/test/provider/transform.test.ts`: accompanying test changes (a sketch of such a test follows the verification notes)

How I Verified It Works
Also ran the full test suite:
`bun test test/provider/transform.test.ts` → 97 pass, 0 fail
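For context, here is a sketch of what such a test could look like with `bun:test`; the import path, model shape, and `normalizeMessages()` signature are assumptions rather than the actual contents of `transform.test.ts`:

```typescript
import { describe, expect, test } from "bun:test"
// Import path and signature are assumptions for this sketch.
import { normalizeMessages } from "../../src/provider/transform"

describe("normalizeMessages", () => {
  test("strips reasoning parts for models without interleaved thinking", () => {
    const history = [
      {
        role: "assistant",
        parts: [
          { type: "reasoning", text: "working through the request..." },
          { type: "text", text: "final answer" },
        ],
      },
    ]

    const result = normalizeMessages(history, {
      id: "gpt-5.2",
      reasoning: { interleaved: false },
    })

    // No reasoning parts should survive, regardless of which
    // provider-specific branch runs afterwards.
    expect(result[0].parts.every((part) => part.type !== "reasoning")).toBe(true)
  })
})
```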