Description
When using Devstral (a Mistral model) via LiteLLM with @ai-sdk/openai-compatible,
Mistral returns error 3230: "Expected last role User or Tool (or Assistant with prefix True)".
(OpenCode diagnosed this itself with the Devstral model, based on the 1.3.2 codebase.)
The issue occurs because:
- The agent performs tool calls (the conversation ends with a tool role)
- The agent continues with an assistant message (no user message after it)
- Mistral expects either a user/tool message at the end OR an assistant message with prefix=true
OpenCode's Mistral transformation (transform.ts lines 90-134) handles tool→user
transitions but NOT assistant→(end) transitions.
Expected: OpenCode should either insert a continuation user message OR add
prefix:true to the last assistant message when sending to Mistral.
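A minimal sketch of the expected fix, assuming a simplified message shape; the type and function names here are illustrative, not OpenCode's actual API:

```typescript
// Hypothetical message shape for illustration only.
type Message = {
  role: "user" | "assistant" | "tool" | "system"
  content: string
  prefix?: boolean
}

// Mistral requires the conversation to end with a user or tool message,
// or with an assistant message flagged prefix: true. This normalizes the
// message list before sending, covering the assistant→(end) case that the
// existing tool→user transformation misses.
function normalizeForMistral(messages: Message[]): Message[] {
  const last = messages[messages.length - 1]
  if (!last || last.role === "user" || last.role === "tool") return messages
  if (last.role === "assistant") {
    // Option A: mark the trailing assistant message as a prefix
    return [...messages.slice(0, -1), { ...last, prefix: true }]
  }
  // Option B (fallback): append a continuation user message
  return [...messages, { role: "user", content: "Continue." }]
}
```

Either option A or B alone would satisfy Mistral's ordering requirement; which one is preferable likely depends on how OpenCode resumes interrupted assistant turns.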
Everything works if I use the mistral provider explicitly, but then the token usage (context) reporting does not work.
So there are basically two routes to a resolution for me: either the mistral provider learns to handle the context / cost information from LiteLLM (like the openai provider does), or the openai provider fixes this message-ordering issue for models whose name contains mistral / devstral.
Plugins
No response
OpenCode version
No response
Steps to reproduce
No response
Screenshot and/or share link
No response
Operating System
No response
Terminal
No response