-
Notifications
You must be signed in to change notification settings - Fork 29
Fix OpenAI reasoning errors by clearing provider metadata #68
New issue
Have a question about this project? Sign up for a free GitHub account to open an issue and contact its maintainers and the community.
By clicking “Sign up for GitHub”, you agree to our terms of service and privacy statement. We’ll occasionally send you account related emails.
Already on GitHub? Sign in to your account
Conversation
The previous fix added 'reasoning.encrypted_content' to the include option, but the root cause was that reasoning parts from history were being sent back to OpenAI's Responses API. When reasoning parts are included in messages sent to OpenAI, the SDK creates separate reasoning items with IDs (e.g., rs_*). These orphaned reasoning items cause errors: 'Item rs_* of type reasoning was provided without its required following item.' Solution: Strip reasoning parts from CmuxMessages BEFORE converting to ModelMessages. Reasoning content is only for display/debugging and should never be sent back to the API in subsequent turns. This happens in filterEmptyAssistantMessages() which runs before convertToModelMessages(), ensuring reasoning parts never reach the API.
Per Anthropic documentation, reasoning content SHOULD be sent back to Anthropic models via the sendReasoning option (defaults to true). However, OpenAI's Responses API uses encrypted reasoning items (IDs like rs_*) that are managed automatically via previous_response_id. Anthropic-style text-based reasoning parts sent to OpenAI create orphaned reasoning items that cause 'reasoning without following item' errors. Changes: - Reverted filterEmptyAssistantMessages() to only filter reasoning-only messages - Added new stripReasoningForOpenAI() function for OpenAI-specific stripping - Apply reasoning stripping only for OpenAI provider in aiService.ts - Added detailed comments explaining the provider-specific differences
OpenAI's Responses API uses encrypted reasoning items (rs_*) managed via previous_response_id. Sending stale provider metadata from history causes: - "Item 'rs_*' of type 'reasoning' was provided without its required following item" - "referenced reasoning on a function_call was not provided" Solution: Blank out providerMetadata on all content parts for OpenAI after convertToModelMessages(). This preserves reasoning content while preventing metadata conflicts. Also fixed splitMixedContentMessages to treat reasoning parts as text parts (they stay together with text, not with tool calls). Fixes #7099 (Vercel AI SDK issue) Reference: https://github.com/gvkhna/vibescraper
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
💡 Codex Review
Here are some automated review suggestions for this pull request.
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review".
If Codex has suggestions, it will comment; otherwise it will react with 👍.
| // Process content array and clear provider metadata | ||
| const cleanedContent = assistantMsg.content.map((part) => { | ||
| // Clear providerMetadata for reasoning parts | ||
| if (part.type === "text" && "providerMetadata" in part) { | ||
| return { | ||
| ...part, | ||
| providerMetadata: {}, | ||
| }; | ||
| } |
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Clear provider metadata on reasoning parts
The new clearProviderMetadataForOpenAI is supposed to blank provider metadata from reasoning items before sending messages back to OpenAI, but the implementation only executes the clearing branch when part.type === "text". Reasoning parts actually have type === "reasoning", so they pass through untouched with their original providerMetadata. When a stored conversation contains reasoning segments, those stale rs_* references will still be submitted and the original API errors (“Item 'rs_*' …” / “referenced reasoning …”) remain. The check needs to include reasoning parts so the metadata is stripped from them as well.
Useful? React with 👍 / 👎.
# Conflicts: # src/services/aiService.ts # src/utils/messages/modelMessageTransform.ts
- Change 'let filteredMessages' to 'const' (no longer reassigned) - Remove unused 'provider' parameter from transformModelMessages() - Fix clearProviderMetadataForOpenAI to actually clear reasoning parts (was only checking part.type === 'text', now checks both 'text' and 'reasoning') - Update all test calls to remove provider parameter - Update docstrings to reflect new behavior
Tool result messages (role: 'tool') can also contain stale providerMetadata on ToolResultPart that references the parent tool-call. This metadata can cause the same 'reasoning without following item' errors when sent back to OpenAI. Extended clearProviderMetadataForOpenAI() to also process tool messages. Evidence: - LanguageModelV3ToolResultPart has providerOptions field - @kristoph noted error occurs when items 'immediately after reasoning' lack IDs - Tool results are sent immediately after tool calls, completing the chain This makes the fix comprehensive for all message types that can have stale metadata.
Additional Fix: Tool Message HandlingAfter reviewing the GitHub issue #7099 more carefully, I found that the original implementation was missing a critical case: tool result messages. What Was AddedExtended Why This Matters
EvidenceFrom interface LanguageModelV3ToolResultPart {
type: 'tool-result';
toolCallId: string;
toolName: string;
result: unknown;
isError?: boolean;
content?: Array<...>;
providerOptions?: SharedV2ProviderOptions; // ⚠️ Can have metadata
}Testing RecommendationThe fix should now handle:
All CI checks passing ✓ |
This PR fixes the intermittent OpenAI API error using Vercel AI SDK's
middleware pattern to intercept and transform messages before
transmission.
## Problem
OpenAI's Responses API intermittently returns this error during
streaming:
```
Item 'rs_*' of type 'reasoning' was provided without its required following item
```
The error occurs during **multi-step tool execution** when:
- Model generates reasoning + tool calls
- SDK automatically executes tools and prepares next step
- Tool-call parts contain OpenAI item IDs that reference reasoning items
- When reasoning is stripped but tool-call IDs remain, OpenAI rejects
the malformed input
## Root Cause
OpenAI's Responses API uses internal item IDs (stored in
`providerOptions.openai.itemId`) to link:
- Reasoning items (`rs_*`)
- Function call items (`fc_*`)
When the SDK reconstructs conversation history for multi-step execution:
1. Assistant message includes `[reasoning, tool-call]` parts
2. Tool-call has `providerOptions.openai.itemId: "fc_*"` referencing
`rs_*`
3. Previous middleware stripped reasoning but left tool-call with
dangling reference
4. OpenAI API rejects: "function_call fc_* was provided without its
required reasoning item rs_*"
## Solution
Enhanced **OpenAI reasoning middleware** to strip item IDs when removing
reasoning:
**File: `src/utils/ai/openaiReasoningMiddleware.ts`**
1. Detects assistant messages containing reasoning parts
2. Filters out reasoning parts (OpenAI manages via `previousResponseId`)
3. **NEW:** Strips `providerOptions.openai` from remaining parts to
remove item IDs
4. Prevents dangling references that cause API errors
**Applied in: `src/services/aiService.ts`**
- Wraps OpenAI models with `wrapLanguageModel({ model, middleware })`
- Middleware intercepts messages before API transmission
- Only affects OpenAI (not Anthropic or other providers)
## Testing Results
Tested against real chat history that reliably reproduced the error:
✅ **Turn 1: PASSED** - Previously failed 100% of the time, now succeeds
✅ **Turn 2: PASSED** - Multi-step tool execution works correctly
The middleware successfully:
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 1)
- Stripped 15 OpenAI item IDs from tool-call parts (Turn 2)
- Allowed multi-step tool execution without reasoning errors
## Technical Details
**Multi-step execution flow:**
1. User sends message
2. Model generates reasoning + tool calls (Step 1)
3. SDK auto-executes tools
4. SDK prepares Step 2 input: `[system, user,
assistant(reasoning+tools), tool-results]`
5. Middleware strips reasoning + item IDs before sending
6. Step 2 proceeds without API errors
**Why this fixes it:**
- OpenAI Responses API validates item ID references on input
- Removing `providerOptions.openai.itemId` prevents validation errors
- OpenAI tracks context via `previousResponseId`, not message content
- SDK's automatic tool execution works correctly with cleaned messages
## Files Changed
- `src/services/aiService.ts`: Apply middleware to OpenAI models (7
lines)
- `src/utils/ai/openaiReasoningMiddleware.ts`: New middleware with item
ID stripping (112 lines)
## Related Issues
- Fixes OpenAI reasoning errors from vercel/ai SDK issues #7099, #8031,
#8977
- Supersedes previous approaches (PR #61, #68) that didn't use SDK
middleware
_Generated with `cmux`_
Problem
Users are experiencing intermittent OpenAI API errors when using reasoning models with tool calls:
Item 'rs_*' of type 'reasoning' was provided without its required following itemreferenced reasoning on a function_call was not providedThe previous fix (PR #61) stripped reasoning parts entirely, but this caused new errors and was too aggressive.
Root Cause
OpenAI's Responses API uses encrypted reasoning items (IDs like
rs_*) that are managed automatically viaprevious_response_id. When provider metadata from stored history is sent back to OpenAI, it references reasoning items that no longer exist in the current context, causing API errors.Solution
Instead of stripping reasoning content, we now blank out provider metadata on all content parts for OpenAI:
providerMetadataon text and reasoning partscallProviderMetadataon tool-call partsThis preserves the reasoning content (which is useful for debugging and context) while preventing stale metadata references from causing errors.
Changes
clearProviderMetadataForOpenAI()- operates on `ModelMessage[]splitMixedContentMessages()now treats reasoning parts as text parts (they stay together)References
Testing