Conversation

@ammario ammario commented Oct 7, 2025

Problem

The OpenAI Responses API was intermittently failing with the error:

Item 'rs_00d363a611783a350068e4764c5f68819c8777140c3248eff4' of type 'reasoning' was provided without its required following item.

This occurred when using reasoning models (gpt-5, o3, o4-mini) with tool calls in multi-turn conversations.

Root Cause

From the Vercel AI SDK documentation:

When using reasoning models (o1, o3, o4-mini) with multi-step tool calls and store: false, include ['reasoning.encrypted_content'] in the include option to ensure reasoning content is available across conversation steps.

Even though we're using the default store: true and previousResponseId for persistence, we still need to explicitly include the encrypted reasoning content when tool calls are involved. Reasoning items carry IDs (like rs_*) that must be properly linked to their required following items.

Solution

Added include: ['reasoning.encrypted_content'] to OpenAI provider options when reasoningEffort is configured (meaning reasoning is enabled).

This ensures reasoning context is properly preserved across multi-turn conversations with tool calls.
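For illustration, here is a minimal sketch of the relevant branch, assuming the Vercel AI SDK's `@ai-sdk/openai` provider; the names and exact shape of `buildProviderOptions()` in this repo may differ:

```typescript
import type { JSONValue } from "ai";

// Illustrative sketch of buildProviderOptions(), not the exact code
// in src/utils/ai/providerOptions.ts.
function buildProviderOptions(
  provider: "openai" | "anthropic",
  reasoningEffort?: "low" | "medium" | "high"
): Record<string, Record<string, JSONValue>> {
  if (provider === "openai" && reasoningEffort) {
    return {
      openai: {
        reasoningEffort,
        // Even with the default store: true and previousResponseId,
        // reasoning items (rs_*) must stay linked to their required
        // following items when tool calls span multiple turns.
        include: ["reasoning.encrypted_content"],
      },
    };
  }
  return {};
}
```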

Changes

  • Updated buildProviderOptions() in src/utils/ai/providerOptions.ts
  • Only adds include when reasoning is actually enabled
  • Added comments explaining why this is required

Testing

No automated tests were added: the error was intermittent and difficult to reproduce reliably. The fix follows the OpenAI and Vercel AI SDK documentation directly.

@ammario ammario enabled auto-merge (squash) October 7, 2025 02:20
OpenAI Responses API was failing with the error:
'Item rs_* of type reasoning was provided without its required following item'

This occurred when using reasoning models (gpt-5, o3, o4-mini) with tool calls
in multi-turn conversations.

Root cause: When using reasoning models with tool calls, OpenAI requires
including 'reasoning.encrypted_content' in the include option to properly
preserve reasoning context across conversation steps.

Solution: Add include: ['reasoning.encrypted_content'] to OpenAI provider
options when reasoningEffort is configured.

Refs: https://sdk.vercel.ai/providers/ai-sdk-providers/openai#responses-models
@ammario ammario merged commit 69f8cf9 into main Oct 7, 2025
6 checks passed
@ammario ammario deleted the tokens branch October 7, 2025 02:23
ammario added a commit that referenced this pull request Oct 7, 2025
## Problem

After merging PR #59, the OpenAI reasoning error still occurred:
```
Item 'rs_01e84f5b161f6bce0068e480e7778481a3a2c3b6234d6bb7c6' of type 'reasoning' was provided without its required following item.
```

## Root Cause Analysis

**The issue was OpenAI-specific.** Anthropic reasoning and OpenAI
reasoning work differently:

### Anthropic
- Uses **text-based reasoning parts** in message content
- These **SHOULD be sent back** to the API (via the `sendReasoning`
option, which defaults to true; see the sketch below)
- The model uses historical reasoning context to improve responses

### OpenAI  
- Uses **encrypted reasoning items** with IDs (e.g., `rs_*`)
- These are managed automatically via `previous_response_id` 
- Sending Anthropic-style text reasoning parts creates **orphaned
reasoning items** that cause API errors
- Per OpenAI docs: "In typical multi-turn conversations, you don't need
to include reasoning items or tokens—the model is trained to produce the
best output without them"
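To make the contrast concrete, here is a minimal sketch of the Anthropic side, assuming the Vercel AI SDK's `@ai-sdk/anthropic` provider and its documented `sendReasoning` option (the model ID and prompt are illustrative):

```typescript
import { generateText, type ModelMessage } from "ai";
import { anthropic } from "@ai-sdk/anthropic";

const messages: ModelMessage[] = [
  { role: "user", content: "Summarize the previous step." },
];

// Anthropic: text reasoning parts from earlier turns are sent back to
// the API. sendReasoning defaults to true; spelled out here for clarity.
const { text } = await generateText({
  model: anthropic("claude-3-5-sonnet-20241022"),
  messages,
  providerOptions: {
    anthropic: { sendReasoning: true },
  },
});
```

OpenAI needs no such option: its reasoning context rides along via `previous_response_id`, so resending text reasoning parts only creates orphans.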

## Solution

**Strip reasoning parts ONLY for OpenAI**, before converting
CmuxMessages to ModelMessages.

- Added a `stripReasoningForOpenAI()` function for OpenAI-specific
processing (sketched below)
- Apply it conditionally based on provider in `aiService.ts`
- Keeps Anthropic behavior intact (reasoning sent via `sendReasoning`)
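As a sketch of the idea: the real `stripReasoningForOpenAI()` operates on CmuxMessages (a project-specific type) before conversion, but this illustrative version shows the same transform on the SDK's `ModelMessage` type:

```typescript
import type { ModelMessage } from "ai";

// Drop reasoning parts from assistant messages so OpenAI never
// receives orphaned rs_* reasoning items.
function stripReasoningForOpenAI(messages: ModelMessage[]): ModelMessage[] {
  return messages.map((message) => {
    if (message.role !== "assistant" || typeof message.content === "string") {
      return message;
    }
    return {
      ...message,
      content: message.content.filter((part) => part.type !== "reasoning"),
    };
  });
}
```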

## Changes

1. **New function**: `stripReasoningForOpenAI()` in
`modelMessageTransform.ts`
2. **Updated `aiService.ts`**: Apply reasoning stripping only for the
OpenAI provider (see the wiring sketch below)
3. **Reverted previous change**: `filterEmptyAssistantMessages()` no
longer strips all reasoning
4. **Added detailed comments** explaining provider-specific differences
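Building on the sketch above, the wiring in `aiService.ts` then reduces to a provider check along these lines (`toOutgoingMessages`, `providerName`, and `modelMessages` are illustrative names, not the exact code):

```typescript
function toOutgoingMessages(
  providerName: string,
  modelMessages: ModelMessage[]
): ModelMessage[] {
  // Strip reasoning only for OpenAI; Anthropic keeps its reasoning
  // parts and relies on sendReasoning instead.
  return providerName === "openai"
    ? stripReasoningForOpenAI(modelMessages)
    : modelMessages;
}
```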

This ensures:
- ✅ OpenAI doesn't get orphaned reasoning items
- ✅ Anthropic still receives reasoning context
- ✅ Provider-specific behavior is clearly documented