OpenAIConverseConverter produces wrong content block ordering for reasoningContent, breaking Bedrock extended thinking #416

@mkmeral

Description

Describe the bug

When using AgentCoreMemorySessionManager with converter=OpenAIConverseConverter and Bedrock models that have extended thinking enabled (e.g., us.anthropic.claude-sonnet-4-20250514-v1:0), multi-turn conversations fail with a ValidationException on the second turn.

The _openai_to_bedrock() function in converters/openai.py reconstructs content blocks in the wrong order — reasoningContent blocks are appended after text and toolUse blocks, but Bedrock requires that if an assistant message contains any thinking blocks, the first block must be a thinking block.

This causes:

botocore.errorfactory.ValidationException: An error occurred (ValidationException) when calling the Converse operation: If an assistant message contains any thinking blocks, the first block must be thinking. Found text

To Reproduce

  1. Create a Strands agent with extended thinking enabled on a Bedrock model:

    from strands import Agent
    from strands.models.bedrock import BedrockModel
    from bedrock_agentcore.memory.integrations.strands import AgentCoreMemorySessionManager
    from bedrock_agentcore.memory.integrations.strands.converters.openai import OpenAIConverseConverter
    from bedrock_agentcore.memory.integrations.strands.config import AgentCoreMemoryConfig
    
    model = BedrockModel(
        model_id="us.anthropic.claude-sonnet-4-20250514-v1:0",
        streaming=True,
        additional_request_fields={
            "thinking": {"type": "enabled", "budget_tokens": 5000}
        },
    )
    
    config = AgentCoreMemoryConfig(
        memory_id="your-memory-id",
        session_id="test-session",
        namespace="default",
    )
    
    session_manager = AgentCoreMemorySessionManager(
        agentcore_memory_config=config,
        converter=OpenAIConverseConverter,
    )
    
    agent = Agent(model=model, session_manager=session_manager)
  2. Send a first message (succeeds):

    agent("What is 2+2?")

    Bedrock returns an assistant message with content blocks ordered as: [reasoningContent, text]

  3. The session manager serializes this to OpenAI format via _bedrock_to_openai(), storing reasoning in _strands_reasoning_content

  4. On the next turn, the session manager restores the conversation via _openai_to_bedrock(), which reconstructs the content blocks as: [text, reasoningContent] (wrong order)

  5. Send a second message (fails):

    agent("Now multiply that by 3")  # ValidationException
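The ordering bug can be reproduced without calling Bedrock at all. The sketch below uses simplified, hypothetical stand-ins for the two converter functions (not the library code) to show how a `[reasoningContent, text]` message comes back as `[text, reasoningContent]` after the roundtrip:

```python
# Minimal standalone sketch of the ordering bug. bedrock_to_openai/
# openai_to_bedrock_buggy are simplified stand-ins, not the library code.

def bedrock_to_openai(msg: dict) -> dict:
    """Serialize a Bedrock message, stashing reasoning in a side field."""
    openai_msg = {"role": msg["role"], "content": "", "_strands_reasoning_content": []}
    for block in msg["content"]:
        if "text" in block:
            openai_msg["content"] = block["text"]
        elif "reasoningContent" in block:
            openai_msg["_strands_reasoning_content"].append(block)
    return openai_msg

def openai_to_bedrock_buggy(openai_msg: dict) -> dict:
    """Reconstruct content blocks: text first, reasoning appended last (the bug)."""
    content = []
    if openai_msg.get("content"):
        content.append({"text": openai_msg["content"]})
    for rc in openai_msg.get("_strands_reasoning_content", []):
        content.append(rc)  # appended after text -> wrong order
    return {"role": openai_msg["role"], "content": content}

original = {
    "role": "assistant",
    "content": [
        {"reasoningContent": {"reasoningText": {"text": "2+2 is 4"}}},
        {"text": "The answer is 4."},
    ],
}
restored = openai_to_bedrock_buggy(bedrock_to_openai(original))
print([next(iter(b)) for b in restored["content"]])  # ['text', 'reasoningContent']
```

The restored message starts with a text block, so Bedrock rejects it on the next Converse call.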

Root cause in code (converters/openai.py, lines 76–126):

def _openai_to_bedrock(openai_msg: dict) -> dict:
    role = openai_msg.get("role")
    content_items: list[dict[str, Any]] = []

    # Step 1: text goes first
    text_content = openai_msg.get("content")
    if text_content and isinstance(text_content, str):
        content_items.append({"text": text_content})       # index 0

    # Step 2: tool_calls go second
    for tc in openai_msg.get("tool_calls", []):
        content_items.append({"toolUse": ...})              # index 1+

    # Step 3: reasoning appended LAST ← BUG
    for rc in openai_msg.get("_strands_reasoning_content", []):
        if isinstance(rc, dict) and "reasoningContent" in rc:
            content_items.append(rc)                        # index N (should be 0)

    bedrock_role = "assistant" if role == "assistant" else "user"
    return {"role": bedrock_role, "content": content_items}

Reasoning blocks must be prepended, not appended.

Expected behavior

The _openai_to_bedrock() function should reconstruct content blocks with reasoningContent blocks first, matching the original ordering that Bedrock produced. The restored message should be [reasoningContent, text], not [text, reasoningContent].

Suggested fix

def _openai_to_bedrock(openai_msg: dict) -> dict:
    content_items: list[dict[str, Any]] = []
+   reasoning_items: list[dict[str, Any]] = []

    # ... text and tool_call handling unchanged ...

    for rc in openai_msg.get("_strands_reasoning_content", []):
        if isinstance(rc, dict) and "reasoningContent" in rc:
-           content_items.append(rc)
+           reasoning_items.append(rc)

+   # Reasoning blocks must come first per Bedrock API contract
+   content_items = reasoning_items + content_items

    bedrock_role = "assistant" if role == "assistant" else "user"
    return {"role": bedrock_role, "content": content_items}

Additional context

  • The default AgentCoreMemoryConverter (native Strands format) does not have this bug — it roundtrips via SessionMessage.to_dict()/from_dict() which preserves JSON array ordering. This bug is specific to OpenAIConverseConverter.
  • Related upstream issue: strands-agents/sdk-python#1698
  • The Strands SDK's built-in session managers (FileSessionManager, S3SessionManager) also do not have this problem — they preserve all content block types and ordering.
  • The SessionManager interface in Strands operates on the full Message TypedDict which includes reasoningContent — the interface contract is correct; this is an implementation bug in the OpenAI converter.
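For contrast, the reason the native-format roundtrip is immune: JSON arrays are ordered, so any to_dict()/from_dict()-style serialization that keeps content as a single list preserves block ordering by construction. A minimal illustration:

```python
import json

# JSON arrays preserve element order, so a serialize/deserialize roundtrip
# over the whole content list (as the native converter does) keeps
# [reasoningContent, text] ordering intact with no special handling.
content = [
    {"reasoningContent": {"reasoningText": {"text": "2+2 is 4"}}},
    {"text": "The answer is 4."},
]
restored = json.loads(json.dumps(content))
print([next(iter(b)) for b in restored])  # ['reasoningContent', 'text']
```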

Environment:

  • bedrock-agentcore version: 1.6.2
  • strands-agents version: latest
  • Python: 3.10+
  • OS: Linux (tested on Amazon Linux 2 / Ubuntu)
  • Bedrock model: us.anthropic.claude-sonnet-4-20250514-v1:0 with extended thinking enabled
