
Python: Bug: Expected message to be of type 'StreamingChatMessageContent', but got 'ChatMessageContent' instead #12593

Open
@smokhasi

Description

Describe the bug
When using input_transform/agent_response_callback with the latest version of Semantic Kernel in Python, I run into this error.

  • Tried using streaming_agent_response_callback instead of agent_response_callback; it results in the same error.
  • Tried returning a StreamingChatMessageContent from the input_transform function; it results in the same error.
2025-06-25 22:37:19 [ERROR] root: Error during orchestration: Expected message to be of type 'StreamingChatMessageContent', but got 'ChatMessageContent' instead.
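For reference, the streaming variant was wired up roughly as below. This is a minimal sketch assuming the streaming callback is invoked once per chunk with an is_final flag (my reading of the semantic-kernel orchestration API); FakeChunk is an illustrative stand-in for StreamingChatMessageContent, not a real library type.

```python
from dataclasses import dataclass


@dataclass
class FakeChunk:
    """Illustrative stand-in for StreamingChatMessageContent."""

    name: str
    content: str


chunks: list[str] = []
full_responses: list[str] = []


def streaming_agent_response_callback(message, is_final: bool) -> None:
    # Accumulate streamed chunks and record the full response on the final one.
    chunks.append(message.content)
    if is_final:
        full_responses.append(f"{message.name}: {''.join(chunks)}")
        chunks.clear()
```

Swapping this in for agent_response_callback still produced the same type error.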

To Reproduce

Sample script to reproduce the issue


import asyncio
import logging

from pydantic import BaseModel, Field

# Imports assume the current semantic-kernel package layout.
from semantic_kernel.agents import SequentialOrchestration
from semantic_kernel.agents.runtime import InProcessRuntime
from semantic_kernel.contents import AuthorRole, ChatMessageContent


class AgentInput(BaseModel):
    """Model class for the input parameters required by the agent."""

    run_id: str = Field(..., description="The unique identifier for the run.")
    target_id: str = Field(..., description="The unique identifier for the target.")
    branch_name: str = Field(..., description="The name of the GitHub branch.")


def custom_input_transform(input_message: AgentInput) -> ChatMessageContent:
    """Convert the AgentInput model into a ChatMessageContent object."""
    return ChatMessageContent(
        role=AuthorRole.USER,
        content=input_message.model_dump_json(),
    )


def agent_response_callback(message: ChatMessageContent) -> None:
    print(f"Agent {message.name}-{message.role} Response: '{message.content}'")


async def main() -> None:
    # `agents` and `agent_input` are constructed elsewhere and omitted here.
    sequential_orchestration = SequentialOrchestration[AgentInput, ChatMessageContent](
        members=agents,
        input_transform=custom_input_transform,
        agent_response_callback=agent_response_callback,
    )

    runtime = InProcessRuntime()
    runtime.start()

    try:
        orchestration_result = await sequential_orchestration.invoke(
            task=agent_input,
            runtime=runtime,
        )
        value = await orchestration_result.get(timeout=20)
        logging.info(f"\n------\n***** Final Result *****\n{value}")
        return value

    except Exception as e:
        logging.error(f"Error during orchestration: {e}")
    finally:
        # Stop the runtime when idle
        await runtime.stop_when_idle()

Expected behavior
The orchestration should accept either a ChatMessageContent or a StreamingChatMessageContent object.
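A defensive callback along these lines would satisfy that expectation: it duck-types on the attributes the two message classes share (name, role, content) instead of requiring one concrete type. The SimpleNamespace object is a test stand-in; in real code the message would be a ChatMessageContent or StreamingChatMessageContent instance.

```python
from types import SimpleNamespace


def format_response(message) -> str:
    """Format an agent message, duck-typing on the shared attributes."""
    return f"Agent {message.name}-{message.role} Response: '{message.content}'"


def agent_response_callback(message) -> None:
    # Accepts either message type; no isinstance check on a single class.
    print(format_response(message))
```

The point is that user-side callbacks can already be written type-agnostically; the type check raising the error happens inside the orchestration, not in user code.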

Platform

  • Language: Python
  • Source: semantic-kernel 1.34.0
  • AI model: Azure OpenAI GPT-4.1
  • API version: 2025-04-01-preview
  • Using the Responses API
  • IDE: VS Code
  • OS: macOS

Additional context
This worked fine on semantic-kernel 1.32.2.
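Until the regression is fixed, pinning to the last known-good release is a possible stopgap (assuming the PyPI package name semantic-kernel):

```shell
# Pin to the last release where the orchestration callbacks worked.
pip install "semantic-kernel==1.32.2"
```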

Metadata

Labels

bug (Something isn't working), python (Pull requests for the Python Semantic Kernel)
