Description
When attempting to use `AzureAIInferenceChatCompletion` in place of `AzureChatCompletion` within the step6_chat_completion_agent_group_chat.py example, the `invoke_stream()` method on `AgentGroupChat` fails with `ValueError: SSE event not supported (line b'\r\n')`. The standard `invoke()` method works as expected with `AzureAIInferenceChatCompletion`.
Steps to Reproduce
1. Modify `python/samples/getting_started_with_agents/chat_completion/step6_chat_completion_agent_group_chat.py`.
2. Replace the `AzureChatCompletion` service with `AzureAIInferenceChatCompletion` for the agents.
```python
kernel.add_service(
    AzureAIInferenceChatCompletion(
        ai_model_id=MODEL_NAME,
        api_key=OAI_API_KEY,
        endpoint=OAI_ENDPOINT,
        service_id=service_id,
    )
)
return kernel
```
3. Run the example, ensuring it reaches the `group_chat.invoke_stream()` call.

```python
async for content in group_chat.invoke_stream():
    print(f"# {content.name}: {content.content}")
```
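For triage, the failure can be mimicked in miniature without Azure access. The sketch below is hypothetical (`FakeGroupChat` and `consume` are invented stand-ins, not Semantic Kernel APIs): it reproduces the reported behavior, where `invoke_stream()` raises while `invoke()` succeeds, and shows falling back to the non-streaming path as a temporary workaround.

```python
import asyncio
from types import SimpleNamespace

class FakeGroupChat:
    """Invented stand-in for AgentGroupChat, mimicking the reported
    behavior: invoke() works, invoke_stream() raises the SSE error."""

    async def invoke(self):
        for name, text in [("Writer", "Charge into the future."),
                           ("Reviewer", "Approved.")]:
            yield SimpleNamespace(name=name, content=text)

    async def invoke_stream(self):
        raise ValueError("SSE event not supported (line b'\\r\\n')")
        yield  # unreachable; makes this function an async generator

async def consume(chat):
    # Try the streaming path first; fall back to the non-streaming
    # invoke(), which the report confirms works with this connector.
    lines = []
    try:
        async for content in chat.invoke_stream():
            lines.append(f"# {content.name}: {content.content}")
    except ValueError:
        async for content in chat.invoke():
            lines.append(f"# {content.name}: {content.content}")
    return lines

result = asyncio.run(consume(FakeGroupChat()))
```

Dropping to `invoke()` loses token-by-token output, so this is only a stopgap until the streaming path is fixed.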
Expected Behavior
The `invoke_stream()` method should stream responses from the agents using `AzureAIInferenceChatCompletion` without error.
Actual Behavior
A `ValueError` is raised during the streaming process.
```text
# User: a slogan for a new line of electric cars.
Traceback (most recent call last):
  File "/Users/nihat/Documents/semantic-kernel/python/samples/getting_started_with_agents/chat_completion/step6_chat_completion_agent_group_chat.py", line 127, in <module>
    asyncio.run(main())
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/Users/nihat/Documents/semantic-kernel/python/samples/getting_started_with_agents/chat_completion/step6_chat_completion_agent_group_chat.py", line 113, in main
    async for content in group_chat.invoke_stream():
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/agents/group_chat/agent_group_chat.py", line 207, in invoke_stream
    async for message in super().invoke_agent_stream(selected_agent):
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/agents/group_chat/agent_chat.py", line 172, in invoke_agent_stream
    async for message in channel.invoke_stream(agent, messages):
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/agents/channels/chat_history_channel.py", line 109, in invoke_stream
    async for response_message in agent.invoke_stream(self):
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/utils/telemetry/agent_diagnostics/decorators.py", line 43, in wrapper_decorator
    async for response in invoke_func(*args, **kwargs):
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/agents/chat_completion/chat_completion_agent.py", line 262, in invoke_stream
    async for response_list in responses:
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/connectors/ai/chat_completion_client_base.py", line 261, in get_streaming_chat_message_contents
    async for messages in self._inner_get_streaming_chat_message_contents(
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/semantic_kernel/connectors/ai/azure_ai_inference/services/azure_ai_inference_chat_completion.py", line 170, in _inner_get_streaming_chat_message_contents
    async for chunk in response:
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/azure/ai/inference/models/_patch.py", line 501, in __anext__
    self._done = await self._read_next_block_async()
                 ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/azure/ai/inference/models/_patch.py", line 514, in _read_next_block_async
    return self._deserialize_and_add_to_queue(element)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/opt/homebrew/anaconda3/envs/manager-agent-sdk-12029/lib/python3.12/site-packages/azure/ai/inference/models/_patch.py", line 413, in _deserialize_and_add_to_queue
    raise ValueError(f"SSE event not supported (line `{repr(line)}`)")
ValueError: SSE event not supported (line `b'\r\n'`)
Unclosed client session
client_session: <aiohttp.client.ClientSession object at 0x117c9a630>
Unclosed connector
connections: ['deque([(<aiohttp.client_proto.ResponseHandler object at 0x110ce8410>, 139616.546812541)])']
connector: <aiohttp.connector.TCPConnector object at 0x117c9a600>
```
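The traceback points at the SSE parsing layer in azure-ai-inference: a blank line arriving as CRLF (`b'\r\n'`) is rejected instead of being treated as an event boundary, even though the SSE specification permits CRLF, LF, or CR line endings. The toy splitter below is a guess at the failure mode (`split_events_strict` is invented, not the SDK's actual code): it accepts only a bare LF as the boundary, reproducing the error, and shows that normalizing CRLF to LF before parsing would sidestep the strict check.

```python
def split_events_strict(stream: bytes) -> list[bytes]:
    """Toy SSE event splitter that accepts only a bare LF line (b'\\n')
    as an event boundary -- the hypothesized strict behavior."""
    events, current = [], []
    for line in stream.splitlines(keepends=True):
        if line == b"\n":
            # Blank LF line: ends the current event.
            events.append(b"".join(current))
            current = []
        elif line == b"\r\n":
            # A CRLF blank line is also a valid boundary per the SSE
            # spec, but a strict parser rejects it like this:
            raise ValueError(f"SSE event not supported (line `{line!r}`)")
        else:
            current.append(line)
    return events

lf_stream = b"data: hello\n\n"
crlf_stream = b"data: hello\r\n\r\n"

# LF-delimited stream parses fine...
assert split_events_strict(lf_stream) == [b"data: hello\n"]

# ...but the CRLF variant raises, matching the traceback above.
try:
    split_events_strict(crlf_stream)
except ValueError as e:
    print(e)

# Normalizing CRLF to LF before parsing avoids the strict check.
normalized = crlf_stream.replace(b"\r\n", b"\n")
assert split_events_strict(normalized) == [b"data: hello\n"]
```

If this hypothesis holds, the fix likely belongs in the azure-ai-inference SSE deserializer rather than in Semantic Kernel itself.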
Environment
- Semantic Kernel Version: 1.23.0
- azure-ai-inference Package Version: 1.0.0b9
- Python Version: 3.12.9
- Operating System: macOS