
No tools are passed in the reflect-on-tool-use flow, raising a param error from the LLM server #6328

@Khinyu2000

Description


What happened?

Describe the bug
While the assistant agent processes the model result, the reflect-on-tool-use step passes no tools to the model client. This raises an UnsupportedParamsError on the LLM server, which in turn causes a BadRequestError in autogen.

To Reproduce
This happened when I ran the agent chat quick start example against an LLM server backed by ollama and litellm.

from autogen_agentchat.agents import AssistantAgent
from autogen_agentchat.ui import Console
from autogen_ext.models.openai import OpenAIChatCompletionClient
import asyncio
from autogen_core.models import ModelInfo, ModelFamily

# Define a model client. You can use another model client that implements
# the `ChatCompletionClient` interface.
model_client = OpenAIChatCompletionClient(
    model="anthropic.claude-3-5-sonnet-20240620-v1:0",
    base_url="http://my_llm_server",
    api_key="my_api_key",
    model_info=ModelInfo(
        vision=True,
        function_calling=True,
        json_output=True,
        family=ModelFamily.CLAUDE_3_5_SONNET,
        structured_output=False,
    ),
)


# Define a simple function tool that the agent can use.
# For this example, we use a fake weather tool for demonstration purposes.
async def get_weather(city: str) -> str:
    """Get the weather for a given city."""
    return f"The weather in {city} is 73 degrees and Sunny."


# Define an AssistantAgent with the model, tool, system message, and reflection enabled.
# The system message instructs the agent via natural language.
agent = AssistantAgent(
    name="weather_agent",
    model_client=model_client,
    tools=[get_weather, sum],
    system_message="You are a helpful assistant.",
    reflect_on_tool_use=True,
    model_client_stream=True,  # Enable streaming tokens from the model client.
)


# Run the agent and stream the messages to the console.
async def main() -> None:
    await Console(agent.run_stream(task="What is the weather in New York?"))
    # Close the connection to the model client.
    await model_client.close()


# NOTE: if running this inside a Python script you'll need to use asyncio.run(main()).
if __name__ == "__main__":
    asyncio.run(main())

These are the logs:

(truncated warning emitted at `agent = AssistantAgent(`, pointing to [Single-Agent Team](https://microsoft.github.io/autogen/stable/user-guide/agentchat-user-guide/tutorial/teams.html#single-agent-team) for more details)
---------- TextMessage (user) ----------
What is the weather in New York?
---------- ModelClientStreamingChunkEvent (weather_agent) ----------
Certainly! I can help you find out the weather in New York. To get this information, I'll use the get_weather function. Let me fetch that for you right away.
---------- ToolCallRequestEvent (weather_agent) ----------
[FunctionCall(id='tooluse_GCHoZfjoS1OiePVqLMZLFA', arguments='{"city": "New York"}', name='get_weather')]
---------- ToolCallExecutionEvent (weather_agent) ----------
[FunctionExecutionResult(content='The weather in New York is 73 degrees and Sunny.', name='get_weather', call_id='tooluse_GCHoZfjoS1OiePVqLMZLFA', is_error=False)]
Traceback (most recent call last):
  File "/home/tris/Desktop/autogen-test2/main.py", line 55, in <module>
    asyncio.run(main())
  File "/home/tris/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 195, in run
    return runner.run(main)
           ^^^^^^^^^^^^^^^^
  File "/home/tris/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/runners.py", line 118, in run
    return self._loop.run_until_complete(task)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tris/.local/share/uv/python/cpython-3.12.9-linux-x86_64-gnu/lib/python3.12/asyncio/base_events.py", line 691, in run_until_complete
    return future.result()
           ^^^^^^^^^^^^^^^
  File "/home/tris/Desktop/autogen-test2/main.py", line 48, in main
    await Console(agent.run_stream(task="What is the weather in New York?"))
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_agentchat/ui/_console.py", line 117, in Console
    async for message in stream:
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_agentchat/agents/_base_chat_agent.py", line 175, in run_stream
    async for message in self.on_messages_stream(input_messages, cancellation_token):
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 840, in on_messages_stream
    async for output_event in self._process_model_result(
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 1043, in _process_model_result
    async for reflection_response in cls._reflect_on_tool_use_flow(
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_agentchat/agents/_assistant_agent.py", line 1156, in _reflect_on_tool_use_flow
    async for chunk in model_client.create_stream(
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py", line 809, in create_stream
    async for chunk in chunks:
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/autogen_ext/models/openai/_openai_client.py", line 1000, in _create_stream_chunks
    stream = await stream_future
             ^^^^^^^^^^^^^^^^^^^
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/openai/resources/chat/completions/completions.py", line 2032, in create
    return await self._post(
           ^^^^^^^^^^^^^^^^^
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1805, in post
    return await self.request(cast_to, opts, stream=stream, stream_cls=stream_cls)
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1495, in request
    return await self._request(
           ^^^^^^^^^^^^^^^^^^^^
  File "/home/tris/Desktop/autogen-test2/.venv/lib/python3.12/site-packages/openai/_base_client.py", line 1600, in _request
    raise self._make_status_error_from_response(err.response) from None
openai.BadRequestError: Error code: 400 - {'error': {'message': "litellm.UnsupportedParamsError: Bedrock doesn't support tool calling without `tools=` param specified. Pass `tools=` param OR set `litellm.modify_params = True` // `litellm_settings::modify_params: True` to add dummy tool to the request.\nReceived Model Group=anthropic.claude-3-5-sonnet-20240620-v1:0\nAvailable Model Group Fallbacks=None", 'type': 'None', 'param': None, 'code': '400'}}
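
The error message itself suggests a server-side workaround: have LiteLLM inject a dummy tool when no `tools=` param is present. A minimal sketch, assuming LiteLLM is used as a Python library (when running the LiteLLM proxy, the error text says the equivalent is `litellm_settings: modify_params: true` in the proxy config):

import litellm

# Per the error message, enabling modify_params lets LiteLLM add a dummy tool
# to requests that contain tool results but no `tools=` param, so Bedrock
# accepts them. This only masks the issue on the server side; autogen still
# sends the reflection request without tools.
litellm.modify_params = True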

Expected behavior
I tried passing tools in `_reflect_on_tool_use_flow`, which resolves the error, but I am not sure it is the right solution, since different LLMs may or may not support tool calling.
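
For reference, a rough sketch of the change I tried inside `_reflect_on_tool_use_flow` (the local names `llm_messages`, `tools`, and `cancellation_token` are assumed; only the `tools=` argument is new, forwarded to the `ChatCompletionClient.create_stream` call shown in the traceback):

# Hypothetical sketch of forwarding the agent's tools to the reflection call in
# autogen_agentchat/agents/_assistant_agent.py::_reflect_on_tool_use_flow.
async for chunk in model_client.create_stream(
    llm_messages,
    tools=tools,  # previously omitted, which Bedrock via LiteLLM rejects
    cancellation_token=cancellation_token,
):
    ...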


Which packages was the bug in?

Python AgentChat (autogen-agentchat>=0.4.0)

AutoGen library version.

Python dev (main branch)

Other library version.

No response

Model used

anthropic.claude-3-5-sonnet-20240620-v1:0

Model provider

Other (please specify below)

Other model provider

self hosted with ollama and litellm

Python version

3.12

.NET version

None

Operating system

Other
