
Custom model provider agent does not use tool calling when an output type is specified #332

@anakin-05

Description

Please read this first

  • Have you read the custom model provider docs, including the 'Common issues' section? Model provider docs
  • Have you searched for related issues? Others may have faced similar issues.

Yes, I have reviewed all the relevant documentation.

Describe the question

When using a custom model provider, the agent generates a response without calling a tool if an output type is specified (1st run). However, when I remove the output type, the model correctly calls the intended tool (2nd run). I also attempted to enforce tool usage by setting tool_use_behavior to 'required' and specifying a named tool (3rd run), but this did not resolve the issue.

(screenshot of the three runs)

With the OpenAI model, the agent behaves as expected.
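For context, here is a sketch of what I understand the underlying Chat Completions request to contain when both a tool and an output type are set. The exact serialization done by OpenAIChatCompletionsModel is an assumption on my part, and the field shapes below just follow the OpenAI Chat Completions API, but the point is that `tools` and `response_format` end up in the same request, and a provider that prioritizes `response_format` would answer directly in the schema instead of calling the tool:

```python
# Hypothetical request body resembling what a Chat Completions call with both
# a function tool and a JSON-schema output type might carry. Field names follow
# the OpenAI Chat Completions API; whether Ollama honors both fields together
# is exactly what this issue is about.
request_body = {
    "model": "qwq:32b",
    "messages": [
        {"role": "system", "content": "You are a helpful assistant."},
        {"role": "user", "content": "What's the weather in Tokyo on September 3rd 2021?"},
    ],
    "tools": [
        {
            "type": "function",
            "function": {
                "name": "get_weather",
                "parameters": {
                    "type": "object",
                    "properties": {
                        "city": {"type": "string"},
                        "date": {"type": "string"},
                    },
                    "required": ["city", "date"],
                },
            },
        }
    ],
    "response_format": {
        "type": "json_schema",
        "json_schema": {"name": "Weather", "schema": {"type": "object"}},
    },
}

# Both keys are present in a single request; a provider may resolve the
# conflict by replying in the requested schema and skipping the tool call.
print(sorted(k for k in request_body if k in ("tools", "response_format")))
```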

(screenshot of the OpenAI-model run)

Debug information

  • Agents SDK version: v0.0.6
  • Python version: 3.12.9
  • Model provider: Ollama
  • Model name: qwq:32b (supports tool calling)

Repro steps

from __future__ import annotations

import asyncio
import os
from dataclasses import dataclass

from openai import AsyncOpenAI

from agents import (
    Agent,
    Model,
    ModelProvider,
    OpenAIChatCompletionsModel,
    RunConfig,
    Runner,
    function_tool,
)

BASE_URL = os.getenv("EXAMPLE_BASE_URL")
API_KEY = os.getenv("EXAMPLE_API_KEY")
MODEL_NAME = os.getenv("EXAMPLE_MODEL_NAME")

if not BASE_URL or not API_KEY or not MODEL_NAME:
    raise ValueError(
        "Please set EXAMPLE_BASE_URL, EXAMPLE_API_KEY, EXAMPLE_MODEL_NAME via env var or code."
    )

client = AsyncOpenAI(base_url=BASE_URL, api_key=API_KEY)


class CustomModelProvider(ModelProvider):
    def get_model(self, model_name: str | None) -> Model:
        return OpenAIChatCompletionsModel(
            model=model_name or MODEL_NAME, openai_client=client
        )


CUSTOM_MODEL_PROVIDER = CustomModelProvider()


@dataclass
class Weather:
    city: str
    date: str
    weather: str
    temperature: int | None = None


@function_tool
def get_weather(city: str, date: str) -> str:
    print(f"[debug] getting weather for {city} on {date}")
    return f"The weather in {city} on {date} is sunny with a high of 75°F."


async def main():
    agent = Agent(
        name="Assistant",
        instructions="You are a helpful assistant.",
        tools=[get_weather],
        output_type=Weather,
        # tool_use_behavior="required",
        # tool_use_behavior="get_weather",
    )

    # This will use the custom model provider
    result = await Runner.run(
        agent,
        "What's the weather in Tokyo on September 3rd 2021?",
        run_config=RunConfig(model_provider=CUSTOM_MODEL_PROVIDER),
    )
    print(result.final_output)


if __name__ == "__main__":
    asyncio.run(main())

Expected behavior

The agent should call the intended tool and generate a response in the specified output format, just as it does with the OpenAI model.
