Python: [Bug]: agent-framework-foundry: Cannot use response_format at runtime (run/options) - raises 400 error #5467

@mhmtkcm

Description


Summary

When using agent-framework-foundry, providing response_format at runtime (e.g., via run() options) raises a 400 invalid_payload error. With the previous approach (AzureAIProjectAgentProvider.get_agent() / create_agent()), we could pass response_format through the options parameter and receive an already-parsed response.


Background

Previously, via AzureAIProjectAgentProvider, we could:

  • Create an Agent if it didn’t exist (create_agent()), or reuse an existing one (get_agent()).
  • Call run() and specify response_format in runtime options, and the returned response would be parsed for us.

After moving to agent-framework-foundry, agents are only consumed (creation is delegated to azure-ai-projects), and we can no longer achieve the same runtime response_format behavior.


Problem

If response_format is provided at runtime, the call fails with a 400 invalid_request_error (the same underlying limitation discussed in previous issues). We understand this may be an API limitation, but the parsing does not need to happen server-side: the service could simply return the raw content in the text field, and the SDK/client could parse it locally.
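To illustrate, the local parsing the SDK could perform is essentially a one-liner with pydantic. This is a sketch under the assumption that the raw model output arrives as JSON text in the response's text field (the raw_text value here is made up for demonstration):

```python
from pydantic import BaseModel


class AgentResponseModel(BaseModel):
    answer: str


# Hypothetical raw text as the service might return it when the agent
# definition already carries the JSON-schema response format.
raw_text = '{"answer": "The small-business rate is 19%."}'

# Client-side parsing: validate the raw JSON against the schema locally
# instead of asking the service to enforce it at run time.
parsed = AgentResponseModel.model_validate_json(raw_text)
print(parsed.answer)
```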


Related Issues


Why This Matters

Our integration relies on specifying output structure at runtime (for different tasks/validators). Without runtime response_format, we must implement custom parsing/validation outside the SDK, losing the convenience and consistency we had before.
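For reference, the custom parsing/validation we now have to maintain outside the SDK looks roughly like the helper below (the function name and the code-fence fallback are ours, not SDK API). It validates the agent's raw text output against a pydantic model, tolerating models that wrap JSON in a markdown fence when no server-side response_format forces pure JSON:

```python
from pydantic import BaseModel, ValidationError


class AgentResponseModel(BaseModel):
    answer: str


def parse_agent_text(raw: str, model: type[BaseModel]) -> BaseModel:
    """Validate raw agent output against a pydantic model.

    Falls back to stripping a ```json ... ``` fence, which models may emit
    when the response format is not enforced server-side.
    """
    text = raw.strip()
    if text.startswith("```"):
        # Extract the fenced body and drop an optional "json" language tag.
        text = text.split("```")[1].removeprefix("json").strip()
    try:
        return model.model_validate_json(text)
    except ValidationError as exc:
        raise ValueError(f"Agent output did not match schema: {exc}") from exc
```

This is exactly the duplication we would like to avoid: the schema already lives in the agent definition, yet every caller must re-validate locally.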

Code Sample

import logging

from agent_framework_foundry import FoundryAgent
from azure.ai.projects.aio import AIProjectClient
from azure.ai.projects.models import (
    PromptAgentDefinition,
    PromptAgentDefinitionTextOptions,
    TextResponseFormatJsonSchema,
)
from azure.core.exceptions import ResourceNotFoundError
from azure.identity.aio import (
    AzureCliCredential,
    ChainedTokenCredential,
    ManagedIdentityCredential,
)
from pydantic import BaseModel

logger = logging.getLogger(__name__)
logging.basicConfig(level=logging.INFO)

for noisy in (
    "agent_framework",
    "azure",
):
    logging.getLogger(noisy).setLevel(logging.WARNING)

AZURE_AI_PROJECT_ENDPOINT = "..."
AGENT_NAME = "TEST-AGENT"
MODEL = "gpt-5.4"
INSTRUCTION = "You are a helpful assistant for accounting questions."


class AgentResponseModel(BaseModel):
    answer: str


async def main():
    async with (
        ChainedTokenCredential(
            ManagedIdentityCredential(),
            AzureCliCredential(),
        ) as credential,
        AIProjectClient(
            endpoint=AZURE_AI_PROJECT_ENDPOINT,
            credential=credential,
            allow_preview=True,
        ) as client,
    ):
        try:
            agent_project = await client.agents.get(agent_name=AGENT_NAME)
            agent = FoundryAgent(
                project_client=client,
                agent_name=AGENT_NAME,
                agent_version=agent_project.versions.latest.version,
            )
            logger.info("Existing agent found: %s", agent.client.agent_name)
        except ResourceNotFoundError:
            agent_project = await client.agents.create_version(
                agent_name=AGENT_NAME,
                definition=PromptAgentDefinition(
                    model=MODEL,
                    instructions=INSTRUCTION,
                    text=PromptAgentDefinitionTextOptions(
                        format=TextResponseFormatJsonSchema(
                            name=AgentResponseModel.__name__,
                            schema=AgentResponseModel.model_json_schema(),
                            strict=True,
                        )
                    ),
                ),
            )
            agent = FoundryAgent(
                project_client=client,
                agent_name=AGENT_NAME,
                agent_version=agent_project.version,
            )
            logger.info("New agent created: %s", agent.client.agent_name)

        response = await agent.run(
            "What is the tax rate for small businesses?",
            options={"response_format": AgentResponseModel},
        )
        logger.info("Agent response: %s", response.value)


if __name__ == "__main__":
    import asyncio

    asyncio.run(main())

Error Messages / Stack Traces

agent_framework.exceptions.ChatClientException: <class 'agent_framework_foundry._agent._FoundryAgentChatClient'> service failed to complete the prompt: Error code: 400 - {'error': {'code': 'invalid_payload', 'message': 'Not allowed when agent is specified. [Request ID: ...]', 'param': 'text', 'type': 'invalid_request_error', 'details': [], 'additionalInfo': {'request_id': '...'}}}

Package Versions

agent-framework-core: 1.1.1, agent-framework-foundry: 1.1.1, azure-ai-projects: 2.1.0

Python Version

Python 3.13.13

Additional Context

No response
