
Python: Fix agent_with_hosted_mcp sample to use Responses API client for MCP tools#4867

Open
giles17 wants to merge 4 commits into microsoft:main from giles17:agent/fix-4861-1

Conversation

@giles17 (Contributor) commented Mar 23, 2026

Motivation and Context

The agent_with_hosted_mcp sample was using AzureOpenAIChatClient, which sends requests to the Chat Completions API. MCP tool types ("type": "mcp") are only supported by the Responses API, so the sample failed at runtime with an unsupported tool type error.

Fixes #4861

Description

The root cause was that the sample incorrectly used AzureOpenAIChatClient instead of AzureOpenAIResponsesClient. The fix switches the sample to use AzureOpenAIResponsesClient and its get_mcp_tool() helper to properly construct the MCP tool configuration. The agent.yaml environment variable was updated from AZURE_OPENAI_CHAT_DEPLOYMENT_NAME to AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME, and the dependencies in requirements.txt were updated. A regression test was also added documenting that the chat client passes dict-based tools through without filtering, confirming callers must use the correct client for MCP tools.
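For illustration, the failure mode can be sketched in plain Python. The allowed-type sets and the `rejected_tool_types()` helper below are assumptions that model the server-side check described above; they are not actual API or framework code.

```python
# Hypothetical sketch: the Chat Completions API accepts only a limited set of
# tool types, while the Responses API additionally accepts "mcp".
# rejected_tool_types() is an illustrative stand-in for the server-side check.

CHAT_COMPLETIONS_TOOL_TYPES = {"function", "custom"}
RESPONSES_TOOL_TYPES = CHAT_COMPLETIONS_TOOL_TYPES | {"mcp"}

def rejected_tool_types(tools, allowed):
    """Return the tool types the target API would reject."""
    return [t.get("type") for t in tools if t.get("type") not in allowed]

# Shape of a hosted MCP tool dict (illustrative values, not from the sample).
mcp_tool = {
    "type": "mcp",
    "server_label": "docs",
    "server_url": "https://example.com/mcp",
}

print(rejected_tool_types([mcp_tool], CHAT_COMPLETIONS_TOOL_TYPES))  # ['mcp']
print(rejected_tool_types([mcp_tool], RESPONSES_TOOL_TYPES))         # []
```

This is why switching the sample to the Responses client (and building the tool dict via `get_mcp_tool()`) resolves the runtime error: the same tool payload is valid on one endpoint and rejected on the other.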

Contribution Checklist

  • The code builds clean without any errors or warnings
  • The PR follows the Contribution Guidelines
  • All unit tests pass, and I have added new tests where possible
  • Is this a breaking change? If yes, add "[BREAKING]" prefix to the title of the PR.

Note: PR autogenerated by giles17's agent

Copilot and others added 2 commits March 23, 2026 19:38
…icrosoft#4861)

The agent_with_hosted_mcp sample used AzureOpenAIChatClient with an MCP tool
dict, but the Chat Completions API only supports 'function' and 'custom' tool
types, not 'mcp'. This caused a 400 error at runtime.

Switch the sample to AzureOpenAIResponsesClient which natively supports MCP
tools via the Responses API. Use get_mcp_tool() to construct the tool config.

Changes:
- main.py: Replace AzureOpenAIChatClient with AzureOpenAIResponsesClient
- requirements.txt: Update azure-ai-agentserver-agentframework to 1.0.0b16
  and use agent-framework-azure-ai package
- agent.yaml: Use AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME env var
- Add regression test documenting chat client MCP tool passthrough behavior

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Copilot AI review requested due to automatic review settings March 23, 2026 19:42
@giles17 giles17 self-assigned this Mar 23, 2026
@markwallace-microsoft added the documentation and python labels Mar 23, 2026
@giles17 (Contributor, Author) left a comment

Automated Code Review

Reviewers: 4 | Confidence: 94%

✗ Correctness

This PR adds a reproduction report markdown file into the test directory (python/packages/core/tests/openai/REPRODUCTION_REPORT.md). While the investigation documented in the report is sound, this file is a diagnostic artifact — not a code fix, not a test, and not documentation that belongs in the repository's test directory. It references tests that were written but are not included in this diff, and it does not fix the actual issue (#4861). Shipping investigation notes as committed files in the test tree sets a bad precedent and adds clutter.

✗ Security Reliability

The PR adds only a markdown file (REPRODUCTION_REPORT.md) inside the test directory. This is a debugging/investigation artifact, not production code. It poses no security or reliability risk. However, it should not be committed to the repository — reproduction reports and investigation notes are ephemeral artifacts that belong in issue comments or session notes, not checked into the test tree where they add permanent clutter with no functional value.

✗ Test Coverage

This PR adds only a markdown reproduction report (REPRODUCTION_REPORT.md) to the test directory. The report claims two tests were written and pass, but no actual test file (e.g., test_issue_4861_mcp_tool_chat_client.py) is included in the PR. A documentation-only file describing a bug reproduction does not belong in the tests directory, and the referenced tests that would actually validate the bug are missing from the changeset. There is zero test coverage added by this PR.

✗ Design Approach

This PR adds only a REPRODUCTION_REPORT.md to the test directory. The actual sample fix (main.py already uses AzureOpenAIResponsesClient) appears to be applied separately. The core design concern is that the report references test file test_issue_4861_mcp_tool_chat_client.py with passing output, but that file was never committed — making the report an orphaned, unverifiable artifact. Beyond that, committing an investigation/reproduction report as a permanent file in the test directory is the wrong artifact for this location: reproduction reports are ephemeral investigation notes, not persistent repository content. If the goal is to prevent regression, the two reproduction tests described in the report should be committed as actual test cases (ideally in test_openai_chat_client.py). If the goal is just to document the investigation, the file should not be committed at all.

Flagged Issues

  • REPRODUCTION_REPORT.md is a diagnostic/investigation artifact and should not be committed into the repository's test directory. It is not a test, not executable, not picked up by pytest, and adds no automated verification. Investigation notes like this belong in the GitHub issue (#4861) or PR description, not in the source tree.
  • The report references test_issue_4861_mcp_tool_chat_client.py and its passing output, but that file is not included in the PR and does not exist in the repository. The report is orphaned and unverifiable — either commit the referenced test file or remove the report.

Suggestions

  • Remove REPRODUCTION_REPORT.md from the PR and post its content as a comment on issue #4861 instead.
  • Convert the two described reproduction tests into a permanent regression test (e.g., add to test_openai_chat_client.py) that guards against MCP dict pass-through to the Chat Completions API, so CI can catch regressions.
  • Implement the actual fix: update python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/main.py to use AzureOpenAIResponsesClient with get_mcp_tool().
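The suggested regression test could be sketched as follows. Since the chat client's internal tool-preparation helper is not shown in this PR, `prepare_tools()` is a hypothetical stand-in that models the documented pass-through behavior.

```python
# Hypothetical pytest-style sketch of the suggested regression test.
# prepare_tools() is a stand-in modeling the documented Chat Completions
# client behavior: dict-based tools are forwarded to the API unfiltered.

def prepare_tools(tools):
    # The real client copies dict tools into the request payload as-is,
    # leaving type validation to the API (which then returns a 400).
    return [dict(t) for t in tools]

def test_mcp_dict_tool_passes_through_unfiltered():
    mcp_tool = {
        "type": "mcp",
        "server_label": "docs",
        "server_url": "https://example.com/mcp",
    }
    prepared = prepare_tools([mcp_tool])
    # The unsupported "mcp" type is not dropped client-side; the Chat
    # Completions API is what rejects it at request time.
    assert prepared == [mcp_tool]

test_mcp_dict_tool_passes_through_unfiltered()
```

A real version of this test would import the client's actual prep function so CI catches any future change that starts silently filtering dict tools.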

Automated review by giles17's agents

@markwallace-microsoft
Copy link
Member

markwallace-microsoft commented Mar 23, 2026

Python Test Coverage

Python Test Coverage Report

File     Stmts   Miss   Cover   Missing
TOTAL    27295   3219   88%

report-only-changed-files is enabled. No files were changed during this commit :)

Python Unit Test Overview

Tests: 5349 | Skipped: 20 💤 | Failures: 0 ❌ | Errors: 0 🔥 | Time: 1m 25s ⏱️

Copilot AI (Contributor) left a comment

Pull request overview

Updates the agent_with_hosted_mcp hosted-agent Python sample to use the Azure OpenAI Responses client (instead of Chat Completions) so that hosted MCP tools (type: "mcp") are supported and the sample no longer fails at runtime (fixes #4861).

Changes:

  • Switch sample code to AzureOpenAIResponsesClient and construct the MCP tool via get_mcp_tool().
  • Update the hosted agent manifest to use AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME.
  • Update sample dependencies and add a regression test documenting that Chat Completions tooling prep does not filter dict-based MCP tools.

Reviewed changes

Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.

Summary per file:

  • python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/requirements.txt: Align sample deps with the Azure connector package needed for the Azure Responses client.
  • python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/main.py: Use AzureOpenAIResponsesClient + get_mcp_tool() and pass tools as a list.
  • python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/agent.yaml: Rename env var to AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME for hosted deployment injection.
  • python/packages/core/tests/openai/test_openai_chat_client.py: Add test documenting that MCP dict tools pass through unchanged in Chat Completions client prep.
  • python/packages/core/tests/openai/REPRODUCTION_REPORT.md: Adds a reproduction report markdown file under tests (currently inconsistent with repo state).

Copilot and others added 2 commits March 23, 2026 19:47
Remove the reproduction report markdown file from the test directory.
Investigation notes belong in the GitHub issue or PR description,
not as committed files in the source tree. The regression test in
test_openai_chat_client.py already provides automated verification.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add test_mcp_tool_dict_causes_api_rejection to verify that MCP tool
dicts passed through to the Chat Completions API result in a clear
ChatClientException rather than being silently dropped. This completes
the regression test coverage requested in code review.

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
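The rejection path this commit tests can be sketched as below. `ChatClientException` and `send_chat_request()` are stand-ins for the framework's real types and request plumbing, which are not shown in this PR; the sketch only illustrates the asserted behavior.

```python
# Hypothetical sketch of the behavior the new test verifies: an MCP tool dict
# sent down the Chat Completions path surfaces a clear exception instead of
# being silently dropped. All names here are stand-ins, not framework APIs.

class ChatClientException(Exception):
    """Stand-in for the exception the chat client raises on API errors."""

def send_chat_request(tools):
    # Models the Chat Completions endpoint: tool types other than
    # "function" or "custom" are rejected with an explanatory error.
    for tool in tools:
        if tool.get("type") not in {"function", "custom"}:
            raise ChatClientException(
                f"400: unsupported tool type {tool['type']!r}; "
                "hosted MCP tools require the Responses API"
            )
    return {"status": "ok"}

try:
    send_chat_request([{"type": "mcp", "server_url": "https://example.com/mcp"}])
except ChatClientException as exc:
    print(exc)
```

Raising a typed exception (rather than dropping the tool) is what makes the failure debuggable: the test pins down that callers get a clear signal to switch clients.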
@giles17 changed the title Python: Fix agent_with_hosted_mcp sample to use Responses API client for MCP tools Mar 23, 2026

Labels

documentation (Improvements or additions to documentation), python

Projects

None yet

Development

Successfully merging this pull request may close these issues.

Python: agent_with_hosted_mcp sample fails - AzureOpenAIChatClient sends unsupported mcp tool type to API

3 participants