Python: Fix agent_with_hosted_mcp sample to use Responses API client for MCP tools #4867
giles17 wants to merge 4 commits into microsoft:main
Conversation
…icrosoft#4861)

The agent_with_hosted_mcp sample used AzureOpenAIChatClient with an MCP tool dict, but the Chat Completions API only supports 'function' and 'custom' tool types, not 'mcp'. This caused a 400 error at runtime. Switch the sample to AzureOpenAIResponsesClient, which natively supports MCP tools via the Responses API, and use get_mcp_tool() to construct the tool config.

Changes:
- main.py: Replace AzureOpenAIChatClient with AzureOpenAIResponsesClient
- requirements.txt: Update azure-ai-agentserver-agentframework to 1.0.0b16 and use the agent-framework-azure-ai package
- agent.yaml: Use the AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME env var
- Add a regression test documenting chat client MCP tool passthrough behavior

Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
…for MCP tools Fixes microsoft#4861
giles17 left a comment
Automated Code Review
Reviewers: 4 | Confidence: 94%
✗ Correctness
This PR adds a reproduction report markdown file (`python/packages/core/tests/openai/REPRODUCTION_REPORT.md`) to the test directory. While the investigation documented in the report is sound, the file is a diagnostic artifact: not a code fix, not a test, and not documentation that belongs in the repository's test directory. It references tests that were written but are not included in this diff, and it does not fix the actual issue (#4861). Shipping investigation notes as committed files in the test tree sets a bad precedent and adds clutter.
✗ Security & Reliability
The PR adds only a markdown file (`REPRODUCTION_REPORT.md`) inside the test directory. This is a debugging/investigation artifact, not production code, so it poses no security or reliability risk. However, it should not be committed to the repository: reproduction reports and investigation notes are ephemeral artifacts that belong in issue comments or session notes, not checked into the test tree, where they add permanent clutter with no functional value.
✗ Test Coverage
This PR adds only a markdown reproduction report (`REPRODUCTION_REPORT.md`) to the test directory. The report claims two tests were written and pass, but no actual test file (e.g., `test_issue_4861_mcp_tool_chat_client.py`) is included in the PR. A documentation-only file describing a bug reproduction does not belong in the tests directory, and the referenced tests that would actually validate the bug are missing from the changeset. No test coverage is added by this PR.
✗ Design Approach
This PR adds only a `REPRODUCTION_REPORT.md` to the test directory. The actual sample fix (`main.py` already uses `AzureOpenAIResponsesClient`) appears to be applied separately. The core design concern is that the report references the test file `test_issue_4861_mcp_tool_chat_client.py` with passing output, but that file was never committed, making the report an orphaned, unverifiable artifact. Beyond that, committing an investigation/reproduction report as a permanent file in the test directory is the wrong artifact for this location: reproduction reports are ephemeral investigation notes, not persistent repository content. If the goal is to prevent regression, the two reproduction tests described in the report should be committed as actual test cases (ideally in `test_openai_chat_client.py`). If the goal is just to document the investigation, the file should not be committed at all.
Flagged Issues
- `REPRODUCTION_REPORT.md` is a diagnostic/investigation artifact and should not be committed into the repository's test directory. It is not a test, not executable, not picked up by pytest, and adds no automated verification. Investigation notes like this belong in the GitHub issue (#4861) or PR description, not in the source tree.
- The report references `test_issue_4861_mcp_tool_chat_client.py` and its passing output, but that file is not included in the PR and does not exist in the repository. The report is orphaned and unverifiable; either commit the referenced test file or remove the report.
Suggestions
- Remove `REPRODUCTION_REPORT.md` from the PR and post its content as a comment on issue #4861 instead.
- Convert the two described reproduction tests into a permanent regression test (e.g., add to `test_openai_chat_client.py`) that guards against MCP dict pass-through to the Chat Completions API, so CI can catch regressions.
- Implement the actual fix: update `python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/main.py` to use `AzureOpenAIResponsesClient` with `get_mcp_tool()`.
Automated review by giles17's agents
Pull request overview
Updates the agent_with_hosted_mcp hosted-agent Python sample to use the Azure OpenAI Responses client (instead of Chat Completions) so that hosted MCP tools (type: "mcp") are supported and the sample no longer fails at runtime (fixes #4861).
Changes:
- Switch sample code to `AzureOpenAIResponsesClient` and construct the MCP tool via `get_mcp_tool()`.
- Update the hosted agent manifest to use `AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME`.
- Update sample dependencies and add a regression test documenting that Chat Completions tooling prep does not filter dict-based MCP tools.
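The overview names `get_mcp_tool()` as the tool-config constructor. As a rough, hypothetical sketch (not the framework's actual implementation), a helper producing a Responses-style hosted MCP tool dict could look like this; the field names follow the public Responses API hosted MCP tool shape, and all values are placeholders:

```python
def build_mcp_tool(server_label: str, server_url: str, require_approval: str = "never") -> dict:
    """Hypothetical sketch of an MCP tool config for the Responses API.

    Not the framework's get_mcp_tool(); shown only to illustrate the
    '"type": "mcp"' payload shape that the Chat Completions API rejects.
    """
    return {
        "type": "mcp",
        "server_label": server_label,
        "server_url": server_url,
        "require_approval": require_approval,
    }

tool = build_mcp_tool("hosted_mcp", "https://example.com/mcp")
assert tool["type"] == "mcp"  # only the Responses API accepts this tool type
```

The key point for this PR is the `"type": "mcp"` discriminator: the Responses API accepts it, while the Chat Completions API only accepts `function` and `custom` tool types.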
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/requirements.txt | Align sample deps with Azure connector package needed for Azure Responses client. |
| python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/main.py | Use AzureOpenAIResponsesClient + get_mcp_tool() and pass tools as a list. |
| python/samples/05-end-to-end/hosted_agents/agent_with_hosted_mcp/agent.yaml | Rename env var to AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME for hosted deployment injection. |
| python/packages/core/tests/openai/test_openai_chat_client.py | Add test documenting MCP dict tools pass through unchanged in Chat Completions client prep. |
| python/packages/core/tests/openai/REPRODUCTION_REPORT.md | Adds a reproduction report markdown file under tests (currently inconsistent with repo state). |
Remove the reproduction report markdown file from the test directory. Investigation notes belong in the GitHub issue or PR description, not as committed files in the source tree. The regression test in test_openai_chat_client.py already provides automated verification. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
Add test_mcp_tool_dict_causes_api_rejection to verify that MCP tool dicts passed through to the Chat Completions API result in a clear ChatClientException rather than being silently dropped. This completes the regression test coverage requested in code review. Co-authored-by: Copilot <223556219+Copilot@users.noreply.github.com>
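A regression test along the lines this commit describes might be sketched as follows; `FakeChatClient` and `ChatClientError` are stand-ins invented here, not the framework's real types, and the actual test lives in `test_openai_chat_client.py`:

```python
class ChatClientError(Exception):
    """Stand-in for the framework's chat client exception type."""


class FakeChatClient:
    """Stand-in client mimicking the observed Chat Completions behavior."""

    SUPPORTED_TOOL_TYPES = {"function", "custom"}

    def prepare_tools(self, tools):
        # Mirrors the observed behavior: dict tools pass through unfiltered...
        return list(tools)

    def send(self, tools):
        # ...and the API call then fails loudly instead of silently dropping them.
        for tool in self.prepare_tools(tools):
            if tool.get("type") not in self.SUPPORTED_TOOL_TYPES:
                raise ChatClientError(f"unsupported tool type: {tool['type']}")


def test_mcp_tool_dict_causes_api_rejection():
    client = FakeChatClient()
    mcp_tool = {"type": "mcp", "server_label": "docs", "server_url": "https://example.com/mcp"}
    # Pass-through: the MCP dict is not filtered out during tool preparation.
    assert client.prepare_tools([mcp_tool]) == [mcp_tool]
    # Rejection: sending it raises a clear error rather than dropping it silently.
    try:
        client.send([mcp_tool])
    except ChatClientError as exc:
        assert "mcp" in str(exc)
    else:
        raise AssertionError("expected ChatClientError")


test_mcp_tool_dict_causes_api_rejection()
```

The two-part shape (pass-through assertion, then rejection assertion) matches what the commit message asks for: proof that the dict is neither filtered nor silently dropped.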
agent_with_hosted_mcp sample to use Responses API client for MCP tools
Motivation and Context
The `agent_with_hosted_mcp` sample was using `AzureOpenAIChatClient`, which sends requests to the Chat Completions API. MCP tool types (`"type": "mcp"`) are only supported by the Responses API, so the sample failed at runtime with an unsupported tool type error.

Fixes #4861
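To make the failure mode concrete, here is a minimal stand-in for the server-side tool-type check (hypothetical code; the real validation happens inside the Azure OpenAI service and surfaces as an HTTP 400):

```python
# Hypothetical stand-in for the Chat Completions server-side tool-type check;
# the real validation is performed by the service, not by client code.
CHAT_COMPLETIONS_TOOL_TYPES = {"function", "custom"}


def validate_chat_tools(tools: list[dict]) -> None:
    """Reject any tool whose type the Chat Completions API does not support."""
    for tool in tools:
        tool_type = tool.get("type")
        if tool_type not in CHAT_COMPLETIONS_TOOL_TYPES:
            # Mirrors the 400 error the sample hit at runtime.
            raise ValueError(f"Unsupported tool type for Chat Completions: {tool_type!r}")


mcp_tool = {"type": "mcp", "server_label": "docs", "server_url": "https://example.com/mcp"}
try:
    validate_chat_tools([mcp_tool])
except ValueError as exc:
    print(exc)  # Unsupported tool type for Chat Completions: 'mcp'
```

A `"type": "function"` tool passes this check unchanged, which is why the bug only surfaced once the sample tried to attach an MCP tool.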
Description
The root cause was that the sample incorrectly used `AzureOpenAIChatClient` instead of `AzureOpenAIResponsesClient`. The fix switches the sample to `AzureOpenAIResponsesClient` and its `get_mcp_tool()` helper to properly construct the MCP tool configuration. The `agent.yaml` environment variable was updated from `AZURE_OPENAI_CHAT_DEPLOYMENT_NAME` to `AZURE_OPENAI_RESPONSES_DEPLOYMENT_NAME`, and the dependencies in `requirements.txt` were updated. A regression test was also added documenting that the chat client passes dict-based tools through without filtering, confirming callers must use the correct client for MCP tools.

Contribution Checklist