When a declarative workflow invokes an Azure agent with externalLoop: true and that agent has a tool wrapped in ApprovalRequiredAIFunction, the first turn correctly surfaces an mcp_approval_request to the client. On the second turn (after the client posts the mcp_approval_response), the request fails with:
HTTP 400: No tool output found for function call call_J9Y...
This blocks the canonical pattern of "agent with a tool that requires approval" inside a declarative workflow on Foundry hosting. The HITL bridge added in #5589 works correctly for the InvokeFunctionTool + requireApproval shape (workflow-level tool); only the agent-level / externalLoop shape is affected.
Repro
Repo: alliscode/foundry-samples-pr (or any equivalent declarative-workflow hosted-agent setup using 1.4.x packages).
In workflow.yaml, define a single agent step using InvokeAzureAgent with externalLoop: true (mirroring microsoft/agent-framework/dotnet/samples/03-workflows/Declarative/FunctionTools/FunctionTools.yaml).
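A minimal sketch of such a step is below. This is hypothetical: field names mirror the shapes named in this issue, and the actual schema should be verified against the referenced FunctionTools.yaml sample.

```yaml
# Hypothetical sketch of a single-agent step with an external loop.
# Field names mirror the shapes named in this issue; verify against the
# referenced FunctionTools.yaml sample before use.
steps:
  - kind: InvokeAzureAgent
    agent: customer-support-agent   # placeholder agent name
    externalLoop: true
```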
In Program.cs, register the target Azure agent with a tool wrapped in ApprovalRequiredAIFunction, e.g.:
var issueRefund = AIFunctionFactory.Create(IssueRefund);
var approvalRequired = new ApprovalRequiredAIFunction(issueRefund);
// register approvalRequired as a tool on the agent definition
Deploy to a Foundry hosted-agent project (azd up or equivalent).
Invoke the agent with a prompt that triggers the tool (e.g. "Issue a refund for order 123").
Observed
Turn 1: agent emits function_call for IssueRefund. ApprovalRequiredAIFunction halts; bridge surfaces mcp_approval_request{name=IssueRefund, arguments=...} to the client. ✅
Turn 2: client POSTs mcp_approval_response{approve=true}. Server returns:
HTTP 400 BadRequest
No tool output found for function call call_J9Y...
Expected
Turn 2 resumes the agent loop with the approval result, the tool actually executes, and the assistant returns a final reply that incorporates the tool output.
Root cause (working hypothesis)
The Azure Conversations API persists the function_call item as soon as the model emits it (during turn 1, before the local tool would normally execute). ApprovalRequiredAIFunction halts the agent execution flow before any function_call_output is written back. From the Conversations API's perspective there is now a dangling function_call with no matching output. On the next request against that conversation, the API's invariant check fails and rejects the call — no matter what the new request actually contains.
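The hypothesized invariant can be illustrated with a small model. This is an illustration only, not the actual Conversations implementation; item shapes and the call id are placeholders.

```python
# Illustrative model of the hypothesized Conversations API invariant
# (not the real implementation): every persisted function_call must have
# a matching function_call_output.

def has_dangling_call(items):
    """Return True if any function_call lacks a matching function_call_output."""
    calls = {i["call_id"] for i in items if i["type"] == "function_call"}
    outputs = {i["call_id"] for i in items if i["type"] == "function_call_output"}
    return bool(calls - outputs)

# Turn 1: the model emits a function_call; ApprovalRequiredAIFunction halts
# before any function_call_output is written back.
conversation = [
    {"type": "message", "role": "user", "content": "Issue a refund for order 123"},
    {"type": "function_call", "call_id": "call_J9Y", "name": "IssueRefund"},
]

# Any subsequent request against this conversation is rejected, regardless
# of its contents, because the stored history already violates the invariant.
assert has_dangling_call(conversation)
```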
This is independent of the declarative-workflow HITL bridge (IExternalRequestEnvelope / WorkflowSession bridge) introduced in #5589: that bridge correctly emits the mcp_approval_request and would correctly consume an mcp_approval_response, but never gets a chance because the conversation is already wedged at the storage layer.
The same ApprovalRequiredAIFunction shape works fine when the agent is not running under a hosted Conversations-backed thread (e.g. ChatClientAgent with in-memory thread), which is consistent with this hypothesis.
Possible fixes (need design input)
1. Synthesize a placeholder function_call_output on halt. When ApprovalRequiredAIFunction halts, write a sentinel function_call_output (e.g. {"status":"pending_approval"}) so the conversation stays consistent. On resume, emit a new function_call with the same name + arguments, then write the real output. Pro: keeps the storage invariant intact. Con: surfaces a synthetic event in conversation history.
2. Defer function_call persistence until after the function executes (or is approved and executes). Buffer the function_call item locally until either (a) the function returns and the pair can be written atomically, or (b) approval is denied and a function_call_output with a refusal payload is written. Pro: cleaner conversation history. Con: requires plumbing through the Conversations write path.
3. Special-case ApprovalRequiredAIFunction in the agent loop. Don't write the function_call to Conversations at all when the function is ApprovalRequiredAIFunction and is halting; instead translate the halt into an mcp_approval_request envelope and write only that. On resume, write the matched function_call/function_call_output pair atomically once the function actually runs. Probably the cleanest option semantically.
Option 3 most closely matches what the workflow-level InvokeFunctionTool + requireApproval path already does (no function_call persisted; only the bridge envelope is exchanged).
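The resulting conversation histories for options 1 and 2 can be sketched as follows. This is illustrative bookkeeping only, with placeholder item shapes and call ids, not the actual Conversations write path.

```python
# Illustrative sketch of the histories produced by fix options 1 and 2
# (placeholder item shapes; not the actual Conversations write path).

def is_consistent(items):
    """Every function_call has a matching function_call_output."""
    calls = {i["call_id"] for i in items if i["type"] == "function_call"}
    outputs = {i["call_id"] for i in items if i["type"] == "function_call_output"}
    return calls <= outputs

# Option 1: write a sentinel output on halt, re-emit the call on resume.
option1_history = [
    {"type": "function_call", "call_id": "call_1", "name": "IssueRefund"},
    {"type": "function_call_output", "call_id": "call_1",
     "output": '{"status":"pending_approval"}'},  # sentinel written on halt
    {"type": "function_call", "call_id": "call_2", "name": "IssueRefund"},
    {"type": "function_call_output", "call_id": "call_2",
     "output": '{"refunded":true}'},              # real output after approval
]

# Option 2: buffer locally; persist the call/output pair atomically
# only after the function is approved and executes.
option2_history = [
    {"type": "function_call", "call_id": "call_1", "name": "IssueRefund"},
    {"type": "function_call_output", "call_id": "call_1",
     "output": '{"refunded":true}'},
]

# Both keep the storage invariant intact; option 2 additionally avoids
# the synthetic sentinel item in conversation history.
assert is_consistent(option1_history) and is_consistent(option2_history)
```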
Workaround
Use InvokeFunctionTool at the workflow level with requireApproval: true instead of ApprovalRequiredAIFunction at the agent-tool level. This works end-to-end today (see samples/csharp/hosted-agents/agent-framework/declarative-workflow-approval).
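A sketch of the workaround shape is below. This is hypothetical: field names mirror the InvokeFunctionTool + requireApproval shape named in this issue and should be verified against the linked declarative-workflow-approval sample.

```yaml
# Hypothetical sketch of the workflow-level workaround; verify field names
# against the linked declarative-workflow-approval sample before use.
steps:
  - kind: InvokeFunctionTool
    function: IssueRefund
    requireApproval: true
```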
Issue title:
externalLoop + ApprovalRequiredAIFunction leaves dangling function_call in Azure Conversations API, breaks resume

Repro environment
Microsoft.Agents.AI.Workflows.Declarative 1.4.0-rc1
Microsoft.Agents.AI.Workflows.Declarative.Foundry 1.4.0-rc1
Microsoft.Agents.AI.Foundry.Hosting 1.4.0-preview.260505.1

References
InvokeFunctionTool + requireApproval sample (workaround): alliscode/foundry-samples-pr, branch sample/declarative-workflow-dotnet, path samples/csharp/hosted-agents/agent-framework/declarative-workflow-approval/
externalLoop reference YAML: microsoft/agent-framework/dotnet/samples/03-workflows/Declarative/FunctionTools/FunctionTools.yaml
Local repro: C:\Users\bentho\src\ha-dec-5\src\declarative-customer-support (with .hitl-bak files restored).