Description
Library name and version
Azure.AI.OpenAI (2.3.0-beta.1), Azure.Core (1.47.3)
Describe the bug
Hi,
I’m developing a chatbot using the Azure SDK’s OpenAI features (in particular the Assistants feature). I’ve recently run into an issue that I haven’t been able to resolve.
Here’s the scenario.
I created a Proxy assistant (via the Assistants Playground web interface).
The role of this Proxy assistant is to analyze the user’s prompt and route it to another specialized assistant that can answer the question.
This routing in the Proxy assistant is implemented via function (TOOL) definitions: one function per specialized assistant, each handling a specific type of question.
As an example, I defined two specialized assistants:
Assistant A: Specialized in answering questions related to topic X.
Assistant B: Specialized in answering questions related to topic Y.
To answer questions, my assistants A and B can also call functions (TOOLS) defined on each specialized assistant.
Here’s a possible flow:
user prompt → Proxy Assistant → [TOOL] AskAssistantA(prompt) → Assistant A → [TOOL] GetInformation(param1, param2, ...)
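To make the routing concrete, here is a rough sketch of the idea behind the AskAssistantA tool callback (not my exact code; assistantAId and RunToCompletionAsync are placeholders, and client is the same AssistantClient used below):
private async Task<string> AskAssistantAAsync(string prompt, CancellationToken cancellationToken)
{
    // Start a dedicated thread and run on Assistant A for the forwarded prompt.
    AssistantThread thread = await client.CreateThreadAsync();
    await client.CreateMessageAsync(thread.Id, MessageRole.User, [MessageContent.FromText(prompt)]);
    ThreadRun run = await client.CreateRunAsync(thread.Id, assistantAId);

    // Poll the run and handle RequiresAction (e.g. GetInformation) exactly as in
    // the code shown below, then return Assistant A's final message text to the Proxy.
    return await RunToCompletionAsync(run, cancellationToken);
}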
Here’s the problem I’m encountering. Let’s use the example above.
When the GetInformation function call returns its output (a string), that output is submitted to the Run created on Assistant A:
if (run.Status == RunStatus.RequiresAction)
{
    List<ToolOutput> toolOutputs = [];
    foreach (OpenAI.Assistants.RequiredAction action in run.RequiredActions)
    {
        // Dispatch the requested function call and collect its result as a tool output.
        using JsonDocument argumentsJson = JsonDocument.Parse(action.FunctionArguments);
        string toolResult = await OAIFunctionCallbackDispatherAsync(action.FunctionName, argumentsJson, turnContext, cancellationToken);
        _logger.Debug($"Tool [{action.FunctionName}] returned: {toolResult}");
        toolOutputs.Add(new ToolOutput(action.ToolCallId, toolResult));
    }
    run = await client.SubmitToolOutputsToRunAsync(run.ThreadId, run.Id, toolOutputs);
}
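For context, this block runs inside a polling loop along the following lines (simplified sketch; the GetRunAsync refresh and the delay stand in for my actual loop logic):
// Refresh the run until it reaches a terminal status, submitting tool outputs
// whenever it requires action.
while (!run.Status.IsTerminal)
{
    if (run.Status == RunStatus.RequiresAction)
    {
        // ... the block above: dispatch each RequiredAction and call
        //     SubmitToolOutputsToRunAsync, which reassigns `run` ...
    }
    else
    {
        await Task.Delay(TimeSpan.FromMilliseconds(500), cancellationToken);
        run = await client.GetRunAsync(run.ThreadId, run.Id);
    }
}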
When I exit my polling loop (run.Status.IsTerminal == true), I notice that the run for Assistant A ends with the status Incomplete.
I dump the contents of the Run as JSON below, but the reason the Run ends with this status isn’t apparent to me:
{
"RequiredActions": [],
"ResponseFormat": {},
"ToolConstraint": {},
"NucleusSamplingFactor": 1,
"AllowParallelToolCalls": true,
"MaxInputTokenCount": null,
"MaxOutputTokenCount": null,
"Id": "run_23JsD8g2j2XYSJoUiq8IdDEF",
"CreatedAt": "2025-09-04T09:11:08+00:00",
"ThreadId": "thread_LDH6D9mCCbJwqrkBFEcwnDIt",
"AssistantId": "asst_PChx1vfAZAtFCQdHOHOEhZF0",
"Status": {
"IsTerminal": true
},
"LastError": null,
"ExpiresAt": null,
"StartedAt": "2025-09-04T09:11:11+00:00",
"CancelledAt": null,
"FailedAt": null,
"CompletedAt": "2025-09-04T09:11:12+00:00",
"IncompleteDetails": {
"Reason": {}
},
"Model": "gpt-4o-mini",
"Instructions": "You are a helpful and informative assistant designed to answer user questions related to...",
"Tools": [
{},
{}
],
"Metadata": {},
"Usage": {
"OutputTokenCount": 22,
"InputTokenCount": 661,
"TotalTokenCount": 683
},
"Temperature": 0.1,
"TruncationStrategy": {
"LastMessages": 5
}
}
This problem is recent. From my research, an Incomplete run status can be related to hitting maximum token limits, but I don’t appear to be hitting any particular limit and my token usage remains very low (see Usage above).
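For reference, the incomplete reason and the token counts can be read directly off the run once it terminates (sketch, using the same logger as above):
if (run.Status == RunStatus.Incomplete)
{
    // Log why the run ended incomplete and the actual usage versus the configured
    // per-run limits (both limits are null in my case, i.e. not set).
    _logger.Debug($"Incomplete reason: {run.IncompleteDetails?.Reason}");
    _logger.Debug($"Usage: in={run.Usage?.InputTokenCount}, out={run.Usage?.OutputTokenCount}");
    _logger.Debug($"Limits: in={run.MaxInputTokenCount?.ToString() ?? "none"}, out={run.MaxOutputTokenCount?.ToString() ?? "none"}");
}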
Any help would be appreciated.
Thank you.
Fred
Expected behavior
The Run ends with the Completed status.
Actual behavior
The Run ends with the Incomplete status.
Reproduction Steps
The scenario is described in the "Describe the bug" section above.
Environment
.NET 8.0, Windows Server 2019, Visual Studio 17.13.6, Azure.AI.OpenAI (2.3.0-beta.1)