.NET: Add dynamic tool expansion sample#5425
Conversation
Pull request overview
Adds a new .NET sample demonstrating “dynamic tool expansion” during a function-calling loop by mutating ChatOptions.Tools from the ambient FunctionInvokingChatClient.CurrentContext.
Changes:
- Add `Agent_Step20_DynamicFunctionTools` sample (code + README) showing runtime tool registration.
- Add the sample to the Agents samples index and to the main .NET solution.
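The core mechanism the sample demonstrates — reaching the in-flight `ChatOptions` from inside a tool call via the ambient `FunctionInvokingChatClient.CurrentContext` — can be sketched roughly as follows. This is an illustrative sketch, not the sample's actual code: the `catalog` dictionary and tool names here are invented for the example.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;
using Microsoft.Extensions.AI;

// Hypothetical catalog of tools that can be registered on demand.
var catalog = new Dictionary<string, AIFunction>(StringComparer.OrdinalIgnoreCase)
{
    ["get_time"] = AIFunctionFactory.Create((string city) => $"12:00 in {city}", "get_time"),
};

// A "registration" tool: when the model calls it, it mutates the Tools list
// of the current request via the ambient invocation context.
AIFunction registerTool = AIFunctionFactory.Create((string toolName) =>
{
    FunctionInvocationContext? ctx = FunctionInvokingChatClient.CurrentContext;
    if (ctx?.Options?.Tools is IList<AITool> tools &&
        catalog.TryGetValue(toolName, out AIFunction? tool) &&
        !tools.Any(t => t.Name == tool.Name)) // avoid duplicate registration
    {
        tools.Add(tool); // subsequent iterations of this loop can now call it
        return $"Registered '{toolName}'.";
    }
    return $"Unknown tool '{toolName}'.";
}, "register_tool");
```

The duplicate check mirrors the "duplicate avoidance" the automated review mentions; whether the mutation outlives the current function-calling loop is exactly the question raised in the discussion below.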
Reviewed changes
Copilot reviewed 5 out of 5 changed files in this pull request and generated 3 comments.
Show a summary per file
| File | Description |
|---|---|
| dotnet/samples/02-agents/Agents/README.md | Adds the new sample to the “Getting started with agents” list. |
| dotnet/samples/02-agents/Agents/Agent_Step20_DynamicFunctionTools/README.md | Documents the dynamic tool expansion scenario and how to run the sample. |
| dotnet/samples/02-agents/Agents/Agent_Step20_DynamicFunctionTools/Program.cs | Implements the dynamic tool expansion via CurrentContext.Options.Tools plus logging middleware. |
| dotnet/samples/02-agents/Agents/Agent_Step20_DynamicFunctionTools/Agent_Step20_DynamicFunctionTools.csproj | Adds a new sample project referencing Microsoft.Agents.AI.OpenAI. |
| dotnet/agent-framework-dotnet.slnx | Includes the new sample project in the solution. |
Automated Code Review
Reviewers: 4 | Confidence: 92%
✗ Correctness
This PR adds a new sample (Step20) demonstrating dynamic function tool expansion during an agent's function-calling loop. The API usage is correct: `BuildAIAgent` returns `ChatClientAgent`, which extends `AIAgent`, and `FunctionInvokingChatClient.CurrentContext` is a valid ambient context property used elsewhere in the codebase. The logic for dynamic tool registration, duplicate avoidance, and catalog matching is sound. One inconsistency: the new .csproj uses `<TargetFramework>` (singular) while all 19 existing sample .csproj files consistently use `<TargetFrameworks>` (plural). While both compile for a single TFM, this breaks the established convention.
✓ Security Reliability
This PR adds a new sample demonstrating dynamic function tool expansion during agent function-calling loops. The code follows established patterns from other samples in the repository: environment variable credential handling, OpenAIClient plain-string constructor, ChatClientBuilder middleware, and no explicit resource disposal (consistent with other console samples). The API usages (BuildAIAgent, FunctionInvokingChatClient.CurrentContext, AsBuilder, .Use middleware) are all verified against the codebase. No security or reliability issues were found — the tool catalog is finite and read-only, LM-sourced input is only used for keyword matching against a hardcoded dictionary, and null cases are properly handled.
✗ Test Coverage
This PR adds a new sample (Agent_Step20_DynamicFunctionTools) demonstrating dynamic tool expansion via `FunctionInvokingChatClient.CurrentContext`. The sample code itself is well-structured and the core framework types it exercises are well-tested. However, the sample is not registered in the verify-samples tool (`dotnet/eng/verify-samples/AgentsSamples.cs`), which is the established convention for all other Agent_Step samples. Without a `SampleDefinition` entry, the sample won't be built, run, or verified by the `dotnet-verify-samples.yml` CI workflow, leaving it with no automated test coverage.
✗ Design Approach
I found one design-level problem. The new Step20 sample is implemented as an OpenAI-specific sample, but it is being added to the `samples/02-agents/Agents` walkthrough, whose shared README and prerequisites are explicitly Azure OpenAI-based. That mismatch means the sample solves the right technical scenario in the wrong sample track, so discovery, setup, and execution guidance become misleading for users following the existing `Agents` path.
Flagged Issues
- The new sample is not registered in `dotnet/eng/verify-samples/AgentsSamples.cs`. Every other Agent_Step sample (Step01–Step19) has a `SampleDefinition` entry there, which is how the repo's CI (`dotnet-verify-samples.yml`) validates samples. Without this entry the sample has zero automated verification. A definition should be added after the Step19 entry (around line 343) with appropriate `Name`, `ProjectPath`, `RequiredEnvironmentVariables`, `MustContain`, and `ExpectedOutputDescription` fields.
- `Program.cs:13-14` uses `OPENAI_API_KEY`/`OpenAIClient`, while the parent `Agents/README.md:16-23` defines this sample track around Azure OpenAI setup (`AZURE_OPENAI_ENDPOINT`, Azure CLI auth). The sample should either use `AzureOpenAIClient` to stay consistent with the `Agents/` walkthrough, or be moved into an OpenAI-specific sample area such as `AgentWithOpenAI` or `AgentProviders`.
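A fix for the first flagged issue might look roughly like the following entry in `AgentsSamples.cs`. The exact shape of `SampleDefinition` and every value below are assumptions based only on the field names the review mentions, not copied from the repository:

```csharp
// Hypothetical entry, placed after the Step19 definition.
// Field names come from the review above; values are illustrative guesses.
new SampleDefinition
{
    Name = "Agent_Step20_DynamicFunctionTools",
    ProjectPath = "dotnet/samples/02-agents/Agents/Agent_Step20_DynamicFunctionTools",
    RequiredEnvironmentVariables = ["OPENAI_API_KEY"],
    MustContain = ["GetWeather"],
    ExpectedOutputDescription = "Answers produced after dynamically registering tools mid-loop.",
},
```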
Suggestions
- The .csproj uses `<TargetFramework>net10.0</TargetFramework>` (singular), while all 19 other sample .csproj files use `<TargetFrameworks>net10.0</TargetFrameworks>` (plural). Both compile, but changing to plural would maintain consistency across the sample set.
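Concretely, the suggested consistency fix is a one-line csproj change:

```xml
<!-- Before (singular) -->
<TargetFramework>net10.0</TargetFramework>

<!-- After (plural, matching the other 19 sample projects) -->
<TargetFrameworks>net10.0</TargetFrameworks>
```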
Automated review by westey-m's agents
Hello, let me explain. In your example, you send the following prompts sequentially:

```csharp
string[] prompts =
[
    "What's the weather like in Seattle and London?",
    "What time is it in New York?",
    "Can you convert those temperatures to Celsius?"
];
```

And this works fine. Now, if I change the prompts to make two distinct calls to the same "GetWeather" method:

```csharp
string[] prompts =
[
    "What's the weather like in Seattle?",
    "What's the weather like in London?",
    "What time is it in New York?",
    "Can you convert those temperatures to Celsius?"
];
```

Then the second prompt (i.e., the second call to the previously loaded "GetWeather" tool) fails with the following error:

Maybe this is the expected behaviour when using FunctionInvokingChatClient.CurrentContext (which might be short-lived, scoped to the current LLM answer only). But if that is the case, I wonder how to properly add a new tool so that it can be used reliably for the rest of the chat session. Might be related to #5325. Thank you for your great work!
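One possible workaround for the persistence question, under the assumption that mutations via `CurrentContext.Options.Tools` only affect the in-flight request: keep the `Tools` list in a collection you own and reuse the same `ChatOptions` instance on every turn, so additions made during one invocation remain visible in later ones. This is a hedged sketch, not the sample's actual code; the `GetWeather` stub is illustrative.

```csharp
using System.Collections.Generic;
using Microsoft.Extensions.AI;

// Caller-owned tools list, shared across the whole chat session.
List<AITool> sessionTools =
[
    AIFunctionFactory.Create((string city) => $"Sunny in {city}", "GetWeather"),
];

// Reuse this same options instance for every request in the session.
ChatOptions options = new() { Tools = sessionTools };

// Inside a registration tool, add to the shared list directly:
//     sessionTools.Add(newTool);  // persists for subsequent prompts
// rather than (or in addition to) mutating CurrentContext.Options.Tools,
// which may be scoped to the current function-calling loop only.
```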
Hi @gjactat, thanks for mentioning this. What you are seeing is definitely unexpected. It may be due to an issue with the model where it assumes that it can call a function if it called it previously, even though the set of advertised functions changed. I was running the sample with gpt-5.4-mini but haven't reproduced what you are seeing. What model were you using? Modifying the input as you did, I get the following:


Motivation and Context
#5326
Description
Contribution Checklist