Review Summary by Qodo

Implement dynamic LLM model upgrade mechanism via ISettingService
Walkthrough

Description

• Add dynamic LLM model upgrade mechanism via ISettingService
• Replace hardcoded model strings with Gpt4xModelConstants references
• Add new GPT-4o model variants to constants (search, transcribe, TTS)
• Update test configurations to use GPT-5 models

Diagram

```mermaid
flowchart LR
    A["Hardcoded Model Strings"] -->|Replace with| B["Gpt4xModelConstants"]
    B -->|Upgrade via| C["ISettingService.GetUpgradeModel"]
    C -->|Returns| D["Upgraded Model Name"]
    E["New Model Variants"] -->|Added to| B
    F["Multiple Services"] -->|Integrate| C
```
File Changes

1. src/Infrastructure/BotSharp.Abstraction/Models/Gpt4xModelConstants.cs
Code Review by Qodo
1. InputAudioTranscription null dereference
```diff
 sessionUpdate.session.InputAudioTranscription = new InputAudioTranscription
 {
-    Model = realtimeModelSettings.InputAudioTranscription.Model,
+    Model = settingService.GetUpgradeModel(realtimeModelSettings.InputAudioTranscription.Model),
     Language = realtimeModelSettings.InputAudioTranscription.Language,
```
1. InputAudioTranscription null dereference 📘 Rule violation ⛯ Reliability
realtimeModelSettings.InputAudioTranscription.Model is dereferenced and passed into GetUpgradeModel without checking whether InputAudioTranscription (or its Model) is null. This can throw at runtime when that settings section is missing/partial.
Agent Prompt
### Issue description
The realtime session update dereferences `realtimeModelSettings.InputAudioTranscription` without null checks, then passes the model into `GetUpgradeModel`.
### Issue Context
Configuration sections for realtime input audio transcription can be absent or partially configured; the code should behave predictably (e.g., fall back to a known default model).
### Fix Focus Areas
- src/Plugins/BotSharp.Plugin.OpenAI/Providers/Realtime/RealTimeCompletionProvider.cs[377-381]
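The null-safe fallback the prompt asks for can be sketched as follows. This is a minimal stand-alone sketch, not the BotSharp implementation: `GetUpgradeModel` is an identity stub for `ISettingService.GetUpgradeModel`, and the default model name `gpt-4o-transcribe` is an assumption based on the transcribe constant this PR adds.

```csharp
// Hypothetical default; the PR adds a transcribe variant to Gpt4xModelConstants.
const string DefaultTranscribeModel = "gpt-4o-transcribe";

// Identity stub standing in for ISettingService.GetUpgradeModel.
string GetUpgradeModel(string model) => model;

// Null-safe resolution: fall back to a known default when the
// InputAudioTranscription section (or its Model) is missing from config.
string ResolveTranscriptionModel(string? configuredModel) =>
    GetUpgradeModel(string.IsNullOrWhiteSpace(configuredModel)
        ? DefaultTranscribeModel
        : configuredModel!);

Console.WriteLine(ResolveTranscriptionModel(null));        // → gpt-4o-transcribe
Console.WriteLine(ResolveTranscriptionModel("whisper-1")); // → whisper-1
```

The same `?.` null-conditional access would guard the sibling `Language` property before building the session update.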
| "LlmConfig": { | ||
| "Provider": "openai", | ||
| "Model": "gpt-4.1-nano" | ||
| "Model": "gpt-5-nano" |
2. Unknown default model 🐞 Bug ⛯ Reliability
src/WebStarter/appsettings.json sets Agent:LlmConfig:Model to "gpt-5-nano", but the configured OpenAI model list in the same file only includes gpt-4o-* and gpt-5 / gpt-5.1 / gpt-5.2 entries. When a flow uses CompletionProvider.GetCompletion (e.g., InstructService.Execute), ILlmProviderService.GetSetting can return null for this model and CompletionProvider dereferences settings.Type, causing a NullReferenceException.
Agent Prompt
### Issue description
`Agent:LlmConfig:Model` is set to `gpt-5-nano`, but the OpenAI provider’s configured model list does not define `gpt-5-nano`. When code paths call `CompletionProvider.GetCompletion`, `ILlmProviderService.GetSetting(provider, model)` can return `null` and `CompletionProvider` dereferences `settings.Type`, causing a runtime `NullReferenceException`.
### Issue Context
The default agent model in `WebStarter` config is used broadly across instruction/execution flows (e.g., `InstructService.Execute`). The provider-model registry (`LlmProviders`) must include any model used by `GetCompletion`.
### Fix Focus Areas
- src/WebStarter/appsettings.json[643-651]
- src/WebStarter/appsettings.json[151-568]
- src/Infrastructure/BotSharp.Core/Infrastructures/CompletionProvider.cs[16-26]
- src/Infrastructure/BotSharp.Core/Infrastructures/LlmProviderService.cs[77-95]
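The failure mode and the fail-fast guard can be sketched in isolation. The dictionary below is an illustrative registry, not the full WebStarter `LlmProviders` list, and `GetModelType` is a hypothetical stand-in for the `GetSetting` → `settings.Type` path; the point is to surface a clear error instead of a `NullReferenceException`.

```csharp
using System;
using System.Collections.Generic;
using System.Linq;

// Illustrative provider-model registry (stand-in for LlmProviders config).
var registered = new Dictionary<string, string[]>
{
    ["openai"] = new[] { "gpt-4o", "gpt-4o-mini", "gpt-5", "gpt-5.1", "gpt-5.2" },
};

string[]? GetModels(string provider) =>
    registered.TryGetValue(provider, out var models) ? models : null;

// Stand-in for the GetSetting -> settings.Type path: fail fast with a
// descriptive message when the model is not registered for the provider.
string GetModelType(string provider, string model)
{
    var models = GetModels(provider);
    if (models == null || !models.Contains(model))
        throw new InvalidOperationException(
            $"Model '{model}' is not registered for provider '{provider}'. " +
            "Add it to LlmProviders or change Agent:LlmConfig:Model.");
    return "chat"; // stand-in for settings.Type
}

Console.WriteLine(GetModelType("openai", "gpt-5")); // → chat
// GetModelType("openai", "gpt-5-nano") throws with an actionable message
// instead of the NullReferenceException described above.
```

Either adding `gpt-5-nano` to the registry or guarding the lookup like this resolves the crash; doing both is the safer combination.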
| "LlmConfig": { | ||
| "Provider": "azure-openai", | ||
| "Model": "gpt-4o-mini" | ||
| "Model": "gpt-5-mini" |
3. Broken test model config 🐞 Bug ✓ Correctness
tests/BotSharp.LLM.Tests/appsettings.json sets Agent:LlmConfig Provider=azure-openai and Model=gpt-5-mini, but the azure-openai provider models listed in the same config only include gpt-35-* entries. Any test path that uses CompletionProvider.GetCompletion (via AgentSettings) can fail model lookup and then crash due to dereferencing a null model setting.
Agent Prompt
### Issue description
The test `Agent:LlmConfig` points to an azure-openai model (`gpt-5-mini`) that is not present in the configured azure-openai `LlmProviders` model list. This can break any path that relies on `ILlmProviderService.GetSetting()` and `CompletionProvider.GetCompletion()`.
### Issue Context
`LlmProviderService.GetSetting` returns `null` when the model name is missing, while `CompletionProvider.GetCompletion` dereferences `settings.Type` without checking for `null`.
### Fix Focus Areas
- tests/BotSharp.LLM.Tests/appsettings.json[45-64]
- tests/BotSharp.LLM.Tests/appsettings.json[164-172]
- src/Infrastructure/BotSharp.Core/Infrastructures/CompletionProvider.cs[16-26]
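One fix is to register the new model under the azure-openai provider so `GetSetting` can resolve it. The fragment below is a hedged sketch of the shape only: the exact property names must match the existing `gpt-35-*` entries in tests/BotSharp.LLM.Tests/appsettings.json, which are not shown here.

```json
{
  "LlmProviders": [
    {
      "Provider": "azure-openai",
      "Models": [
        { "Name": "gpt-5-mini" }
      ]
    }
  ]
}
```

Alternatively, point the test `Agent:LlmConfig:Model` back at a model already present in the azure-openai list.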