
Conversation

@iceljc iceljc commented Aug 12, 2025

PR Type

Enhancement


Description

  • Add reasoning settings configuration for LLM models (see the sketch after this list)

  • Update OpenAI provider to use configurable reasoning parameters

  • Upgrade OpenAI package to version 2.3.0

  • Refactor temperature and effort level handling
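
As a sketch of what the new configuration surface looks like in code (LlmModelSetting, ReasoningSetting, and their properties come from this PR's diff; the concrete values are illustrative):

var modelSetting = new LlmModelSetting
{
    // New in this PR: optional per-model reasoning settings
    Reasoning = new ReasoningSetting
    {
        Temperature = 1.0f,     // mirrors ReasoningSetting's own default
        EffortLevel = "medium"  // parsed into ChatReasoningEffortLevel by the provider
    }
};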


Diagram Walkthrough

flowchart LR
  A["LlmModelSetting"] --> B["ReasoningSetting"]
  B --> C["Temperature & EffortLevel"]
  C --> D["ChatCompletionProvider"]
  D --> E["Dynamic Configuration"]

File Walkthrough

Relevant files

Enhancement

LlmModelSetting.cs: Add reasoning configuration settings
src/Infrastructure/BotSharp.Abstraction/MLTasks/Settings/LlmModelSetting.cs

  • Add ReasoningSetting class with temperature and effort level properties
  • Add reasoning settings property to LlmModelSetting
  • Rename LlmCost to LlmCostSetting for consistency

+16/-2

ChatCompletionProvider.cs: Refactor reasoning parameters handling
src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs

  • Remove hardcoded temperature dictionary for reasoning models
  • Implement dynamic reasoning settings from configuration
  • Update reasoning effort level parsing with new default handling
  • Add pragma warning disable for OpenAI001

+15/-19

Dependencies

Directory.Packages.props: Update OpenAI package version
Directory.Packages.props

  • Upgrade OpenAI package from 2.2.0-beta.4 to 2.3.0

+1/-1
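
For reference, with central package management this bump is a one-line change in Directory.Packages.props (a sketch of the likely entry; surrounding attributes may differ):

<PackageVersion Include="OpenAI" Version="2.3.0" />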

@iceljc iceljc merged commit 4e13310 into SciSharp:master Aug 12, 2025
0 of 4 checks passed

PR Reviewer Guide 🔍

Here are some key observations to aid the review process:

⏱️ Estimated effort to review: 2 🔵🔵⚪⚪⚪
🧪 No relevant tests
🔒 No security concerns identified
⚡ Recommended focus areas for review

Nullability

Reasoning is nullable but may be consumed without null checks elsewhere, which could cause NREs if other providers assume it exists; ensure callers handle null or provide defaults.

    public ReasoningSetting? Reasoning { get; set; }

    /// <summary>
    /// Settings for llm cost
    /// </summary>
    public LlmCostSetting Cost { get; set; } = new();

    public override string ToString()
    {
        return $"[{Type}] {Name} {Endpoint}";
    }
}

public class ReasoningSetting
{
    public float Temperature { get; set; } = 1.0f;
    public string? EffortLevel { get; set; }
}
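
A hedged sketch of the null-safe consumption the reviewer is asking for (names come from the snippet above; the 1.0f fallback simply mirrors ReasoningSetting's own default):

// Guard against a missing Reasoning section instead of assuming it exists
var temperature = settings?.Reasoning?.Temperature ?? 1.0f;
var effortLevel = settings?.Reasoning?.EffortLevel; // may legitimately remain null
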
Parsing Robustness

Temperature is parsed from state with float.Parse, which can throw on culture-specific decimal formats; prefer invariant parsing via TryParse with a fallback to config defaults.

var temperature = float.Parse(state.GetState("temperature", "0.0"));
if (settings?.Reasoning != null)
{
    temperature = settings.Reasoning.Temperature;
    var level = state.GetState("reasoning_effort_level")
                 .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel ?? string.Empty)
                 .IfNullOrEmptyAs(settings?.Reasoning?.EffortLevel ?? string.Empty);
    reasoningEffortLevel = ParseReasoningEffortLevel(level);
}

var maxTokens = int.TryParse(state.GetState("max_tokens"), out var tokens)
                ? tokens
                : agent.LlmConfig?.MaxOutputTokens ?? LlmConstant.DEFAULT_MAX_OUTPUT_TOKEN;

return new ChatCompletionOptions()
{
    Temperature = temperature,
    MaxOutputTokenCount = maxTokens,
    // ...
};
Default Effort

ParseReasoningEffortLevel returns null on an empty level, changing behavior from the previous default; verify that downstream code expects an optional value, and define an explicit default if required.

private ChatReasoningEffortLevel? ParseReasoningEffortLevel(string? level)
{
    if (string.IsNullOrWhiteSpace(level))
    {
        return null;
    }

    var effortLevel = new ChatReasoningEffortLevel("minimal");
    switch (level.ToLower())
    {
        case "low":
            effortLevel = ChatReasoningEffortLevel.Low;
            break;
        case "medium":
            effortLevel = ChatReasoningEffortLevel.Medium;
            break;
        case "high":
            effortLevel = ChatReasoningEffortLevel.High;
            break;
    }

    return effortLevel;
}
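
If the previous default must be preserved, one option is a caller-side fallback rather than a change inside the parser (a sketch; the "minimal" literal mirrors the initializer above):

// Restore an explicit default instead of propagating null
var effort = ParseReasoningEffortLevel(level) ?? new ChatReasoningEffortLevel("minimal");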


PR Code Suggestions ✨

Explore these optional code suggestions:

Category | Suggestion | Impact
High-level
Reasoning config fallback gaps

The new reasoning settings are only applied when settings.Reasoning exists; when
it is absent, the code drops both the model-specific defaults and the previous
default effort level, potentially changing behavior across non-reasoning models.
Consider restoring a clear fallback chain (state -> agent -> settings ->
constant) for both temperature and reasoning effort, and limit
ReasoningEffortLevel to models that support it to avoid unintended nulls or API
mismatches.

Examples:

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [490-499]
ChatReasoningEffortLevel? reasoningEffortLevel = null;
var temperature = float.Parse(state.GetState("temperature", "0.0"));
if (settings?.Reasoning != null)
{
    temperature = settings.Reasoning.Temperature;
    var level = state.GetState("reasoning_effort_level")
                 .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel ?? string.Empty)
                 .IfNullOrEmptyAs(settings?.Reasoning?.EffortLevel ?? string.Empty);
    reasoningEffortLevel = ParseReasoningEffortLevel(level);
}

Solution Walkthrough:

Before:

ChatReasoningEffortLevel? reasoningEffortLevel = null;
var temperature = float.Parse(state.GetState("temperature", "0.0"));
if (settings?.Reasoning != null)
{
    temperature = settings.Reasoning.Temperature;
    var level = state.GetState("reasoning_effort_level")
                 .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel)
                 .IfNullOrEmptyAs(settings?.Reasoning?.EffortLevel);
    reasoningEffortLevel = ParseReasoningEffortLevel(level);
}
// If settings.Reasoning is null, temperature defaults to 0.0 and reasoningEffortLevel is null,
// losing previous default behaviors.

After:

// Restore a full fallback chain outside the conditional check
var temperature = float.Parse(state.GetState("temperature", "0.0"));
if (settings?.Reasoning != null) {
    temperature = settings.Reasoning.Temperature;
}

var level = state.GetState("reasoning_effort_level")
             .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel)
             .IfNullOrEmptyAs(settings?.Reasoning?.EffortLevel)
             .IfNullOrEmptyAs(LlmConstant.DEFAULT_REASONING_EFFORT_LEVEL);

// Only parse effort level for models that support it
var reasoningEffortLevel = (settings?.Reasoning != null) ? ParseReasoningEffortLevel(level) : null;
Suggestion importance[1-10]: 9


Why: The suggestion correctly identifies a significant regression where the removal of fallback logic for temperature and effort level can silently alter model behavior, which is a critical design flaw.

High
Possible issue
Make temperature parsing resilient

Avoid unguarded float.Parse which can throw on invalid or locale-formatted
values. Use float.TryParse with InvariantCulture and fall back to a safe default
or configured temperature. This prevents runtime exceptions from corrupt state
inputs.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [491-499]

-var temperature = float.Parse(state.GetState("temperature", "0.0"));
+var temperatureStr = state.GetState("temperature", "0.0");
+if (!float.TryParse(temperatureStr, System.Globalization.NumberStyles.Float, System.Globalization.CultureInfo.InvariantCulture, out var temperature))
+{
+    temperature = 0.0f;
+}
 if (settings?.Reasoning != null)
 {
     temperature = settings.Reasoning.Temperature;
     var level = state.GetState("reasoning_effort_level")
                  .IfNullOrEmptyAs(agent?.LlmConfig?.ReasoningEffortLevel ?? string.Empty)
                  .IfNullOrEmptyAs(settings?.Reasoning?.EffortLevel ?? string.Empty);
     reasoningEffortLevel = ParseReasoningEffortLevel(level);
 }
Suggestion importance[1-10]: 7


Why: The suggestion correctly identifies that float.Parse can throw an exception and proposes using float.TryParse for robustness, which is a valid improvement for error handling.

Medium
Validate max tokens bounds

Ensure negative or zero values for max tokens are not passed through, as they
can cause API errors. Clamp to a minimum of 1 and optionally cap to a sensible
upper bound from configuration.

src/Plugins/BotSharp.Plugin.OpenAI/Providers/Chat/ChatCompletionProvider.cs [501-503]

 var maxTokens = int.TryParse(state.GetState("max_tokens"), out var tokens)
                 ? tokens
                 : agent.LlmConfig?.MaxOutputTokens ?? LlmConstant.DEFAULT_MAX_OUTPUT_TOKEN;
+if (maxTokens <= 0)
+{
+    maxTokens = agent.LlmConfig?.MaxOutputTokens ?? LlmConstant.DEFAULT_MAX_OUTPUT_TOKEN;
+}
Suggestion importance[1-10]: 7


Why: This suggestion correctly points out that maxTokens could be an invalid non-positive value, and adding a check prevents potential downstream API errors, improving input validation.

Medium
