Conversation
…essages and LocalAI

Replace `||` with `??` for `temperature` config in the Anthropic Messages and LocalAI providers. The `||` operator treats `0` as falsy, so `temperature: 0` was silently ignored and fell back to the environment variable or the hardcoded default (`0.7` for LocalAI). The fix already exists in anthropic/completion.ts (using `??`) and was applied to OpenAI/Azure in PR #7323, but was not propagated to these providers.

Closes #8161

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
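The falsy-vs-nullish distinction the commit describes can be seen in a two-line sketch (illustrative values, not the actual provider code):

```typescript
// Minimal sketch of the bug: `||` discards an explicit temperature of 0,
// while `??` only falls through on null or undefined.
const config: { temperature?: number } = { temperature: 0 };

const viaOr = config.temperature || 0.7; // 0.7 -- the configured 0 is lost
const viaNullish = config.temperature ?? 0.7; // 0 -- the configured 0 survives

console.log(viaOr, viaNullish);
```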
…sh-coalescing-extended
The original nullish coalescing fix (`||` → `??`) correctly preserved `temperature: 0` from config, but provider-scoped env overrides (e.g. `env.ANTHROPIC_TEMPERATURE` set per-provider in YAML) were still bypassed because `getEnvFloat()` only reads `process.env` and `cliState`, not `options.env`. Add provider-scoped env to the temperature fallback chain: config.temperature → provider env → global env → default.

- Store `env` on `LocalAiGenericProvider` instance
- Add `parseEnvFloat` helper to convert string env values to numbers
- Update model IDs in tests to claude-sonnet-4-6
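The fallback chain above can be sketched as follows. `parseEnvFloat` is named in the commit, but the standalone `resolveTemperature` wrapper and its signature here are illustrative, not the actual promptfoo code:

```typescript
// Converts a string env value to a number; returns undefined for missing or
// non-numeric input so that `??` can continue down the chain. Note that both
// a config value of 0 and an env value of "0" survive, which is the point of the fix.
function parseEnvFloat(value?: string): number | undefined {
  if (value === undefined || value === '') {
    return undefined;
  }
  const parsed = Number.parseFloat(value);
  return Number.isNaN(parsed) ? undefined : parsed;
}

// Illustrative resolution order: config -> provider-scoped env -> global env -> default.
function resolveTemperature(
  configTemp: number | undefined,
  providerEnv: Record<string, string | undefined> | undefined,
  globalEnv: Record<string, string | undefined>,
): number {
  return (
    configTemp ??
    parseEnvFloat(providerEnv?.LOCALAI_TEMPERATURE) ??
    parseEnvFloat(globalEnv.LOCALAI_TEMPERATURE) ??
    0.7
  );
}

// An explicit 0 in config wins over every env layer:
console.log(resolveTemperature(0, { LOCALAI_TEMPERATURE: '0.9' }, {})); // 0
// With no config value, the provider-scoped env is used:
console.log(resolveTemperature(undefined, { LOCALAI_TEMPERATURE: '0.42' }, {})); // 0.42
```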
Security Review

✅ No critical issues found. The changes correctly add provider-scoped env variable support for temperature, with proper precedence.

🟡 Minor Observations (1 item)
Last updated: 2026-03-15 | Reviewing: 9722cac
👍 All Clear
I reviewed the changes to Anthropic and LocalAI providers that adjust how the numeric temperature parameter is sourced (adding provider-scoped env overrides and nullish coalescing fixes). I traced inputs and outputs through the providers and confirmed no modifications to prompts, tools/capabilities, or output execution. Based on this, there are no new LLM security risks introduced by this PR.
Minimum severity threshold: 🟡 Medium | To re-scan after changes, comment @promptfoo-scanner
📝 Walkthrough

This pull request fixes a bug where explicitly setting `temperature: 0` was silently ignored. Estimated code review effort: 🎯 2 (Simple) | ⏱️ ~12 minutes.
🚥 Pre-merge checks: ✅ 4 passed | ❌ 1 failed (1 warning)
🧹 Nitpick comments (3)
src/providers/anthropic/messages.ts (1)

32-38: Consider centralizing `parseEnvFloat` to avoid drift across providers. This helper now exists in multiple provider files with identical logic. Extracting it to a shared util would reduce divergence risk in future changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/anthropic/messages.ts` around lines 32-38: the `parseEnvFloat` helper in this file is duplicated across providers; extract it into a shared utility (e.g., a new exported `parseEnvFloat` in a common utils module) and replace the local definition with an import from that module in this file and the other provider files that currently define the same function, ensuring signatures and behavior remain identical and updating imports where `parseEnvFloat` is referenced.

test/providers/localai.test.ts (1)
49-101: Add matching env-precedence tests for `LocalAiCompletionProvider`. Nice chat coverage here, but the same provider-scoped env logic was added to completion too; mirroring these cases there would close the regression gap.

As per coding guidelines, "`test/**/*.{ts,tsx}`: Test both success and error cases for all functionality."

Suggested additions:
```diff
+  it('should use provider-scoped env temperature when config temperature is not set (completion)', async () => {
+    vi.mocked(fetchWithCache).mockResolvedValue({
+      data: { choices: [{ text: 'Test output' }] },
+    } as any);
+
+    const provider = new LocalAiCompletionProvider('test-model', {
+      config: {},
+      env: { LOCALAI_TEMPERATURE: '0.42' },
+    });
+
+    await provider.callApi('Test prompt');
+
+    const callBody = JSON.parse(
+      (vi.mocked(fetchWithCache).mock.calls[0][1] as RequestInit).body as string,
+    );
+    expect(callBody.temperature).toBe(0.42);
+  });
```

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/providers/localai.test.ts` around lines 49-101: add three unit tests mirroring the LocalAiChatProvider cases but for LocalAiCompletionProvider: (1) when config.temperature is undefined and env { LOCALAI_TEMPERATURE: '0.42' }, assert the request body temperature is 0.42; (2) when env { LOCALAI_TEMPERATURE: '0' }, assert temperature is 0; and (3) when config.temperature is set (e.g., 0.1) and env { LOCALAI_TEMPERATURE: '0.9' }, assert the config value (0.1) wins. In each test, mock fetchWithCache to resolve with a completion-shaped response, instantiate LocalAiCompletionProvider with the same options pattern used in the chat tests, call provider.callApi('Test prompt'), parse the fetchWithCache mock call body (JSON.parse((vi.mocked(fetchWithCache).mock.calls[0][1] as RequestInit).body as string)), and assert callBody.temperature accordingly.

src/providers/localai.ts (1)
63-67: Extract temperature resolution into a shared base helper. The same precedence chain is duplicated in the chat and completion methods; a single helper on `LocalAiGenericProvider` would keep behavior in sync.

Refactor sketch:
```diff
 class LocalAiGenericProvider implements ApiProvider {
+  protected resolveTemperature(defaultValue = 0.7): number {
+    return (
+      this.config.temperature ??
+      parseEnvFloat(this.env?.LOCALAI_TEMPERATURE) ??
+      getEnvFloat('LOCALAI_TEMPERATURE') ??
+      defaultValue
+    );
+  }
 }

 // chat
-      temperature:
-        this.config.temperature ??
-        parseEnvFloat(this.env?.LOCALAI_TEMPERATURE) ??
-        getEnvFloat('LOCALAI_TEMPERATURE') ??
-        0.7,
+      temperature: this.resolveTemperature(),

 // completion
-      temperature:
-        this.config.temperature ??
-        parseEnvFloat(this.env?.LOCALAI_TEMPERATURE) ??
-        getEnvFloat('LOCALAI_TEMPERATURE') ??
-        0.7,
+      temperature: this.resolveTemperature(),
```

Also applies to: 147-151
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@src/providers/localai.ts` around lines 63 - 67, Summary: Extract the duplicated temperature precedence chain into a shared helper on LocalAiGenericProvider. Create a private method (e.g., resolveTemperature or getTemperature) on LocalAiGenericProvider that returns this.config.temperature ?? parseEnvFloat(this.env?.LOCALAI_TEMPERATURE) ?? getEnvFloat('LOCALAI_TEMPERATURE') ?? 0.7, then replace the inline chains in the chat and completion code paths with calls to that method; ensure both locations (the current blocks around temperature at lines referenced and the similar block at 147-151) use the new helper so behavior stays consistent.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: b427ba0c-2854-49f4-a1d0-a6f56469c9a7
📒 Files selected for processing (4)

- src/providers/anthropic/messages.ts
- src/providers/localai.ts
- test/providers/anthropic/messages.test.ts
- test/providers/localai.test.ts
👍 All Clear
I reviewed the changes adding provider-scoped env handling for temperature across Anthropic Messages and LocalAI providers. The modifications only affect how a numeric temperature value is resolved from config or env and do not change prompt construction, tool access, or output execution. Tracing through provider instantiation and env override flows confirmed these values come from configuration, not untrusted runtime inputs. No LLM-security issues were identified related to prompt injection, data exfiltration, secrets/PII exposure, insecure output handling, excessive agency, or jailbreak risks.

Closes #8161
Summary
- Replace `||` with `??` for `temperature` config in the Anthropic Messages and LocalAI providers (builds on fix(providers): use nullish coalescing for temperature in Anthropic Messages and LocalAI #8163)
- `env.ANTHROPIC_TEMPERATURE` / `env.LOCALAI_TEMPERATURE` set per-provider in YAML config now correctly participates in the fallback chain
- The `||` operator treats `0` as falsy, so `temperature: 0` was silently ignored and fell back to the environment variable or the hardcoded default (`0.7` for LocalAI)

Temperature resolution order (both providers): config.temperature → provider-scoped env → global env → default
What was missing in #8163
The original fix correctly handled `config.temperature: 0`, but provider-scoped env overrides (set via `env:` in YAML per-provider) were still bypassed because `getEnvFloat()` only reads `process.env` and `cliState.config.env`, not `options.env`. This PR adds the missing middle layer.

Test plan
- New tests cover `temperature: 0` in config, provider-scoped env `0.42`, provider-scoped env `0`, config-over-env precedence, and default `0.7` fallback
- All tests pass (`npx vitest run test/providers/anthropic/messages.test.ts test/providers/localai.test.ts`)
- `npm run tsc` passes
- Manually verified: `temperature: 0`, provider-scoped `env.ANTHROPIC_TEMPERATURE: "0.42"`, and default fallback all send correct values
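A provider-scoped override of the kind verified above would look roughly like this in a promptfoo YAML config; the provider ids are illustrative, and the per-provider `env:` block is the mechanism this PR wires into the fallback chain:

```yaml
providers:
  - id: anthropic:messages:claude-sonnet-4-6
    env:
      ANTHROPIC_TEMPERATURE: "0.42" # provider-scoped; loses to an explicit config.temperature
  - id: localai:chat:test-model
    config:
      temperature: 0 # explicit 0 now wins over every env layer
```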