
Add configurable LLM temperature setting #75

Merged
sfw merged 1 commit into main from sfw/llm-temperature-config on Mar 20, 2026

Conversation

@sfw
Owner

sfw commented Mar 20, 2026

Summary

  • New llm_temperature config field across Settings, the admin API, and the staff dashboard UI, letting operators set the sampling temperature for models that require specific values (e.g. kimi-k2.5 only accepts temperature=1)
  • OpenAICompatibleChatClient uses the configured default instead of the hardcoded 0.2, while keeping the automatic temperature=1 retry as a safety net for unrecognized errors
  • Env var: DIBBLE_LLM_TEMPERATURE; TOML: temperature under [llm]; default: null (falls back to 0.2). Example config below.
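
A minimal configuration sketch, assuming the [llm] section and temperature key described above; the surrounding file layout is illustrative, not copied from the repo:

```toml
# Set the default sampling temperature used by the LLM client.
# kimi-k2.5 only accepts temperature=1; omit the key to fall back to 0.2.
[llm]
temperature = 1.0
```

The same value can instead be supplied through the DIBBLE_LLM_TEMPERATURE environment variable, which takes precedence in the usual env-over-file way for settings like this.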

Test plan

  • Backend tests: 733 passed
  • Frontend tests: 289 passed
  • Backend lint clean
  • Frontend lint clean
  • Frontend production build passes
  • Manual: set llm_temperature = 1 in config, verify kimi-k2.5 no longer falls back to mock

🤖 Generated with Claude Code

Models like kimi-k2.5 only accept temperature=1, causing repeated 400
errors and mock fallback. Add llm_temperature to Settings, admin config,
and the staff dashboard so operators can set the value per-instance.
OpenAICompatibleChatClient uses the configured default (falling back to
0.2) instead of a hardcoded value. The automatic temperature=1 retry
remains as a safety net.
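
As an illustration of the behavior described above, here is a minimal Python sketch of how the temperature resolution and retry could fit together; the names (BadRequestError, client.complete, configured_temperature) are assumptions for the sketch, not the actual OpenAICompatibleChatClient API:

```python
# Illustrative sketch only: resolve the request temperature from config,
# falling back to 0.2, and retry once with temperature=1 on a 400-style error.
DEFAULT_TEMPERATURE = 0.2


class BadRequestError(Exception):
    """Stand-in for the provider's HTTP 400 error type (assumption)."""


def resolve_temperature(configured: float | None) -> float:
    """Use the operator-configured temperature if set, else the old default."""
    return configured if configured is not None else DEFAULT_TEMPERATURE


def chat(client, messages, configured_temperature: float | None):
    temperature = resolve_temperature(configured_temperature)
    try:
        return client.complete(messages, temperature=temperature)
    except BadRequestError:
        # Safety-net retry kept from the existing code path: some models
        # (e.g. kimi-k2.5) reject any temperature other than 1.
        if temperature != 1:
            return client.complete(messages, temperature=1)
        raise
```

With the config field in place, operators of such models can set the correct value up front instead of relying on the retry, which avoids the repeated 400 errors and the fallback to the mock client.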

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
sfw merged commit 93901aa into main on Mar 20, 2026
4 checks passed
