@ammar-agent ammar-agent commented Dec 20, 2025

Eliminate localStorage↔backend duplication for thinking levels by making per-model thinking backend-authoritative.

What changed

  • Added persistedSettings to config (~/.mux/config.json) storing ai.thinkingLevelByModel
  • New oRPC routes: persistedSettings.get, persistedSettings.setAIThinkingLevel, persistedSettings.onChanged
  • Frontend PersistedSettingsStore keeps an in-memory snapshot, refreshes + subscribes, and provides a localStorage fallback + one-time seed to migrate existing thinkingLevel:model:* keys
  • ThinkingContext, command palette thinking actions, and getSendOptionsFromStorage now read from the store; no longer write thinking to localStorage
  • WorkspaceContext only seeds workspace model; thinking is handled via persistedSettings
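A minimal sketch of what the config shape and client store described above might look like. The type names, the exact schema, and the store API are illustrative, inferred from the bullet points, not the real mux code:

```typescript
// Assumed shape of the persisted blob in ~/.mux/config.json; only
// ai.thinkingLevelByModel is named above, the level values are guesses.
type ThinkingLevel = "low" | "medium" | "high" | "xhigh";

interface PersistedSettings {
  ai: { thinkingLevelByModel: Record<string, ThinkingLevel> };
}

// In-memory snapshot + subscription layer: refresh() would call the
// persistedSettings.get route and the backend's onChanged stream would
// feed applyChange(); both transports are stubbed out here.
class PersistedSettingsStore {
  private snapshot: PersistedSettings = { ai: { thinkingLevelByModel: {} } };
  private listeners = new Set<(s: PersistedSettings) => void>();

  get(): PersistedSettings {
    return this.snapshot;
  }

  subscribe(listener: (s: PersistedSettings) => void): () => void {
    this.listeners.add(listener);
    return () => this.listeners.delete(listener);
  }

  // Invoked for each persistedSettings.onChanged event.
  applyChange(next: PersistedSettings): void {
    this.snapshot = next;
    for (const listener of this.listeners) listener(next);
  }
}
```

Consumers such as ThinkingContext would read synchronously from `get()` and re-render via `subscribe()`, which is what lets the backend stay authoritative without per-read IPC.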

Tests

  • Added unit tests for PersistedSettingsService + PersistedSettingsStore
  • Updated WorkspaceContext seeding test

Generated with mux • Model: openai:gpt-5.2 • Thinking: xhigh

Restore the per-model thinking behavior that was changed in #1203. While
that PR added valuable backend persistence for AI settings, it also
changed thinking levels from per-model to per-workspace scoping. This
was an unintended UX regression: users expect different models to
remember their individual thinking preferences.

### Changes

- **ThinkingContext**: Use per-model localStorage keys
  (`thinkingLevel:model:{model}`) instead of workspace-scoped keys
- **WorkspaceContext**: Seed per-model thinking from backend metadata
- **Storage**: Remove workspace-scoped thinking key from persistent keys
  (thinking is global per-model, not workspace-specific)
- **sendOptions**: Simplify to read per-model thinking directly
- **useCreationWorkspace**: Remove workspace-scoped thinking sync
- **ThinkingSlider**: Update tooltip to "Saved per model"
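The per-model key pattern above can be sketched as small helpers. The key format comes from the ThinkingContext bullet; the helper names and the storage interface are hypothetical:

```typescript
// Minimal storage interface so the sketch doesn't depend on the DOM's
// Storage type; in the app this would be window.localStorage.
interface KVStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// Per-model key, matching the `thinkingLevel:model:{model}` pattern above.
const thinkingKey = (model: string): string => `thinkingLevel:model:${model}`;

function readModelThinking(storage: KVStorage, model: string): string | null {
  return storage.getItem(thinkingKey(model));
}

function writeModelThinking(storage: KVStorage, model: string, level: string): void {
  storage.setItem(thinkingKey(model), level);
}
```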

### Backend persistence preserved

Backend still stores `aiSettings.thinkingLevel` per workspace (the
last-used value) and seeds it to new devices via the per-model key.
This maintains cross-device sync while restoring the expected per-model
UX.
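The seeding step might look like the following best-effort sketch. The local-absence check and the backend field are inferred from the paragraph above, and the storage interface is hypothetical:

```typescript
interface SeedStorage {
  getItem(key: string): string | null;
  setItem(key: string, value: string): void;
}

// One-time seed: if this device has never stored a thinking level for
// the model, copy the workspace's last-used backend value into the
// per-model key. An existing local preference always wins.
function seedThinkingFromBackend(
  storage: SeedStorage,
  model: string,
  backendLevel: string | undefined,
): void {
  const key = `thinkingLevel:model:${model}`;
  if (backendLevel !== undefined && storage.getItem(key) === null) {
    storage.setItem(key, backendLevel);
  }
}
```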

### Tests

- Updated ThinkingContext tests to verify per-model behavior
- Updated WorkspaceContext test to seed per-model key
- Updated useCreationWorkspace test to remove migration assertion
@ammar-agent changed the title from "🤖 fix: restore per-model thinking levels" to "🤖 feat: persist per-model thinking via backend settings" on Dec 20, 2025
@ammar-agent

Heads up: we’re not planning to merge this approach. We’re abandoning the backend persistedSettings/store design in favor of mode-based thinking storage.

Leaving notes here because there were a few useful lessons worth carrying forward:

  • State ownership matters: making thinking backend-authoritative removes localStorage↔backend drift, but it necessarily introduces a cache + subscription layer on the client. If we don’t want that complexity, we should keep the source of truth local.
  • Normalize at the boundary: always normalize model IDs (e.g. mux-gateway:provider/model → provider:model) before persisting or keying lookups, or you’ll get duplicate entries and surprising “lost” preferences.
  • Prefer a typed blob + events over per-key RPC: get() returning a small typed object + setX() mutations + onChanged() works well and avoids O(n) IPC patterns.
  • Migration needs to be explicit + best-effort: seeding backend state from existing localStorage (once) is a reasonable bridge, but plan for partial failures / old-server mismatches.
  • Testing: permission-based tests are flaky under root/CI. For failure-path coverage, prefer deterministic setups (e.g. ENOTDIR via “muxHome is a file”) and gate async-iterator tests with explicit signals to avoid racy ordering.
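The normalization point above can be illustrated with a tiny normalizer. The gateway prefix and the `/`-to-`:` mapping are assumptions based on the example in the bullet, not the real mux ID grammar:

```typescript
// Strip an assumed gateway prefix so a gateway-routed ID and the bare
// provider ID key the same preference entry, e.g.
// "mux-gateway:openai/gpt-5.2" and "openai:gpt-5.2".
function normalizeModelId(id: string): string {
  const prefix = "mux-gateway:";
  if (!id.startsWith(prefix)) return id;
  return id.slice(prefix.length).replace("/", ":");
}
```

Calling this once at the persistence boundary (both on write and on lookup) is what keeps the two spellings from producing duplicate `thinkingLevelByModel` entries.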

