
Fix model selector showing wrong model in tabs#7784

Merged
jh-block merged 17 commits into main from jhugo/fix-issue-7402
Mar 11, 2026

Conversation

@jh-block
Collaborator

Summary

Fixed the model selector display being incorrectly synchronized across all tabs when changing a model in one tab. The bug occurred because the global ModelAndProviderContext state was being updated whenever any session's model changed, causing all tabs to display the same model name regardless of their actual session models.

Type of Change

  • Bug fix

Key Changes

  • Made model display per-session: each tab now shows its own session's model
  • Passed sessionModel/sessionProvider as props through the component tree instead of reading from global context
  • Added local override state in ModelsBottomBar for immediate UI feedback before session re-fetches
  • Only update the config default when changeModel() is called without a sessionId (existing chats don't affect defaults for new chats)
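The last bullet's rule can be sketched as a small pure function. This is an illustrative sketch, not the actual PR code; `updateSessionModel` and `updateConfigDefault` are hypothetical stand-ins for the real calls:

```typescript
// Hypothetical sketch of the changeModel() guard described above.
interface ModelSelection {
  model: string;
  provider: string;
}

function changeModel(
  selection: ModelSelection,
  sessionId: string | null,
  effects: {
    updateSessionModel: (id: string, s: ModelSelection) => void;
    updateConfigDefault: (s: ModelSelection) => void;
  }
): void {
  if (sessionId) {
    // An existing chat: only that session's model changes,
    // so other tabs (and new-chat defaults) are unaffected.
    effects.updateSessionModel(sessionId, selection);
  } else {
    // No session (Hub / new chat): this is the only path
    // that touches the global config default.
    effects.updateConfigDefault(selection);
  }
}
```

The key point is that the global default is written on exactly one branch, so a model change inside a tab can no longer leak into every other tab's display.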

Testing

  • All existing tests pass, aside from 4 pre-existing failures in App.test.tsx that are unrelated to this change
  • TypeScript type checking passes
  • Component tree data flow verified

Related Issues

Fixes #7402

AI Assistance

  • This PR was created or reviewed with AI assistance

Fix the model selector display being incorrectly synchronized across all tabs when changing a model in one tab. The bug occurred because the global ModelAndProviderContext state was being updated whenever any session's model changed, causing all tabs to display the same model name regardless of their actual session models.

Changes:
- Make model display per-session: each tab now shows its own session's model
- Pass sessionModel/sessionProvider as props through the component tree instead of reading from global context
- Add local override state in ModelsBottomBar for immediate UI feedback before session re-fetches
- Only update the config default when changeModel() is called without a sessionId (new chats use defaults, existing chats don't affect defaults)

Co-Authored-By: Claude Haiku 4.5 <noreply@anthropic.com>

@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 37fe03f133

ℹ️ About Codex in GitHub

Codex has been enabled to automatically review pull requests in this repo. Reviews are triggered when you

  • Open a pull request for review
  • Mark a draft as ready
  • Comment "@codex review".

If Codex has suggestions, it will comment; otherwise it will react with 👍.

When you sign up for Codex through ChatGPT, Codex can also answer questions or update the PR, like "@codex address that feedback".

- Use session model/provider for token-limit lookup in ChatInput instead
  of always reading from global config, so tabs with session-specific
  models get accurate context-window limits.
- Make changeModel() return a boolean success indicator so
  onModelSelected callback only fires after a successful switch,
  preventing the UI from showing an override for a model change that
  failed.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
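The success-gated callback in the commit above can be sketched as follows. This is a simplified synchronous sketch (the real code is presumably async); `apply` is a hypothetical stand-in for the actual model-switch API call:

```typescript
// Illustrative sketch: changeModel() reports success so the caller only
// applies the optimistic UI override when the switch actually happened.
function changeModel(apply: () => { error?: string }): boolean {
  try {
    return !apply().error; // false when the backend reports an error
  } catch {
    return false; // an unexpected failure also counts as "no change"
  }
}

function selectModel(
  model: string,
  apply: () => { error?: string },
  onModelSelected: (model: string) => void
): void {
  if (changeModel(apply)) {
    onModelSelected(model); // set the local override only on success
  }
}
```

Without the boolean return, the UI override would be applied unconditionally and could display a model the backend never actually switched to.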
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: f298d1774a

- Check updateAgentProvider response for errors so changeModel()
  correctly returns false on API failure, preventing stale UI overrides.
- Include configModel/configProvider in token-limit refresh deps so
  Hub (no-session) ChatInput reloads limits after a config model change.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: a2d73aa303

…imit refresh after in-session model changes

Moves the modelOverride state from ModelsBottomBar up to ChatInput, creating
effectiveModel/effectiveProvider that CostTracker, loadProviderDetails, and
ModelsBottomBar all consume. This ensures cost tracking and token limits
update immediately after an in-session model change.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@jh-block jh-block requested review from DOsinga and jamadeo March 10, 2026 13:02
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: f3d469b88e

When no session exists (Hub, new empty chat), sessionModel/sessionProvider
are null. Re-add the useModelAndProvider fallback so the bottom bar shows
the configured default model instead of "Select Model".

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 5d6a1f3e6d

The predefined model initialization was reading GOOSE_MODEL from config,
which could preselect the wrong model when a tab's model differs from the
global default. Use the already-computed currentModel (which prefers
session model over config) instead.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 3ae3522d5a

Split predefined model initialization into its own effect with
currentModel in the dependency array, so if the modal opens before
session/config data has loaded, the selection updates once it arrives.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
- CostTracker: make model/provider props required (non-optional)
- ModelsBottomBar: make onModelChanged required (always set by ChatInput)
- useCostTracking: early-return when model/provider are missing instead
  of proceeding with null values

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
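The early-return described in the commit above can be sketched as a pure function. Names here are illustrative, and `getPricing` is a hypothetical stand-in for the real pricing lookup:

```typescript
// Sketch of the useCostTracking guard: bail out early when model or
// provider is missing, rather than computing a cost from null values.
function trackedCost(
  model: string | null,
  provider: string | null,
  tokens: number,
  getPricing: (provider: string, model: string) => number
): number | null {
  if (!model || !provider) {
    return null; // early return: nothing sensible to compute yet
  }
  return tokens * getPricing(provider, model);
}
```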
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 26a4954516

jh-block and others added 5 commits March 11, 2026 17:46
In Hub / no-session paths, sessionModel/sessionProvider are always null
so the override was never cleared. Now also check configModel/configProvider
when there's no active session, preventing stale override from blocking
config-driven model updates.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
When opening a past chat, session model data loads asynchronously. During
that window, both ModelsBottomBar and loadProviderDetails were falling
back to the config default model, causing a brief flash of the wrong
model. Now only fall back to config when there is genuinely no session
(sessionId is null); when a session exists but data hasn't arrived, wait
rather than showing/loading the wrong model.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
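The three-way rule in the commit above can be sketched as a pure function (illustrative names, not the actual component code):

```typescript
// Sketch of the display logic described above:
// - no session at all        → config default
// - session still loading    → null (show nothing, avoid the flash)
// - session data has arrived → that session's own model
function displayModel(
  sessionId: string | null,
  sessionModel: string | null,
  configModel: string
): string | null {
  if (sessionId === null) return configModel; // Hub / no-session path
  if (sessionModel === null) return null;     // resumed chat: data in flight
  return sessionModel;
}
```

The middle branch is what eliminates the brief flash of the wrong model: returning null while the fetch is pending is preferable to showing the config default for a session that may use a different model.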
… fallback

Use opacity-0 on the model label while waiting for session data to load,
so the bot icon stays visible but no incorrect model name flashes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
… chats

The sessionId guard was too aggressive — new chats have a sessionId but
should still show the config default model. Instead, track whether session
model data has resolved: for new chats (!sessionId or sessionModel already
set on mount) it's immediately resolved; for resumed chats it resolves
once sessionModel arrives, hiding the label only during that brief window.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
The previous sessionModelResolved heuristic couldn't distinguish a new
chat (session loaded but no model_config) from a resumed chat still
loading. Now BaseChat passes sessionLoaded (session !== undefined) through
ChatInput to ModelsBottomBar, so:
- New chat: session loads quickly with no model → shows config default
- Resumed chat: hides label only until session fetch completes

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
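The sessionLoaded resolution described above can be sketched as follows (illustrative names; `sessionLoaded` corresponds to `session !== undefined` as the commit message describes):

```typescript
// Sketch of the resolution rule: show the label only once we know which
// model applies, distinguishing "still fetching" from "fetched, no model".
function resolvedModel(
  sessionLoaded: boolean,
  sessionModel: string | null,
  configModel: string
): string | null {
  if (!sessionLoaded) return null;    // resumed chat still fetching: hide label
  return sessionModel ?? configModel; // new chat has no model_config → config default
}
```

This is what the earlier heuristic could not express: "session loaded with no model" and "session not yet loaded" both had sessionModel === null, but only the former should fall back to the config default.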
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 1d985cac3c

In manual (non-predefined) mode, provider and model useState values are
set at mount time, which may be before session data loads. Add an effect
to update them when currentModel/currentProvider resolve, but only if
the user hasn't already changed them.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 0c12fe10bb

effectiveModel/effectiveProvider were null in Hub (no session, no
override), causing CostTracker to receive null and hide pricing. Add
configModel/configProvider as the final fallback in the chain.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
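The full fallback chain that these commits converge on can be sketched in one line (illustrative, using nullish coalescing):

```typescript
// Sketch of the chain described above:
// local override → session model → config default.
// The override wins (immediate UI feedback after a change), then the
// session's own model, then the config default (Hub / no-session case).
function effectiveModel(
  override: string | null,
  sessionModel: string | null,
  configModel: string | null
): string | null {
  return override ?? sessionModel ?? configModel;
}
```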
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: 8cc0ff6123

…ovider

When the modal opens with initialProvider different from the active
provider, model is intentionally set to empty so the user picks one.
The sync effect was overriding this by repopulating model from
currentModel. Guard against this by skipping sync in the forced-provider
flow.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@chatgpt-codex-connector chatgpt-codex-connector bot left a comment

💡 Codex Review

Reviewed commit: b0fb74eb06

The sync effect was firing whenever model was empty, including after the
user intentionally cleared it by changing provider. Use a ref to ensure
it only fires once (on initial session data arrival) and doesn't
interfere with subsequent user-driven state changes.

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
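The fire-once behaviour described above can be modeled outside React as a closure. In the actual component the flag would presumably live in a `useRef` checked inside the effect; this sketch just makes the behaviour observable:

```typescript
// Sketch of the run-once guard: the sync callback fires at most once
// (on initial session data arrival); later invocations, e.g. after the
// user intentionally cleared the model, are ignored. `hasSynced` plays
// the role of the ref.
function makeOneTimeSync<T>(apply: (value: T) => void): (value: T) => void {
  let hasSynced = false;
  return (value: T) => {
    if (hasSynced) return; // don't overwrite user-driven state changes
    hasSynced = true;
    apply(value);
  };
}
```

Compared with the earlier "fire whenever model is empty" condition, the once-only flag distinguishes "empty because data hasn't arrived" from "empty because the user cleared it".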
@jh-block jh-block added this pull request to the merge queue Mar 11, 2026
Merged via the queue into main with commit 6fed3e3 Mar 11, 2026
21 checks passed
@jh-block jh-block deleted the jhugo/fix-issue-7402 branch March 11, 2026 18:36
lifeizhou-ap added a commit that referenced this pull request Mar 12, 2026
* main: (270 commits)
  test(acp): align provider and server test parity (#7822)
  fix(acp): register MCP extensions when resuming a session (#7806)
  fix(goose): load .gitignore in prompt_manager for hint file filtering (#7795)
  fix: remap max_completion_tokens to max_tokens for OpenAI-compatible providers (#7765)
  fix(openai): preserve Responses API tool call/output linkage (#7759)
  chore(deps): bump @hono/node-server from 1.19.9 to 1.19.11 in /evals/open-model-gym/mcp-harness (#7687)
  fix: return ContextLengthExceeded when prompt exceeds effective KV cache size (#7815)
  feat: MCP Roots support (#7790)
  fix(google): use `includeThoughts/part.thought` for thinking handling (#7593)
  refactor: simplify tokenizer initialization — remove unnecessary Result wrapper (#7744)
  Fix model selector showing wrong model in tabs (#7784)
  Stop collecting goosed stderr after startup (#7814)
  fix: avoid word splitting by space for windows shell commands (#7781) (#7810)
  Simplify and make it not break on linux (#7813)
  Add preferred microphone selection  (#7805)
  Remove dependency on posthog-rs (#7811)
  feat: load hints in nested subdirs (#7772)
  feat(acp): add read tool and delegate filesystem I/O to ACP clients (#7668)
  Support secret interpolation in streamable HTTP extension URLs (#7782)
  More logging for command injection classifier model training (#7779)
  ...


Development

Successfully merging this pull request may close these issues.

Desktop: Switching model in one tab incorrectly updates model display in all tabs

2 participants