
feat: add Claude Opus 4.6 and GPT-5.3-Codex model support#757

Merged
Shironex merged 6 commits into v0.15.0rc from feat/new-claude-and-codex-models on Feb 15, 2026
Conversation

Shironex (Collaborator) commented on Feb 5, 2026

Summary

  • Claude Opus 4.6: Update default Opus model to claude-opus-4-6 with 128K max output tokens and adaptive thinking support
  • GPT-5.3-Codex: Add latest Codex frontier model and update @openai/codex-sdk to 0.98.0
  • Claude Agent SDK: Update @anthropic-ai/claude-agent-sdk from 0.1.76 to 0.2.32

Changes

Claude Opus 4.6 Integration

  • Updated all 3 model maps (CLAUDE_CANONICAL_MAP, CLAUDE_MODEL_MAP, DEFAULT_MODELS) to resolve opus → claude-opus-4-6
  • Added 'adaptive' thinking level for Opus 4.6 (model decides its own reasoning depth)
  • Added isAdaptiveThinkingModel() and getThinkingLevelsForModel() helpers to @automaker/types
  • Updated thinking level UI: Opus 4.6 shows only "None" / "Adaptive", other models keep full 5-level range
  • Auto-normalizes thinking level on model switch (to/from Opus 4.6)
  • Updated display names (Opus 4.6) and available models list (128K output)
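The two helpers named above can be sketched roughly as follows — the signatures and the alias check come from this PR's description and diffs, but the bodies are assumptions for illustration, not the actual implementation in @automaker/types:

```typescript
// Sketch only: Opus 4.6 exposes just 'none' and 'adaptive'; other Claude
// models keep the full manual range.
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'ultrathink' | 'adaptive';

function isAdaptiveThinkingModel(model: string): boolean {
  // Matches the canonical id and the 'claude-opus' alias (per the PR's UI checks).
  return model === 'claude-opus' || model.includes('opus-4-6');
}

function getThinkingLevelsForModel(model: string): ThinkingLevel[] {
  return isAdaptiveThinkingModel(model)
    ? ['none', 'adaptive']
    : ['none', 'low', 'medium', 'high', 'ultrathink'];
}

console.log(getThinkingLevelsForModel('claude-opus-4-6')); // → [ 'none', 'adaptive' ]
```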

GPT-5.3-Codex Model

  • Added gpt53Codex: 'codex-gpt-5.3-codex' to CODEX_MODEL_MAP
  • Added to UI model options, reasoning effort support, and available models list
  • Updated @openai/codex-sdk from 0.97.0 to 0.98.0
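The map change above has roughly this shape — only the gpt53Codex entry and the default assignment are taken from the PR; the neighboring entry is illustrative context, not a full copy of CODEX_MODEL_MAP:

```typescript
// Partial sketch of the map additions described in this PR.
const CODEX_MODEL_MAP = {
  gpt53Codex: 'codex-gpt-5.3-codex',
  gpt52Codex: 'codex-gpt-5.2-codex',
} as const;

const DEFAULT_MODELS = {
  codex: CODEX_MODEL_MAP.gpt53Codex, // GPT-5.3-Codex is the latest frontier agentic coding model
} as const;

console.log(DEFAULT_MODELS.codex); // → codex-gpt-5.3-codex
```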

SDK Updates

  • @anthropic-ai/claude-agent-sdk 0.1.76 → 0.2.32
  • @openai/codex-sdk 0.97.0 → 0.98.0
  • outputFormat property confirmed still valid (no deprecation)

Test plan

  • All 547 package tests passing
  • All 1418 server tests passing (including new adaptive thinking tests)
  • 0 lint errors
  • Successfully executed a feature with claude-opus-4-6 model
  • No remaining references to claude-opus-4-5-20251101 in codebase

🤖 Generated with Claude Code

Summary by CodeRabbit

  • New Features

    • Adaptive thinking support for Opus 4.6; GPT‑5.3‑Codex added as a reasoning-capable Codex model.
  • Updates

    • Opus bumped to 4.6 with much larger output capacity (up to 128K tokens); thinking-level options become model-aware; dependency updates for agent SDKs.
  • Documentation

    • Docs and examples updated to reference Opus 4.6 and new Codex model.
  • Tests

    • Unit tests adjusted/added for Opus 4.6 and adaptive thinking behavior.
  • Chores

    • Windows CLI path detection improved to handle NVM symlinks.

- Upgraded @openai/codex-sdk from version 0.97.0 to 0.98.0 in package-lock.json and package.json.
- Introduced new model 'GPT-5.3-Codex' with enhanced capabilities in codex-models.ts and related files.
- Updated descriptions for existing models to reflect their latest features and improvements.
- Adjusted Codex model configuration and display to include the new model and its attributes.

These changes enhance the Codex model offerings and ensure compatibility with the latest SDK version.
…upport

- Changed model identifier from `claude-opus-4-5-20251101` to `claude-opus-4-6` across various files, including documentation and code references.
- Updated the SDK to support adaptive thinking for Opus 4.6, allowing the model to determine its own reasoning depth.
- Enhanced the thinking level options to include 'adaptive' and adjusted related components to reflect this change.
- Updated tests to ensure compatibility with the new model and its features.

These changes improve the model's capabilities and user experience by leveraging adaptive reasoning.
coderabbitai bot commented on Feb 5, 2026

📝 Walkthrough

Upgrades Claude Opus alias to 4.6, adds an "adaptive" thinking level (omitting explicit token budgets for adaptive), introduces codex-gpt-5.3-codex, updates SDK dependencies, augments Windows CLI path detection for NVM symlinks, and updates UI/tests/docs to reflect these changes.

Changes

Cohort / File(s) Summary
Claude Opus 4.6 Migration
CLAUDE.md, docs/llm-shared-packages.md, docs/server/utilities.md, docs/server/providers.md, libs/model-resolver/README.md, libs/types/src/model.ts, apps/server/src/providers/provider-factory.ts, apps/server/src/providers/claude-provider.ts, apps/ui/src/lib/agent-context-parser.ts, apps/ui/docs/AGENT_ARCHITECTURE.md, apps/server/tests/unit/.../model-resolver.test.ts, apps/server/tests/unit/providers/claude-provider.test.ts, apps/server/tests/unit/providers/provider-factory.test.ts, libs/model-resolver/tests/resolver.test.ts
Replaced claude-opus-4-5-20251101 with claude-opus-4-6 across mappings, docs, and tests; updated Opus metadata (maxOutputTokens and description).
Adaptive Thinking Level Support
libs/types/src/settings.ts, libs/types/src/index.ts, apps/server/src/lib/sdk-options.ts, apps/server/src/providers/claude-provider.ts, apps/ui/src/components/views/board-view/shared/model-constants.ts, apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx, apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx, apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx, apps/server/tests/unit/lib/sdk-options.test.ts
Added adaptive thinking level and APIs isAdaptiveThinkingModel() / getThinkingLevelsForModel(). SDK now omits maxThinkingTokens for adaptive; UI components fetch model-specific thinking levels and normalize when switching models; tests added/updated for adaptive behavior.
Codex Model Additions & Updates
libs/types/src/codex-models.ts, libs/types/src/model-display.ts, libs/types/src/model.ts, apps/server/src/providers/codex-models.ts, apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx, libs/types/src/index.ts, apps/server/package.json
Added codex-gpt-5.3-codex (GPT-5.3-Codex), updated Codex descriptions and reasoning mappings; bumped @anthropic-ai/claude-agent-sdk and @openai/codex-sdk versions.
System Path Detection
libs/platform/src/system-paths.ts
Added helper to include NVM-for-Windows symlink CLI paths (reads NVM_SYMLINK) and appended symlinked paths for claude, codex, and opencode on Windows.
UI & Tests Adjustments
apps/ui/src/components/.../thinking-level-selector.tsx, apps/ui/src/components/.../phase-model-selector.tsx, apps/ui/src/lib/agent-context-parser.ts, various tests under apps/server/tests and libs/model-resolver/tests
UI components now source model-specific thinking levels; thinking-level normalization added when switching models; multiple tests updated to expect new model identifiers and adaptive behavior.
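The "SDK now omits maxThinkingTokens for adaptive" behavior can be sketched as below. Only the omission behavior and low's 1k budget are grounded in this PR; the other token values are assumed placeholders, and the function body is illustrative rather than the repo's sdk-options code:

```typescript
// Minimal sketch: an undefined budget means maxThinkingTokens is omitted
// entirely from the SDK options (for both 'none' and 'adaptive').
const THINKING_TOKEN_BUDGET: Record<string, number | undefined> = {
  none: undefined,
  low: 1_000,
  medium: 8_000,       // assumed value
  high: 16_000,        // assumed value
  ultrathink: 32_000,  // assumed value
  adaptive: undefined, // model decides its own reasoning depth
};

function buildThinkingOptions(level: string): { maxThinkingTokens?: number } {
  const budget = THINKING_TOKEN_BUDGET[level];
  return budget === undefined ? {} : { maxThinkingTokens: budget };
}

console.log(buildThinkingOptions('adaptive')); // → {}
```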

Sequence Diagram

sequenceDiagram
    participant User as User
    participant UI as UI (Selector)
    participant Types as Types (libs/types)
    participant Options as SDK Options
    participant Provider as Claude Provider
    participant API as Claude API

    User->>UI: Select model (e.g., Opus 4.6)
    UI->>Types: getThinkingLevelsForModel(model)
    Types-->>UI: ['none','adaptive'] or ['none','low','medium','high','ultrathink']
    UI->>User: Show available thinking levels

    User->>UI: Choose 'adaptive'
    UI->>Options: buildThinkingOptions(level='adaptive')
    Options->>Options: detect 'adaptive' -> omit maxThinkingTokens
    Options-->>Provider: {} (no maxThinkingTokens)

    Provider->>API: Send request (model: claude-opus-4-6, no maxThinkingTokens)
    API-->>Provider: Response (model-managed reasoning)
    Provider-->>UI: Deliver result

Estimated Code Review Effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

Suggested Labels

Enhancement, scope: ui

Poem

🐇 I hopped from 4.5 into 4.6 bright,
Adaptive thoughts let the model write,
Codex 5.3 hops in to help with code,
Symlinked paths now find the road,
A little rabbit cheers this update night.

🚥 Pre-merge checks: ✅ 3 passed | ❌ 1 failed

❌ Failed checks (1 warning)
  • Docstring Coverage ⚠️ — coverage is 71.43%, below the required 80.00% threshold. Write docstrings for the functions missing them to satisfy the coverage threshold.

✅ Passed checks (3)
  • Description Check — skipped; CodeRabbit's high-level summary is enabled.
  • Title Check — the title clearly and concisely summarizes the two main changes: adding support for Claude Opus 4.6 and GPT-5.3-Codex models.
  • Merge Conflict Detection — no merge conflicts detected when merging into v0.15.0rc.


No actionable comments were generated in the recent review. 🎉

🧹 Recent nitpick comments
apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx (1)

1592-1598: Consider extracting the thinking-level description map to reduce duplication.

The level-to-description mapping (none → "No extended thinking", low → "Light reasoning (1k tokens)", …, adaptive → "Model decides reasoning depth") is repeated verbatim in six places across mobile/desktop renders for Claude, provider, and potentially more in the future.

Extracting it into a shared THINKING_LEVEL_DESCRIPTIONS constant (alongside THINKING_LEVEL_LABELS in model-constants) would DRY this up and make future level additions a single-point change.

Also applies to: 1714-1720, 1321-1327, 1430-1436
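One possible shape for the suggested shared constant — the descriptions for none, low, and adaptive appear in the review comment itself; the remaining entries are assumed placeholders:

```typescript
// Hypothetical THINKING_LEVEL_DESCRIPTIONS constant (reviewer's suggested name).
const THINKING_LEVEL_DESCRIPTIONS: Record<string, string> = {
  none: 'No extended thinking',
  low: 'Light reasoning (1k tokens)',
  medium: 'Moderate reasoning',    // assumed wording
  high: 'Deep reasoning',          // assumed wording
  ultrathink: 'Maximum reasoning', // assumed wording
  adaptive: 'Model decides reasoning depth',
};

console.log(THINKING_LEVEL_DESCRIPTIONS.adaptive);
```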

libs/types/src/model.ts (1)

101-105: Avoid duplicating the opus model string — reference the canonical map instead.

DEFAULT_MODELS.claude hard-codes 'claude-opus-4-6', duplicating the value already in CLAUDE_CANONICAL_MAP['claude-opus']. Line 104 already follows the better pattern by referencing CODEX_MODEL_MAP.gpt53Codex.

♻️ Suggested fix
 export const DEFAULT_MODELS = {
-  claude: 'claude-opus-4-6',
+  claude: CLAUDE_CANONICAL_MAP['claude-opus'],
   cursor: 'cursor-auto', // Cursor's recommended default (with prefix)
   codex: CODEX_MODEL_MAP.gpt53Codex, // GPT-5.3-Codex is the latest frontier agentic coding model
 } as const;


gemini-code-assist bot commented

Summary of Changes

Hello @Shironex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces significant advancements in the platform's AI model capabilities by integrating the newest Claude Opus 4.6 and GPT-5.3-Codex models. It brings sophisticated features such as adaptive thinking for Claude Opus, allowing for more intelligent and efficient reasoning processes. The changes also encompass essential SDK updates and user interface enhancements to streamline model selection, ensuring the system leverages the most current and powerful AI technologies available.

Highlights

  • Claude Opus 4.6 Integration: The default Claude Opus model has been updated to claude-opus-4-6, featuring an increased maximum output token limit of 128K and the introduction of 'adaptive thinking'. This new thinking level allows the model to dynamically determine its own reasoning depth, optimizing performance.
  • GPT-5.3-Codex Model Support: Support for the latest frontier model, GPT-5.3-Codex, has been added. This includes its integration into the UI, reasoning effort capabilities, and updates to the underlying @openai/codex-sdk to version 0.98.0.
  • SDK Updates: The @anthropic-ai/claude-agent-sdk has been upgraded from 0.1.76 to 0.2.32, and the @openai/codex-sdk has been updated from 0.97.0 to 0.98.0, ensuring compatibility with the latest model features and improvements.
  • Dynamic Thinking Level UI/UX: The user interface for selecting thinking levels now dynamically adjusts based on the chosen model. For Claude Opus 4.6, only 'None' and 'Adaptive' options are presented, while other models retain the full range of manual thinking levels. This also includes auto-normalization of thinking levels when switching between adaptive and non-adaptive models.
  • Enhanced CLI Path Resolution for Windows: CLI tool path resolution on Windows has been improved by adding support for NVM for Windows (nvm4w) symlink paths, which enhances the discovery of claude, codex, and opencode executables installed via npm global under NVM.
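The auto-normalization described in the thinking-level highlight could look something like this — the function name and the 'medium' fallback are illustrative assumptions, not the PR's actual handleModelChange code:

```typescript
// Hypothetical sketch: clamp the current thinking level to whatever the
// target model supports when the user switches models.
function normalizeThinkingLevel(current: string, allowed: string[]): string {
  if (allowed.includes(current)) return current;
  // Switching to an adaptive-only model maps any manual level to 'adaptive';
  // switching away maps 'adaptive' to an assumed manual default.
  return allowed.includes('adaptive') ? 'adaptive' : 'medium';
}

console.log(normalizeThinkingLevel('high', ['none', 'adaptive'])); // → adaptive
```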


Changelog
  • CLAUDE.md
    • Updated Claude Opus model reference from claude-opus-4-5-20251101 to claude-opus-4-6.
  • apps/server/package.json
    • Updated @anthropic-ai/claude-agent-sdk to 0.2.32.
    • Updated @openai/codex-sdk to 0.98.0.
  • apps/server/src/lib/sdk-options.ts
    • Modified buildThinkingOptions to handle 'adaptive' thinking level by omitting maxThinkingTokens, allowing the model to decide its reasoning depth.
  • apps/server/src/providers/claude-provider.ts
    • Updated getAvailableModels to reflect Claude Opus 4.6 with 128K output tokens and an adaptive thinking description.
    • Adjusted maxThinkingTokens logic to support adaptive thinking for Opus 4.6.
  • apps/server/src/providers/codex-models.ts
    • Added GPT-5.3-Codex model definition.
    • Updated descriptions for existing Codex models (GPT-5.2-Codex, GPT-5.1-Codex-Max, GPT-5.1-Codex-Mini, GPT-5.2).
  • apps/server/src/providers/provider-factory.ts
    • Updated model ID reference in JSDoc for getProviderForModel to claude-opus-4-6.
  • apps/server/tests/unit/lib/model-resolver.test.ts
    • Updated expected model string for 'opus' alias to claude-opus-4-6 in resolver tests.
  • apps/server/tests/unit/lib/sdk-options.test.ts
    • Added new test cases for adaptive thinking, ensuring maxThinkingTokens is undefined for 'adaptive' and 'none' levels.
  • apps/server/tests/unit/providers/claude-provider.test.ts
    • Updated all test cases to use claude-opus-4-6 instead of the previous Opus model ID.
  • apps/server/tests/unit/providers/provider-factory.test.ts
    • Updated test cases to use claude-opus-4-6 for ClaudeProvider instantiation.
  • apps/ui/docs/AGENT_ARCHITECTURE.md
    • Updated the example model ID to claude-opus-4-6.
  • apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx
    • Implemented logic in handleModelChange to normalize thinking levels when switching between adaptive and non-adaptive models.
  • apps/ui/src/components/views/board-view/shared/model-constants.ts
    • Added 'adaptive' to THINKING_LEVELS and THINKING_LEVEL_LABELS.
  • apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx
    • Modified to use getThinkingLevelsForModel to dynamically display available thinking levels based on the selected model.
    • Updated descriptive text to explain adaptive thinking.
  • apps/ui/src/components/views/settings-view/model-defaults/phase-model-selector.tsx
    • Integrated getThinkingLevelsForModel to filter thinking level options.
    • Added description for 'adaptive' thinking in the UI.
  • apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx
    • Added GPT-5.3-Codex to CODEX_MODEL_INFO.
    • Included GPT-5.3-Codex in the supportsReasoningEffort function.
    • Updated descriptions for other Codex models.
  • apps/ui/src/lib/agent-context-parser.ts
    • Updated DEFAULT_MODEL to claude-opus-4-6.
    • Added specific formatting for opus-4-6 in formatModelName.
  • docs/llm-shared-packages.md
    • Updated Claude Opus model reference to claude-opus-4-6.
  • docs/server/providers.md
    • Updated Claude Opus model reference in documentation examples to claude-opus-4-6.
  • docs/server/utilities.md
    • Updated Claude Opus model reference in documentation examples for CLAUDE_MODEL_MAP, DEFAULT_MODELS, and resolveModelString outputs to claude-opus-4-6.
  • libs/model-resolver/README.md
    • Updated Claude Opus model reference in examples to claude-opus-4-6.
  • libs/model-resolver/tests/resolver.test.ts
    • Updated expected model string for claude-opus-4-6 in resolvePhaseModel tests.
  • libs/platform/src/system-paths.ts
    • Added NVM for Windows symlink paths to getClaudeCliPaths, getCodexCliPaths, and getOpenCodeCliPaths.
  • libs/types/src/codex-models.ts
    • Added codex-gpt-5.3-codex to CodexModelId type.
    • Added codex-gpt-5.3-codex configuration to CODEX_MODEL_CONFIG_MAP.
  • libs/types/src/index.ts
    • Exported new helper functions: isAdaptiveThinkingModel and getThinkingLevelsForModel.
  • libs/types/src/model-display.ts
    • Added GPT-5.3-Codex to CODEX_MODELS and getModelDisplayName.
    • Added 'adaptive' to THINKING_LEVELS and THINKING_LEVEL_LABELS.
  • libs/types/src/model.ts
    • Updated CLAUDE_CANONICAL_MAP, CLAUDE_MODEL_MAP, and DEFAULT_MODELS to reference claude-opus-4-6.
    • Added gpt53Codex to CODEX_MODEL_MAP and REASONING_CAPABLE_MODELS.
  • libs/types/src/settings.ts
    • Added 'adaptive' to the ThinkingLevel type.
    • Added 'adaptive' with undefined token budget to THINKING_TOKEN_BUDGET.
    • Implemented isAdaptiveThinkingModel and getThinkingLevelsForModel functions to manage adaptive thinking logic.
  • package-lock.json
    • Updated package versions for @anthropic-ai/claude-agent-sdk and @openai/codex-sdk.
Activity
  • The pull request introduces support for new AI models and updates existing SDKs.
  • Extensive test coverage is maintained, with all 547 package tests and 1418 server tests passing, including new adaptive thinking tests.
  • No lint errors were introduced during the changes.
  • The changes were generated with Claude Code, indicating AI assistance in the development process.

gemini-code-assist bot left a comment:

Code Review

This pull request introduces support for the new Claude Opus 4.6 and GPT-5.3-Codex models, along with updates to the corresponding SDKs. The changes are comprehensive, covering model maps, UI components, documentation, and tests to integrate the new models and their features, such as adaptive thinking for Opus 4.6. Overall, the implementation is solid. I've identified a couple of minor opportunities for improvement related to code duplication and ensuring consistency in model defaults.

Comment on lines 57 to 69
const nvmSymlink = process.env.NVM_SYMLINK;
const paths = [
  path.join(os.homedir(), '.local', 'bin', 'claude.exe'),
  path.join(appData, 'npm', 'claude.cmd'),
  path.join(appData, 'npm', 'claude'),
  path.join(appData, '.npm-global', 'bin', 'claude.cmd'),
  path.join(appData, '.npm-global', 'bin', 'claude'),
];
// nvm4w (NVM for Windows) symlink path
if (nvmSymlink) {
  paths.push(path.join(nvmSymlink, 'claude.cmd'), path.join(nvmSymlink, 'claude'));
}
return paths;
gemini-code-assist bot commented (severity: medium):

The logic for finding CLIs installed via NVM for Windows (nvm4w) by checking NVM_SYMLINK is duplicated in getCodexCliPaths and getOpenCodeCliPaths. To improve maintainability and reduce redundancy, consider extracting this into a helper function that can be reused across these three functions.

Shironex (Author) replied:
Fixed — extracted a getNvmWindowsCliPaths(cliName) helper that's now reused across getClaudeCliPaths, getCodexCliPaths, and getOpenCodeCliPaths.

supportsVision: true,
supportsTools: true,
tier: 'premium' as const,
default: true,
gemini-code-assist bot commented (severity: medium):

This correctly sets gpt53Codex as the new default model. However, there's a related constant in libs/types/src/model.ts that appears to have been missed. The DEFAULT_MODELS.codex is still set to CODEX_MODEL_MAP.gpt52Codex. Please update it to CODEX_MODEL_MAP.gpt53Codex for consistency across the codebase.

Shironex (Author) replied:
Fixed — updated DEFAULT_MODELS.codex to CODEX_MODEL_MAP.gpt53Codex and updated the comment.

@Shironex Shironex self-assigned this Feb 5, 2026
@Shironex Shironex added Testers-Requested Request for others to test an enhancement or bug fix/etc. Work-In-Progress Currently being addressed. Do Not Merge Use this label if something should not be merged. labels Feb 5, 2026
coderabbitai bot left a comment:
Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (2)
apps/server/package.json (1)

27-38: ⚠️ Potential issue | 🔴 Critical

Fix critical version discrepancies and missing SDK resolutions.

The specified versions have serious issues:

  • @anthropic-ai/claude-agent-sdk declares 0.2.32 in package.json, but this version does not exist on npm (latest public is 0.2.29). The lockfile shows 0.1.69 with specifier ^0.1.61—a significant downgrade from the declared version.
  • @openai/codex-sdk is not resolved in the lockfile, indicating a missing or unresolved dependency.

These mismatches will cause build failures and inconsistent environments. Verify the correct versions and ensure package.json and lockfile are synchronized.

apps/ui/src/lib/agent-context-parser.ts (1)

37-47: ⚠️ Potential issue | 🟡 Minor

Add display names for GPT‑5.3 Codex models.

With the new codex-gpt-5.3-codex entry, this formatter will fall back to generic labeling. Adding explicit cases keeps UI display consistent.

✅ Suggested addition
   // Codex/GPT models - specific formatting
+  if (model === 'codex-gpt-5.3-codex') return 'GPT-5.3 Codex';
+  if (model === 'codex-gpt-5.3') return 'GPT-5.3';
   if (model === 'codex-gpt-5.2-codex') return 'GPT-5.2 Codex';
   if (model === 'codex-gpt-5.2') return 'GPT-5.2';
🤖 Fix all issues with AI agents
In `@apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx`:
- Around line 8-23: The component calls getThinkingLevelsForModel with the raw
model prop (in ThinkingLevelSelector) which can be an alias like "opus" and
causes adaptive detection to fail; normalize the model string first (using
resolveModelString from model-config.ts or `@automaker/model-resolver`) and pass
the resolved canonical id to getThinkingLevelsForModel in all places where model
is used (e.g., the initial levels = ... call and any other calls inside
ThinkingLevelSelector), ensuring you default to empty string when
resolveModelString returns undefined.

In `@libs/types/src/codex-models.ts`:
- Around line 33-39: The resolver is passing the internal alias
"codex-gpt-5.3-codex" through to the Codex SDK; update the model-resolver logic
in libs/model-resolver/src/resolver.ts to strip the "codex-" provider prefix
before calling the Codex SDK by using the existing stripProviderPrefix utility
(same approach used for other prefixed providers). Locate the Codex branch in
the resolver (where Codex models are detected — around the code that imports
stripProviderPrefix and handles provider-specific mapping) and ensure you call
stripProviderPrefix(modelId) (or an equivalent helper) to produce
"gpt-5.3-codex" and pass that to the SDK call (e.g., wherever the Codex API
parameter or variable is set), leaving internal IDs unchanged elsewhere.
🧹 Nitpick comments (5)
apps/ui/docs/AGENT_ARCHITECTURE.md (1)

200-215: Consider documenting adaptive thinking support.

While the model update is correct, Opus 4.6 introduces adaptive thinking as a notable new feature (per PR objectives). Consider adding a brief note or example in this configuration section to help developers leverage this capability.

📝 Example documentation addition

You could add a comment or additional section after line 215:

   sandbox: {
     enabled: true,                // Sandboxed bash execution
     autoAllowBashIfSandboxed: true
   }
 }
+
+// Note: Claude Opus 4.6 supports adaptive thinking mode, which automatically
+// adjusts the thinking budget based on task complexity (max 128K output tokens).
libs/platform/src/system-paths.ts (1)

1141-1163: Consistent implementation, consider extracting helper to reduce duplication.

The NVM symlink logic is now duplicated across three functions (getClaudeCliPaths, getCodexCliPaths, getOpenCodeCliPaths). Consider extracting a small helper to DRY this up.

♻️ Optional: Extract NVM symlink helper

Add a helper function near the other private helpers:

/**
 * Get NVM for Windows symlink paths for a given CLI tool
 */
function getNvmSymlinkPaths(baseName: string): string[] {
  const nvmSymlink = process.env.NVM_SYMLINK;
  if (!nvmSymlink) return [];
  return [path.join(nvmSymlink, `${baseName}.cmd`), path.join(nvmSymlink, baseName)];
}

Then simplify each function:

 if (isWindows) {
   const appData = process.env.APPDATA || path.join(homeDir, 'AppData', 'Roaming');
   const localAppData = process.env.LOCALAPPDATA || path.join(homeDir, 'AppData', 'Local');
-  const nvmSymlink = process.env.NVM_SYMLINK;
-  const paths = [
+  return [
     // ... existing paths ...
-  ];
-  // nvm4w (NVM for Windows) symlink path
-  if (nvmSymlink) {
-    paths.push(path.join(nvmSymlink, 'opencode.cmd'), path.join(nvmSymlink, 'opencode'));
-  }
-  return paths;
+    // nvm4w (NVM for Windows) symlink path
+    ...getNvmSymlinkPaths('opencode'),
+  ];
 }
apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx (1)

23-60: Derive reasoning-effort support from model metadata to avoid drift.
The hardcoded reasoningModels list duplicates model capability data and will need manual updates whenever new models are added.

♻️ Suggested refactor (metadata-driven)
 interface CodexModelInfo {
   id: CodexModelId;
   label: string;
   description: string;
+  supportsReasoningEffort?: boolean;
 }

 const CODEX_MODEL_INFO: Record<CodexModelId, CodexModelInfo> = {
   'codex-gpt-5.3-codex': {
     id: 'codex-gpt-5.3-codex',
     label: 'GPT-5.3-Codex',
     description: 'Latest frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.2-codex': {
     id: 'codex-gpt-5.2-codex',
     label: 'GPT-5.2-Codex',
     description: 'Frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-max': {
     id: 'codex-gpt-5.1-codex-max',
     label: 'GPT-5.1-Codex-Max',
     description: 'Codex-optimized flagship for deep and fast reasoning',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-mini': {
     id: 'codex-gpt-5.1-codex-mini',
     label: 'GPT-5.1-Codex-Mini',
     description: 'Optimized for codex. Cheaper, faster, but less capable',
+    supportsReasoningEffort: false,
   },
   'codex-gpt-5.2': {
     id: 'codex-gpt-5.2',
     label: 'GPT-5.2',
     description: 'Latest frontier model with improvements across knowledge, reasoning and coding',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1': {
     id: 'codex-gpt-5.1',
     label: 'GPT-5.1',
     description: 'Great for coding and agentic tasks across domains',
+    supportsReasoningEffort: true,
   },
 };

-function supportsReasoningEffort(modelId: string): boolean {
-  const reasoningModels = [
-    'codex-gpt-5.3-codex',
-    'codex-gpt-5.2-codex',
-    'codex-gpt-5.1-codex-max',
-    'codex-gpt-5.2',
-    'codex-gpt-5.1',
-  ];
-  return reasoningModels.includes(modelId);
+function supportsReasoningEffort(modelId: CodexModelId): boolean {
+  return CODEX_MODEL_INFO[modelId]?.supportsReasoningEffort ?? false;
 }

Based on learnings: When implementing AI model capability flags (like thinking/reasoning support), always add per-model flags to model definitions and check the specific model being used, rather than assuming all models from a provider have the same capabilities.

Also applies to: 166-174

apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx (1)

266-281: Consider using the isAdaptiveThinkingModel() helper for consistency and maintainability.

The current inline checks duplicate the same logic as the shared helper function. Using isAdaptiveThinkingModel(entry.model) consolidates the adaptive model detection and aligns with how getThinkingLevelsForModel() also uses this helper—making it a single source of truth if the adaptation rules ever change.

♻️ Suggested update
-import { supportsReasoningEffort } from '@automaker/types';
+import { supportsReasoningEffort, isAdaptiveThinkingModel } from '@automaker/types';
-    const isNewModelAdaptive =
-      entry.model === 'claude-opus' ||
-      (typeof entry.model === 'string' && entry.model.includes('opus-4-6'));
+    const isNewModelAdaptive = isAdaptiveThinkingModel(entry.model);
apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx (1)

49-51: Reuse the computed levels for the helper text.
This avoids a second getThinkingLevelsForModel call and keeps the messaging in sync with the rendered options.

♻️ Suggested diff
-  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
+  const levels = getThinkingLevelsForModel(model ?? '');
+  const hasAdaptive = levels.includes('adaptive');
@@
-        {model && getThinkingLevelsForModel(model).includes('adaptive')
+        {model && hasAdaptive
           ? 'Adaptive thinking lets the model decide how much reasoning to use.'
           : 'Higher levels give more time to reason through complex problems.'}

Comment on lines +33 to +39
'codex-gpt-5.3-codex': {
  id: 'codex-gpt-5.3-codex',
  label: 'GPT-5.3-Codex',
  description: 'Latest frontier agentic coding model',
  hasThinking: true,
  supportsVision: true,
},
@coderabbitai coderabbitai bot Feb 5, 2026

⚠️ Potential issue | 🔴 Critical

🧩 Analysis chain

🏁 Script executed:

cat -n libs/types/src/codex-models.ts | head -50

Repository: AutoMaker-Org/automaker

Length of output: 1814


🌐 Web query:

OpenAI Codex SDK 0.98.0 GPT-5.3-Codex model identifier documentation

💡 Result:

For Codex CLI + Codex SDK (v0.98.0), the GPT‑5.3‑Codex model identifier is:

  • gpt-5.3-codex [1]

Where it’s documented / how to use it:

  • Codex changelog (entry for Codex CLI 0.98.0, dated 2026‑02‑05) shows switching with: codex --model gpt-5.3-codex [1]
  • OpenAI’s release post notes GPT‑5.3‑Codex is available across Codex surfaces (app/CLI/IDE/web) and that API access is “coming soon” [2]. For API-key workflows, the changelog says to keep using gpt-5.2-codex while API support rolls out [1].

Sources: [1], [2]


🏁 Script executed:

# Search for the model resolver function mentioned in guidelines
fd -t f -e ts -e tsx -e js -e jsx | xargs rg -l "resolveModelString|model.*resolver" | head -20

Repository: AutoMaker-Org/automaker

Length of output: 998


🏁 Script executed:

# Search for usages of the model ID in the codebase
rg "codex-gpt-5\.3-codex" --type ts --type tsx --type js --type jsx -B 3 -A 3

Repository: AutoMaker-Org/automaker

Length of output: 94


🏁 Script executed:

# Look for `@automaker/model-resolver` imports and usage
rg "@automaker/model-resolver|resolveModelString" --type ts --type tsx --type js --type jsx -A 2

Repository: AutoMaker-Org/automaker

Length of output: 94


🏁 Script executed:

# Read the model resolver implementation
cat libs/model-resolver/src/resolver.ts

Repository: AutoMaker-Org/automaker

Length of output: 9207


🏁 Script executed:

# Search for codex-gpt-5.3-codex usage without tsx filter
rg "codex-gpt-5\.3-codex" -B 3 -A 3

Repository: AutoMaker-Org/automaker

Length of output: 3214


🏁 Script executed:

# Check the model resolver tests
cat libs/model-resolver/tests/resolver.test.ts

Repository: AutoMaker-Org/automaker

Length of output: 20075


Fix the model resolver to strip the codex- prefix before passing to the Codex SDK API.

The model ID codex-gpt-5.3-codex is correctly defined as an internal alias, but the resolver passes it unchanged to the Codex SDK. The actual SDK specification expects gpt-5.3-codex without the prefix. Update libs/model-resolver/src/resolver.ts to strip the codex- prefix for Codex models before API calls, similar to how other prefixed models are handled. The resolver currently imports stripProviderPrefix but doesn't apply it to Codex models (lines 83-86).

🤖 Prompt for AI Agents
In `@libs/types/src/codex-models.ts` around lines 33 - 39, The resolver is passing
the internal alias "codex-gpt-5.3-codex" through to the Codex SDK; update the
model-resolver logic in libs/model-resolver/src/resolver.ts to strip the
"codex-" provider prefix before calling the Codex SDK by using the existing
stripProviderPrefix utility (same approach used for other prefixed providers).
Locate the Codex branch in the resolver (where Codex models are detected —
around the code that imports stripProviderPrefix and handles provider-specific
mapping) and ensure you call stripProviderPrefix(modelId) (or an equivalent
helper) to produce "gpt-5.3-codex" and pass that to the SDK call (e.g., wherever
the Codex API parameter or variable is set), leaving internal IDs unchanged
elsewhere.

Collaborator Author

This is a false positive — the codex- prefix is already stripped at the service layer before reaching the provider. See AgentService (line 393), AutoModeService (line 4192), IdeationService (line 252), etc. — they all call stripProviderPrefix() before passing the model to the provider. The CodexProvider even validates this with validateBareModelId(options.model, 'CodexProvider') at line 708, which throws if a prefix is present.

The resolver correctly passes through codex- prefixed models for routing purposes (so ProviderFactory.getProviderForModel() can identify the correct provider).
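For illustration, the layering described here can be sketched as follows. stripProviderPrefix and validateBareModelId are the helpers named in the reply, but these bodies are simplified assumptions, not the repository's actual implementations:

```typescript
// Routing layer keeps the "codex-" prefix; the service layer strips it
// before the provider call, and the provider asserts it received a bare id.
function stripProviderPrefix(modelId: string): string {
  // Simplified: only the codex- routing prefix is handled in this sketch.
  return modelId.startsWith('codex-') ? modelId.slice('codex-'.length) : modelId;
}

function validateBareModelId(modelId: string, caller: string): void {
  if (modelId.startsWith('codex-')) {
    throw new Error(`${caller}: expected a bare model id, got "${modelId}"`);
  }
}

// Service layer, mirroring the AgentService pattern described above:
const effectiveModel = 'codex-gpt-5.3-codex'; // prefixed id used for routing
const bareModel = stripProviderPrefix(effectiveModel); // -> "gpt-5.3-codex"
validateBareModelId(bareModel, 'CodexProvider'); // passes: prefix already gone
```

The point of the split is that the same string serves two audiences: the prefixed form tells the ProviderFactory which provider to route to, and the bare form is what the SDK actually accepts.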


🧩 Analysis chain

🏁 Script executed:

#!/bin/bash

# Check AgentService line 393 for stripProviderPrefix usage
echo "=== AgentService around line 393 ==="
find . -name "*.ts" -path "*/agent-service*" -o -name "agent.service.ts" | head -1 | xargs sed -n '380,405p' 2>/dev/null || echo "File not found with that pattern"

# Try alternative path
fd -t f "agent.*service" --extension ts | head -3 | while read file; do
  echo "Found: $file"
  sed -n '380,405p' "$file" 2>/dev/null | head -30
done

echo ""
echo "=== Search for stripProviderPrefix in services ==="
rg "stripProviderPrefix" --type ts -C 3 | head -100

Length of output: 10885


🏁 Script executed:

#!/bin/bash

# Check CodexProvider for validateBareModelId
echo "=== CodexProvider validateBareModelId usage ==="
fd -t f "codex.*provider" --extension ts | head -1 | xargs cat | grep -n "validateBareModelId" -A 3 -B 3

echo ""
echo "=== CodexProvider around line 708 ==="
fd -t f "codex.*provider" --extension ts | head -1 | xargs sed -n '700,720p'

Length of output: 1555


🏁 Script executed:

#!/bin/bash

# Check how ProviderFactory uses model IDs for routing
echo "=== ProviderFactory getProviderForModel ==="
fd -t f "provider.*factory" --extension ts | head -1 | xargs cat | grep -n "getProviderForModel" -A 20 -B 5

echo ""
echo "=== Check validateBareModelId implementation ==="
rg "function validateBareModelId|const validateBareModelId|validateBareModelId.*=" --type ts -A 10 -B 2

Length of output: 4494


@Shironex, you're absolutely right — I apologize for the incorrect review comment.

I can confirm from the code that the architecture correctly handles the codex- prefix:

  1. Resolver passes through codex-gpt-5.3-codex with the prefix intact (for routing)
  2. ProviderFactory uses the prefix to identify the correct provider (line 154: reg.canHandleModel?.(lowerModel))
  3. Service layer strips the prefix before calling the provider:
    • AgentService line 393: const bareModel = stripProviderPrefix(effectiveModel);
    • Same pattern in IdeationService, AutoModeService, simple-query-service, etc.
  4. CodexProvider validates it receives a bare model ID at line 708: validateBareModelId(options.model, 'CodexProvider')

The prefix stripping happens at the correct layer (services), not in the resolver. The resolver needs to keep the prefix for proper provider routing. Thank you for the correction!


🧬 Code Graph Analysis Results

[libs/types/src/index.ts:101]

type CodexModelId

[libs/types/src/index.ts:97]

getAllCodexModelIds

[libs/types/src/model.ts:108]

export type CodexModelId = (typeof CODEX_MODEL_MAP)[keyof typeof CODEX_MODEL_MAP];

[libs/types/src/model.ts:93-95]

export function getAllCodexModelIds(): CodexModelId[] {
  return CODEX_MODEL_IDS as CodexModelId[];
}

🧠 Learnings used
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-22T11:59:31.965Z
Learning: Applies to **/*.{ts,tsx,js,jsx} : Use resolveModelString() from automaker/model-resolver to convert model aliases (haiku, sonnet, opus) to full model names before making API calls

- Added `isAdaptiveThinkingModel` utility to improve model identification logic in the AddFeatureDialog.
- Updated the ThinkingLevelSelector to conditionally display information based on available thinking levels.
- Enhanced model name formatting in agent-context-parser to include 'GPT-5.3 Codex' for better clarity.

These changes improve the user experience by refining model handling and UI feedback related to adaptive thinking capabilities.
- Introduced a new event type 'dev-server:url-detected' to enhance event handling for the development server.
- This addition allows for better tracking and response to URL detection during server operations.

These changes improve the event system's capability to manage server-related events effectively.
Base automatically changed from v0.14.0rc to main February 13, 2026 18:47
@Shironex Shironex changed the base branch from main to v0.15.0rc February 15, 2026 15:28
Collaborator Author

@Shironex Shironex left a comment

GitChorus AI Review

This is a well-structured PR that adds Claude Opus 4.6 and GPT-5.3-Codex model support with good coverage across model maps, UI, tests, and documentation. The implementation of adaptive thinking is thoughtful with proper model-specific UI filtering via getThinkingLevelsForModel() and auto-normalization on model switch.

The most notable issue is that the adaptive thinking normalization was only added to add-feature-dialog.tsx but missed in edit-feature-dialog.tsx, which has the same handleModelChange pattern. This means editing an existing feature and switching models won't properly normalize thinking levels between adaptive and manual modes.
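A minimal sketch of the normalization both dialogs would need on model switch. The helper names follow the PR description, and the fallback level is an assumption, not the repository's actual choice:

```typescript
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'ultrathink' | 'adaptive';

// Mirrors the inline detection quoted elsewhere in this review.
function isAdaptiveThinkingModel(model: string): boolean {
  return model === 'claude-opus' || model.includes('opus-4-6');
}

// Normalize the thinking level whenever the model changes, so an
// adaptive-only model never keeps a manual budget and vice versa.
function normalizeThinkingLevel(model: string, level: ThinkingLevel): ThinkingLevel {
  if (isAdaptiveThinkingModel(model)) {
    // Opus 4.6 only supports 'none' and 'adaptive'.
    return level === 'none' ? 'none' : 'adaptive';
  }
  // Other models have no 'adaptive'; 'medium' is an assumed manual default.
  return level === 'adaptive' ? 'medium' : level;
}
```

Calling this from both dialogs' handleModelChange would keep the two code paths consistent.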

There's also a semantic question about whether it is truly correct for the adaptive and none thinking levels to produce identical SDK options (both return {}): the MEMORY.md says "just don't set maxThinkingTokens" for adaptive, but it would be good to confirm this is the right SDK behavior.

The DEFAULT_MODELS.codex not being updated to the new GPT-5.3-Codex model (while it's marked as default: true in the models list) is a minor inconsistency worth addressing.

Overall, the code quality is good with proper test coverage, clean documentation updates, and sensible architecture. The NVM symlink Windows fix is a nice bonus improvement.

Quality Score: 7/10 ⭐⭐⭐⭐⭐⭐⭐☆☆☆

Findings Summary

1. 🟠 Major, Logic: Missing adaptive thinking normalization in edit-feature-dialog (edit-feature-dialog.tsx:197)
2. 🟠 Major, Logic: Adaptive and none thinking levels produce identical SDK configuration (sdk-options.ts:265)
3. 🟡 Minor, Logic: Provider model thinking level resolution uses fragile 'opus' string check (phase-model-selector.tsx:1298)
4. 🟡 Minor, Logic: Redundant ternary in thinking level selector (thinking-level-selector.tsx:23)
5. 🟡 Minor, Codebase-fit: Duplicated supportsReasoningEffort function diverges from shared version (codex-model-configuration.tsx:165)
6. 🟡 Minor, Logic: DEFAULT_MODELS.codex not updated to new GPT-5.3-Codex model (model.ts:101)
7. 🔵 Nit, Style: Unrelated event type addition bundled in model update PR (event.ts:49)
8. 🔵 Nit, Codebase-fit: Adaptive thinking level added to global arrays despite being model-specific (model-constants.ts:174)

via GitChorus

Comments Not Placed Inline

The following findings could not be placed as inline comments (line not in diff):

  • apps/ui/src/components/views/board-view/dialogs/edit-feature-dialog.tsx:197 — File "apps/ui/src/components/views/board-view/dialogs/edit-feature-dialog.tsx" not found in diff

function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
  if (!thinkingLevel || thinkingLevel === 'none') {
    return {};
  }
Collaborator Author

Warning

Major - Logic: Adaptive and none thinking levels produce identical SDK configuration

When thinkingLevel is 'adaptive', buildThinkingOptions returns an empty object {} — the same as when thinking is disabled ('none'). This means 'adaptive' and 'none' produce identical SDK options. The PR description says adaptive thinking means 'just don't set maxThinkingTokens - model uses adaptive by default', which aligns with the implementation. However, there's no way for the SDK to distinguish between 'thinking off' and 'adaptive thinking'. If the SDK requires some indication (e.g., a different parameter) to enable adaptive thinking vs. simply disabling thinking, this could be a bug. Verify with the Claude Agent SDK 0.2.32 documentation that omitting maxThinkingTokens truly enables adaptive thinking rather than disabling thinking entirely.

Problematic code:

if (thinkingLevel === 'adaptive') {
  logger.debug(
    `buildThinkingOptions: thinkingLevel="adaptive" -> no maxThinkingTokens (model decides)`
  );
  return {};
}

Suggested fix:

// Verify that the Claude Agent SDK 0.2.32 uses adaptive thinking by default
// when maxThinkingTokens is omitted. If the SDK needs an explicit signal,
// this should be updated. For example, if there's a `thinking` option:
//
// if (thinkingLevel === 'adaptive') {
//   return { thinking: 'adaptive' }; // or whatever the SDK expects
// }

Collaborator Author

Good catch. Verified against the Claude Agent SDK docs — maxThinkingTokens defaults to undefined. For Opus 4.6, omitting it enables adaptive thinking (the model's default behavior). For other models, omitting it means no thinking.

The behavior is intentionally the same for both 'none' and 'adaptive' returning {} — the difference is at the UI level: Opus 4.6 only shows 'None' and 'Adaptive' options, so users can't accidentally set manual thinking budgets on a model that doesn't support them. When a user picks 'Adaptive', the empty return signals 'use the model default' which IS adaptive thinking for Opus 4.6.

That said, the 'none' case for Opus 4.6 does mean we can't explicitly disable thinking for that model through the SDK. This is a known SDK limitation — there's no thinking: false option. We could set maxThinkingTokens: 0 but it's unclear if the SDK supports that. Leaving as-is for now since the UI guides users correctly.
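Under that reading of the SDK, the whole helper reduces to a small function. This sketch uses illustrative token budgets, not the repository's actual values:

```typescript
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'ultrathink' | 'adaptive';

// Illustrative budgets only; the real mapping lives in sdk-options.ts.
const THINKING_BUDGETS: Partial<Record<ThinkingLevel, number>> = {
  low: 4000,
  medium: 10000,
  high: 32000,
  ultrathink: 64000,
};

function buildThinkingOptions(thinkingLevel?: ThinkingLevel): { maxThinkingTokens?: number } {
  // 'none' and 'adaptive' both omit maxThinkingTokens: for Opus 4.6 the
  // model default IS adaptive thinking; for other models it means no thinking.
  if (!thinkingLevel || thinkingLevel === 'none' || thinkingLevel === 'adaptive') {
    return {};
  }
  return { maxThinkingTokens: THINKING_BUDGETS[thinkingLevel] };
}
```

The ambiguity the reviewer flags is visible here: the function alone cannot distinguish "off" from "adaptive", so the distinction has to be enforced at the UI layer.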

@@ -1296,7 +1296,9 @@ export function PhaseModelSelector({
<div className="px-2 py-1 text-xs font-medium text-muted-foreground">
Thinking Level
</div>
Collaborator Author

Important

Minor - Logic: Provider model thinking level resolution uses fragile 'opus' string check

The check model.mapsToClaudeModel === 'opus' is used to determine if the model is an Opus model for adaptive thinking. But this only handles provider models that explicitly map to 'opus'. There are 4 places in this file that render thinking levels — 2 use model.mapsToClaudeModel (for provider models like GLM/MiniMax), and 2 use model.id directly (for native Claude models). The mapsToClaudeModel === 'opus' check is a fragile heuristic: future Opus versions that aren't adaptive or non-Opus models that are adaptive would break. However, for the current implementation this is acceptable.

Problematic code:

{getThinkingLevelsForModel(
                model.mapsToClaudeModel === 'opus' ? 'claude-opus' : ''
              ).map((level) => (

Suggested fix:

// Consider passing the resolved model string instead of checking mapsToClaudeModel:
// This is minor but would be more robust:
{getThinkingLevelsForModel(
  model.mapsToClaudeModel === 'opus' 
    ? 'claude-opus' 
    : model.id || ''
).map((level) => (

Collaborator Author

Fixed — now falls back to model.id || '' instead of just '' for non-opus provider models. This makes the check more robust if future models also support adaptive thinking.

  model,
}: ThinkingLevelSelectorProps) {
  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');

Collaborator Author

Important

Minor - Logic: Redundant ternary in thinking level selector

When model is undefined, getThinkingLevelsForModel('') is called. Since '' does not include 'opus-4-6' and is not 'claude-opus', this returns the full manual thinking levels. This is correct behavior but the ternary is redundant — getThinkingLevelsForModel('') and getThinkingLevelsForModel(undefined) would both work if the function accepted string | undefined. The current implementation works correctly but is slightly confusing.

Problematic code:

const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');

Suggested fix:

// Simplify:
const levels = getThinkingLevelsForModel(model || '');

Collaborator Author

Fixed — simplified to getThinkingLevelsForModel(model || '').

@@ -160,6 +165,7 @@ export function CodexModelConfiguration({

Collaborator Author

Important

Minor - Codebase-fit: Duplicated supportsReasoningEffort function diverges from shared version

This file has a local supportsReasoningEffort function that duplicates the shared supportsReasoningEffort from @automaker/types. The shared version uses REASONING_CAPABLE_MODELS Set for O(1) lookup. The local version uses a hardcoded array and must be manually kept in sync. The PR updated the local version to include the new model, which is correct, but the duplication increases maintenance burden and risk of divergence.

Problematic code:

function supportsReasoningEffort(modelId: string): boolean {
  const reasoningModels = [
    'codex-gpt-5.3-codex',
    'codex-gpt-5.2-codex',
    'codex-gpt-5.1-codex-max',
    'codex-gpt-5.2',
    'codex-gpt-5.1',
  ];
  return reasoningModels.includes(modelId);
}

Suggested fix:

// Replace local function with the shared one from @automaker/types:
import { supportsReasoningEffort } from '@automaker/types';
// Remove the local function definition entirely

Collaborator Author

Fixed — replaced the local supportsReasoningEffort function with the shared version from @automaker/types which uses the REASONING_CAPABLE_MODELS Set.
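For reference, the shared Set-based helper described here can be sketched as follows. The member list mirrors the array quoted in the finding; the actual contents live in @automaker/types:

```typescript
// Set-backed capability check: one source of truth, O(1) membership lookup,
// versus a per-call array scan that must be kept in sync by hand.
const REASONING_CAPABLE_MODELS = new Set<string>([
  'codex-gpt-5.3-codex',
  'codex-gpt-5.2-codex',
  'codex-gpt-5.1-codex-max',
  'codex-gpt-5.2',
  'codex-gpt-5.1',
]);

function supportsReasoningEffort(modelId: string): boolean {
  return REASONING_CAPABLE_MODELS.has(modelId);
}
```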

@@ -96,7 +99,7 @@ export function getAllCodexModelIds(): CodexModelId[] {
 * Uses canonical prefixed IDs for consistent routing.
 */
export const DEFAULT_MODELS = {
Collaborator Author

Important

Minor - Logic: DEFAULT_MODELS.codex not updated to new GPT-5.3-Codex model

The PR adds GPT-5.3-Codex as the new 'Latest frontier agentic coding model' and marks it as default: true in CODEX_MODELS (codex-models.ts), but DEFAULT_MODELS.codex still points to gpt52Codex. The comment 'GPT-5.2-Codex is the most advanced agentic coding model' is now inaccurate since GPT-5.3-Codex was added as a higher-tier model. Consider updating the default to the new model, or at minimum fixing the comment.

Problematic code:

codex: CODEX_MODEL_MAP.gpt52Codex, // GPT-5.2-Codex is the most advanced agentic coding model

Suggested fix:

export const DEFAULT_MODELS = {
  claude: 'claude-opus-4-6',
  cursor: 'cursor-auto',
  codex: CODEX_MODEL_MAP.gpt53Codex, // GPT-5.3-Codex is the latest frontier agentic coding model
} as const;

Collaborator Author

Fixed — updated DEFAULT_MODELS.codex to CODEX_MODEL_MAP.gpt53Codex with updated comment.

| 'dev-server:started'
| 'dev-server:output'
| 'dev-server:stopped'
| 'dev-server:url-detected'
Collaborator Author

Note

Nit - Style: Unrelated event type addition bundled in model update PR

A new event type dev-server:url-detected was added in this PR, but it's unrelated to the Claude Opus 4.6 / GPT-5.3-Codex model changes described in the PR title and description. This appears to be an unrelated change bundled into the PR. While it's a valid addition (there's code in dev-server-service.ts that emits this event), it would be better practice to separate unrelated changes into their own PRs.

Problematic code:

| 'dev-server:url-detected'

Suggested fix:

// Consider moving this to a separate PR, or at minimum mentioning it in the PR description

Collaborator Author

Acknowledged — this was bundled from a merge commit. Will keep it in this PR since it's already here and the emitting code exists, but noted for future PRs.

'none',
'low',
'medium',
'high',
Collaborator Author

Note

Nit - Codebase-fit: Adaptive thinking level added to global arrays despite being model-specific

The global THINKING_LEVELS array now includes 'adaptive' at the end, but this array is no longer used by thinking-level-selector.tsx (which now uses getThinkingLevelsForModel). It was previously referenced via an import in phase-model-selector.tsx, but that import was removed in the diff. The array in model-display.ts also has 'adaptive' appended. Having 'adaptive' in the global array means that if any other component iterates over all thinking levels, it would show 'adaptive' for all models, not just Opus 4.6. This is potentially confusing but currently harmless, since the key consumers were updated.

Problematic code:

export const THINKING_LEVELS: ThinkingLevel[] = [
  'none',
  'low',
  'medium',
  'high',
  'ultrathink',
  'adaptive',
];

Suggested fix:

// This is acceptable for now, but consider whether 'adaptive' belongs
// in the global THINKING_LEVELS array or if it should only appear via
// getThinkingLevelsForModel() which is model-aware

Collaborator Author

Acknowledged — the global THINKING_LEVELS array includes 'adaptive' for completeness (e.g., serialization/deserialization), but the key UI consumers now use getThinkingLevelsForModel() which is model-aware. Acceptable for now.

Shironex and others added 2 commits February 15, 2026 16:49
- Extract getNvmWindowsCliPaths() helper to DRY up NVM_SYMLINK logic
- Update DEFAULT_MODELS.codex to gpt53Codex
- Simplify redundant ternary in thinking-level-selector
- Replace local supportsReasoningEffort with shared import from @automaker/types
- Use model.id fallback in phase-model-selector thinking level resolution

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
@Shironex Shironex merged commit 67ebf8c into v0.15.0rc Feb 15, 2026
10 checks passed
@Shironex Shironex deleted the feat/new-claude-and-codex-models branch February 15, 2026 16:16