feat: add Claude Opus 4.6 and GPT-5.3-Codex model support #757
Conversation
- Upgraded @openai/codex-sdk from version 0.77.0 to 0.98.0 in package-lock.json and package.json.
- Introduced new model 'GPT-5.3-Codex' with enhanced capabilities in codex-models.ts and related files.
- Updated descriptions for existing models to reflect their latest features and improvements.
- Adjusted Codex model configuration and display to include the new model and its attributes.

These changes enhance the Codex model offerings and ensure compatibility with the latest SDK version.
…upport

- Changed model identifier from `claude-opus-4-5-20251101` to `claude-opus-4-6` across various files, including documentation and code references.
- Updated the SDK to support adaptive thinking for Opus 4.6, allowing the model to determine its own reasoning depth.
- Enhanced the thinking level options to include 'adaptive' and adjusted related components to reflect this change.
- Updated tests to ensure compatibility with the new model and its features.

These changes improve the model's capabilities and user experience by leveraging adaptive reasoning.
📝 Walkthrough

Upgrades the Claude Opus alias to 4.6, adds an "adaptive" thinking level (omitting explicit token budgets for adaptive), introduces codex-gpt-5.3-codex, updates SDK dependencies, augments Windows CLI path detection for NVM symlinks, and updates UI/tests/docs to reflect these changes.
Sequence Diagram

```mermaid
sequenceDiagram
    participant User as User
    participant UI as UI (Selector)
    participant Types as Types (libs/types)
    participant Options as SDK Options
    participant Provider as Claude Provider
    participant API as Claude API
    User->>UI: Select model (e.g., Opus 4.6)
    UI->>Types: getThinkingLevelsForModel(model)
    Types-->>UI: ['none','adaptive'] or ['none','low','medium','high','ultrathink']
    UI->>User: Show available thinking levels
    User->>UI: Choose 'adaptive'
    UI->>Options: buildThinkingOptions(level='adaptive')
    Options->>Options: detect 'adaptive' -> omit maxThinkingTokens
    Options-->>Provider: {} (no maxThinkingTokens)
    Provider->>API: Send request (model: claude-opus-4-6, no maxThinkingTokens)
    API-->>Provider: Response (model-managed reasoning)
    Provider-->>UI: Deliver result
```
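The options step in the diagram is compact enough to sketch. A minimal TypeScript sketch, assuming illustrative token budgets (the real numbers live in `sdk-options.ts` and are not quoted in this thread):

```ts
import type { Options } from '@anthropic-ai/claude-agent-sdk';

type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'ultrathink' | 'adaptive';

// Illustrative budgets only; the actual values are defined in sdk-options.ts.
const THINKING_BUDGETS: Record<string, number> = {
  low: 4_000,
  medium: 10_000,
  high: 20_000,
  ultrathink: 32_000,
};

function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
  // 'none' and 'adaptive' both omit maxThinkingTokens: for Opus 4.6,
  // omission means the model manages its own reasoning budget.
  if (!thinkingLevel || thinkingLevel === 'none' || thinkingLevel === 'adaptive') {
    return {};
  }
  return { maxThinkingTokens: THINKING_BUDGETS[thinkingLevel] };
}
```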
Estimated Code Review Effort: 🎯 3 (Moderate) | ⏱️ ~20 minutes
🚥 Pre-merge checks: 3 passed ✅, 1 failed ❌ (1 warning)
No actionable comments were generated in the recent review. 🎉
Summary of Changes

Hello @Shironex, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request introduces significant advancements in the platform's AI model capabilities by integrating the newest Claude Opus 4.6 and GPT-5.3-Codex models. It brings sophisticated features such as adaptive thinking for Claude Opus, allowing for more intelligent and efficient reasoning processes. The changes also encompass essential SDK updates and user interface enhancements to streamline model selection, ensuring the system leverages the most current and powerful AI technologies available.
Code Review
This pull request introduces support for the new Claude Opus 4.6 and GPT-5.3-Codex models, along with updates to the corresponding SDKs. The changes are comprehensive, covering model maps, UI components, documentation, and tests to integrate the new models and their features, such as adaptive thinking for Opus 4.6. Overall, the implementation is solid. I've identified a couple of minor opportunities for improvement related to code duplication and ensuring consistency in model defaults.
libs/platform/src/system-paths.ts
```ts
const nvmSymlink = process.env.NVM_SYMLINK;
const paths = [
  path.join(os.homedir(), '.local', 'bin', 'claude.exe'),
  path.join(appData, 'npm', 'claude.cmd'),
  path.join(appData, 'npm', 'claude'),
  path.join(appData, '.npm-global', 'bin', 'claude.cmd'),
  path.join(appData, '.npm-global', 'bin', 'claude'),
];
// nvm4w (NVM for Windows) symlink path
if (nvmSymlink) {
  paths.push(path.join(nvmSymlink, 'claude.cmd'), path.join(nvmSymlink, 'claude'));
}
return paths;
```
Fixed — extracted a getNvmWindowsCliPaths(cliName) helper that's now reused across getClaudeCliPaths, getCodexCliPaths, and getOpenCodeCliPaths.
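A minimal sketch of what such a helper can look like, inferred from the snippet above; the exact implementation in system-paths.ts may differ:

```ts
import * as path from 'path';

/**
 * nvm4w (NVM for Windows) exposes the active Node install directory via
 * NVM_SYMLINK; globally installed CLIs land there as <name>.cmd / <name>.
 */
function getNvmWindowsCliPaths(cliName: string): string[] {
  const nvmSymlink = process.env.NVM_SYMLINK;
  if (!nvmSymlink) return [];
  return [path.join(nvmSymlink, `${cliName}.cmd`), path.join(nvmSymlink, cliName)];
}

// Usage inside each path-list builder, e.g.:
// paths.push(...getNvmWindowsCliPaths('claude'));
```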
```ts
  supportsVision: true,
  supportsTools: true,
  tier: 'premium' as const,
  default: true,
```
This correctly sets gpt53Codex as the new default model. However, there's a related constant in libs/types/src/model.ts that appears to have been missed. The DEFAULT_MODELS.codex is still set to CODEX_MODEL_MAP.gpt52Codex. Please update it to CODEX_MODEL_MAP.gpt53Codex for consistency across the codebase.
Fixed — updated DEFAULT_MODELS.codex to CODEX_MODEL_MAP.gpt53Codex and updated the comment.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (2)
apps/server/package.json (1)
27-38: ⚠️ Potential issue | 🔴 Critical

Fix critical version discrepancies and missing SDK resolutions.

The specified versions have serious issues:

- `@anthropic-ai/claude-agent-sdk` declares `0.2.32` in package.json, but this version does not exist on npm (latest public is `0.2.29`). The lockfile shows `0.1.69` with specifier `^0.1.61`, a significant downgrade from the declared version.
- `@openai/codex-sdk` is not resolved in the lockfile, indicating a missing or unresolved dependency.

These mismatches will cause build failures and inconsistent environments. Verify the correct versions and ensure package.json and lockfile are synchronized.
apps/ui/src/lib/agent-context-parser.ts (1)
37-47: ⚠️ Potential issue | 🟡 Minor

Add display names for GPT-5.3 Codex models.

With the new `codex-gpt-5.3-codex` entry, this formatter will fall back to generic labeling. Adding explicit cases keeps UI display consistent.

✅ Suggested addition

```diff
 // Codex/GPT models - specific formatting
+if (model === 'codex-gpt-5.3-codex') return 'GPT-5.3 Codex';
+if (model === 'codex-gpt-5.3') return 'GPT-5.3';
 if (model === 'codex-gpt-5.2-codex') return 'GPT-5.2 Codex';
 if (model === 'codex-gpt-5.2') return 'GPT-5.2';
```
🤖 Fix all issues with AI agents
In `@apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx`:
- Around line 8-23: The component calls getThinkingLevelsForModel with the raw
model prop (in ThinkingLevelSelector) which can be an alias like "opus" and
causes adaptive detection to fail; normalize the model string first (using
resolveModelString from model-config.ts or `@automaker/model-resolver`) and pass
the resolved canonical id to getThinkingLevelsForModel in all places where model
is used (e.g., the initial levels = ... call and any other calls inside
ThinkingLevelSelector), ensuring you default to empty string when
resolveModelString returns undefined.
In `@libs/types/src/codex-models.ts`:
- Around line 33-39: The resolver is passing the internal alias
"codex-gpt-5.3-codex" through to the Codex SDK; update the model-resolver logic
in libs/model-resolver/src/resolver.ts to strip the "codex-" provider prefix
before calling the Codex SDK by using the existing stripProviderPrefix utility
(same approach used for other prefixed providers). Locate the Codex branch in
the resolver (where Codex models are detected — around the code that imports
stripProviderPrefix and handles provider-specific mapping) and ensure you call
stripProviderPrefix(modelId) (or an equivalent helper) to produce
"gpt-5.3-codex" and pass that to the SDK call (e.g., wherever the Codex API
parameter or variable is set), leaving internal IDs unchanged elsewhere.
🧹 Nitpick comments (5)
apps/ui/docs/AGENT_ARCHITECTURE.md (1)
200-215: Consider documenting adaptive thinking support.

While the model update is correct, Opus 4.6 introduces adaptive thinking as a notable new feature (per PR objectives). Consider adding a brief note or example in this configuration section to help developers leverage this capability.
📝 Example documentation addition
You could add a comment or additional section after line 215:
```diff
   sandbox: {
     enabled: true, // Sandboxed bash execution
     autoAllowBashIfSandboxed: true
   }
 }
+
+// Note: Claude Opus 4.6 supports adaptive thinking mode, which automatically
+// adjusts the thinking budget based on task complexity (max 128K output tokens).
```

libs/platform/src/system-paths.ts (1)
1141-1163: Consistent implementation, consider extracting helper to reduce duplication.

The NVM symlink logic is now duplicated across three functions (`getClaudeCliPaths`, `getCodexCliPaths`, `getOpenCodeCliPaths`). Consider extracting a small helper to DRY this up.

♻️ Optional: Extract NVM symlink helper
Add a helper function near the other private helpers:
```ts
/**
 * Get NVM for Windows symlink paths for a given CLI tool
 */
function getNvmSymlinkPaths(baseName: string): string[] {
  const nvmSymlink = process.env.NVM_SYMLINK;
  if (!nvmSymlink) return [];
  return [path.join(nvmSymlink, `${baseName}.cmd`), path.join(nvmSymlink, baseName)];
}
```

Then simplify each function:
```diff
 if (isWindows) {
   const appData = process.env.APPDATA || path.join(homeDir, 'AppData', 'Roaming');
   const localAppData = process.env.LOCALAPPDATA || path.join(homeDir, 'AppData', 'Local');
-  const nvmSymlink = process.env.NVM_SYMLINK;
-  const paths = [
+  return [
     // ... existing paths ...
-  ];
-  // nvm4w (NVM for Windows) symlink path
-  if (nvmSymlink) {
-    paths.push(path.join(nvmSymlink, 'opencode.cmd'), path.join(nvmSymlink, 'opencode'));
-  }
-  return paths;
+    // nvm4w (NVM for Windows) symlink path
+    ...getNvmSymlinkPaths('opencode'),
+  ];
 }
```

apps/ui/src/components/views/settings-view/providers/codex-model-configuration.tsx (1)
23-60: Derive reasoning-effort support from model metadata to avoid drift.

The hardcoded `reasoningModels` list duplicates model capability data and will need manual updates whenever new models are added.

♻️ Suggested refactor (metadata-driven)
```diff
 interface CodexModelInfo {
   id: CodexModelId;
   label: string;
   description: string;
+  supportsReasoningEffort?: boolean;
 }

 const CODEX_MODEL_INFO: Record<CodexModelId, CodexModelInfo> = {
   'codex-gpt-5.3-codex': {
     id: 'codex-gpt-5.3-codex',
     label: 'GPT-5.3-Codex',
     description: 'Latest frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.2-codex': {
     id: 'codex-gpt-5.2-codex',
     label: 'GPT-5.2-Codex',
     description: 'Frontier agentic coding model',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-max': {
     id: 'codex-gpt-5.1-codex-max',
     label: 'GPT-5.1-Codex-Max',
     description: 'Codex-optimized flagship for deep and fast reasoning',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1-codex-mini': {
     id: 'codex-gpt-5.1-codex-mini',
     label: 'GPT-5.1-Codex-Mini',
     description: 'Optimized for codex. Cheaper, faster, but less capable',
+    supportsReasoningEffort: false,
   },
   'codex-gpt-5.2': {
     id: 'codex-gpt-5.2',
     label: 'GPT-5.2',
     description: 'Latest frontier model with improvements across knowledge, reasoning and coding',
+    supportsReasoningEffort: true,
   },
   'codex-gpt-5.1': {
     id: 'codex-gpt-5.1',
     label: 'GPT-5.1',
     description: 'Great for coding and agentic tasks across domains',
+    supportsReasoningEffort: true,
   },
 };

-function supportsReasoningEffort(modelId: string): boolean {
-  const reasoningModels = [
-    'codex-gpt-5.3-codex',
-    'codex-gpt-5.2-codex',
-    'codex-gpt-5.1-codex-max',
-    'codex-gpt-5.2',
-    'codex-gpt-5.1',
-  ];
-  return reasoningModels.includes(modelId);
+function supportsReasoningEffort(modelId: CodexModelId): boolean {
+  return CODEX_MODEL_INFO[modelId]?.supportsReasoningEffort ?? false;
 }
```

Based on learnings: When implementing AI model capability flags (like thinking/reasoning support), always add per-model flags to model definitions and check the specific model being used, rather than assuming all models from a provider have the same capabilities.
Also applies to: 166-174
apps/ui/src/components/views/board-view/dialogs/add-feature-dialog.tsx (1)
266-281: Consider using the `isAdaptiveThinkingModel()` helper for consistency and maintainability.

The current inline checks duplicate the same logic as the shared helper function. Using `isAdaptiveThinkingModel(entry.model)` consolidates the adaptive model detection and aligns with how `getThinkingLevelsForModel()` also uses this helper, making it a single source of truth if the adaptation rules ever change.

♻️ Suggested update

```diff
-import { supportsReasoningEffort } from '@automaker/types';
+import { supportsReasoningEffort, isAdaptiveThinkingModel } from '@automaker/types';
```

```diff
-  const isNewModelAdaptive =
-    entry.model === 'claude-opus' ||
-    (typeof entry.model === 'string' && entry.model.includes('opus-4-6'));
+  const isNewModelAdaptive = isAdaptiveThinkingModel(entry.model);
```

apps/ui/src/components/views/board-view/shared/thinking-level-selector.tsx (1)
49-51: Reuse the computed `levels` for the helper text.

This avoids a second `getThinkingLevelsForModel` call and keeps the messaging in sync with the rendered options.

♻️ Suggested diff

```diff
-  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
+  const levels = getThinkingLevelsForModel(model ?? '');
+  const hasAdaptive = levels.includes('adaptive');
 @@
-  {model && getThinkingLevelsForModel(model).includes('adaptive')
+  {model && hasAdaptive
     ? 'Adaptive thinking lets the model decide how much reasoning to use.'
     : 'Higher levels give more time to reason through complex problems.'}
```
```ts
'codex-gpt-5.3-codex': {
  id: 'codex-gpt-5.3-codex',
  label: 'GPT-5.3-Codex',
  description: 'Latest frontier agentic coding model',
  hasThinking: true,
  supportsVision: true,
},
```
🧩 Analysis chain
🏁 Script executed:
```bash
cat -n libs/types/src/codex-models.ts | head -50
```

Repository: AutoMaker-Org/automaker
Length of output: 1814
🌐 Web query:
OpenAI Codex SDK 0.98.0 GPT-5.3-Codex model identifier documentation
💡 Result:
For Codex CLI + Codex SDK (v0.98.0), the GPT-5.3-Codex model identifier is: `gpt-5.3-codex` [1]

Where it's documented / how to use it:

- The Codex changelog (entry for Codex CLI 0.98.0, dated 2026-02-05) shows switching with `codex --model gpt-5.3-codex` [1]
- OpenAI's release post notes GPT-5.3-Codex is available across Codex surfaces (app/CLI/IDE/web) and that API access is "coming soon" [2]. For API-key workflows, the changelog says to keep using `gpt-5.2-codex` while API support rolls out [1].
Sources: [1], [2]
🏁 Script executed:
```bash
# Search for the model resolver function mentioned in guidelines
fd -t f -e ts -e tsx -e js -e jsx | xargs rg -l "resolveModelString|model.*resolver" | head -20
```

Repository: AutoMaker-Org/automaker
Length of output: 998
🏁 Script executed:
```bash
# Search for usages of the model ID in the codebase
rg "codex-gpt-5\.3-codex" --type ts --type tsx --type js --type jsx -B 3 -A 3
```

Repository: AutoMaker-Org/automaker
Length of output: 94
🏁 Script executed:
```bash
# Look for `@automaker/model-resolver` imports and usage
rg "@automaker/model-resolver|resolveModelString" --type ts --type tsx --type js --type jsx -A 2
```

Repository: AutoMaker-Org/automaker
Length of output: 94
🏁 Script executed:
```bash
# Read the model resolver implementation
cat libs/model-resolver/src/resolver.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 9207
🏁 Script executed:
```bash
# Search for codex-gpt-5.3-codex usage without tsx filter
rg "codex-gpt-5\.3-codex" -B 3 -A 3
```

Repository: AutoMaker-Org/automaker
Length of output: 3214
🏁 Script executed:
```bash
# Check the model resolver tests
cat libs/model-resolver/tests/resolver.test.ts
```

Repository: AutoMaker-Org/automaker
Length of output: 20075
Fix the model resolver to strip the `codex-` prefix before passing to the Codex SDK API.

The model ID `codex-gpt-5.3-codex` is correctly defined as an internal alias, but the resolver passes it unchanged to the Codex SDK. The actual SDK specification expects `gpt-5.3-codex` without the prefix. Update `libs/model-resolver/src/resolver.ts` to strip the `codex-` prefix for Codex models before API calls, similar to how other prefixed models are handled. The resolver currently imports `stripProviderPrefix` but doesn't apply it to Codex models (lines 83-86).
🤖 Prompt for AI Agents
In `@libs/types/src/codex-models.ts` around lines 33 - 39, The resolver is passing
the internal alias "codex-gpt-5.3-codex" through to the Codex SDK; update the
model-resolver logic in libs/model-resolver/src/resolver.ts to strip the
"codex-" provider prefix before calling the Codex SDK by using the existing
stripProviderPrefix utility (same approach used for other prefixed providers).
Locate the Codex branch in the resolver (where Codex models are detected —
around the code that imports stripProviderPrefix and handles provider-specific
mapping) and ensure you call stripProviderPrefix(modelId) (or an equivalent
helper) to produce "gpt-5.3-codex" and pass that to the SDK call (e.g., wherever
the Codex API parameter or variable is set), leaving internal IDs unchanged
elsewhere.
This is a false positive — the codex- prefix is already stripped at the service layer before reaching the provider. See AgentService (line 393), AutoModeService (line 4192), IdeationService (line 252), etc. — they all call stripProviderPrefix() before passing the model to the provider. The CodexProvider even validates this with validateBareModelId(options.model, 'CodexProvider') at line 708, which throws if a prefix is present.
The resolver correctly passes through codex- prefixed models for routing purposes (so ProviderFactory.getProviderForModel() can identify the correct provider).
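For readers following the thread, the layering described above can be sketched as follows. The bodies are illustrative (only the call sites are quoted in this PR), and the real `stripProviderPrefix` likely handles more than the `codex-` prefix:

```ts
// Sketch only: the real utilities live in the automaker codebase.
function stripProviderPrefix(modelId: string): string {
  // 'codex-gpt-5.3-codex' -> 'gpt-5.3-codex'; bare IDs pass through unchanged.
  return modelId.startsWith('codex-') ? modelId.slice('codex-'.length) : modelId;
}

function validateBareModelId(modelId: string, context: string): void {
  // Guard used by CodexProvider: a routing prefix must never reach the SDK.
  if (modelId.startsWith('codex-')) {
    throw new Error(`${context}: expected a bare model id, got "${modelId}"`);
  }
}

// Service layer (AgentService and friends) strips before calling the provider:
const bareModel = stripProviderPrefix('codex-gpt-5.3-codex'); // 'gpt-5.3-codex'
validateBareModelId(bareModel, 'CodexProvider'); // passes
```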
🧩 Analysis chain
🏁 Script executed:
```bash
#!/bin/bash
# Check AgentService line 393 for stripProviderPrefix usage
echo "=== AgentService around line 393 ==="
find . -name "*.ts" -path "*/agent-service*" -o -name "agent.service.ts" | head -1 | xargs sed -n '380,405p' 2>/dev/null || echo "File not found with that pattern"
# Try alternative path
fd -t f "agent.*service" --extension ts | head -3 | while read file; do
  echo "Found: $file"
  sed -n '380,405p' "$file" 2>/dev/null | head -30
done
echo ""
echo "=== Search for stripProviderPrefix in services ==="
rg "stripProviderPrefix" --type ts -C 3 | head -100
```
Length of output: 10885
🏁 Script executed:
```bash
#!/bin/bash
# Check CodexProvider for validateBareModelId
echo "=== CodexProvider validateBareModelId usage ==="
fd -t f "codex.*provider" --extension ts | head -1 | xargs cat | grep -n "validateBareModelId" -A 3 -B 3
echo ""
echo "=== CodexProvider around line 708 ==="
fd -t f "codex.*provider" --extension ts | head -1 | xargs sed -n '700,720p'
```
Length of output: 1555
🏁 Script executed:
```bash
#!/bin/bash
# Check how ProviderFactory uses model IDs for routing
echo "=== ProviderFactory getProviderForModel ==="
fd -t f "provider.*factory" --extension ts | head -1 | xargs cat | grep -n "getProviderForModel" -A 20 -B 5
echo ""
echo "=== Check validateBareModelId implementation ==="
rg "function validateBareModelId|const validateBareModelId|validateBareModelId.*=" --type ts -A 10 -B 2
```
Length of output: 4494
@Shironex, you're absolutely right — I apologize for the incorrect review comment.
I can confirm from the code that the architecture correctly handles the codex- prefix:
- Resolver passes through `codex-gpt-5.3-codex` with the prefix intact (for routing)
- ProviderFactory uses the prefix to identify the correct provider (line 154: `reg.canHandleModel?.(lowerModel)`)
- Service layer strips the prefix before calling the provider:
  - AgentService line 393: `const bareModel = stripProviderPrefix(effectiveModel);`
  - Same pattern in IdeationService, AutoModeService, simple-query-service, etc.
- CodexProvider validates it receives a bare model ID at line 708: `validateBareModelId(options.model, 'CodexProvider')`
The prefix stripping happens at the correct layer (services), not in the resolver. The resolver needs to keep the prefix for proper provider routing. Thank you for the correction!
🧬 Code Graph Analysis Results
[libs/types/src/index.ts:101] `type CodexModelId`

[libs/types/src/index.ts:97] `getAllCodexModelIds`

[libs/types/src/model.ts:108]

```ts
export type CodexModelId = (typeof CODEX_MODEL_MAP)[keyof typeof CODEX_MODEL_MAP];
```

[libs/types/src/model.ts:93-95]

```ts
export function getAllCodexModelIds(): CodexModelId[] {
  return CODEX_MODEL_IDS as CodexModelId[];
}
```
🧠 Learnings used
Learnt from: CR
Repo: AutoMaker-Org/automaker PR: 0
File: CLAUDE.md:0-0
Timestamp: 2026-01-22T11:59:31.965Z
Learning: Applies to **/*.{ts,tsx,js,jsx}: Use resolveModelString() from `@automaker/model-resolver` to convert model aliases (haiku, sonnet, opus) to full model names before making API calls
- Added `isAdaptiveThinkingModel` utility to improve model identification logic in the AddFeatureDialog.
- Updated the ThinkingLevelSelector to conditionally display information based on available thinking levels.
- Enhanced model name formatting in agent-context-parser to include 'GPT-5.3 Codex' for better clarity.

These changes improve the user experience by refining model handling and UI feedback related to adaptive thinking capabilities.

- Introduced a new event type 'dev-server:url-detected' to enhance event handling for the development server.
- This addition allows for better tracking and response to URL detection during server operations.

These changes improve the event system's capability to manage server-related events effectively.
Shironex left a comment
GitChorus AI Review
This is a well-structured PR that adds Claude Opus 4.6 and GPT-5.3-Codex model support with good coverage across model maps, UI, tests, and documentation. The implementation of adaptive thinking is thoughtful with proper model-specific UI filtering via getThinkingLevelsForModel() and auto-normalization on model switch.
The most notable issue is that the adaptive thinking normalization was only added to add-feature-dialog.tsx but missed in edit-feature-dialog.tsx, which has the same handleModelChange pattern. This means editing an existing feature and switching models won't properly normalize thinking levels between adaptive and manual modes.
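For illustration, the missing normalization could look like the sketch below; `normalizeThinkingLevel` and its fallback choices are hypothetical, not code from this PR, and the `@automaker/types` exports are assumed:

```ts
import { getThinkingLevelsForModel, type ThinkingLevel } from '@automaker/types';

// Hypothetical helper both dialogs could share inside handleModelChange.
function normalizeThinkingLevel(nextModel: string, current: ThinkingLevel): ThinkingLevel {
  const levels = getThinkingLevelsForModel(nextModel);
  // Keep the current level when the new model supports it; otherwise fall
  // back (e.g. 'high' -> 'adaptive' when switching to Opus 4.6). The 'none'
  // fallback for manual models is an illustrative choice.
  if (levels.includes(current)) return current;
  return levels.includes('adaptive') ? 'adaptive' : 'none';
}
```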
There's also a semantic question about whether adaptive and none thinking levels producing identical SDK options (both return {}) is truly correct — the MEMORY.md says "just don't set maxThinkingTokens" for adaptive, but it would be good to confirm this is the right SDK behavior.
The DEFAULT_MODELS.codex not being updated to the new GPT-5.3-Codex model (while it's marked as default: true in the models list) is a minor inconsistency worth addressing.
Overall, the code quality is good with proper test coverage, clean documentation updates, and sensible architecture. The NVM symlink Windows fix is a nice bonus improvement.
Quality Score: 7/10 ⭐⭐⭐⭐⭐⭐⭐☆☆☆
Findings Summary
| # | Severity | Category | Finding | Location |
|---|---|---|---|---|
| 1 | 🟠 Major | Logic | Missing adaptive thinking normalization in edit-feature-dialog | edit-feature-dialog.tsx:197 |
| 2 | 🟠 Major | Logic | Adaptive and none thinking levels produce identical SDK configuration | sdk-options.ts:265 |
| 3 | 🟡 Minor | Logic | Provider model thinking level resolution uses fragile 'opus' string check | phase-model-selector.tsx:1298 |
| 4 | 🟡 Minor | Logic | Redundant ternary in thinking level selector | thinking-level-selector.tsx:23 |
| 5 | 🟡 Minor | Codebase-fit | Duplicated supportsReasoningEffort function diverges from shared version | codex-model-configuration.tsx:165 |
| 6 | 🟡 Minor | Logic | DEFAULT_MODELS.codex not updated to new GPT-5.3-Codex model | model.ts:101 |
| 7 | 🔵 Nit | Style | Unrelated event type addition bundled in model update PR | event.ts:49 |
| 8 | 🔵 Nit | Codebase-fit | Adaptive thinking level added to global arrays despite being model-specific | model-constants.ts:174 |
via GitChorus
Comments Not Placed Inline
The following findings could not be placed as inline comments (line not in diff):
- `apps/ui/src/components/views/board-view/dialogs/edit-feature-dialog.tsx:197` - File "apps/ui/src/components/views/board-view/dialogs/edit-feature-dialog.tsx" not found in diff
```ts
function buildThinkingOptions(thinkingLevel?: ThinkingLevel): Partial<Options> {
  if (!thinkingLevel || thinkingLevel === 'none') {
    return {};
  }
```
Warning
Major - Logic: Adaptive and none thinking levels produce identical SDK configuration
When thinkingLevel is 'adaptive', buildThinkingOptions returns an empty object {} — the same as when thinking is disabled ('none'). This means 'adaptive' and 'none' produce identical SDK options. The PR description says adaptive thinking means 'just don't set maxThinkingTokens - model uses adaptive by default', which aligns with the implementation. However, there's no way for the SDK to distinguish between 'thinking off' and 'adaptive thinking'. If the SDK requires some indication (e.g., a different parameter) to enable adaptive thinking vs. simply disabling thinking, this could be a bug. Verify with the Claude Agent SDK 0.2.32 documentation that omitting maxThinkingTokens truly enables adaptive thinking rather than disabling thinking entirely.
Problematic code:
```ts
if (thinkingLevel === 'adaptive') {
  logger.debug(
    `buildThinkingOptions: thinkingLevel="adaptive" -> no maxThinkingTokens (model decides)`
  );
  return {};
}
```

Suggested fix:
```ts
// Verify that the Claude Agent SDK 0.2.32 uses adaptive thinking by default
// when maxThinkingTokens is omitted. If the SDK needs an explicit signal,
// this should be updated. For example, if there's a `thinking` option:
//
// if (thinkingLevel === 'adaptive') {
//   return { thinking: 'adaptive' }; // or whatever the SDK expects
// }
```
Good catch. Verified against the Claude Agent SDK docs — maxThinkingTokens defaults to undefined. For Opus 4.6, omitting it enables adaptive thinking (the model's default behavior). For other models, omitting it means no thinking.
The behavior is intentionally the same for both 'none' and 'adaptive' returning {} — the difference is at the UI level: Opus 4.6 only shows 'None' and 'Adaptive' options, so users can't accidentally set manual thinking budgets on a model that doesn't support them. When a user picks 'Adaptive', the empty return signals 'use the model default' which IS adaptive thinking for Opus 4.6.
That said, the 'none' case for Opus 4.6 does mean we can't explicitly disable thinking for that model through the SDK. This is a known SDK limitation — there's no thinking: false option. We could set maxThinkingTokens: 0 but it's unclear if the SDK supports that. Leaving as-is for now since the UI guides users correctly.
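A sketch of the UI-level gating described here, reconstructed from the checks quoted elsewhere in this review; the real helpers live in `@automaker/types` and may cover more aliases:

```ts
type ThinkingLevel = 'none' | 'low' | 'medium' | 'high' | 'ultrathink' | 'adaptive';

// Detection mirrors the inline checks quoted in this thread.
function isAdaptiveThinkingModel(model: string): boolean {
  return model === 'claude-opus' || model.includes('opus-4-6');
}

function getThinkingLevelsForModel(model: string): ThinkingLevel[] {
  // Opus 4.6 only exposes 'None' and 'Adaptive', so a user can never set a
  // manual budget on a model that ignores it.
  return isAdaptiveThinkingModel(model)
    ? ['none', 'adaptive']
    : ['none', 'low', 'medium', 'high', 'ultrathink'];
}
```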
```diff
@@ -1296,7 +1296,9 @@ export function PhaseModelSelector({
   <div className="px-2 py-1 text-xs font-medium text-muted-foreground">
     Thinking Level
   </div>
```
Important
Minor - Logic: Provider model thinking level resolution uses fragile 'opus' string check
The check model.mapsToClaudeModel === 'opus' is used to determine if the model is an Opus model for adaptive thinking. But this only handles provider models that explicitly map to 'opus'. There are 4 places in this file that render thinking levels — 2 use model.mapsToClaudeModel (for provider models like GLM/MiniMax), and 2 use model.id directly (for native Claude models). The mapsToClaudeModel === 'opus' check is a fragile heuristic: future Opus versions that aren't adaptive or non-Opus models that are adaptive would break. However, for the current implementation this is acceptable.
Problematic code:
```tsx
{getThinkingLevelsForModel(
  model.mapsToClaudeModel === 'opus' ? 'claude-opus' : ''
).map((level) => (
```

Suggested fix:
```tsx
// Consider passing the resolved model string instead of checking mapsToClaudeModel:
// This is minor but would be more robust:
{getThinkingLevelsForModel(
  model.mapsToClaudeModel === 'opus'
    ? 'claude-opus'
    : model.id || ''
).map((level) => (
```
Fixed — now falls back to model.id || '' instead of just '' for non-opus provider models. This makes the check more robust if future models also support adaptive thinking.
```tsx
  model,
}: ThinkingLevelSelectorProps) {
  const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
```
Important
Minor - Logic: Redundant ternary in thinking level selector
When model is undefined, getThinkingLevelsForModel('') is called. Since '' does not include 'opus-4-6' and is not 'claude-opus', this returns the full manual thinking levels. This is correct behavior but the ternary is redundant — getThinkingLevelsForModel('') and getThinkingLevelsForModel(undefined) would both work if the function accepted string | undefined. The current implementation works correctly but is slightly confusing.
Problematic code:
```tsx
const levels = model ? getThinkingLevelsForModel(model) : getThinkingLevelsForModel('');
```

Suggested fix:

```tsx
// Simplify:
const levels = getThinkingLevelsForModel(model || '');
```
Fixed — simplified to getThinkingLevelsForModel(model || '').
```diff
@@ -160,6 +165,7 @@ export function CodexModelConfiguration({
```
Important
Minor - Codebase-fit: Duplicated supportsReasoningEffort function diverges from shared version
This file has a local supportsReasoningEffort function that duplicates the shared supportsReasoningEffort from @automaker/types. The shared version uses REASONING_CAPABLE_MODELS Set for O(1) lookup. The local version uses a hardcoded array and must be manually kept in sync. The PR updated the local version to include the new model, which is correct, but the duplication increases maintenance burden and risk of divergence.
Problematic code:
```ts
function supportsReasoningEffort(modelId: string): boolean {
  const reasoningModels = [
    'codex-gpt-5.3-codex',
    'codex-gpt-5.2-codex',
    'codex-gpt-5.1-codex-max',
    'codex-gpt-5.2',
    'codex-gpt-5.1',
  ];
  return reasoningModels.includes(modelId);
}
```

Suggested fix:
```ts
// Replace local function with the shared one from @automaker/types:
import { supportsReasoningEffort } from '@automaker/types';
// Remove the local function definition entirely
```
// Remove the local function definition entirelyThere was a problem hiding this comment.
Fixed — replaced the local supportsReasoningEffort function with the shared version from @automaker/types which uses the REASONING_CAPABLE_MODELS Set.
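For reference, the shared helper's shape as described in the finding, sketched under the assumption that its member set matches the local array it replaced:

```ts
// Assumed shape of the shared helper in @automaker/types.
const REASONING_CAPABLE_MODELS = new Set<string>([
  'codex-gpt-5.3-codex',
  'codex-gpt-5.2-codex',
  'codex-gpt-5.1-codex-max',
  'codex-gpt-5.2',
  'codex-gpt-5.1',
]);

export function supportsReasoningEffort(modelId: string): boolean {
  // Set membership gives O(1) lookup instead of scanning an array.
  return REASONING_CAPABLE_MODELS.has(modelId);
}
```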
```diff
@@ -96,7 +99,7 @@ export function getAllCodexModelIds(): CodexModelId[] {
  * Uses canonical prefixed IDs for consistent routing.
  */
 export const DEFAULT_MODELS = {
```
Important
Minor - Logic: DEFAULT_MODELS.codex not updated to new GPT-5.3-Codex model
The PR adds GPT-5.3-Codex as the new 'Latest frontier agentic coding model' and marks it as default: true in CODEX_MODELS (codex-models.ts), but DEFAULT_MODELS.codex still points to gpt52Codex. The comment 'GPT-5.2-Codex is the most advanced agentic coding model' is now inaccurate since GPT-5.3-Codex was added as a higher-tier model. Consider updating the default to the new model, or at minimum fixing the comment.
Problematic code:
```ts
codex: CODEX_MODEL_MAP.gpt52Codex, // GPT-5.2-Codex is the most advanced agentic coding model
```

Suggested fix:
```ts
export const DEFAULT_MODELS = {
  claude: 'claude-opus-4-6',
  cursor: 'cursor-auto',
  codex: CODEX_MODEL_MAP.gpt53Codex, // GPT-5.3-Codex is the latest frontier agentic coding model
} as const;
```
Fixed — updated DEFAULT_MODELS.codex to CODEX_MODEL_MAP.gpt53Codex with updated comment.
```ts
| 'dev-server:started'
| 'dev-server:output'
| 'dev-server:stopped'
| 'dev-server:url-detected'
```
Note
Nit - Style: Unrelated event type addition bundled in model update PR
A new event type dev-server:url-detected was added in this PR, but it's unrelated to the Claude Opus 4.6 / GPT-5.3-Codex model changes described in the PR title and description. This appears to be an unrelated change bundled into the PR. While it's a valid addition (there's code in dev-server-service.ts that emits this event), it would be better practice to separate unrelated changes into their own PRs.
Problematic code:
```ts
| 'dev-server:url-detected'
```

Suggested fix:
```ts
// Consider moving this to a separate PR, or at minimum mentioning it in the PR description
```
Acknowledged — this was bundled from a merge commit. Will keep it in this PR since it's already here and the emitting code exists, but noted for future PRs.
```ts
'none',
'low',
'medium',
'high',
```
Note
Nit - Codebase-fit: Adaptive thinking level added to global arrays despite being model-specific
The global THINKING_LEVELS array now includes 'adaptive' at the end, but this array is no longer used by thinking-level-selector.tsx (which now uses getThinkingLevelsForModel). It's still referenced in phase-model-selector.tsx's import but the import was removed in the diff. However, the array in model-display.ts also has 'adaptive' appended. Having 'adaptive' in the global array means if any other component iterates all thinking levels, it would show 'adaptive' for all models — not just Opus 4.6. This is potentially confusing but currently harmless since the key consumers were updated.
Problematic code:
```ts
export const THINKING_LEVELS: ThinkingLevel[] = [
  'none',
  'low',
  'medium',
  'high',
  'ultrathink',
  'adaptive',
];
```

Suggested fix:
```ts
// This is acceptable for now, but consider whether 'adaptive' belongs
// in the global THINKING_LEVELS array or if it should only appear via
// getThinkingLevelsForModel() which is model-aware
```
Acknowledged — the global THINKING_LEVELS array includes 'adaptive' for completeness (e.g., serialization/deserialization), but the key UI consumers now use getThinkingLevelsForModel() which is model-aware. Acceptable for now.
- Extract getNvmWindowsCliPaths() helper to DRY up NVM_SYMLINK logic
- Update DEFAULT_MODELS.codex to gpt53Codex
- Simplify redundant ternary in thinking-level-selector
- Replace local supportsReasoningEffort with shared import from @automaker/types
- Use model.id fallback in phase-model-selector thinking level resolution

Co-Authored-By: Claude Opus 4.6 <noreply@anthropic.com>
Summary

- Add `claude-opus-4-6` with 128K max output tokens and adaptive thinking support
- Bump `@openai/codex-sdk` to 0.98.0
- Update `@anthropic-ai/claude-agent-sdk` from 0.1.76 to 0.2.32

Changes

Claude Opus 4.6 Integration

- Updated model maps (`CLAUDE_CANONICAL_MAP`, `CLAUDE_MODEL_MAP`, `DEFAULT_MODELS`) to resolve `opus` → `claude-opus-4-6`
- Added `'adaptive'` thinking level for Opus 4.6 (model decides its own reasoning depth)
- Added `isAdaptiveThinkingModel()` and `getThinkingLevelsForModel()` helpers to `@automaker/types`
- Updated model display (Opus 4.6) and available models list (128K output)

GPT-5.3-Codex Model

- Added `gpt53Codex: 'codex-gpt-5.3-codex'` to `CODEX_MODEL_MAP`
- Bumped `@openai/codex-sdk` from 0.97.0 to 0.98.0

SDK Updates

- `@anthropic-ai/claude-agent-sdk` 0.1.76 → 0.2.32
- `@openai/codex-sdk` 0.97.0 → 0.98.0
- `outputFormat` property confirmed still valid (no deprecation)

Test plan

- Verify `claude-opus-4-6` model
- Verify no remaining `claude-opus-4-5-20251101` in codebase

🤖 Generated with Claude Code
Summary by CodeRabbit

- New Features
- Updates
- Documentation
- Tests
- Chores