feat: add GPT-5.4 support and docs refresh #65
Conversation
Add GPT-5.4 model normalization and reasoning support, plus optional GPT-5.4 Pro handling with fallback to GPT-5.4 under fallback policy. Update templates and docs to use GPT-5.4 defaults, and expand unit/integration/property tests for 5.4 coverage. Co-authored-by: Codex <noreply@openai.com>
📝 Walkthrough

This PR introduces support for GPT-5.4 as the new primary model, replacing GPT-5.2 across defaults and templates, and adds an optional GPT-5.4-Pro variant. Changes span configuration files, documentation updates, model mappings, request transformation logic, and comprehensive test coverage for normalization, reasoning configuration, and fallback behavior.
Estimated code review effort: 🎯 4 (Complex) | ⏱️ ~50 minutes
🚥 Pre-merge checks: ✅ 3 passed
✏️ Tip: You can configure your own custom pre-merge checks in the settings.
Note: Unit test generation is a public access feature. Expect some limitations and changes as we gather feedback and continue to improve it. Generating unit tests... This may take up to 20 minutes.
✅ Actions performed: Review triggered.
@coderabbitai review

@greptile-ai please review this PR for runtime/model-routing correctness and test coverage completeness.
✅ Actions performed: Review triggered.
Actionable comments posted: 2
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
test/property/transformer.property.test.ts (1)
219-232: ⚠️ Potential issue | 🟡 Minor

Strengthen assertion to match the test's stated contract (none -> low).

Current assertion only checks "not none," which is too permissive for this behavior.
🔧 Proposed test fix

- expect(result.effort).not.toBe("none");
+ expect(result.effort).toBe("low");

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/property/transformer.property.test.ts` around lines 219 - 232, The test currently only asserts that getReasoningConfig(model, { reasoningEffort: "none" }).effort is not "none"; change the assertion to assert it equals "low" so it matches the test contract ("none -> low"). Update the assertion in the "codex/pro models upgrade none to low" case to expect result.effort === "low" for the models passed into getReasoningConfig.
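The contract being pinned down ("none is upgraded to low for codex/pro models") can be sketched as a tiny standalone function. `getReasoningConfig` and `upgradesNoneToLow` here are simplified stand-ins for illustration, not the plugin's actual implementation in `lib/request/request-transformer.ts`:

```typescript
// Minimal sketch of the "none -> low" upgrade rule under review.
// Names and shapes are illustrative assumptions, not the shipped code.
type Effort = "none" | "low" | "medium" | "high" | "xhigh";

function upgradesNoneToLow(model: string): boolean {
  // Codex and pro families have no "none" reasoning phase.
  return model.includes("codex") || model.includes("pro");
}

function getReasoningConfig(
  model: string,
  opts: { reasoningEffort: Effort },
): { effort: Effort } {
  const effort =
    opts.reasoningEffort === "none" && upgradesNoneToLow(model)
      ? "low"
      : opts.reasoningEffort;
  return { effort };
}
```

With this shape, the tightened assertion pins the exact upgraded value (`"low"`) rather than merely excluding `"none"`, which is what makes the test match its stated contract.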
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@CHANGELOG.md`:
- Line 28: The changelog entry header "## [5.4.2] - 2026-03-06" uses a future
date; update the header in CHANGELOG.md by either moving the entry under the
"Unreleased" section (rename to "## [5.4.2]" or place its content under the
existing "Unreleased" heading) or change the date to the actual published date
(e.g., "2026-03-05" or the real release date) so the heading no longer shows a
future date.
In `@docs/development/TESTING.md`:
- Around line 620-625: The document's example outputs and test snippets still
show the older fallback "gpt-5" and must be updated to the new fallback
"gpt-5.1": search for occurrences in this file that reference normalizeModel
outputs or unit-test examples expecting "gpt-5" (including cases for undefined
or unknown inputs like "random-model") and replace them with "gpt-5.1" (also
update variants like "GPT 5" examples to match normalized forms such as
"gpt-5.1" or "gpt-5.4-pro" as appropriate); ensure every mention of
normalizeModel, its example results, and any future unit test snippets
consistently reflect the new fallback value "gpt-5.1".
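The documented fallback behavior can be sketched as a small lookup with a default. The real `normalizeModel` lives in `lib/request/helpers/model-map.ts`; this map is a trimmed, hypothetical subset for illustration only:

```typescript
// Illustrative sketch of the gpt-5.1 fallback behavior described in the
// review comment; the shipped MODEL_MAP is much larger.
const MODEL_MAP: Record<string, string> = {
  "gpt-5": "gpt-5.1",
  "gpt-5-mini": "gpt-5.1",
  "gpt-5-codex-mini": "gpt-5.1-codex-mini",
  "codex-mini-latest": "gpt-5.1-codex-mini",
};

function normalizeModel(model?: string): string {
  if (!model) return "gpt-5.1"; // undefined input uses the default fallback
  const id = model.split("/").pop()!.toLowerCase(); // strip provider prefix
  return MODEL_MAP[id] ?? "gpt-5.1"; // unknown ids fall back to gpt-5.1
}
```

Under this sketch, both `undefined` and unknown inputs like `"random-model"` resolve to `"gpt-5.1"`, which is the behavior the TESTING.md examples need to reflect.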
---
Outside diff comments:
In `@test/property/transformer.property.test.ts`:
- Around line 219-232: The test currently only asserts that
getReasoningConfig(model, { reasoningEffort: "none" }).effort is not "none";
change the assertion to assert it equals "low" so it matches the test contract
("none -> low"). Update the assertion in the "codex/pro models upgrade none to
low" case to expect result.effort === "low" for the models passed into
getReasoningConfig.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 4139efe6-534d-4935-89a0-9fffc1df8e82
📒 Files selected for processing (23)
CHANGELOG.md, README.md, config/README.md, config/opencode-legacy.json, config/opencode-modern.json, docs/configuration.md, docs/development/TESTING.md, docs/getting-started.md, docs/index.md, docs/troubleshooting.md, index.ts, lib/prompts/codex.ts, lib/request/fetch-helpers.ts, lib/request/helpers/model-map.ts, lib/request/request-transformer.ts, package.json, test/codex-prompts.test.ts, test/fetch-helpers.test.ts, test/index.test.ts, test/model-map.test.ts, test/property/helpers.ts, test/property/transformer.property.test.ts, test/request-transformer.test.ts
Resolve changelog date, restore missing GPT-5.2 codex normalization coverage, add gpt-5.4-pro-none model mapping, and tighten property assertion semantics. Align testing-guide normalizeModel examples with current gpt-5 -> gpt-5.1 fallback behavior. Co-authored-by: Codex <noreply@openai.com>
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/configuration.md (1)
349-354: ⚠️ Potential issue | 🟡 Minor

Add a language to the fenced log snippet (markdownlint MD040).
The code fence under “look for:” is missing a language identifier and will trigger lint warnings.
🛠️ Suggested doc-lint fix
look for:

-```
+```text
 [openai-codex-plugin] Model config lookup: "gpt-5.4" → normalized to "gpt-5.4" for API { hasModelSpecificConfig: true, resolvedConfig: { ... } }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/configuration.md` around lines 349-354, the fenced log snippet under "look for:" is missing a language identifier and triggers markdownlint MD040; update the fenced code block that contains the line beginning with "[openai-codex-plugin] Model config lookup: "gpt-5.4" → normalized..." by adding a language tag (e.g., use "text") after the opening triple backticks so the block becomes ```text ... ``` to satisfy the linter.

♻️ Duplicate comments (1)
docs/development/TESTING.md (1)
608-625: ⚠️ Potential issue | 🟡 Minor

Normalization docs are still internally inconsistent with current fallback outputs.

The updated examples here are correct, but later snippets still assert old outputs (e.g., Line 726, Line 729-730, Line 744-747). Please align those future-unit-test expectations to current behavior (gpt-5.1 fallback and gpt-5.1-codex-mini).

📝 Suggested doc fix (future-unit-test snippet)
- expect(normalizeModel('gpt-5')).toBe('gpt-5')
+ expect(normalizeModel('gpt-5')).toBe('gpt-5.1')
  expect(normalizeModel('gpt-5-codex')).toBe('gpt-5-codex')
- expect(normalizeModel('gpt-5-codex-mini')).toBe('codex-mini-latest')
- expect(normalizeModel('gpt-5-mini')).toBe('gpt-5')
- expect(normalizeModel('gpt-5-nano')).toBe('gpt-5')
+ expect(normalizeModel('gpt-5-codex-mini')).toBe('gpt-5.1-codex-mini')
+ expect(normalizeModel('gpt-5-mini')).toBe('gpt-5.1')
+ expect(normalizeModel('gpt-5-nano')).toBe('gpt-5.1')
  ...
- expect(normalizeModel('openai/gpt-5-codex-mini-high')).toBe('codex-mini-latest')
- expect(normalizeModel('gpt-5-high')).toBe('gpt-5')
+ expect(normalizeModel('openai/gpt-5-codex-mini-high')).toBe('gpt-5.1-codex-mini')
+ expect(normalizeModel('gpt-5-high')).toBe('gpt-5.1')
  ...
- expect(normalizeModel(undefined)).toBe('gpt-5')
- expect(normalizeModel('codex-mini-latest')).toBe('codex-mini-latest')
- expect(normalizeModel('random')).toBe('gpt-5')
+ expect(normalizeModel(undefined)).toBe('gpt-5.1')
+ expect(normalizeModel('codex-mini-latest')).toBe('gpt-5.1-codex-mini')
+ expect(normalizeModel('random')).toBe('gpt-5.1')

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/development/TESTING.md` around lines 608-625, the later "future-unit-test" examples in TESTING.md are still using old expected outputs for normalizeModel; update those assertions to match the current behavior: use "gpt-5.1" as the generic fallback for undefined/random inputs and use "gpt-5.1-codex-mini" for codex-mini mappings (e.g., replace any old expectations that assert "gpt-5" or other legacy strings with "gpt-5.1", and replace codex-mini legacy expectations with "gpt-5.1-codex-mini"), ensuring all normalizeModel examples and test snippets consistently match the examples shown earlier.

🧹 Nitpick comments (1)
test/index.test.ts (1)
1401-1403: Tighten this test's isolation from legacy fallback toggles.

This case is validating gpt-5.4-pro -> gpt-5.4, but Line 1402 also enables the GPT-5.3 legacy fallback toggle. Keeping that toggle off here makes the assertion more regression-proof for the 5.4 path.

♻️ Suggested test tightening
  vi.mocked(configModule.getFallbackOnUnsupportedCodexModel).mockReturnValueOnce(true);
- vi.mocked(configModule.getFallbackToGpt52OnUnsupportedGpt53).mockReturnValueOnce(true);
+ vi.mocked(configModule.getFallbackToGpt52OnUnsupportedGpt53).mockReturnValueOnce(false);

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/index.test.ts` around lines 1401-1403, the test is enabling the GPT-5.3 legacy fallback toggle which blurs validation of the gpt-5.4-pro -> gpt-5.4 path; update the mock for getFallbackToGpt52OnUnsupportedGpt53 to return false (vi.mocked(configModule.getFallbackToGpt52OnUnsupportedGpt53).mockReturnValueOnce(false)) so only getFallbackOnUnsupportedCodexModel remains true and other mocks (e.g., fetchHelpers.transformRequestForCodex) stay unchanged.

🤖 Prompt for all review comments with AI agents

Verify each finding against the current code and only fix it if needed.

Outside diff comments:

In `@docs/configuration.md`:
- Around line 349-354: The fenced log snippet under "look for:" is missing a language identifier and triggers markdownlint MD040; update the fenced code block that contains the line beginning with "[openai-codex-plugin] Model config lookup: "gpt-5.4" → normalized..." by adding a language tag (e.g., use "text") after the opening triple backticks so the block becomes ```text ... ``` to satisfy the linter.

---

Duplicate comments:

In `@docs/development/TESTING.md`:
- Around line 608-625: The later "future-unit-test" examples in TESTING.md are still using old expected outputs for normalizeModel; update those assertions to match the current behavior: use "gpt-5.1" as the generic fallback for undefined/random inputs and use "gpt-5.1-codex-mini" for codex-mini mappings (e.g., replace any old expectations that assert "gpt-5" or other legacy strings with "gpt-5.1", and replace codex-mini legacy expectations with "gpt-5.1-codex-mini"), ensuring all normalizeModel examples and test snippets consistently match the examples shown earlier.

---

Nitpick comments:

In `@test/index.test.ts`:
- Around line 1401-1403: The test is enabling the GPT-5.3 legacy fallback toggle which blurs validation of the gpt-5.4-pro -> gpt-5.4 path; update the mock for getFallbackToGpt52OnUnsupportedGpt53 to return false (vi.mocked(configModule.getFallbackToGpt52OnUnsupportedGpt53).mockReturnValueOnce(false)) so only getFallbackOnUnsupportedCodexModel remains true and other mocks (e.g., fetchHelpers.transformRequestForCodex) stay unchanged.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID:
dbfe1c40-56af-48a2-986e-2143bc56e7e9

📒 Files selected for processing (23)
CHANGELOG.md, README.md, config/README.md, config/opencode-legacy.json, config/opencode-modern.json, docs/configuration.md, docs/development/TESTING.md, docs/getting-started.md, docs/index.md, docs/troubleshooting.md, index.ts, lib/prompts/codex.ts, lib/request/fetch-helpers.ts, lib/request/helpers/model-map.ts, lib/request/request-transformer.ts, package.json, test/codex-prompts.test.ts, test/fetch-helpers.test.ts, test/index.test.ts, test/model-map.test.ts, test/property/helpers.ts, test/property/transformer.property.test.ts, test/request-transformer.test.ts
Addressed all review findings in the latest commit. Also fixed the outside-diff review note by tightening the property test assertion. Validation rerun after fixes:
@greptile-ai please re-review latest commit eb88bd.

@coderabbitai review
✅ Actions performed: Review triggered.
Add language tag to configuration log snippet and isolate GPT-5.4-pro fallback test from legacy GPT-5.3 fallback toggle. Co-authored-by: Codex <noreply@openai.com>
Followed up on review feedback. Changes applied:
Validation after this commit:
@coderabbitai review

@greptile-ai please re-review latest commit d9f8617.
✅ Unit tests committed locally. Commit:
✅ Actions performed: Review triggered.
Actionable comments posted: 1
Caution
Some comments are outside the diff and can’t be posted inline due to platform limitations.
⚠️ Outside diff range comments (1)
docs/configuration.md (1)
38-52: ⚠️ Potential issue | 🟡 Minor

Clarify none coercion for gpt-5.4-pro explicitly.

The table is correct to omit none for gpt-5.4-pro, but the explanatory bullet currently mentions only codex. Adding pro here will better match runtime behavior.

📝 Suggested wording update
- - `none` - no reasoning phase (base models only, auto-converts to `low` for codex)
+ - `none` - no reasoning phase (base models only; auto-converts to `low` for codex/pro families, including `gpt-5.4-pro`)

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@docs/configuration.md` around lines 38 - 52, Update the explanatory bullet about "none" so it explicitly states that "none" is coerced to "low" not only for codex models but also for pro models such as gpt-5.4-pro; reference the table entries like `gpt-5.4-pro` and `gpt-5-codex` and change the line "- `none` - no reasoning phase (base models only, auto-converts to `low` for codex)" to indicate it auto-converts to `low` for both codex and pro models (e.g., `gpt-5-codex`, `gpt-5.4-pro`).
🧹 Nitpick comments (2)
lib/request/request-transformer.ts (1)
82-93: Tighten GPT-5.4/GPT-5.4-pro normalization patterns.

These includes(...) checks are overly broad and can normalize unintended model IDs that merely contain gpt-5.4 as a substring.

🔧 Suggested safer normalization patterns
- if (
-   normalized.includes("gpt-5.4-pro") ||
-   normalized.includes("gpt 5.4 pro")
- ) {
+ if (/\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/.test(normalized)) {
    return "gpt-5.4-pro";
  }
@@
- if (normalized.includes("gpt-5.4") || normalized.includes("gpt 5.4")) {
+ if (/\bgpt(?:-| )5\.4(?:\b|[- ])/.test(normalized)) {
    return "gpt-5.4";
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/request/request-transformer.ts` around lines 82-93, the current includes(...) checks for "gpt-5.4-pro" and "gpt-5.4" are too permissive and may match unintended substrings; update the checks in request-transformer.ts to use stricter matching on the normalized string (e.g., a regex or tokenized check) that enforces separators or boundaries around the model id so only exact IDs like "gpt-5.4-pro" or "gpt-5.4" (and their space variants) match; keep the "pro" branch checked first (the existing normalized.includes("gpt-5.4-pro") / normalized.includes("gpt 5.4 pro") logic) and replace the includes(...) calls with a regex.test(...) or a token-array lookup against normalized to avoid matching names that merely contain "gpt-5.4" as a substring.

lib/prompts/codex.ts (1)
123-125: Use token-aware matching for GPT-5.4 family detection.

includes("gpt-5.4") can also match unrelated version strings (e.g., gpt-5.40). Prefer boundary-aware matching to avoid accidental family routing.

♻️ Suggested matcher refinement
- if (normalizedModel.includes("gpt-5.4") || normalizedModel.includes("gpt 5.4")) {
+ if (/\bgpt(?:-| )5\.4(?:\b|[- ])/i.test(normalizedModel)) {
    return "gpt-5.4";
  }

🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@lib/prompts/codex.ts` around lines 123 - 125, The current family detection uses normalizedModel.includes("gpt-5.4") which can false-match strings like "gpt-5.40"; change the check on normalizedModel to a token/boundary-aware match (e.g., use a regex with word boundaries such as /\bgpt[- ]5\.4\b/i or split tokens and compare exact token values) so only the exact "gpt-5.4" family (or "gpt 5.4") is matched; update the conditional that references normalizedModel in the block returning "gpt-5.4".
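The boundary-aware matching suggested in both comments can be sketched as one small helper. The regexes follow the reviewer's proposals; `detectGpt54Family` is an illustrative name, not the shipped function:

```typescript
// Boundary-aware family detection: matches "gpt-5.4", "gpt 5.4",
// "gpt-5.4-pro", "gpt-5.4-pro-high", but NOT "gpt-5.40" or "gpt-5.4pro".
const GPT54_PRO = /\bgpt(?:-| )5\.4(?:-| )pro(?:\b|[- ])/i;
const GPT54 = /\bgpt(?:-| )5\.4(?:\b|[- ])/i;

function detectGpt54Family(normalized: string): "gpt-5.4-pro" | "gpt-5.4" | null {
  if (GPT54_PRO.test(normalized)) return "gpt-5.4-pro"; // pro branch checked first
  if (GPT54.test(normalized)) return "gpt-5.4";
  return null;
}
```

The `(?:\b|[- ])` tail is what rejects `gpt-5.40` (no word boundary between `4` and `0`) and `gpt-5.4pro` (no separator before `pro`), which matches the regression cases later added in the PR.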
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/gpt54-models.test.ts`:
- Around line 64-75: Add explicit assertions for the missing "gpt-5.4-pro-none"
variant in the test block that checks pro reasoning-effort normalization: update
the loop (or add an assertion) so that normalizeModel("gpt-5.4-pro-none") and
getNormalizedModel("gpt-5.4-pro-none") both expect "gpt-5.4-pro" to prevent a
regression if the map key is removed; apply the same explicit check in the other
similar test locations noted (lines around 94-100 and 282-287) referencing the
normalizeModel and getNormalizedModel functions.
---
Outside diff comments:
In `@docs/configuration.md`:
- Around line 38-52: Update the explanatory bullet about "none" so it explicitly
states that "none" is coerced to "low" not only for codex models but also for
pro models such as gpt-5.4-pro; reference the table entries like `gpt-5.4-pro`
and `gpt-5-codex` and change the line "- `none` - no reasoning phase (base
models only, auto-converts to `low` for codex)" to indicate it auto-converts to
`low` for both codex and pro models (e.g., `gpt-5-codex`, `gpt-5.4-pro`).
---
Nitpick comments:
In `@lib/prompts/codex.ts`:
- Around line 123-125: The current family detection uses
normalizedModel.includes("gpt-5.4") which can false-match strings like
"gpt-5.40"; change the check on normalizedModel to a token/boundary-aware match
(e.g., use a regex with word boundaries such as /\bgpt[- ]5\.4\b/i or split
tokens and compare exact token values) so only the exact "gpt-5.4" family (or
"gpt 5.4") is matched; update the conditional that references normalizedModel in
the block returning "gpt-5.4".
In `@lib/request/request-transformer.ts`:
- Around line 82-93: The current includes(...) checks for "gpt-5.4-pro" and
"gpt-5.4" are too permissive and may match unintended substrings; update the
checks in request-transformer.ts to use stricter matching on the normalized
string (e.g., a regex or tokenized check) that enforces separators or boundaries
around the model id so only exact IDs like "gpt-5.4-pro" or "gpt-5.4" (and their
space variants) match; keep the "pro" branch checked first (the existing
normalized.includes("gpt-5.4-pro") / normalized.includes("gpt 5.4 pro") logic)
and replace the includes(...) calls with a regex.test(...) or a token-array
lookup against normalized to avoid matching names that merely contain "gpt-5.4"
as a substring.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 39a27cbd-54df-486d-9c50-fe29fe854561
⛔ Files ignored due to path filters (1)
`package-lock.json` is excluded by `!**/package-lock.json`
📒 Files selected for processing (26)
CHANGELOG.md, README.md, config/README.md, config/opencode-legacy.json, config/opencode-modern.json, docs/configuration.md, docs/development/TESTING.md, docs/getting-started.md, docs/index.md, docs/troubleshooting.md, index.ts, lib/prompts/codex.ts, lib/request/fetch-helpers.ts, lib/request/helpers/model-map.ts, lib/request/request-transformer.ts, package.json, test/codex-prompts.test.ts, test/edge-cases.test.ts, test/fetch-helpers.test.ts, test/gpt54-models.test.ts, test/helper-utils.test.ts, test/index.test.ts, test/model-map.test.ts, test/property/helpers.ts, test/property/transformer.property.test.ts, test/request-transformer.test.ts
- make gpt-5.4/pro family matching boundary-aware in runtime + prompt family detection
- clarify docs for none->low coercion across codex/pro families
- add regression tests for gpt-5.40/gpt-5.4pro non-matches and gpt-5.4-pro-none coverage

Co-authored-by: Codex <noreply@openai.com>
Addressed remaining review issues.

What was fixed
Validation
@coderabbitai review
I'll kick off an incremental review.

✅ Actions performed: Review triggered.
Actionable comments posted: 1
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@lib/request/request-transformer.ts`:
- Around line 445-453: The model-name checks using string.includes on
normalizedName (for isGpt54Pro and isGpt54General) are too permissive and can
mis-set capability flags like supportsXhigh/supportsNone; update the logic in
request-transformer.ts to perform boundary-safe matching (e.g., use a regex with
word boundaries or tokenized comparisons) when computing isGpt54Pro and
isGpt54General so only exact family names like "gpt-5.4", "gpt-5.4-pro", "gpt
5.4", or "gpt 5.4 pro" match; apply the same boundary-safe fix to the other
gpt-5.4 checks in this file that influence reasoning detection (the similar
checks referenced near the second occurrence). Ensure the changed predicates
still respect the existing negation (!isGpt54Pro) for isGpt54General.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Path: .coderabbit.yaml
Review profile: CHILL
Plan: Pro
Run ID: 9a7325c6-1070-41b2-ad34-79cf0fb132c7
📒 Files selected for processing (6)
docs/configuration.md, lib/prompts/codex.ts, lib/request/request-transformer.ts, test/codex-prompts.test.ts, test/gpt54-models.test.ts, test/request-transformer.test.ts
🚧 Files skipped from review as they are similar to previous changes (4)
- test/codex-prompts.test.ts
- test/gpt54-models.test.ts
- test/request-transformer.test.ts
- lib/prompts/codex.ts
feat: add GPT-5.4 support and docs refresh
Summary
This PR updates the plugin to support the GPT-5.4 release while keeping rollout safe for existing users.
- `gpt-5.4` as the new default general-purpose model family.
- Optional `gpt-5.4-pro` (not enabled by default in shipped templates).
- Default fallback edge `gpt-5.4-pro -> gpt-5.4`.
- Legacy fallback preserved (`gpt-5.3-codex -> gpt-5.2-codex` via existing compatibility toggle).

Detailed Changes
Runtime/model handling
- `lib/request/helpers/model-map.ts`
  - `gpt-5.4` effort variants (`none`, `low`, `medium`, `high`, `xhigh`) -> `gpt-5.4`.
  - `gpt-5.4-pro` effort variants (`low`, `medium`, `high`, `xhigh`) -> `gpt-5.4-pro`.
- `lib/request/request-transformer.ts`
  - Normalization and reasoning configuration for `gpt-5.4` and `gpt-5.4-pro`.
  - `gpt-5.4` supports `none` and `xhigh`.
  - `gpt-5.4-pro` supports `xhigh`; `none` is upgraded to `low`.
- `index.ts`
  - Quota probe list tries `gpt-5.4` first.

Fallback policy and entitlement behavior
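The mapping and capability rules above can be sketched roughly as follows. The entry names and helper are hypothetical illustrations assumed from this PR's description; the real tables live in `lib/request/helpers/model-map.ts` and `lib/request/request-transformer.ts` and may differ in shape:

```typescript
// Hypothetical shape of the new 5.4 entries (12 total per the review summary);
// the shipped MODEL_MAP may differ.
const GPT54_MAP: Record<string, string> = {
  "gpt-5.4": "gpt-5.4",
  "gpt-5.4-none": "gpt-5.4",
  "gpt-5.4-low": "gpt-5.4",
  "gpt-5.4-medium": "gpt-5.4",
  "gpt-5.4-high": "gpt-5.4",
  "gpt-5.4-xhigh": "gpt-5.4",
  "gpt-5.4-pro": "gpt-5.4-pro",
  "gpt-5.4-pro-none": "gpt-5.4-pro", // still resolves; "none" then upgrades to "low"
  "gpt-5.4-pro-low": "gpt-5.4-pro",
  "gpt-5.4-pro-medium": "gpt-5.4-pro",
  "gpt-5.4-pro-high": "gpt-5.4-pro",
  "gpt-5.4-pro-xhigh": "gpt-5.4-pro",
};

// Capability flags per the rules above: both families accept xhigh,
// only the base family accepts none (pro upgrades none -> low instead).
function reasoningCaps(normalized: string) {
  const isPro = normalized === "gpt-5.4-pro";
  const isBase = normalized === "gpt-5.4";
  return { supportsXhigh: isPro || isBase, supportsNone: isBase };
}
```

Keeping `gpt-5.4-pro-none` in the map (rather than omitting it) means an explicit `-none` request still normalizes cleanly and then hits the `none -> low` upgrade, instead of falling through to the generic fallback model.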
- `lib/request/fetch-helpers.ts`
  - Fallback edge `gpt-5.4-pro -> gpt-5.4`.
  - `gpt-5.4` as latest general fallback option.

Prompt family/instructions
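The fallback policy can be sketched as a lookup over a chain of edges. `DEFAULT_UNSUPPORTED_CODEX_FALLBACK_CHAIN` is the name used in the review thread; the map shape, `resolveFallback` helper, and the single enable flag (which stands in for the per-edge toggles) are illustrative assumptions:

```typescript
// Sketch of the fallback edge resolution described above; not the shipped code.
const DEFAULT_UNSUPPORTED_CODEX_FALLBACK_CHAIN: Record<string, string> = {
  "gpt-5.4-pro": "gpt-5.4",         // new default edge added by this PR
  "gpt-5.3-codex": "gpt-5.2-codex", // existing legacy edge (separate toggle in reality)
};

function resolveFallback(model: string, fallbackEnabled: boolean): string {
  // Strict mode keeps the requested model so the entitlement error surfaces.
  if (!fallbackEnabled) return model;
  return DEFAULT_UNSUPPORTED_CODEX_FALLBACK_CHAIN[model] ?? model;
}
```

This keeps `gpt-5.4-pro` opt-in: users without the pro entitlement silently land on `gpt-5.4` when fallback is enabled, and get an explicit error when it is not.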
- `lib/prompts/codex.ts`
  - `gpt-5.4` model family detection.
  - Family resolves to the existing prompt file (`gpt_5_2_prompt.md`) per current Codex prompt layout.

Templates/docs/versioning
- `config/opencode-modern.json`
- `config/opencode-legacy.json`
- `README.md`
- `config/README.md`
- `docs/getting-started.md`
- `docs/configuration.md`
- `docs/troubleshooting.md`
- `docs/index.md`
- `docs/development/TESTING.md`
- `package.json` -> `5.4.2`
- `CHANGELOG.md` entry for `5.4.2`

Test Coverage Added
- `test/model-map.test.ts`
- `test/request-transformer.test.ts`
- `test/fetch-helpers.test.ts`
- `test/codex-prompts.test.ts`
- `test/index.test.ts`
- `test/property/helpers.ts`
- `test/property/transformer.property.test.ts`

Validation
- `npm run typecheck` passed
- `npm run lint` passed
- `npm test` passed

Notes for Reviewers
- `gpt-5.4-pro` remains opt-in/manual in templates to avoid entitlement surprises.

Summary by CodeRabbit
New Features
Documentation
Tests
Bug Fixes
Chores
note: greptile review for oc-chatgpt-multi-auth. cite files like `lib/foo.ts:123`. confirm regression tests + windows concurrency/token redaction coverage.

Greptile Summary
this pr adds runtime support for gpt-5.4 and optional gpt-5.4-pro across model normalization, reasoning configuration, prompt-family detection, fallback policy, and documentation. the rollout is conservative: gpt-5.4-pro stays opt-in in shipped templates and a default fallback edge (gpt-5.4-pro → gpt-5.4) is wired into the existing DEFAULT_UNSUPPORTED_CODEX_FALLBACK_CHAIN. previously-flagged gaps (gpt-5.4-pro-none missing from MODEL_MAP, stripped gpt-5.2-codex normalization tests) have been addressed.

key changes:
- model-map.ts — 12 new entries for gpt-5.4 and gpt-5.4-pro variants (including gpt-5.4-pro-none)
- request-transformer.ts — gpt-5.4-pro pattern checked before gpt-5.4; none → low upgrade for pro; supportsXhigh and supportsNone flags updated correctly
- codex.ts — "gpt-5.4" added as a ModelFamily; both gpt-5.4 and gpt-5.4-pro resolve to the same family (gpt_5_2_prompt.md / gpt-5.4-instructions.md), which is intentional and test-confirmed; prewarmCodexInstructions now includes gpt-5.4
- fetch-helpers.ts — gpt-5.4-pro → gpt-5.4 fallback edge added; entitlement error message updated
- index.ts — gpt-5.4 added first in quota probe list
- new gpt-5.4 tests: test/gpt54-models.test.ts, test/edge-cases.test.ts, test/helper-utils.test.ts added

the model normalization logic is correct, regex priority ordering is right (pro before base), xhigh/none/upgrade rules are consistent, fallback chain is clean, and test coverage is broad. one test description needs clarity for readability.
Confidence Score: 4/5
Flowchart
```mermaid
%%{init: {'theme': 'neutral'}}%%
flowchart TD
  A["incoming model string\n(e.g. openai/gpt-5.4-pro-high)"] --> B["strip provider prefix\n(split on '/')"]
  B --> C["MODEL_MAP lookup\ngetNormalizedModel()"]
  C -- hit --> D["normalized model ID\n(e.g. gpt-5.4-pro)"]
  C -- miss --> E["modelId.toLowerCase()"]
  E --> F{regex chain}
  F -- matches gpt-5.4-pro --> G["return 'gpt-5.4-pro'"]
  F -- matches gpt-5.4 --> H["return 'gpt-5.4'"]
  F -- matches other --> I["existing families\n(gpt-5.2, gpt-5-codex, etc.)"]
  F -- no match --> J["return 'gpt-5.1' (fallback)"]
  D --> K["getReasoningConfig()"]
  G --> K
  H --> K
  K --> L{isGpt54Pro?}
  L -- yes --> M["supportsNone = false\nnone → low upgrade\nsupportsXhigh = true"]
  L -- no --> N{isGpt54General?}
  N -- yes --> O["supportsNone = true\nsupportsXhigh = true"]
  K --> P["getModelFamily() for prompts"]
  P --> Q["both gpt-5.4 and gpt-5.4-pro\nmap to 'gpt-5.4' family\n→ gpt_5_2_prompt.md\n→ gpt-5.4-instructions.md"]
  G --> R{fallback policy?}
  R -- fallback enabled --> S["DEFAULT fallback chain:\ngpt-5.4-pro → gpt-5.4"]
  R -- strict --> T["entitlement error returned"]
```

Last reviewed commit: bff8479