fix(onboard): accept Codex auth in model check #80913
Conversation
Codex review: needs maintainer review before merge.

Summary: Reproducibility: yes. Source inspection of current main shows …

Best possible solution: Land this focused helper/test change once required checks and maintainer review are satisfied, keeping onboarding auth warnings aligned with runtime routing.

Do we have a high-confidence way to reproduce the issue? Yes. Source inspection of current main shows …

Is this the best way to solve the issue? Yes. Reusing …

What I checked:
Likely related people:
Remaining risk / open question:
Codex review notes: model gpt-5.5, reasoning high; reviewed against 75f5d6d9b5f1.
Force-pushed from 2a995c7 to b388869 (Compare)
Force-pushed from b388869 to f316a3e (Compare)
Landed via rebase onto main.
Thanks @rubencu!
Summary
The onboarding model check covered `openai/gpt-5.5` but only looked for direct `openai` auth profiles. Codex OAuth setup stores an `openai-codex:*` profile, so onboarding could warn that `openai` auth was missing even though the selected Codex runtime route was usable.

Change Type (select all)
Scope (select all touched areas)
Linked Issue/PR
Real behavior proof (required for external PRs)
- The reported warning: `No auth configured for provider "openai"` when the selected canonical model is `openai/gpt-5.5` and the agent has a usable `openai-codex` OAuth profile for the default Codex runtime route.
- Environment: commit `7ae4f38478001707e7d6cde09119521c2c315acf`, Node 22 via repo scripts, temporary isolated `OPENCLAW_STATE_DIR`, temporary agent auth store with redacted fake OAuth token material.
- Ran `timeout 30s node --import tsx` with a small harness that writes `agent/auth-profiles.json` containing `openai-codex:default`, calls `warnIfModelConfigLooksOff` for `openai/gpt-5.5`, and captures prompter notes.
- Result: no warning for `openai/gpt-5.5` with only an `openai-codex` profile.

Root Cause (if applicable)
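The root cause and fix can be illustrated with a minimal hypothetical sketch of the profile-matching logic. The names `AuthProfile` and `modelAuthSatisfied` and the profile-ID shapes are assumptions taken from this PR's description, not the repo's actual helper:

```typescript
// Hypothetical sketch of the onboarding auth check described in this PR.
// Before the fix, only a direct "openai" profile satisfied openai/* model
// refs; the fix also accepts Codex OAuth profiles ("openai-codex:*").
type AuthProfile = { id: string };

function modelAuthSatisfied(modelRef: string, profiles: AuthProfile[]): boolean {
  // Only openai/* canonical refs are checked in this sketch.
  if (!modelRef.startsWith("openai/")) return true;
  return profiles.some(
    (p) => p.id === "openai" || p.id.startsWith("openai-codex:"),
  );
}

// A Codex OAuth profile alone now satisfies the check.
console.log(modelAuthSatisfied("openai/gpt-5.5", [{ id: "openai-codex:default" }])); // prints "true"
```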
- The onboarding model check only matched direct `openai` auth profiles and ignored `openai-codex` auth profiles.
- The fix accepts `openai/*` model refs backed by Codex OAuth profiles during onboarding warning checks.

Regression Test Plan (if applicable)
- `src/commands/auth-choice.model-check.test.ts`: `openai/gpt-5.5` with a stored `openai-codex:default` OAuth profile emits no model-check warning when Codex runtime is selected.

User-visible / Behavior Changes
The onboarding model check no longer warns that OpenAI auth is missing after successful Codex OAuth setup for the canonical `openai/gpt-5.5` route. No new config surface.

Diagram (if applicable)
Security Impact (required)
- (Yes/No) No
- (Yes/No) No
- (Yes/No) No
- (Yes/No) No
- (Yes/No) No
- If Yes, explain risk + mitigation: N/A

Repro + Verification
Environment
- `openai/gpt-5.5` with `openai-codex` auth profile
- `agents.defaults.model.primary = "openai/gpt-5.5"`; temp auth profile provider `openai-codex`

Steps
1. Write an `openai-codex:default` OAuth profile.
2. Run `warnIfModelConfigLooksOff` against `agents.defaults.model.primary = "openai/gpt-5.5"` with catalog validation disabled, matching onboarding's post-auth warning path.

Expected
No `Model check` warning is emitted when Codex OAuth auth is present for the Codex runtime route.

Actual
model-check notes emitted: 0Evidence
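The repro steps above can be sketched as a standalone harness. The file layout and the inline check are simplified assumptions; the real run drives the repo's `warnIfModelConfigLooksOff` via `node --import tsx`:

```typescript
// Hypothetical repro-harness sketch: write a temporary auth store shaped like
// agent/auth-profiles.json with an openai-codex:default profile, then run a
// stand-in for warnIfModelConfigLooksOff and count the notes it would emit.
import { mkdtempSync, writeFileSync, readFileSync } from "node:fs";
import { tmpdir } from "node:os";
import { join } from "node:path";

const stateDir = mkdtempSync(join(tmpdir(), "openclaw-repro-")); // stand-in for OPENCLAW_STATE_DIR
const authPath = join(stateDir, "auth-profiles.json");
writeFileSync(
  authPath,
  JSON.stringify({ profiles: { "openai-codex:default": { token: "<redacted>" } } }),
);

// Stand-in check mirroring the fix: Codex OAuth profiles back openai/* refs.
const profiles = Object.keys(JSON.parse(readFileSync(authPath, "utf8")).profiles);
const satisfied = profiles.some((id) => id === "openai" || id.startsWith("openai-codex:"));
const notes = satisfied ? [] : ['No auth configured for provider "openai"'];
console.log(`model-check notes emitted: ${notes.length}`); // prints "model-check notes emitted: 0"
```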
Attach at least one:
Verification run:
Human Verification (required)
What you personally verified (not just CI), and how:
- Local repro of `openai/gpt-5.5` plus `openai-codex:default`; targeted regression tests; targeted lint; post-rebase focused test; local Codex review loop; manual live onboarding/auth test on this PR branch. Local test typecheck was also run after the typed mock fix.
- Verified that `models.providers.openai.baseUrl` does not borrow Codex OAuth auth and still warns for missing direct OpenAI auth.

Review Conversations
If a bot review conversation is addressed by this PR, resolve that conversation yourself. Do not leave bot review conversation cleanup for maintainers.
Compatibility / Migration
- (Yes/No) Yes
- (Yes/No) No
- (Yes/No) No

Risks and Mitigations