Support Codex ChatGPT auth #248
💡 Codex Review
Here are some automated review suggestions for this pull request.
Reviewed commit: 150961eed0
ℹ️ About Codex in GitHub
Your team has set up Codex to review pull requests in this repo. Reviews are triggered when you:
- Open a pull request for review
- Mark a draft as ready
- Comment "@codex review"
If Codex has suggestions, it will comment; otherwise it will react with 👍.
Codex can also answer questions or update the PR. Try commenting "@codex address that feedback".
```rust
let mut body = serde_json::json!({
    "model": request.model,
    "input": messages_to_responses_input(&request.messages),
    "stream": true,
    "store": false,
```
Forward max token budget to Responses API
build_responses_body never maps request.max_tokens into the Responses payload (e.g. max_output_tokens), so api.max_output_tokens and per-turn token caps are silently ignored in codex_chatgpt mode. That can materially increase latency/cost and defeats callers that expect provider implementations to honor ProviderRequest::max_tokens like the other backends do.
```rust
if response_has_function_call(parsed.get("response")) {
    stop_reason = Some(StopReason::ToolUse);
}
```
Handle incomplete Responses as MaxTokens stops
The response.completed path only marks ToolUse and otherwise falls back to EndTurn, but never checks response.status / incomplete_details for truncation reasons like max_output_tokens. In that case, truncated generations are treated as successful completions, so the max-output recovery path in query/mod.rs (which depends on StopReason::MaxTokens) will not run.
Summary
- Adds `auth_mode = "codex_chatgpt"` / `--auth-mode codex_chatgpt` to reuse an existing `codex login` ChatGPT session without storing tokens in agent-code config.
- Adds `auth.json` token loading/refresh and routes this auth mode through an OpenAI Responses provider path for `https://chatgpt.com/backend-api/codex`.

Test plan
- `cargo check --all-targets`
- `cargo clippy --all-targets -- -D warnings`
- `cargo fmt --all -- --check`
- `cargo test -p agent-code-lib codex_auth`
- `cargo test -p agent-code-lib openai::tests`
- `cargo test -p agent-code-lib api_config_parses_codex_chatgpt_auth_mode_from_toml`
- `cargo test --workspace --lib --tests -- --skip bwrap`
- `cargo run -p agent-code -- --auth-mode codex_chatgpt --model gpt-5.4 --prompt "Reply with exactly: ok" --max-turns 2 --verbose`
- `cargo test --all-targets` currently fails in this container on existing bwrap sandbox tests with `bwrap: setting up uid map: Permission denied`.
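For illustration, a minimal config sketch enabling the new mode (the `auth_mode` key comes from the summary above; the `[api]` table name is an assumption based on the `api.max_output_tokens` setting mentioned in the review):

```toml
# agent-code config (table name assumed)
[api]
auth_mode = "codex_chatgpt"
```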