Labels: bug (Something isn't working)
Description
What version of Codex is running?
codex-cli 0.40.0
Which model were you using?
gpt-5-codex
What platform is your computer?
Darwin 24.6.0 arm64 arm
What steps can reproduce the bug?
Configure Codex to use a LiteLLM model provider in the Codex config (`~/.codex/config.toml`):

```toml
model = "openai/gpt-5-codex"
model_provider = "litellm"

[model_providers.litellm]
name = "litellm"
base_url = "http://localhost:2113/v1"
env_key = "LITELLM_API_KEY"
```
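If the provider entry honors the `wire_api` setting (documented for Codex model providers; whether it applies to this version is an assumption), pinning it to the Responses API may serve as a workaround:

```toml
[model_providers.litellm]
name = "litellm"
base_url = "http://localhost:2113/v1"
env_key = "LITELLM_API_KEY"
# Assumption: wire_api selects the endpoint family, so "responses"
# makes Codex call {base_url}/responses rather than
# {base_url}/chat/completions.
wire_api = "responses"
```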
What is the expected behavior?
Codex should send only `/v1/responses` calls for `gpt-5-codex`.
What do you see instead?
We use an API proxy (LiteLLM) to access OpenAI models. When configured with `gpt-5-codex`, the Codex CLI sends `/v1/chat/completions` requests, which fail:
```
⚠️ stream error: unexpected status 400 Bad Request: {"error":{"message":"litellm.BadRequestError: OpenAIException - This model is only supported in v1/responses and not in v1/chat/completions.. Received Model Group=openai/gpt-5-codex\nAvailable Model Group Fallbacks=None","type":"invalid_request_error","param":"model","code":"400"}}; retrying 1/5 in 199ms
```
Manual API calls to the `gpt-5-codex` model succeed, so the problem is not the API proxy itself:
```bash
curl http://localhost:2113/v1/responses \
  -H "Content-Type: application/json" \
  -H "Authorization: Bearer $LITELLM_API_KEY" \
  -d '{
    "model": "openai/gpt-5-codex",
    "input": "Write python code to return n-th prime."
  }'
```
{"id":"resp_XXX","created_at":1758706138,"error":null,"incomplete_details":null,"instructions":null,"metadata":{},"model":"gpt-5-codex","object":"response","output":[{"id":"rs_XXX","summary":[],"type":"reasoning","status":null},{"id":"msg_XXX","content":[{"annotations":[],"text":"```python\nimport math\n\ndef nth_prime(n: int) -> int:\n \"\"\"\n Return the n-th prime number (1-indexed).
Additional information
No response