Description
What version of Codex is running?
codex-cli 0.71.0
What subscription do you have?
None
Which model were you using?
gpt-oss-20b and gpt-oss-120b
What platform is your computer?
Linux 6.17.10-100.fc41.x86_64 x86_64 unknown
What issue are you seeing?
When using a custom model provider hosting gpt-oss-20b and gpt-oss-120b, the majority of requests in Codex run for a few seconds and then stop, sometimes without producing any output. For example:
› Perform a full code review. Don't skip any files.
• Explored
└ List .
Read __init__.py, file.py
› What did you find?
The first request ("Perform a full code review. Don't skip any files.") ran for about 15 seconds and then stopped after looking at only a couple of files. The follow-up question yielded no response.
Some simple requests like "Create an empty file called r.txt" or "Delete the file r.txt" do complete successfully.
What steps can reproduce the bug?
Use a custom model provider, defined in ~/.codex/config.toml:
model = "openai/gpt-oss-20b"
model_provider = "rits"
[model_providers.rits]
requires_openai_auth = false
name = "RITS gpt-oss-20b"
base_url = "https://custom.url.com/gpt-oss-20b/v1"
wire_api = "chat"
env_http_headers = { "RITS_API_KEY" = "RITS_API_KEY" }

What is the expected behavior?
No response
Additional information
Performing similar actions with the same models hosted locally via Ollama works as expected (using codex --oss -m gpt-oss:20b).
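For comparison, a minimal sketch of what an equivalent explicit provider entry for the local Ollama setup might look like. The provider name, base_url, and model tag below are assumptions based on Ollama's default OpenAI-compatible endpoint; codex --oss configures this automatically, so this is only illustrative:

# Hypothetical provider entry for the locally hosted model (assumed values).
model = "gpt-oss:20b"
model_provider = "ollama"

[model_providers.ollama]
name = "Ollama (local)"
base_url = "http://localhost:11434/v1"  # Ollama's default OpenAI-compatible endpoint
wire_api = "chat"
requires_openai_auth = false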