## Problem

Codex v0.121+ ignores the `OPENAI_BASE_URL` environment variable when constructing WebSocket URLs for the Responses API. It hardcodes `wss://api.openai.com/v1/responses` regardless of what `OPENAI_BASE_URL` is set to.
When AWF runs with `--enable-api-proxy`:

- AWF sets `OPENAI_BASE_URL=http://172.30.0.30:10000` in the agent container
- AWF injects a placeholder `OPENAI_API_KEY` (the real key lives in the api-proxy sidecar)
- REST API calls work correctly: they respect `OPENAI_BASE_URL` → api-proxy injects the real key → success
- WebSocket connections fail: Codex connects directly to `wss://api.openai.com/v1/responses` with the placeholder key → 401 Unauthorized
This has been confirmed with Codex v0.121.0 log output showing `base_url: None` in `ModelProviderInfo` (the struct field comes from `config.toml`, not the env var) and a direct WebSocket connection to `api.openai.com`.
## Root Cause

Codex's built-in `openai` provider reads `OPENAI_BASE_URL` for REST but uses a hardcoded WebSocket URL. The top-level `openai_base_url` key in `config.toml` does not affect WebSocket URL construction in this version.

Additionally, the built-in `openai` provider ID is reserved and cannot be overridden via `[model_providers.openai]` in `config.toml`: Codex treats that table as a custom provider definition and requires a `name` field, causing:

```
Error loading config.toml: missing field `name` in `model_providers.openai`
```
## Workaround

The fix is to define a custom provider in `config.toml` that points directly to the api-proxy, with `supports_websockets = false`:

```toml
model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false
```
This forces Codex to use REST for all API calls. REST calls go to `172.30.0.30:10000` (the api-proxy), which replaces the placeholder key with the real `OPENAI_API_KEY` before forwarding to OpenAI.

Currently this is applied as a post-processing patch in `gh-aw-firewall`'s `postprocess-smoke-workflows.ts`. See: https://github.com/github/gh-aw-firewall/blob/chore/upgrade-workflows-20260421/scripts/ci/postprocess-smoke-workflows.ts
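As a concrete sketch, the workaround can be applied manually before launching Codex. This assumes Codex reads its config from `$CODEX_HOME/config.toml` (defaulting to `~/.codex`); the IP, port, and provider block are exactly the values above:

```shell
# Sketch: write the workaround config before launching Codex.
# Assumes CODEX_HOME (default ~/.codex) is where Codex looks for config.toml.
CODEX_HOME="${CODEX_HOME:-$HOME/.codex}"
mkdir -p "$CODEX_HOME"
cat > "$CODEX_HOME/config.toml" <<'EOF'
model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false
EOF
```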
## Requested Compiler Behavior

When the compiler generates a Codex workflow that uses `--enable-api-proxy` (i.e., the workflow has `engine: codex` or equivalent and api-proxy is enabled), it should automatically inject the above `config.toml` block into the generated `GH_AW_CODEX_SHELL_POLICY` heredoc.

This would eliminate the need for per-repo post-processing patches and ensure all Codex + api-proxy users get the correct behavior out of the box.
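A minimal sketch of the requested behavior, in the same language as the existing post-processing script. The `WorkflowSpec` shape and function name are assumptions for illustration, not the real compiler API:

```typescript
// Hypothetical workflow model; the real compiler's types will differ.
interface WorkflowSpec {
  engine: string;           // e.g. "codex"
  apiProxyEnabled: boolean; // true when --enable-api-proxy is in effect
}

// The exact config.toml block from the workaround above.
const PROXY_CONFIG_TOML = `model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false
`;

// Returns the config.toml block to inject into the generated heredoc,
// or null when the workflow does not use Codex with the api-proxy.
function codexProxyConfig(spec: WorkflowSpec): string | null {
  if (spec.engine !== "codex" || !spec.apiProxyEnabled) return null;
  return PROXY_CONFIG_TOML;
}
```

Gating on both conditions keeps the injection a no-op for non-Codex engines and for Codex runs without the proxy, so existing workflows are unaffected.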
## References

- Codex config reference: https://developers.openai.com/codex/config-reference
  - `model_providers.<id>.supports_websockets`: disables WebSocket transport for a provider
  - `model_providers.<id>`: custom provider definition; built-in IDs (`openai`, `ollama`, `lmstudio`) are reserved
- AWF api-proxy sidecar IP: `172.30.0.30`, OpenAI port: `10000`