Codex engine: compiler should inject custom openai-proxy provider to prevent WebSocket 401 with api-proxy #27710

@lpcox

Description

Problem

Codex v0.121+ ignores the OPENAI_BASE_URL environment variable when constructing WebSocket URLs for the Responses API. It hardcodes wss://api.openai.com/v1/responses regardless of what OPENAI_BASE_URL is set to.

When AWF runs with --enable-api-proxy:

  • AWF sets OPENAI_BASE_URL=http://172.30.0.30:10000 in the agent container
  • AWF injects a placeholder OPENAI_API_KEY (the real key lives in the api-proxy sidecar)
  • REST API calls work correctly: they respect OPENAI_BASE_URL → api-proxy injects real key → success
  • WebSocket connections fail: Codex connects directly to wss://api.openai.com/v1/responses with the placeholder key → 401 Unauthorized

This has been confirmed against Codex v0.121.0: its log output shows base_url: None in ModelProviderInfo (that struct field is populated from config.toml, not from the env var), and the WebSocket connects directly to api.openai.com.
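Respecting the env var would mean deriving the WebSocket endpoint from OPENAI_BASE_URL instead of hardcoding it. A minimal sketch of that derivation (illustrative only, not Codex's actual code):

```python
from urllib.parse import urlsplit, urlunsplit

def websocket_url(base_url: str, path: str = "/v1/responses") -> str:
    """Derive the WebSocket endpoint from an HTTP(S) base URL by
    swapping the scheme (http -> ws, https -> wss)."""
    parts = urlsplit(base_url)
    scheme = {"http": "ws", "https": "wss"}[parts.scheme]
    return urlunsplit((scheme, parts.netloc, path, "", ""))

# With AWF's proxy base URL, the WebSocket would stay inside the sandbox:
print(websocket_url("http://172.30.0.30:10000"))  # ws://172.30.0.30:10000/v1/responses
print(websocket_url("https://api.openai.com"))    # wss://api.openai.com/v1/responses
```

Under this scheme the connection would reach the api-proxy rather than bypassing it; in v0.121+ the second URL is effectively baked in.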

Root Cause

Codex's built-in openai provider reads OPENAI_BASE_URL for REST but uses a hardcoded WebSocket URL. The openai_base_url top-level config.toml key does not affect WebSocket URL construction in this version.

Additionally, the built-in openai provider ID is reserved and cannot be overridden via [model_providers.openai] in config.toml: Codex treats the entry as a new custom provider and requires a name field, so the override fails with:

Error loading config.toml: missing field `name` in `model_providers.openai`

Workaround

The workaround is to define a custom provider in config.toml that points directly at the api-proxy, with supports_websockets = false:

model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false

This forces Codex to use REST for all API calls. REST calls go to 172.30.0.30:10000 (the api-proxy), which replaces the placeholder key with the real OPENAI_API_KEY before forwarding to OpenAI.
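The key swap described above can be sketched as follows. This is an illustrative mock of the sidecar's behavior, not the real gh-aw-firewall implementation; the header names and key value are assumptions:

```python
# The sidecar alone holds the real key; the agent container only ever
# sees the placeholder.
REAL_KEY = "sk-real-example"

def rewrite_auth_header(headers: dict) -> dict:
    """Replace whatever Authorization header the agent sent with the
    real key before forwarding the request upstream."""
    forwarded = dict(headers)
    forwarded["Authorization"] = f"Bearer {REAL_KEY}"
    return forwarded

incoming = {
    "Authorization": "Bearer placeholder",
    "Content-Type": "application/json",
}
forwarded = rewrite_auth_header(incoming)
# forwarded carries the real key; the placeholder never reaches
# api.openai.com. A direct WebSocket connection skips this rewrite,
# which is exactly why it returns 401.
```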

Currently this is applied as a post-processing patch in gh-aw-firewall's postprocess-smoke-workflows.ts. See: https://github.com/github/gh-aw-firewall/blob/chore/upgrade-workflows-20260421/scripts/ci/postprocess-smoke-workflows.ts

Requested Compiler Behavior

When the compiler generates a Codex workflow that uses --enable-api-proxy (i.e., the workflow has engine: codex or equivalent and api-proxy is enabled), it should automatically inject the above config.toml block into the generated GH_AW_CODEX_SHELL_POLICY heredoc.

This would eliminate the need for per-repo post-processing patches and ensure all Codex + api-proxy users get the correct behavior out of the box.
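The requested compiler step could look roughly like the sketch below: append the provider block to the generated config.toml content whenever api-proxy is enabled. Function and variable names here are hypothetical, not the compiler's actual API:

```python
# TOML block copied verbatim from the workaround above.
PROXY_PROVIDER_BLOCK = """\
model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false
"""

def render_config(base_config: str, api_proxy_enabled: bool) -> str:
    """Return the config.toml content, injecting the openai-proxy
    provider only when the workflow runs with --enable-api-proxy."""
    if not api_proxy_enabled:
        return base_config
    return base_config.rstrip("\n") + "\n\n" + PROXY_PROVIDER_BLOCK
```

The same string would then be emitted into the generated GH_AW_CODEX_SHELL_POLICY heredoc, removing the need for the per-repo post-processing patch.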

References

  • Codex config reference: https://developers.openai.com/codex/config-reference
  • model_providers.<id>.supports_websockets — disables WebSocket transport for a provider
  • model_providers.<id> — custom provider definition; built-in IDs (openai, ollama, lmstudio) are reserved
  • AWF api-proxy sidecar IP: 172.30.0.30, OpenAI port: 10000
