Summary
After trying the v0.74.2 prerelease, the original Codex model-selection issue appears fixed: generated lock files now invoke Codex with `--model "$GH_AW_MODEL_AGENT_CODEX"` instead of `-c model=...`.
However, workflows using the OpenAI Codex engine now fail because the sandboxed Codex process reports:
Missing environment variable: `OPENAI_API_KEY`.
Reproduction Context
- Repo/PR: v0.74.2 compiled locks: Switch agentic workflows to OpenAI Codex (chrizbo/agentics-beyond-code#131)
- Workflow tested: `.github/workflows/daily-standup-prep.lock.yml`
- Triggered manually against branch `openai-aw-updates`
- Repo secret `OPENAI_API_KEY` is configured; `CODEX_API_KEY` is not configured
What Worked
The prerelease fixes the model flag generation. In the failed run, the logs show the requested model reaching Codex:
```
codex --model openai/gpt-5-mini exec ...
model: "openai/gpt-5-mini", model_provider_id: "openai-proxy"
Model: openai/gpt-5-mini
```
What Failed
The agent fails before doing any useful work with:
Missing environment variable: `OPENAI_API_KEY`.
The generated lock validates `OPENAI_API_KEY` successfully during activation and later sets both env vars for the Codex step:

```yaml
CODEX_API_KEY: ${{ secrets.CODEX_API_KEY || secrets.OPENAI_API_KEY }}
OPENAI_API_KEY: ${{ secrets.CODEX_API_KEY || secrets.OPENAI_API_KEY }}
```
But the generated `awf` command excludes both variables from the sandboxed execution:

```
--exclude-env CODEX_API_KEY ... --exclude-env OPENAI_API_KEY
```
The generated Codex config also points Codex at the OpenAI proxy while still declaring an env var requirement:

```toml
model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
env_key = "OPENAI_API_KEY"
supports_websockets = false
```
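If `env_key` is optional in Codex's provider configuration (an assumption on my part; I have not verified how Codex validates provider entries), one way out of the contradiction would be for gh-aw to emit the proxied provider entry without that requirement, e.g.:

```toml
# Hypothetical generated config for proxied execution (sketch only, not
# current gh-aw output): same provider entry as above, minus env_key.
model_provider = "openai-proxy"

[model_providers.openai-proxy]
name = "OpenAI AWF proxy"
base_url = "http://172.30.0.30:10000"
# env_key omitted: the proxy, not the sandboxed process, holds the real secret
supports_websockets = false
```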
Expected Behavior
When gh-aw runs Codex through the AWF OpenAI proxy, the sandboxed Codex process should not fail merely because the raw provider secret is intentionally excluded from the agent container.
Question
Should gh-aw:
- inject a non-secret placeholder `OPENAI_API_KEY` for the proxy provider,
- configure Codex to use a different auth path when `openai-proxy` is active, or
- omit/avoid `env_key = "OPENAI_API_KEY"` in the generated Codex config for proxied execution?
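For the first option, a sketch of what the generated Codex step could do instead of excluding the variable from the sandbox (the placeholder value and mechanism here are hypothetical, not current gh-aw behavior):

```yaml
# Hypothetical lock-file fragment: keep the real secret out of the agent
# container, but inject a dummy value that satisfies Codex's env_key check;
# the AWF proxy at base_url would ignore it and use its own credentials.
env:
  OPENAI_API_KEY: "awf-proxy-placeholder"  # non-secret placeholder
```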
Related
This was discovered while testing the prerelease suggested in #32413 (comment). I also left an initial comment there before realizing this should probably be tracked as a separate issue.