Support Codex OAuth in OpenShell using provider-backed auth.json placeholders #988

@zredlined

Description

Problem Statement

Codex users in org-managed OpenAI environments often authenticate through OpenAI OAuth rather than with an OPENAI_API_KEY. Today the practical path is often to sign into Codex inside each OpenShell sandbox, which writes real OAuth tokens to the sandbox-local ~/.codex/auth.json.

That works functionally, but it weakens OpenShell's credential isolation story: the sandboxed process can read the OAuth tokens from the filesystem.

We validated a current OpenShell-compatible pattern for running Codex OAuth in a sandbox without copying real OAuth tokens into the sandbox filesystem.

Proposed Design

Document and/or productize a Codex OAuth bootstrap pattern using provider-backed auth.json placeholders:

  1. Read local Codex OAuth state from the user's ~/.codex/auth.json.
  2. Store sensitive OAuth fields in an OpenShell provider.
  3. Launch the sandbox with that provider attached.
  4. Generate a sandbox-local ~/.codex/auth.json in which the sensitive token fields are openshell:resolve:env:* placeholders.
  5. Run Codex normally in OAuth mode.

Validated placeholder-backed fields:

  • tokens.access_token
  • tokens.refresh_token
  • tokens.account_id
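Steps 1 and 4 above can be sketched in Python. This is a minimal illustration, not OpenShell's actual helper API: the environment variable names (CODEX_ACCESS_TOKEN and friends) are assumptions, while the placeholder scheme and the three field names come from the validated list above.

```python
import json
from pathlib import Path

# Illustrative mapping of the validated sensitive fields to
# openshell:resolve:env:* placeholders. The env var names here
# are assumptions; only the placeholder prefix and field names
# come from the validated pattern.
PLACEHOLDER_FIELDS = {
    "access_token": "openshell:resolve:env:CODEX_ACCESS_TOKEN",
    "refresh_token": "openshell:resolve:env:CODEX_REFRESH_TOKEN",
    "account_id": "openshell:resolve:env:CODEX_ACCOUNT_ID",
}


def build_sandbox_auth(local_auth: dict) -> dict:
    """Return a copy of the local auth.json with the sensitive
    token fields replaced by provider-backed placeholders."""
    sandbox = json.loads(json.dumps(local_auth))  # cheap deep copy
    tokens = sandbox.setdefault("tokens", {})
    for field, placeholder in PLACEHOLDER_FIELDS.items():
        tokens[field] = placeholder
    return sandbox


# Demo with a non-secret sample structure (step 1 would instead
# read the user's real ~/.codex/auth.json, e.g. via Path.home()):
sample = {"tokens": {"access_token": "real-secret",
                     "refresh_token": "real-secret",
                     "account_id": "acct_example"}}
print(json.dumps(build_sandbox_auth(sample)["tokens"], indent=2))
```

The real secrets would be stored in the OpenShell provider (step 2), so the sandbox filesystem only ever sees placeholder strings.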

One nuance: a pure placeholder for tokens.id_token fails because Codex parses the value locally as a JWT before making any network request. A non-secret, JWT-shaped id_token was sufficient for that local parsing while the sensitive OAuth fields remained provider-backed.
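A JWT-shaped, non-secret id_token only needs the three-segment base64url structure (header.payload.signature) to parse locally. The sketch below shows that shape; the exact header and claims Codex requires are not documented here, so the alg value and payload contents are assumptions.

```python
import base64
import json


def b64url(data: bytes) -> str:
    """Base64url-encode without padding, as JWT segments are encoded."""
    return base64.urlsafe_b64encode(data).rstrip(b"=").decode()


def dummy_id_token() -> str:
    """Build a structurally valid but non-secret JWT-shaped string.

    It carries no real claims or signature; it only needs to parse
    as a JWT locally. Which claims Codex actually inspects is an
    open assumption here."""
    header = b64url(json.dumps({"alg": "none", "typ": "JWT"}).encode())
    payload = b64url(json.dumps({"sub": "placeholder"}).encode())
    signature = b64url(b"unsigned")
    return f"{header}.{payload}.{signature}"
```

Because the string contains nothing sensitive, it can safely sit verbatim in the sandbox-local auth.json next to the provider-backed placeholder fields.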

This should align with the upcoming Providers v2 work rather than replace it. The immediate value is giving users and demos a safe current pattern for Codex OAuth.

Alternatives Considered

  • Copy real auth.json into each sandbox. Functional, but it places OAuth secrets on the sandbox filesystem and should not be the recommended pattern.
  • Require API keys. This works with printenv OPENAI_API_KEY | codex login --with-api-key, and Codex then writes an auth.json containing only the OpenShell placeholder. However, many org-managed users cannot use this path because they authenticate through OAuth.
  • Wait for Providers v2. Providers v2 should likely own the long-term OAuth model, but the Codex OAuth workflow is useful now for getting-started docs and E2E demos.

Agent Investigation

Validated locally with OpenShell 0.0.36 and ghcr.io/nvidia/openshell-community/sandboxes/base:latest.

Findings:

  • Direct OpenAI API calls using a codex provider placeholder succeeded. /v1/models returned 200, confirming provider placeholder rewriting works.
  • codex exec with only provider-injected OPENAI_API_KEY failed until Codex was bootstrapped with codex login --with-api-key.
  • codex login --with-api-key writes ~/.codex/auth.json with OPENAI_API_KEY set to openshell:resolve:env:OPENAI_API_KEY, preserving credential isolation.
  • OAuth auth.json with placeholders for sensitive token fields worked when paired with a non-secret JWT-shaped id_token.
  • Codex successfully returned a dad joke through OAuth-backed chatgpt.com endpoints.
  • ab.chatgpt.com:443 was denied during the run but did not block completion.
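The credential-isolation property in the findings above (auth.json containing only placeholders, never real secrets) can be spot-checked with a small sketch. The helper below is hypothetical; the placeholder prefix and the sensitive field names come from the validated pattern.

```python
import json

PLACEHOLDER_PREFIX = "openshell:resolve:env:"

# The three sensitive OAuth fields validated as placeholder-backed.
SENSITIVE_FIELDS = ("access_token", "refresh_token", "account_id")


def leaked_secrets(auth: dict) -> list:
    """Return the sensitive token fields that are NOT provider
    placeholders, i.e. potential real secrets on the sandbox disk."""
    tokens = auth.get("tokens", {})
    return [f for f in SENSITIVE_FIELDS
            if isinstance(tokens.get(f), str)
            and not tokens[f].startswith(PLACEHOLDER_PREFIX)]


# Demo: a properly bootstrapped sandbox auth.json passes the check.
good = {"tokens": {f: PLACEHOLDER_PREFIX + f.upper()
                   for f in SENSITIVE_FIELDS}}
print(json.dumps(leaked_secrets(good)))  # an empty list means no leaks
```

A wrapper script could run this against the sandbox-local ~/.codex/auth.json before launch and refuse to start if anything other than placeholders is present.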

Example successful prompt:

codex exec "Return one short dad joke. No shell commands."

Example response:

Why don’t skeletons fight each other? They don’t have the guts.

Notes / Open Questions

  • Should this live as a getting-started example, a Codex-specific helper, or both?
  • Should the existing codex provider learn to discover OAuth auth state, or should that wait for Providers v2?
  • How should token refresh behavior be described and constrained?
  • Should ab.chatgpt.com remain denied as non-essential telemetry/experimentation, or be added to the Codex default policy?
