# agent-codex

Cross-platform Agent Skill for delegating tasks to OpenAI Codex. Any AI agent that supports Agent Skills can use this skill to call Codex for code generation, bug fixes, and feature implementation.

## Features

- Collects project context (conventions, tech stack, directory structure)
- Composes a self-contained prompt for Codex
- Calls Codex via MCP tool (if available) or CLI (fallback)
- Supports multi-turn conversations via `codex-reply` (MCP) or `codex resume` (CLI)
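The MCP-first, CLI-fallback order described above can be sketched as follows. This is an illustrative sketch, not the skill's actual code; `pick_backend` and the tool-name check are assumptions:

```python
import shutil

def pick_backend(mcp_tools: set[str]) -> str:
    """Hypothetical sketch of the skill's dispatch order:
    prefer the Codex MCP tool, otherwise fall back to the CLI."""
    if "codex" in mcp_tools:      # tool registered by `codex mcp-server`
        return "mcp"
    if shutil.which("codex"):     # Codex CLI found on PATH
        return "cli"
    raise RuntimeError("Codex is not available via MCP or CLI")
```

With the MCP server registered, `pick_backend({"codex", "codex-reply"})` returns `"mcp"`; with only the CLI installed, it returns `"cli"`.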
## Structure

```
agent-codex/
├── SKILL.md                  # Skill instructions
├── README.md                 # This file
├── scripts/
│   └── prepare-context.sh    # Collect project context for prompt
└── references/
    └── codex-cli.md          # Codex CLI reference
```
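The context-collection step might look roughly like the following Python sketch. The file names it probes and the output format are assumptions for illustration, not the behavior of the real `prepare-context.sh`:

```python
from pathlib import Path

def prepare_context(root: str) -> str:
    """Hypothetical sketch of context collection: top-level layout plus
    common convention files, ready to prepend to the Codex prompt."""
    root_path = Path(root)
    lines = ["## Project layout"]
    lines += sorted(p.name + ("/" if p.is_dir() else "")
                    for p in root_path.iterdir())
    # Convention files assumed here purely for illustration.
    for name in ("AGENTS.md", "CONTRIBUTING.md", "package.json"):
        f = root_path / name
        if f.is_file():
            lines.append(f"\n## {name}\n{f.read_text()}")
    return "\n".join(lines)
```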
## Installation

### Via the skills CLI (recommended)

```sh
npx skills add Ray0907/agent-codex
```

This auto-detects and installs to all supported agents (Claude Code, Codex CLI, Gemini CLI, Cursor, etc.).

To target a specific agent:

```sh
npx skills add Ray0907/agent-codex -a claude-code
npx skills add Ray0907/agent-codex -a cursor
```

### Manual installation

Symlink the skill directory into your agent's skills folder:

```sh
# Claude Code
ln -s /path/to/agent-codex ~/.claude/skills/agent-codex

# Codex CLI
ln -s /path/to/agent-codex ~/.codex/skills/agent-codex

# Gemini CLI
ln -s /path/to/agent-codex ~/.gemini/skills/agent-codex
```

## Usage

```
/agent-codex Fix the login page redirect bug
/agent-codex Implement rate limiting for the API endpoints
/agent-codex Refactor the auth module to use JWT
```
## Requirements

- OpenAI Codex CLI:

  ```sh
  npm install -g @openai/codex
  ```

- Or a Codex MCP server configured in your agent (see below)

## MCP setup

The skill uses Codex MCP tools when available, falling back to the CLI. To enable MCP mode:

### 1. Install Codex

```sh
npm install -g @openai/codex
```

Or via Homebrew:

```sh
brew install --cask codex
```

### 2. Authenticate

Sign in with your ChatGPT account (recommended):

```sh
codex   # Select "Sign in with ChatGPT"
```

Or set an API key:

```sh
export OPENAI_API_KEY="your-api-key"
```

### 3. Register the MCP server

The `codex mcp-server` command starts Codex as a stdio-based MCP server. Add it to your agent's configuration:
**Claude Code** (`~/.claude.json`):

```json
{
  "mcpServers": {
    "codex": {
      "type": "stdio",
      "command": "codex",
      "args": ["mcp-server"]
    }
  }
}
```

**Cursor** (`.cursor/mcp.json`):
```json
{
  "mcpServers": {
    "codex": {
      "command": "codex",
      "args": ["mcp-server"]
    }
  }
}
```

**Windsurf** (`~/.codeium/windsurf/mcp_config.json`):
```json
{
  "mcpServers": {
    "codex": {
      "command": "codex",
      "args": ["mcp-server"]
    }
  }
}
```

**OpenAI Agents SDK** (Python):
```python
import asyncio

from agents import Agent
from agents.mcp import MCPServerStdio

async def main() -> None:
    async with MCPServerStdio(
        name="Codex CLI",
        params={
            "command": "codex",
            "args": ["mcp-server"],
        },
        client_session_timeout_seconds=360000,
    ) as codex_mcp_server:
        agent = Agent(
            name="Developer",
            mcp_servers=[codex_mcp_server],
        )

asyncio.run(main())
```

You can test the MCP server with the MCP Inspector:
```sh
npx @modelcontextprotocol/inspector codex mcp-server
```

Or restart your agent and confirm the following MCP tools are available:
### `codex`

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Initial user prompt |
| `cwd` | string | No | Working directory |
| `approval-policy` | string | No | `untrusted`, `on-failure`, `on-request`, `never` |
| `sandbox` | string | No | `read-only`, `workspace-write`, `danger-full-access` |
| `model` | string | No | Model override (e.g. `o3`, `gpt-5.2-codex`) |
| `developer-instructions` | string | No | Injected as a developer-role message |
| `base-instructions` | string | No | Override default system instructions |
| `config` | object | No | Override `config.toml` settings |
| `profile` | string | No | Configuration profile from `config.toml` |

Returns a `threadId` in `structuredContent` for session continuation.
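As a concrete illustration of these parameters, the arguments for a `codex` tool call could be assembled like this. The prompt, path, and chosen values are placeholders; only the keys and allowed values come from the parameter table:

```python
# Hypothetical argument payload for the `codex` MCP tool.
codex_args = {
    "prompt": "Fix the login page redirect bug",
    "cwd": "/path/to/project",          # placeholder path
    "approval-policy": "on-failure",
    "sandbox": "workspace-write",
}

# Guard against typos in the enum-valued parameters.
ALLOWED_SANDBOXES = {"read-only", "workspace-write", "danger-full-access"}
assert codex_args["sandbox"] in ALLOWED_SANDBOXES
```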
### `codex-reply`

| Parameter | Type | Required | Description |
|---|---|---|---|
| `prompt` | string | Yes | Follow-up message |
| `threadId` | string | Yes | Thread ID from a prior response |
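Putting the two tools together, a multi-turn exchange carries the `threadId` returned by `codex` into the `codex-reply` arguments. In this sketch the response content and follow-up prompt are stand-ins, not real Codex output:

```python
# First response from the `codex` tool (structure as documented;
# the content string is a placeholder).
first_response = {
    "structuredContent": {
        "threadId": "019bbb20-bff6-7130-83aa-bf45ab33250e",
        "content": "Agent response text...",
    }
}

# Build the follow-up call for `codex-reply`, reusing the thread ID.
reply_args = {
    "prompt": "Also add a regression test for the redirect",
    "threadId": first_response["structuredContent"]["threadId"],
}
```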
Example response:

```json
{
  "structuredContent": {
    "threadId": "019bbb20-bff6-7130-83aa-bf45ab33250e",
    "content": "Agent response text..."
  }
}
```

## CLI fallback

If MCP tools are not detected, the skill automatically falls back to `codex exec` CLI mode.
For CLI follow-ups, resume the existing session instead of starting a fresh one:

```sh
codex resume <session-id> "Follow up instruction"
```

## License

MIT