156 changes: 145 additions & 11 deletions README.md
@@ -16,7 +16,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
- βœ… **Smart auto-updating Codex instructions** - Tracks latest stable release with ETag caching
- βœ… Full tool support (write, edit, bash, grep, etc.)
- βœ… Automatic tool remapping (Codex tools β†’ opencode tools)
- βœ… High reasoning effort with detailed thinking blocks
- βœ… Configurable reasoning effort and summaries (defaults: medium/auto)
- βœ… Modular architecture for easy maintenance

## Installation
@@ -82,17 +82,151 @@ Select "OpenAI" and choose:
## Usage

```bash
# Use gpt-5-codex with high reasoning (default)
# Use gpt-5-codex with plugin defaults (medium/auto/medium)
opencode run "create a hello world file" --model=openai/gpt-5-codex

# Or set as default in opencode.json
opencode run "solve this complex algorithm problem"
# Or use regular gpt-5 via ChatGPT subscription
opencode run "solve this complex problem" --model=openai/gpt-5

# Set as default model in opencode.json (see the snippet below)
opencode run "build a web app"
```
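
For the last command, a minimal `opencode.json` that makes `gpt-5-codex` the default model might look like the sketch below; it uses only fields that appear in the configuration examples later in this README.

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex"
}
```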

### Plugin Defaults

When no configuration is specified, the plugin uses these defaults for all GPT-5 models:

```json
{
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium"
}
```

- **`reasoningEffort: "medium"`** - Balanced computational effort for reasoning
- **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity
- **`textVerbosity: "medium"`** - Balanced output length

These defaults match the official Codex CLI behavior and can be customized (see Configuration below).

## Configuration

You can customize model behavior for both `gpt-5` and `gpt-5-codex` models accessed via ChatGPT subscription.

### Available Settings

⚠️ **Important**: The two models have different supported values. Only use values listed in the tables below to avoid API errors.

#### GPT-5 Model

| Setting | Supported Values | Plugin Default | Description |
|---------|-----------------|----------------|-------------|
| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | **`medium`** | Computational effort for reasoning |
| `reasoningSummary` | `auto`, `detailed` | **`auto`** | Verbosity of reasoning summaries |
| `textVerbosity` | `low`, `medium`, `high` | **`medium`** | Output length and detail level |

#### GPT-5-Codex Model

| Setting | Supported Values | Plugin Default | Description |
|---------|-----------------|----------------|-------------|
| `reasoningEffort` | `minimal`*, `low`, `medium`, `high` | **`medium`** | Computational effort for reasoning |
| `reasoningSummary` | `auto`, `detailed` | **`auto`** | Verbosity of reasoning summaries |
| `textVerbosity` | `medium` only | **`medium`** | Output length (`gpt-5-codex` supports only `medium`) |

\* `minimal` is auto-normalized to `low` for gpt-5-codex
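
For example, a per-model override that requests `minimal` for `gpt-5-codex` is accepted, but the plugin sends `low` to the backend. This is a sketch showing only the relevant keys:

```json
{
  "provider": {
    "openai": {
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningEffort": "minimal"
          }
        }
      }
    }
  }
}
```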

#### Shared Settings (Both Models)

| Setting | Values | Plugin Default | Description |
|---------|--------|----------------|-------------|
| `include` | Array of strings | `["reasoning.encrypted_content"]` | Additional response fields (for stateless reasoning) |
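
None of the configuration examples below override `include`; if you want to set it explicitly, it takes the same shape as the other options. The sketch below simply spells out the plugin default value:

```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "provider": {
    "openai": {
      "options": {
        "include": ["reasoning.encrypted_content"]
      }
    }
  }
}
```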

### Configuration Examples

#### Global Configuration

Apply the same settings to all GPT-5 models:

```json
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["opencode-openai-codex-auth"],
"model": "openai/gpt-5-codex",
"provider": {
"openai": {
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium"
}
}
}
}
```

#### Per-Model Configuration

Different settings for different models:

```json
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["opencode-openai-codex-auth"],
"provider": {
"openai": {
"models": {
"gpt-5-codex": {
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "medium"
}
},
"gpt-5": {
"options": {
"reasoningEffort": "high",
"reasoningSummary": "detailed",
"textVerbosity": "low"
}
}
}
}
}
}
```

#### Mixed Configuration

Global defaults with per-model overrides:

```json
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["opencode-openai-codex-auth"],
"model": "openai/gpt-5-codex",
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium"
},
"models": {
"gpt-5-codex": {
"options": {
"reasoningSummary": "detailed"
}
}
}
}
}
}
```

The plugin automatically configures:
- **High reasoning effort** for deep thinking
- **Detailed reasoning summaries** to show thought process
- **Medium text verbosity** for balanced output
In this example:
- `gpt-5-codex` uses: `reasoningEffort: "medium"`, `reasoningSummary: "detailed"` (overridden), `textVerbosity: "medium"`
- `gpt-5` uses all global defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"`

## How It Works

@@ -111,13 +245,13 @@ The plugin:
6. **Tool Remapping**: Injects instructions to map Codex tools to opencode tools:
- `apply_patch` β†’ `edit`
- `update_plan` β†’ `todowrite`
7. **Reasoning Configuration**: Forces high reasoning effort with detailed summaries
8. **History Filtering**: Removes stored conversation IDs since Codex uses `store: false`
7. **Reasoning Configuration**: Defaults to medium effort and auto summaries (configurable per-model)
8. **Encrypted Reasoning**: Includes encrypted reasoning content for stateless multi-turn conversations
9. **History Filtering**: Removes stored conversation IDs since Codex uses `store: false`
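
Putting these steps together, a request body after transformation looks roughly like the sketch below. This is illustrative only: it assumes the plugin defaults, and the `instructions` and `input` entries are placeholders for the fetched Codex instructions and the remapped conversation messages described above.

```json
{
  "model": "gpt-5-codex",
  "store": false,
  "instructions": "<Codex system instructions>",
  "input": ["<conversation messages plus the tool-remap message>"],
  "reasoning": { "effort": "medium", "summary": "auto" },
  "text": { "verbosity": "medium" },
  "include": ["reasoning.encrypted_content"]
}
```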

## Limitations

- **ChatGPT Plus/Pro required**: Must have an active ChatGPT Plus or Pro subscription
- **Medium text verbosity**: Codex only supports `medium` for text verbosity

## Troubleshooting

17 changes: 13 additions & 4 deletions index.mjs
@@ -21,9 +21,9 @@ export async function OpenAIAuthPlugin({ client }) {
provider: "openai",
/**
* @param {() => Promise<any>} getAuth
* @param {any} _provider
* @param {any} provider - Provider configuration from opencode.json
*/
async loader(getAuth, _provider) {
async loader(getAuth, provider) {
const auth = await getAuth();

// Only handle OAuth auth type, skip API key auth
@@ -43,6 +43,13 @@ export async function OpenAIAuthPlugin({ client }) {
return {};
}

// Extract user configuration from provider structure
// Supports both global options and per-model options, following the Anthropic pattern
const userConfig = {
global: provider?.options || {},
models: provider?.models || {},
};

// Fetch Codex instructions (cached with ETag)
const CODEX_INSTRUCTIONS = await getCodexInstructions();

@@ -118,8 +125,8 @@ export async function OpenAIAuthPlugin({ client }) {
body,
});

// Transform request body for Codex API
body = transformRequestBody(body, CODEX_INSTRUCTIONS);
// Transform request body for Codex API with user configuration
body = transformRequestBody(body, CODEX_INSTRUCTIONS, userConfig);

// Log transformed request
logRequest("after-transform", {
@@ -130,6 +137,8 @@ export async function OpenAIAuthPlugin({ client }) {
hasInput: !!body.input,
inputLength: body.input?.length,
reasoning: body.reasoning,
textVerbosity: body.text?.verbosity,
include: body.include,
body,
});

73 changes: 63 additions & 10 deletions lib/request-transformer.mjs
@@ -19,17 +19,53 @@ export function normalizeModel(model) {
}

/**
* Configure reasoning parameters based on model variant
* Extract configuration for a specific model
* Merges global options with model-specific options (model-specific takes precedence)
* @param {string} modelName - Model name (e.g., "gpt-5-codex")
* @param {object} userConfig - Full user configuration object
* @returns {object} Merged configuration for this model
*/
export function getModelConfig(modelName, userConfig = {}) {
const globalOptions = userConfig.global || {};
const modelOptions = userConfig.models?.[modelName]?.options || {};

// Model-specific options override global options
return { ...globalOptions, ...modelOptions };
}

/**
* Configure reasoning parameters based on model variant and user config
*
* NOTE: This plugin follows Codex CLI defaults instead of opencode defaults because:
* - We're accessing the ChatGPT backend API (not OpenAI Platform API)
* - opencode explicitly excludes gpt-5-codex from automatic reasoning configuration
* - Codex CLI has been thoroughly tested against this backend
*
* @param {string} originalModel - Original model name before normalization
* @param {object} userConfig - User configuration object
* @returns {object} Reasoning configuration
*/
export function getReasoningConfig(originalModel) {
export function getReasoningConfig(originalModel, userConfig = {}) {
const isLightweight =
originalModel?.includes("nano") || originalModel?.includes("mini");
const isCodex = originalModel?.includes("codex");

// Default based on model type (Codex CLI defaults)
const defaultEffort = isLightweight ? "minimal" : "medium";

// Get user-requested effort
let effort = userConfig.reasoningEffort || defaultEffort;

// Normalize "minimal" to "low" for gpt-5-codex
// Codex CLI does not provide a "minimal" preset for gpt-5-codex
// (only low/medium/high - see model_presets.rs:20-40)
if (isCodex && effort === "minimal") {
effort = "low";
}

return {
effort: isLightweight ? "minimal" : "high",
summary: "detailed", // Only supported value for gpt-5
effort,
summary: userConfig.reasoningSummary || "auto", // Changed from "detailed" to match Codex CLI
};
}

@@ -75,15 +111,26 @@ export function addToolRemapMessage(input, hasTools) {

/**
* Transform request body for Codex API
*
* NOTE: Configuration follows Codex CLI patterns instead of opencode defaults:
* - opencode sets textVerbosity="low" for gpt-5, but Codex CLI uses "medium"
* - opencode excludes gpt-5-codex from reasoning configuration
* - This plugin uses store=false (stateless), requiring encrypted reasoning content
*
* @param {object} body - Original request body
* @param {string} codexInstructions - Codex system instructions
* @param {object} userConfig - User configuration from loader
* @returns {object} Transformed request body
*/
export function transformRequestBody(body, codexInstructions) {
export function transformRequestBody(body, codexInstructions, userConfig = {}) {
const originalModel = body.model;
const normalizedModel = normalizeModel(body.model);

// Get model-specific configuration (merges global + per-model options)
const modelConfig = getModelConfig(normalizedModel, userConfig);

// Normalize model name
body.model = normalizeModel(body.model);
body.model = normalizedModel;

// Codex required fields
body.store = false;
Expand All @@ -96,19 +143,25 @@ export function transformRequestBody(body, codexInstructions) {
body.input = addToolRemapMessage(body.input, !!body.tools);
}

// Configure reasoning
const reasoningConfig = getReasoningConfig(originalModel);
// Configure reasoning (use model-specific config)
const reasoningConfig = getReasoningConfig(originalModel, modelConfig);
body.reasoning = {
...body.reasoning,
...reasoningConfig,
};

// Configure text verbosity
// Configure text verbosity (support user config)
// Default: "medium" (matches Codex CLI default for all GPT-5 models)
body.text = {
...body.text,
verbosity: "medium",
verbosity: modelConfig.textVerbosity || "medium",
};

// Add include for encrypted reasoning content
// Default: ["reasoning.encrypted_content"] (required for stateless operation with store=false)
// This allows reasoning context to persist across turns without server-side storage
body.include = modelConfig.include || ["reasoning.encrypted_content"];

// Remove unsupported parameters
body.max_output_tokens = undefined;
body.max_completion_tokens = undefined;
2 changes: 1 addition & 1 deletion package.json
@@ -1,6 +1,6 @@
{
"name": "opencode-openai-codex-auth",
"version": "1.0.3",
"version": "1.0.4",
"description": "OpenAI ChatGPT (Codex backend) OAuth auth plugin for opencode - use your ChatGPT Plus/Pro subscription instead of API credits",
"main": "./index.mjs",
"type": "module",
21 changes: 21 additions & 0 deletions test-config.json
@@ -0,0 +1,21 @@
{
"$schema": "https://opencode.ai/config.json",
"plugin": ["file:///home/code/projects/ben-vargas/ai-opencode-openai-codex-auth/config-support"],
"model": "openai/gpt-5-codex",
"provider": {
"openai": {
"options": {
"reasoningEffort": "medium",
"reasoningSummary": "auto",
"textVerbosity": "medium"
},
"models": {
"gpt-5-codex": {
"options": {
"reasoningSummary": "concise"
}
}
}
}
}
}