diff --git a/README.md b/README.md
index 45ac231..7245d2a 100644
--- a/README.md
+++ b/README.md
@@ -16,7 +16,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an
- ✅ **Smart auto-updating Codex instructions** - Tracks latest stable release with ETag caching
- ✅ Full tool support (write, edit, bash, grep, etc.)
- ✅ Automatic tool remapping (Codex tools → opencode tools)
-- ✅ High reasoning effort with detailed thinking blocks
+- ✅ Configurable reasoning effort and summaries (defaults: medium/auto)
- ✅ Modular architecture for easy maintenance

## Installation
@@ -82,17 +82,151 @@ Select "OpenAI" and choose:
## Usage

```bash
-# Use gpt-5-codex with high reasoning (default)
+# Use gpt-5-codex with plugin defaults (medium/auto/medium)
opencode run "create a hello world file" --model=openai/gpt-5-codex

-# Or set as default in opencode.json
-opencode run "solve this complex algorithm problem"
+# Or use regular gpt-5 via ChatGPT subscription
+opencode run "solve this complex problem" --model=openai/gpt-5
+
+# Set as default model in opencode.json
+opencode run "build a web app"
+```
+
+### Plugin Defaults
+
+When no configuration is specified, the plugin uses these defaults for all GPT-5 models:
+
+```json
+{
+  "reasoningEffort": "medium",
+  "reasoningSummary": "auto",
+  "textVerbosity": "medium"
+}
+```
+
+- **`reasoningEffort: "medium"`** - Balanced computational effort for reasoning
+- **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity
+- **`textVerbosity: "medium"`** - Balanced output length
+
+These defaults match the official Codex CLI behavior and can be customized (see Configuration below).
+
+## Configuration
+
+You can customize model behavior for both `gpt-5` and `gpt-5-codex` models accessed via ChatGPT subscription.
+
+### Available Settings
+
+⚠️ **Important**: The two models have different supported values. Only use values listed in the tables below to avoid API errors.
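
To make the constraint concrete, here is a small standalone sketch (a hypothetical helper, not shipped with the plugin) that checks option values against the supported-value tables below before they ever reach the API:

```javascript
// Hypothetical validator (not part of the plugin) mirroring the
// supported-value tables for the two ChatGPT-backend models.
const SUPPORTED_VALUES = {
  "gpt-5": {
    reasoningEffort: ["minimal", "low", "medium", "high"],
    reasoningSummary: ["auto", "detailed"],
    textVerbosity: ["low", "medium", "high"],
  },
  "gpt-5-codex": {
    reasoningEffort: ["minimal", "low", "medium", "high"], // "minimal" is normalized to "low"
    reasoningSummary: ["auto", "detailed"],
    textVerbosity: ["medium"], // codex only supports medium
  },
};

// Returns the option names whose values fall outside the table for `model`.
function findUnsupportedOptions(model, options) {
  const table = SUPPORTED_VALUES[model] || {};
  return Object.entries(options)
    .filter(([name, value]) => table[name] && !table[name].includes(value))
    .map(([name]) => name);
}

console.log(findUnsupportedOptions("gpt-5-codex", { textVerbosity: "high" })); // [ 'textVerbosity' ]
console.log(findUnsupportedOptions("gpt-5", { textVerbosity: "high" }));       // []
```

Running such a check locally (or just eyeballing the tables) avoids a round-trip that the backend would reject.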
+ +#### GPT-5 Model + +| Setting | Supported Values | Plugin Default | Description | +|---------|-----------------|----------------|-------------| +| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | **`medium`** | Computational effort for reasoning | +| `reasoningSummary` | `auto`, `detailed` | **`auto`** | Verbosity of reasoning summaries | +| `textVerbosity` | `low`, `medium`, `high` | **`medium`** | Output length and detail level | + +#### GPT-5-Codex Model + +| Setting | Supported Values | Plugin Default | Description | +|---------|-----------------|----------------|-------------| +| `reasoningEffort` | `minimal`*, `low`, `medium`, `high` | **`medium`** | Computational effort for reasoning | +| `reasoningSummary` | `auto`, `detailed` | **`auto`** | Verbosity of reasoning summaries | +| `textVerbosity` | `medium` only | **`medium`** | Output length (codex only supports medium) | + +\* `minimal` is auto-normalized to `low` for gpt-5-codex + +#### Shared Settings (Both Models) + +| Setting | Values | Plugin Default | Description | +|---------|--------|----------------|-------------| +| `include` | Array of strings | `["reasoning.encrypted_content"]` | Additional response fields (for stateless reasoning) | + +### Configuration Examples + +#### Global Configuration + +Apply the same settings to all GPT-5 models: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugin": ["opencode-openai-codex-auth"], + "model": "openai/gpt-5-codex", + "provider": { + "openai": { + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium" + } + } + } +} +``` + +#### Per-Model Configuration + +Different settings for different models: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugin": ["opencode-openai-codex-auth"], + "provider": { + "openai": { + "models": { + "gpt-5-codex": { + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium" + } + }, + 
"gpt-5": { + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "low" + } + } + } + } + } +} +``` + +#### Mixed Configuration + +Global defaults with per-model overrides: + +```json +{ + "$schema": "https://opencode.ai/config.json", + "plugin": ["opencode-openai-codex-auth"], + "model": "openai/gpt-5-codex", + "provider": { + "openai": { + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "auto", + "textVerbosity": "medium" + }, + "models": { + "gpt-5-codex": { + "options": { + "reasoningSummary": "detailed" + } + } + } + } + } +} ``` -The plugin automatically configures: -- **High reasoning effort** for deep thinking -- **Detailed reasoning summaries** to show thought process -- **Medium text verbosity** for balanced output +In this example: +- `gpt-5-codex` uses: `reasoningEffort: "medium"`, `reasoningSummary: "detailed"` (overridden), `textVerbosity: "medium"` +- `gpt-5` uses all global defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"` ## How It Works @@ -111,13 +245,13 @@ The plugin: 6. **Tool Remapping**: Injects instructions to map Codex tools to opencode tools: - `apply_patch` → `edit` - `update_plan` → `todowrite` -7. **Reasoning Configuration**: Forces high reasoning effort with detailed summaries -8. **History Filtering**: Removes stored conversation IDs since Codex uses `store: false` +7. **Reasoning Configuration**: Defaults to medium effort and auto summaries (configurable per-model) +8. **Encrypted Reasoning**: Includes encrypted reasoning content for stateless multi-turn conversations +9. 
**History Filtering**: Removes stored conversation IDs since Codex uses `store: false`

## Limitations

- **ChatGPT Plus/Pro required**: Must have an active ChatGPT Plus or Pro subscription
-- **Medium text verbosity**: Codex only supports `medium` for text verbosity

## Troubleshooting

diff --git a/index.mjs b/index.mjs
index 62a096a..1302876 100644
--- a/index.mjs
+++ b/index.mjs
@@ -21,9 +21,9 @@ export async function OpenAIAuthPlugin({ client }) {
    provider: "openai",
    /**
     * @param {() => Promise} getAuth
-    * @param {any} _provider
+    * @param {any} provider - Provider configuration from opencode.json
     */
-   async loader(getAuth, _provider) {
+   async loader(getAuth, provider) {
      const auth = await getAuth();

      // Only handle OAuth auth type, skip API key auth
@@ -43,6 +43,13 @@ export async function OpenAIAuthPlugin({ client }) {
        return {};
      }

+     // Extract user configuration from provider structure
+     // Supports both global options and per-model options following Anthropic pattern
+     const userConfig = {
+       global: provider?.options || {},
+       models: provider?.models || {},
+     };
+
      // Fetch Codex instructions (cached with ETag)
      const CODEX_INSTRUCTIONS = await getCodexInstructions();

@@ -118,8 +125,8 @@ export async function OpenAIAuthPlugin({ client }) {
        body,
      });

-     // Transform request body for Codex API
-     body = transformRequestBody(body, CODEX_INSTRUCTIONS);
+     // Transform request body for Codex API with user configuration
+     body = transformRequestBody(body, CODEX_INSTRUCTIONS, userConfig);

      // Log transformed request
      logRequest("after-transform", {
@@ -130,6 +137,8 @@ export async function OpenAIAuthPlugin({ client }) {
        hasInput: !!body.input,
        inputLength: body.input?.length,
        reasoning: body.reasoning,
+       textVerbosity: body.text?.verbosity,
+       include: body.include,
        body,
      });

diff --git a/lib/request-transformer.mjs b/lib/request-transformer.mjs
index 295075e..b8bbce8 100644
--- a/lib/request-transformer.mjs
+++ b/lib/request-transformer.mjs
@@ -19,17 +19,53 @@ export function normalizeModel(model) {
 }

 /**
- * Configure reasoning parameters based on model variant
+ * Extract configuration for a specific model
+ * Merges global options with model-specific options (model-specific takes precedence)
+ * @param {string} modelName - Model name (e.g., "gpt-5-codex")
+ * @param {object} userConfig - Full user configuration object
+ * @returns {object} Merged configuration for this model
+ */
+export function getModelConfig(modelName, userConfig = {}) {
+  const globalOptions = userConfig.global || {};
+  const modelOptions = userConfig.models?.[modelName]?.options || {};
+
+  // Model-specific options override global options
+  return { ...globalOptions, ...modelOptions };
+}
+
+/**
+ * Configure reasoning parameters based on model variant and user config
+ *
+ * NOTE: This plugin follows Codex CLI defaults instead of opencode defaults because:
+ * - We're accessing the ChatGPT backend API (not OpenAI Platform API)
+ * - opencode explicitly excludes gpt-5-codex from automatic reasoning configuration
+ * - Codex CLI has been thoroughly tested against this backend
+ *
  * @param {string} originalModel - Original model name before normalization
+ * @param {object} userConfig - User configuration object
  * @returns {object} Reasoning configuration
  */
-export function getReasoningConfig(originalModel) {
+export function getReasoningConfig(originalModel, userConfig = {}) {
   const isLightweight = originalModel?.includes("nano") || originalModel?.includes("mini");
+  const isCodex = originalModel?.includes("codex");
+
+  // Default based on model type (Codex CLI defaults)
+  const defaultEffort = isLightweight ? "minimal" : "medium";
+
+  // Get user-requested effort
+  let effort = userConfig.reasoningEffort || defaultEffort;
+
+  // Normalize "minimal" to "low" for gpt-5-codex
+  // Codex CLI does not provide a "minimal" preset for gpt-5-codex
+  // (only low/medium/high - see model_presets.rs:20-40)
+  if (isCodex && effort === "minimal") {
+    effort = "low";
+  }

   return {
-    effort: isLightweight ? "minimal" : "high",
-    summary: "detailed", // Only supported value for gpt-5
+    effort,
+    summary: userConfig.reasoningSummary || "auto", // Changed from "detailed" to match Codex CLI
   };
 }

@@ -75,15 +111,26 @@ export function addToolRemapMessage(input, hasTools) {
 /**
  * Transform request body for Codex API
+ *
+ * NOTE: Configuration follows Codex CLI patterns instead of opencode defaults:
+ * - opencode sets textVerbosity="low" for gpt-5, but Codex CLI uses "medium"
+ * - opencode excludes gpt-5-codex from reasoning configuration
+ * - This plugin uses store=false (stateless), requiring encrypted reasoning content
+ *
  * @param {object} body - Original request body
  * @param {string} codexInstructions - Codex system instructions
+ * @param {object} userConfig - User configuration from loader
  * @returns {object} Transformed request body
  */
-export function transformRequestBody(body, codexInstructions) {
+export function transformRequestBody(body, codexInstructions, userConfig = {}) {
   const originalModel = body.model;
+  const normalizedModel = normalizeModel(body.model);
+
+  // Get model-specific configuration (merges global + per-model options)
+  const modelConfig = getModelConfig(normalizedModel, userConfig);

   // Normalize model name
-  body.model = normalizeModel(body.model);
+  body.model = normalizedModel;

   // Codex required fields
   body.store = false;
@@ -96,19 +143,25 @@ export function transformRequestBody(body, codexInstructions) {
     body.input = addToolRemapMessage(body.input, !!body.tools);
   }

-  // Configure reasoning
-  const reasoningConfig = getReasoningConfig(originalModel);
+  // Configure reasoning (use model-specific config)
+  const reasoningConfig = getReasoningConfig(originalModel, modelConfig);
   body.reasoning = {
     ...body.reasoning,
     ...reasoningConfig,
   };

-  // Configure text verbosity
+  // Configure text verbosity (support user config)
+  // Default: "medium" (matches Codex CLI default for all GPT-5 models)
   body.text = {
     ...body.text,
-    verbosity: "medium",
+    verbosity: modelConfig.textVerbosity || "medium",
   };

+  // Add include for encrypted reasoning content
+  // Default: ["reasoning.encrypted_content"] (required for stateless operation with store=false)
+  // This allows reasoning context to persist across turns without server-side storage
+  body.include = modelConfig.include || ["reasoning.encrypted_content"];
+
   // Remove unsupported parameters
   body.max_output_tokens = undefined;
   body.max_completion_tokens = undefined;

diff --git a/package.json b/package.json
index 812d5c5..8b0bf8d 100644
--- a/package.json
+++ b/package.json
@@ -1,6 +1,6 @@
 {
   "name": "opencode-openai-codex-auth",
-  "version": "1.0.3",
+  "version": "1.0.4",
   "description": "OpenAI ChatGPT (Codex backend) OAuth auth plugin for opencode - use your ChatGPT Plus/Pro subscription instead of API credits",
   "main": "./index.mjs",
   "type": "module",

diff --git a/test-config.json b/test-config.json
new file mode 100644
index 0000000..5a6a751
--- /dev/null
+++ b/test-config.json
@@ -0,0 +1,21 @@
+{
+  "$schema": "https://opencode.ai/config.json",
+  "plugin": ["file:///home/code/projects/ben-vargas/ai-opencode-openai-codex-auth/config-support"],
+  "model": "openai/gpt-5-codex",
+  "provider": {
+    "openai": {
+      "options": {
+        "reasoningEffort": "medium",
+        "reasoningSummary": "auto",
+        "textVerbosity": "medium"
+      },
+      "models": {
+        "gpt-5-codex": {
+          "options": {
+            "reasoningSummary": "concise"
+          }
+        }
+      }
+    }
+  }
+}

diff --git a/test-config.mjs b/test-config.mjs
new file mode 100755
index 0000000..1e6b51e
--- /dev/null
+++ b/test-config.mjs
@@ -0,0 +1,108 @@
+#!/usr/bin/env node
+import { getModelConfig, getReasoningConfig } from "./lib/request-transformer.mjs"; + +console.log("=== Testing Configuration Parsing ===\n"); + +// Simulate provider config from opencode.json +const providerConfig = { + options: { + reasoningEffort: "medium", + reasoningSummary: "auto", + textVerbosity: "medium", + }, + models: { + "gpt-5-codex": { + options: { + reasoningSummary: "concise", // Override global + }, + }, + "gpt-5": { + options: { + reasoningEffort: "high", // Override global + }, + }, + }, +}; + +// Build userConfig structure (same as in index.mjs) +const userConfig = { + global: providerConfig.options || {}, + models: providerConfig.models || {}, +}; + +console.log("Provider Config:"); +console.log(JSON.stringify(providerConfig, null, 2)); +console.log("\n"); + +// Test 1: gpt-5-codex (should merge global + model-specific) +console.log("Test 1: gpt-5-codex configuration"); +const codexConfig = getModelConfig("gpt-5-codex", userConfig); +console.log("Merged config:", codexConfig); +console.log("Expected: reasoningEffort='medium' (global), reasoningSummary='concise' (override), textVerbosity='medium' (global)"); +console.log("✓ Pass:", + codexConfig.reasoningEffort === "medium" && + codexConfig.reasoningSummary === "concise" && + codexConfig.textVerbosity === "medium" +); +console.log("\n"); + +// Test 2: gpt-5 (should merge global + model-specific) +console.log("Test 2: gpt-5 configuration"); +const gpt5Config = getModelConfig("gpt-5", userConfig); +console.log("Merged config:", gpt5Config); +console.log("Expected: reasoningEffort='high' (override), reasoningSummary='auto' (global), textVerbosity='medium' (global)"); +console.log("✓ Pass:", + gpt5Config.reasoningEffort === "high" && + gpt5Config.reasoningSummary === "auto" && + gpt5Config.textVerbosity === "medium" +); +console.log("\n"); + +// Test 3: Reasoning config with user settings +console.log("Test 3: Reasoning config for gpt-5-codex"); +const reasoningConfig = 
getReasoningConfig("gpt-5-codex", codexConfig); +console.log("Reasoning config:", reasoningConfig); +console.log("Expected: effort='medium', summary='concise'"); +console.log("✓ Pass:", + reasoningConfig.effort === "medium" && + reasoningConfig.summary === "concise" +); +console.log("\n"); + +// Test 4: Defaults when no config provided +console.log("Test 4: Defaults with empty config"); +const emptyConfig = getModelConfig("gpt-5-codex", {}); +const defaultReasoning = getReasoningConfig("gpt-5-codex", emptyConfig); +console.log("Empty config:", emptyConfig); +console.log("Default reasoning:", defaultReasoning); +console.log("Expected: effort='medium' (default), summary='auto' (default)"); +console.log("✓ Pass:", + defaultReasoning.effort === "medium" && + defaultReasoning.summary === "auto" +); +console.log("\n"); + +// Test 5: Lightweight model defaults +console.log("Test 5: Lightweight model (gpt-5-nano)"); +const nanoReasoning = getReasoningConfig("gpt-5-nano", {}); +console.log("Nano reasoning:", nanoReasoning); +console.log("Expected: effort='minimal' (lightweight default), summary='auto'"); +console.log("✓ Pass:", + nanoReasoning.effort === "minimal" && + nanoReasoning.summary === "auto" +); +console.log("\n"); + +// Test 6: Normalize "minimal" to "low" for gpt-5-codex +console.log("Test 6: Normalize minimal→low for gpt-5-codex (Codex CLI doesn't support minimal)"); +const codexMinimalConfig = { reasoningEffort: "minimal" }; +const codexMinimalReasoning = getReasoningConfig("gpt-5-codex", codexMinimalConfig); +console.log("Config:", codexMinimalConfig); +console.log("Reasoning result:", codexMinimalReasoning); +console.log("Expected: effort='low' (normalized from minimal), summary='auto'"); +console.log("✓ Pass:", + codexMinimalReasoning.effort === "low" && + codexMinimalReasoning.summary === "auto" +); + +console.log("\n=== All Tests Complete ===");
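
For reference, the precedence rule these tests exercise can be reduced to a standalone sketch that needs no plugin import (`mergeModelOptions` is an illustrative name; the diff's real helper is `getModelConfig` in `lib/request-transformer.mjs`):

```javascript
// Illustrative sketch of the merge precedence from lib/request-transformer.mjs:
// per-model options override global provider options, key by key.
function mergeModelOptions(modelName, providerConfig) {
  const globalOptions = providerConfig.options || {};
  const modelOptions = providerConfig.models?.[modelName]?.options || {};
  return { ...globalOptions, ...modelOptions }; // model-specific keys win
}

const providerConfig = {
  options: { reasoningEffort: "medium", reasoningSummary: "auto" },
  models: { "gpt-5-codex": { options: { reasoningSummary: "detailed" } } },
};

console.log(mergeModelOptions("gpt-5-codex", providerConfig));
// { reasoningEffort: 'medium', reasoningSummary: 'detailed' }
console.log(mergeModelOptions("gpt-5", providerConfig));
// { reasoningEffort: 'medium', reasoningSummary: 'auto' }
```

A plain object spread is enough here because all option values are flat strings or arrays; no deep merge is required.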