From 1cabd2e3379e37174e4d12806d38c45bd7b7b99a Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 13:50:01 -0500 Subject: [PATCH 01/13] feat: update docs --- AGENTS.md | 9 +++--- README.md | 63 ++++++++++++++++++++++++++++++++------- config/README.md | 8 ++--- config/full-opencode.json | 34 ++++++++++++++++++++- 4 files changed, 95 insertions(+), 19 deletions(-) diff --git a/AGENTS.md b/AGENTS.md index 81db2f9..2e92a23 100644 --- a/AGENTS.md +++ b/AGENTS.md @@ -4,7 +4,7 @@ This file provides coding guidance for AI agents (including Claude Code, Codex, ## Overview -This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits. +This is an **opencode plugin** that enables OAuth authentication with OpenAI's ChatGPT Plus/Pro Codex backend. It allows users to access `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5-codex`, `gpt-5-codex-mini`, `gpt-5.1`, and `gpt-5` models through their ChatGPT subscription instead of using OpenAI Platform API credits. **Key architecture principle**: 7-step fetch flow that intercepts opencode's OpenAI SDK requests, transforms them for the ChatGPT backend API, and handles OAuth token management. @@ -41,7 +41,7 @@ The main entry point orchestrates a **7-step fetch flow**: 1. **Token Management**: Check token expiration, refresh if needed 2. **URL Rewriting**: Transform OpenAI Platform API URLs → ChatGPT backend API (`https://chatgpt.com/backend-api/codex/responses`) 3. 
**Request Transformation**: - - Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`) + - Normalize model names (all variants → `gpt-5.1`, `gpt-5.1-codex`, `gpt-5.1-codex-max`, `gpt-5.1-codex-mini`, `gpt-5`, `gpt-5-codex`, or `codex-mini-latest`) - Inject Codex system instructions from latest GitHub release - Apply reasoning configuration (effort, summary, verbosity) - Add CODEX_MODE bridge prompt (default) or tool remap message (legacy) @@ -98,13 +98,14 @@ The main entry point orchestrates a **7-step fetch flow**: - Plugin defaults: `reasoningEffort: "medium"`, `reasoningSummary: "auto"`, `textVerbosity: "medium"` **4. Model Normalization**: +- All `gpt-5.1-codex-max*` variants → `gpt-5.1-codex-max` - All `gpt-5.1-codex*` variants → `gpt-5.1-codex` - All `gpt-5.1-codex-mini*` variants → `gpt-5.1-codex-mini` - All `gpt-5-codex` variants → `gpt-5-codex` - All `gpt-5-codex-mini*` or `codex-mini-latest` variants → `codex-mini-latest` - All `gpt-5.1` variants → `gpt-5.1` - All `gpt-5` variants → `gpt-5` -- `minimal` effort auto-normalized to `low` for gpt-5-codex (API limitation) and clamped to `medium` (or `high` when requested) for Codex Mini +- `minimal` effort auto-normalized to `low` for Codex families and clamped to `medium` (or `high` when requested) for Codex Mini **5. 
Codex Instructions Caching**: - Fetches from latest release tag (not main branch) @@ -150,7 +151,7 @@ This plugin **intentionally differs from opencode defaults** because it accesses | Setting | opencode Default | This Plugin Default | Reason | |---------|-----------------|---------------------|--------| -| `reasoningEffort` | "high" (gpt-5) | "medium" | Matches Codex CLI default | +| `reasoningEffort` | "high" (gpt-5) | "medium" (Codex Max defaults to "high") | Matches Codex CLI default and Codex Max capabilities | | `textVerbosity` | "low" (gpt-5) | "medium" | Matches Codex CLI default | | `reasoningSummary` | "detailed" | "auto" | Matches Codex CLI default | | gpt-5-codex config | (excluded) | Full support | opencode excludes gpt-5-codex from auto-config | diff --git a/README.md b/README.md index 9edfe53..936e536 100644 --- a/README.md +++ b/README.md @@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an ## Features - ✅ **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription -- ✅ **8 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, and GPT 5.1 Codex Mini presets for common reasoning levels +- ✅ **10 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including new `xhigh` for Codex Max) - ⚠️ **GPT 5.1 only** - Older GPT 5.0 models are deprecated and may not work reliably - ✅ **Zero external dependencies** - Lightweight with only @openauthjs/openauth - ✅ **Auto-refreshing tokens** - Handles token expiration automatically @@ -130,6 +130,38 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an "store": false } }, + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + 
"reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-xhigh": { + "name": "GPT 5.1 Codex Max Extra High (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "xhigh", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, "gpt-5.1-codex-mini-medium": { "name": "GPT 5.1 Codex Mini Medium (OAuth)", "limit": { @@ -293,6 +325,8 @@ If using the full configuration, select from the model picker in opencode, or sp # Use different reasoning levels for gpt-5.1-codex opencode run "simple task" --model=openai/gpt-5.1-codex-low opencode run "complex task" --model=openai/gpt-5.1-codex-high +opencode run "large refactor" --model=openai/gpt-5.1-codex-max +opencode run "research-grade analysis" --model=openai/gpt-5.1-codex-max-xhigh # Use different reasoning levels for gpt-5.1 opencode run "quick question" --model=openai/gpt-5.1-low @@ -312,6 +346,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t | `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation | | `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks | | `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools | +| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | High | Long-horizon builds, large refactors | +| `gpt-5.1-codex-max-xhigh` | GPT 5.1 Codex Max Extra High (OAuth) | xHigh | Deep multi-hour agent loops, research/debug marathons | | `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini Medium (OAuth) | Medium | Latest Codex mini tier | | `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning | | `gpt-5.1-low` | GPT 5.1 Low (OAuth) | Low | Faster responses with light reasoning | @@ -322,6 +358,8 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t **Display**: TUI shows 
the friendly name (e.g., "GPT 5.1 Codex Low (OAuth)") > **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output). +> +> **Note**: Codex Max uses the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support plus `xhigh` reasoning. > **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results. @@ -357,6 +395,8 @@ When no configuration is specified, the plugin uses these defaults for all GPT-5 - **`reasoningSummary: "auto"`** - Automatically adapts summary verbosity - **`textVerbosity: "medium"`** - Balanced output length +Codex Max defaults to `reasoningEffort: "high"` when selected, while other families default to `medium`. + These defaults match the official Codex CLI behavior and can be customized (see Configuration below). ## Configuration @@ -364,7 +404,7 @@ These defaults match the official Codex CLI behavior and can be customized (see ### ⚠️ REQUIRED: Use Pre-Configured File **YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration: -- 8 pre-configured GPT 5.1 model variants with verified settings +- 10 pre-configured GPT 5.1 model variants with verified settings - Optimal configuration for each reasoning level - All variants visible in the opencode model selector - Required metadata for OpenCode features to work properly @@ -379,16 +419,19 @@ If you want to customize settings yourself, you can configure options at provide #### Available Settings -⚠️ **Important**: The two base models have different supported values. +⚠️ **Important**: Families have different supported values. 
-| Setting | GPT-5 Values | GPT-5-Codex Values | Plugin Default | -|---------|-------------|-------------------|----------------| -| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `medium` | -| `reasoningSummary` | `auto`, `detailed` | `auto`, `detailed` | `auto` | -| `textVerbosity` | `low`, `medium`, `high` | `medium` only | `medium` | -| `include` | Array of strings | Array of strings | `["reasoning.encrypted_content"]` | +| Setting | GPT-5 / GPT-5.1 Values | GPT-5.1-Codex Values | GPT-5.1-Codex-Max Values | Plugin Default | +|---------|-----------------------|----------------------|---------------------------|----------------| +| `reasoningEffort` | `minimal`, `low`, `medium`, `high` | `low`, `medium`, `high` | `none`, `low`, `medium`, `high`, `xhigh` | `medium` (global), `high` default for Codex Max | +| `reasoningSummary` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed` | `auto`, `concise`, `detailed`, `off`, `on` | `auto` | +| `textVerbosity` | `low`, `medium`, `high` | `medium` or `high` | `medium` or `high` | `medium` | +| `include` | Array of strings | Array of strings | Array of strings | `["reasoning.encrypted_content"]` | -> **Note**: `minimal` effort is auto-normalized to `low` for gpt-5-codex (not supported by the API). +> **Notes**: +> - `minimal` effort is auto-normalized to `low` for Codex models. +> - Codex Mini clamps to `medium`/`high`; `xhigh` downgrades to `high`. +> - Codex Max supports `none`/`xhigh` plus expanded output limits (~400k). 
#### Global Configuration Example diff --git a/config/README.md b/config/README.md index 3392bdc..f62ffc3 100644 --- a/config/README.md +++ b/config/README.md @@ -14,15 +14,15 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json **Why this is required:** - GPT 5 models can be temperamental and need proper configuration -- Contains 8 verified GPT 5.1 model variants (Codex, Codex Mini, and general GPT 5.1) +- Contains 10 verified GPT 5.1 model variants (Codex, Codex Max, Codex Mini, and general GPT 5.1) - Includes all required metadata for OpenCode features - Guaranteed to work reliably - Global options for all models + per-model configuration overrides **What's included:** -- All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini -- Proper reasoning effort settings for each variant -- Context limits (272k context / 128k output) +- All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini +- Proper reasoning effort settings for each variant (including new `xhigh` for Codex Max) +- Context limits (272k context / 128k output for core Codex; Codex Max allows larger outputs) - Required options: `store: false`, `include: ["reasoning.encrypted_content"]` ### ❌ Other Configurations (NOT SUPPORTED) diff --git a/config/full-opencode.json b/config/full-opencode.json index 02315db..2cba6f7 100644 --- a/config/full-opencode.json +++ b/config/full-opencode.json @@ -63,6 +63,38 @@ "store": false } }, + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-xhigh": { + "name": "GPT 5.1 Codex Max Extra High (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "xhigh", + "reasoningSummary": 
"detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, "gpt-5.1-codex-mini-medium": { "name": "GPT 5.1 Codex Mini Medium (OAuth)", "limit": { @@ -146,4 +178,4 @@ } } } -} \ No newline at end of file +} From c85dda1ade690e74c626ab48f187166914765900 Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 13:50:08 -0500 Subject: [PATCH 02/13] feat: more docs --- docs/configuration.md | 19 +++++++++++++++---- docs/getting-started.md | 33 ++++++++++++++++++++++++++++++++- docs/index.md | 4 ++-- 3 files changed, 49 insertions(+), 7 deletions(-) diff --git a/docs/configuration.md b/docs/configuration.md index c8fea57..5462adb 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -57,9 +57,17 @@ Controls computational effort for reasoning. - `medium` - Balanced (default) - `high` - Maximum code quality +**GPT-5.1-Codex-Max Values:** +- `none` - No dedicated reasoning phase +- `low` - Light reasoning +- `medium` - Balanced +- `high` - Deep reasoning (default for this family) +- `xhigh` - Extra depth for long-horizon tasks + **Notes**: -- `minimal` auto-converts to `low` for gpt-5-codex (API limitation) -- `gpt-5-codex-mini*` and `gpt-5.1-codex-mini*` only support `medium` or `high`; lower settings are clamped to `medium` +- `minimal` auto-converts to `low` for Codex models +- `gpt-5-codex-mini*` and `gpt-5.1-codex-mini*` only support `medium` or `high`; lower settings are clamped to `medium` and `xhigh` downgrades to `high` +- Codex Max supports `none` and `xhigh` and defaults to `high` when not specified **Example:** ```json @@ -76,7 +84,10 @@ Controls reasoning summary verbosity. **Values:** - `auto` - Automatically adapts (default) +- `concise` - Short summaries - `detailed` - Verbose summaries +- `off` - Disable reasoning summary (Codex Max supports) +- `on` - Force enable summary (Codex Max supports) **Example:** ```json @@ -96,8 +107,8 @@ Controls output length. 
- `medium` - Balanced (default) - `high` - Verbose -**GPT-5-Codex:** -- `medium` only (API limitation) +**GPT-5-Codex / Codex Max:** +- `medium` or `high` (Codex Max defaults to `medium`) **Example:** ```json diff --git a/docs/getting-started.md b/docs/getting-started.md index 4ca6443..0bad194 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -94,6 +94,34 @@ Add this to `~/.config/opencode/opencode.json`: "store": false } }, + "gpt-5.1-codex-max": { + "name": "GPT 5.1 Codex Max (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + "store": false + } + }, + "gpt-5.1-codex-max-xhigh": { + "name": "GPT 5.1 Codex Max Extra High (OAuth)", + "limit": { + "context": 272000, + "output": 400000 + }, + "options": { + "reasoningEffort": "xhigh", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + "store": false + } + }, "gpt-5.1-codex-mini-medium": { "name": "GPT 5.1 Codex Mini Medium (OAuth)", "limit": { @@ -172,13 +200,16 @@ Add this to `~/.config/opencode/opencode.json`: **What you get:** - ✅ GPT 5.1 Codex (Low/Medium/High reasoning) + - ✅ GPT 5.1 Codex Max (High/xHigh reasoning, larger outputs) - ✅ GPT 5.1 Codex Mini (Medium/High reasoning) - ✅ GPT 5.1 (Low/Medium/High reasoning) - - ✅ 272k context + 128k output window for every preset + - ✅ 272k context + 128k output window for core presets (Codex Max expands output to ~400k) - ✅ All visible in OpenCode model selector - ✅ Optimal settings for each reasoning level > **Note**: All `gpt-5.1-codex-mini*` presets use 272k context / 128k output limits. +> +> **Note**: Codex Max presets map to `gpt-5.1-codex-max` with 272k input and expanded ~400k output plus `xhigh` reasoning. 
Prompt caching is enabled out of the box: when OpenCode sends its session identifier as `prompt_cache_key`, the plugin forwards it untouched so multi-turn runs reuse prior work. The CODEX_MODE bridge prompt bundled with the plugin is kept in sync with the latest Codex CLI release, so the OpenCode UI and Codex share the same tool contract. If you hit your ChatGPT subscription limits, the plugin returns a friendly Codex-style message with the 5-hour and weekly usage windows so you know when capacity resets. diff --git a/docs/index.md b/docs/index.md index e2b6ff5..002f7b2 100644 --- a/docs/index.md +++ b/docs/index.md @@ -82,8 +82,8 @@ opencode run "write hello world to test.txt" --model=openai/gpt-5-codex ## Features ✅ **OAuth Authentication** - Secure ChatGPT Plus/Pro login -✅ **GPT 5.1 Models** - gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-mini (8 pre-configured variants) -✅ **Per-Model Configuration** - Different reasoning effort, verbosity for each variant +✅ **GPT 5.1 Models** - gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini (10 pre-configured variants) +✅ **Per-Model Configuration** - Different reasoning effort, including new `xhigh` for Codex Max ✅ **Multi-Turn Conversations** - Full conversation history with stateless backend ✅ **Verified Configuration** - Use `config/full-opencode.json` for guaranteed compatibility ✅ **Comprehensive Testing** - 160+ unit tests + 14 integration tests From a058c215a5e5574c8df26859f87cfdec82578162 Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 13:59:44 -0500 Subject: [PATCH 03/13] feat: update model map to add gpt-5.1-codex-max --- lib/request/helpers/model-map.ts | 15 ++++++++++++--- 1 file changed, 12 insertions(+), 3 deletions(-) diff --git a/lib/request/helpers/model-map.ts b/lib/request/helpers/model-map.ts index df0d4a1..9c087ae 100644 --- a/lib/request/helpers/model-map.ts +++ b/lib/request/helpers/model-map.ts @@ -12,14 +12,23 @@ * Value: The normalized model name to send to the API */ export 
const MODEL_MAP: Record<string, string> = { - // ============================================================================ - // GPT-5.1 Codex Models - // ============================================================================ +// ============================================================================ +// GPT-5.1 Codex Models +// ============================================================================ "gpt-5.1-codex": "gpt-5.1-codex", "gpt-5.1-codex-low": "gpt-5.1-codex", "gpt-5.1-codex-medium": "gpt-5.1-codex", "gpt-5.1-codex-high": "gpt-5.1-codex", + // ============================================================================ + // GPT-5.1 Codex Max Models + // ============================================================================ + "gpt-5.1-codex-max": "gpt-5.1-codex-max", + "gpt-5.1-codex-max-low": "gpt-5.1-codex-max", + "gpt-5.1-codex-max-medium": "gpt-5.1-codex-max", + "gpt-5.1-codex-max-high": "gpt-5.1-codex-max", + "gpt-5.1-codex-max-xhigh": "gpt-5.1-codex-max", + // ============================================================================ // GPT-5.1 Codex Mini Models // ============================================================================ From 90d04fb9f9b39aec8602c586b7c4a0fb3db19931 Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 14:02:30 -0500 Subject: [PATCH 04/13] feat: adjust types and request for codex max (high and xhigh/extra high), add tests --- lib/request/request-transformer.ts | 45 ++++++++++++++-------- lib/types.ts | 8 ++-- test/request-transformer.test.ts | 61 ++++++++++++++++++++++++++++++ 3 files changed, 95 insertions(+), 19 deletions(-) diff --git a/lib/request/request-transformer.ts b/lib/request/request-transformer.ts index a723a80..4026bf1 100644 --- a/lib/request/request-transformer.ts +++ b/lib/request/request-transformer.ts @@ -38,7 +38,15 @@ export function normalizeModel(model: string | undefined): string { const normalized = modelId.toLowerCase(); // Priority order for pattern matching (most 
specific first): - // 1. GPT-5.1 Codex Mini + // 1. GPT-5.1 Codex Max + if ( + normalized.includes("gpt-5.1-codex-max") || + normalized.includes("gpt 5.1 codex max") + ) { + return "gpt-5.1-codex-max"; + } + + // 2. GPT-5.1 Codex Mini if ( normalized.includes("gpt-5.1-codex-mini") || normalized.includes("gpt 5.1 codex mini") @@ -46,7 +54,7 @@ export function normalizeModel(model: string | undefined): string { return "gpt-5.1-codex-mini"; } - // 2. Legacy Codex Mini + // 3. Legacy Codex Mini if ( normalized.includes("codex-mini-latest") || normalized.includes("gpt-5-codex-mini") || @@ -55,7 +63,7 @@ export function normalizeModel(model: string | undefined): string { return "codex-mini-latest"; } - // 3. GPT-5.1 Codex + // 4. GPT-5.1 Codex if ( normalized.includes("gpt-5.1-codex") || normalized.includes("gpt 5.1 codex") @@ -63,17 +71,17 @@ export function normalizeModel(model: string | undefined): string { return "gpt-5.1-codex"; } - // 4. GPT-5.1 (general-purpose) + // 5. GPT-5.1 (general-purpose) if (normalized.includes("gpt-5.1") || normalized.includes("gpt 5.1")) { return "gpt-5.1"; } - // 5. GPT-5 Codex family (any variant with "codex") + // 6. GPT-5 Codex family (any variant with "codex") if (normalized.includes("codex")) { return "gpt-5-codex"; } - // 6. GPT-5 family (any variant) + // 7. GPT-5 family (any variant) if (normalized.includes("gpt-5") || normalized.includes("gpt 5")) { return "gpt-5"; } @@ -117,6 +125,9 @@ export function getReasoningConfig( userConfig: ConfigOptions = {}, ): ReasoningConfig { const normalizedOriginal = originalModel?.toLowerCase() ?? 
""; + const isCodexMax = + normalizedOriginal.includes("codex-max") || + normalizedOriginal.includes("codex max"); const isCodexMini = normalizedOriginal.includes("codex-mini") || normalizedOriginal.includes("codex mini") || @@ -129,27 +140,31 @@ export function getReasoningConfig( normalizedOriginal.includes("mini")); // Default based on model type (Codex CLI defaults) - const defaultEffort: "minimal" | "low" | "medium" | "high" = isCodexMini + const defaultEffort: ReasoningConfig["effort"] = isCodexMini ? "medium" - : isLightweight - ? "minimal" - : "medium"; + : isCodexMax + ? "high" + : isLightweight + ? "minimal" + : "medium"; // Get user-requested effort let effort = userConfig.reasoningEffort || defaultEffort; if (isCodexMini) { - if (effort === "minimal" || effort === "low") { + if (effort === "minimal" || effort === "low" || effort === "none") { effort = "medium"; } - if (effort !== "high") { + if (effort === "xhigh") { + effort = "high"; + } + if (effort !== "high" && effort !== "medium") { effort = "medium"; } } - // Normalize "minimal" to "low" for gpt-5-codex - // Codex CLI does not provide a "minimal" preset for gpt-5-codex - // (only low/medium/high - see model_presets.rs:20-40) + // Normalize "minimal" to "low" for Codex families + // Codex CLI presets are low/medium/high (or xhigh for Codex Max) if (isCodex && effort === "minimal") { effort = "low"; } diff --git a/lib/types.ts b/lib/types.ts index eff4baf..80c8b02 100644 --- a/lib/types.ts +++ b/lib/types.ts @@ -27,8 +27,8 @@ export interface UserConfig { * Configuration options for reasoning and text settings */ export interface ConfigOptions { - reasoningEffort?: "minimal" | "low" | "medium" | "high"; - reasoningSummary?: "auto" | "concise" | "detailed"; + reasoningEffort?: "none" | "minimal" | "low" | "medium" | "high" | "xhigh"; + reasoningSummary?: "auto" | "concise" | "detailed" | "off" | "on"; textVerbosity?: "low" | "medium" | "high"; include?: string[]; } @@ -37,8 +37,8 @@ export interface 
ConfigOptions { * Reasoning configuration for requests */ export interface ReasoningConfig { - effort: "minimal" | "low" | "medium" | "high"; - summary: "auto" | "concise" | "detailed"; + effort: "none" | "minimal" | "low" | "medium" | "high" | "xhigh"; + summary: "auto" | "concise" | "detailed" | "off" | "on"; } /** diff --git a/test/request-transformer.test.ts b/test/request-transformer.test.ts index 8548d59..fd8b932 100644 --- a/test/request-transformer.test.ts +++ b/test/request-transformer.test.ts @@ -73,6 +73,13 @@ describe('Request Transformer Module', () => { expect(normalizeModel('openai/codex-mini-latest')).toBe('codex-mini-latest'); }); + it('should normalize gpt-5.1 codex max presets', async () => { + expect(normalizeModel('gpt-5.1-codex-max')).toBe('gpt-5.1-codex-max'); + expect(normalizeModel('gpt-5.1-codex-max-high')).toBe('gpt-5.1-codex-max'); + expect(normalizeModel('gpt-5.1-codex-max-xhigh')).toBe('gpt-5.1-codex-max'); + expect(normalizeModel('openai/gpt-5.1-codex-max-medium')).toBe('gpt-5.1-codex-max'); + }); + it('should normalize gpt-5.1 codex and mini slugs', async () => { expect(normalizeModel('gpt-5.1-codex')).toBe('gpt-5.1-codex'); expect(normalizeModel('openai/gpt-5.1-codex')).toBe('gpt-5.1-codex'); @@ -735,6 +742,60 @@ describe('Request Transformer Module', () => { expect(result.reasoning?.effort).toBe('low'); }); + it('should clamp xhigh to high for codex-mini', async () => { + const body: RequestBody = { + model: 'gpt-5.1-codex-mini-high', + input: [], + }; + const userConfig: UserConfig = { + global: { reasoningEffort: 'xhigh' }, + models: {}, + }; + const result = await transformRequestBody(body, codexInstructions, userConfig); + expect(result.reasoning?.effort).toBe('high'); + }); + + it('should clamp none to medium for codex-mini', async () => { + const body: RequestBody = { + model: 'gpt-5.1-codex-mini-medium', + input: [], + }; + const userConfig: UserConfig = { + global: { reasoningEffort: 'none' }, + models: {}, + }; + const 
result = await transformRequestBody(body, codexInstructions, userConfig); + expect(result.reasoning?.effort).toBe('medium'); + }); + + it('should default codex-max to high effort', async () => { + const body: RequestBody = { + model: 'gpt-5.1-codex-max', + input: [], + }; + const result = await transformRequestBody(body, codexInstructions); + expect(result.reasoning?.effort).toBe('high'); + }); + + it('should preserve xhigh for codex-max when requested', async () => { + const body: RequestBody = { + model: 'gpt-5.1-codex-max-xhigh', + input: [], + }; + const userConfig: UserConfig = { + global: { reasoningSummary: 'auto' }, + models: { + 'gpt-5.1-codex-max-xhigh': { + options: { reasoningEffort: 'xhigh', reasoningSummary: 'detailed' }, + }, + }, + }; + const result = await transformRequestBody(body, codexInstructions, userConfig); + expect(result.model).toBe('gpt-5.1-codex-max'); + expect(result.reasoning?.effort).toBe('xhigh'); + expect(result.reasoning?.summary).toBe('detailed'); + }); + it('should preserve minimal for non-codex models', async () => { const body: RequestBody = { model: 'gpt-5', From f57d8b96fd1b6201484087f9c651cba708e2c06b Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 14:02:40 -0500 Subject: [PATCH 05/13] feat: update test script --- scripts/test-all-models.sh | 2 ++ 1 file changed, 2 insertions(+) diff --git a/scripts/test-all-models.sh b/scripts/test-all-models.sh index ac00390..48691f9 100755 --- a/scripts/test-all-models.sh +++ b/scripts/test-all-models.sh @@ -153,6 +153,8 @@ update_config "full" test_model "gpt-5.1-codex-low" "gpt-5.1-codex" "low" "auto" "medium" test_model "gpt-5.1-codex-medium" "gpt-5.1-codex" "medium" "auto" "medium" test_model "gpt-5.1-codex-high" "gpt-5.1-codex" "high" "detailed" "medium" +test_model "gpt-5.1-codex-max" "gpt-5.1-codex-max" "high" "detailed" "medium" +test_model "gpt-5.1-codex-max-xhigh" "gpt-5.1-codex-max" "xhigh" "detailed" "medium" # GPT 5.1 Codex Mini presets (medium/high only) 
test_model "gpt-5.1-codex-mini-medium" "gpt-5.1-codex-mini" "medium" "auto" "medium" From eb71373e87312b0a02ac4c306b8ad552e17a605b Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 14:03:08 -0500 Subject: [PATCH 06/13] feat: update changelog --- CHANGELOG.md | 9 +++++++++ 1 file changed, 9 insertions(+) diff --git a/CHANGELOG.md b/CHANGELOG.md index 40f5ab8..248e108 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -2,6 +2,15 @@ All notable changes to this project are documented here. Dates use the ISO format (YYYY-MM-DD). +## [3.3.0] - 2025-11-19 +### Added +- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with expanded output window and `xhigh` reasoning. +- Typing and config support for new reasoning options (`none`/`xhigh`, summary `off`/`on`) plus updated test matrix entries. + +### Changed +- Codex Mini clamping now downgrades unsupported `xhigh` to `high` and guards against `none`/`minimal` inputs. +- Documentation, config guides, and validation scripts now reflect 10 verified GPT 5.1 variants including Codex Max. + ## [3.2.0] - 2025-11-14 ### Added - GPT 5.1 model family support: normalization for `gpt-5.1`, `gpt-5.1-codex`, and `gpt-5.1-codex-mini` plus new GPT 5.1-only presets in the canonical `config/full-opencode.json`. 
From 8bb8912bfdbf14bcb6a4f27a89f1296583b53852 Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 14:03:25 -0500 Subject: [PATCH 07/13] feat: npm --- package-lock.json | 18 +++++++++--------- 1 file changed, 9 insertions(+), 9 deletions(-) diff --git a/package-lock.json b/package-lock.json index 883a1a0..763197d 100644 --- a/package-lock.json +++ b/package-lock.json @@ -570,7 +570,6 @@ "resolved": "https://registry.npmjs.org/@oslojs/asn1/-/asn1-1.0.0.tgz", "integrity": "sha512-zw/wn0sj0j0QKbIXfIlnEcTviaCzYOY3V5rAyjR6YtOByFtJiT574+8p9Wlach0lZH9fddD4yb9laEAIl4vXQA==", "license": "MIT", - "peer": true, "dependencies": { "@oslojs/binary": "1.0.0" } @@ -579,15 +578,13 @@ "version": "1.0.0", "resolved": "https://registry.npmjs.org/@oslojs/binary/-/binary-1.0.0.tgz", "integrity": "sha512-9RCU6OwXU6p67H4NODbuxv2S3eenuQ4/WFLrsq+K/k682xrznH5EVWA7N4VFk9VYVcbFtKqur5YQQZc0ySGhsQ==", - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/@oslojs/crypto": { "version": "1.0.1", "resolved": "https://registry.npmjs.org/@oslojs/crypto/-/crypto-1.0.1.tgz", "integrity": "sha512-7n08G8nWjAr/Yu3vu9zzrd0L9XnrJfpMioQcvCMxBIiF5orECHe5/3J0jmXRVvgfqMm/+4oxlQ+Sq39COYLcNQ==", "license": "MIT", - "peer": true, "dependencies": { "@oslojs/asn1": "1.0.0", "@oslojs/binary": "1.0.0" @@ -597,15 +594,13 @@ "version": "1.1.0", "resolved": "https://registry.npmjs.org/@oslojs/encoding/-/encoding-1.1.0.tgz", "integrity": "sha512-70wQhgYmndg4GCPxPPxPGevRKqTIJ2Nh4OkiMWmDAVYsTQ+Ta7Sq+rPevXyXGdzr30/qZBnyOalCszoMxlyldQ==", - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/@oslojs/jwt": { "version": "0.2.0", "resolved": "https://registry.npmjs.org/@oslojs/jwt/-/jwt-0.2.0.tgz", "integrity": "sha512-bLE7BtHrURedCn4Mco3ma9L4Y1GR2SMBuIvjWr7rmQ4/W/4Jy70TIAgZ+0nIlk0xHz1vNP8x8DCns45Sb2XRbg==", "license": "MIT", - "peer": true, "dependencies": { "@oslojs/encoding": "0.4.1" } @@ -614,8 +609,7 @@ "version": "0.4.1", "resolved": 
"https://registry.npmjs.org/@oslojs/encoding/-/encoding-0.4.1.tgz", "integrity": "sha512-hkjo6MuIK/kQR5CrGNdAPZhS01ZCXuWDRJ187zh6qqF2+yMHZpD9fAYpX8q2bOO6Ryhl3XpCT6kUX76N8hhm4Q==", - "license": "MIT", - "peer": true + "license": "MIT" }, "node_modules/@polka/url": { "version": "1.0.0-next.29", @@ -975,6 +969,7 @@ "integrity": "sha512-d2L25Y4j+W3ZlNAeMKcy7yDsK425ibcAOO2t7aPTz6gNMH0z2GThtwENCDc0d/Pw9wgyRqE5Px1wkV7naz8ang==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "undici-types": "~7.13.0" } @@ -1643,6 +1638,7 @@ "resolved": "https://registry.npmjs.org/hono/-/hono-4.10.4.tgz", "integrity": "sha512-YG/fo7zlU3KwrBL5vDpWKisLYiM+nVstBQqfr7gCPbSYURnNEP9BDxEMz8KfsDR9JX0lJWDRNc6nXX31v7ZEyg==", "license": "MIT", + "peer": true, "engines": { "node": ">=16.9.0" } @@ -1982,6 +1978,7 @@ "integrity": "sha512-5gTmgEY/sqK6gFXLIsQNH19lWb4ebPDLA4SdLP7dsWkIXHWlG66oPuVvXSGFPppYZz8ZDZq0dYYrbHfBCVUb1Q==", "dev": true, "license": "MIT", + "peer": true, "engines": { "node": ">=12" }, @@ -2294,6 +2291,7 @@ "integrity": "sha512-jl1vZzPDinLr9eUt3J/t7V6FgNEw9QjvBPdysz9KfQDD41fQrC2Y4vKQdiaUpFT4bXlb1RHhLpp8wtm6M5TgSw==", "dev": true, "license": "Apache-2.0", + "peer": true, "bin": { "tsc": "bin/tsc", "tsserver": "bin/tsserver" @@ -2336,6 +2334,7 @@ "integrity": "sha512-ZWyE8YXEXqJrrSLvYgrRP7p62OziLW7xI5HYGWFzOvupfAlrLvURSzv/FyGyy0eidogEM3ujU+kUG1zuHgb6Ug==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "esbuild": "^0.25.0", "fdir": "^6.5.0", @@ -2441,6 +2440,7 @@ "integrity": "sha512-LUCP5ev3GURDysTWiP47wRRUpLKMOfPh+yKTx3kVIEiu5KOMeqzpnYNsKyOoVrULivR8tLcks4+lga33Whn90A==", "dev": true, "license": "MIT", + "peer": true, "dependencies": { "@types/chai": "^5.2.2", "@vitest/expect": "3.2.4", From 54f62d89f19e42fa41a8db83a6fc8c6cfb81cfdb Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 19:56:38 -0500 Subject: [PATCH 08/13] fix: add missing models --- scripts/test-all-models.sh | 12 +++++++----- 1 file changed, 7 insertions(+), 5 deletions(-) diff 
--git a/scripts/test-all-models.sh b/scripts/test-all-models.sh index 48691f9..57ff88e 100755 --- a/scripts/test-all-models.sh +++ b/scripts/test-all-models.sh @@ -150,11 +150,13 @@ update_config() { update_config "full" # GPT 5.1 Codex presets -test_model "gpt-5.1-codex-low" "gpt-5.1-codex" "low" "auto" "medium" -test_model "gpt-5.1-codex-medium" "gpt-5.1-codex" "medium" "auto" "medium" -test_model "gpt-5.1-codex-high" "gpt-5.1-codex" "high" "detailed" "medium" -test_model "gpt-5.1-codex-max" "gpt-5.1-codex-max" "high" "detailed" "medium" -test_model "gpt-5.1-codex-max-xhigh" "gpt-5.1-codex-max" "xhigh" "detailed" "medium" +test_model "gpt-5.1-codex-low" "gpt-5.1-codex" "low" "auto" "medium" +test_model "gpt-5.1-codex-medium" "gpt-5.1-codex" "medium" "auto" "medium" +test_model "gpt-5.1-codex-high" "gpt-5.1-codex" "high" "detailed" "medium" +test_model "gpt-5.1-codex-max-low" "gpt-5.1-codex-max" "low" "detailed" "medium" +test_model "gpt-5.1-codex-max-medium" "gpt-5.1-codex-max" "medium" "detailed" "medium" +test_model "gpt-5.1-codex-max-high" "gpt-5.1-codex-max" "high" "detailed" "medium" +test_model "gpt-5.1-codex-max-xhigh" "gpt-5.1-codex-max" "xhigh" "detailed" "medium" # GPT 5.1 Codex Mini presets (medium/high only) test_model "gpt-5.1-codex-mini-medium" "gpt-5.1-codex-mini" "medium" "auto" "medium" From 6b3929093289c69f19ca64a8d06441a2dfee2009 Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 20:07:13 -0500 Subject: [PATCH 09/13] fix: docs --- README.md | 61 +++++++++++++++++++++++++++++++++++---- config/README.md | 2 +- config/full-opencode.json | 52 +++++++++++++++++++++++++++++++-- docs/getting-started.md | 46 +++++++++++++++++++++++++++-- docs/index.md | 2 +- 5 files changed, 152 insertions(+), 11 deletions(-) diff --git a/README.md b/README.md index 936e536..1e6782b 100644 --- a/README.md +++ b/README.md @@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an ## Features - ✅ **ChatGPT Plus/Pro OAuth 
authentication** - Use your existing subscription -- ✅ **10 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including new `xhigh` for Codex Max) +- ✅ **12 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including `gpt-5.1-codex-max-low/medium/high/xhigh`) - ⚠️ **GPT 5.1 only** - Older GPT 5.0 models are deprecated and may not work reliably - ✅ **Zero external dependencies** - Lightweight with only @openauthjs/openauth - ✅ **Auto-refreshing tokens** - Handles token expiration automatically @@ -146,6 +146,54 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an "store": false } }, + "gpt-5.1-codex-max-low": { + "name": "GPT 5.1 Codex Max Low (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "low", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-medium": { + "name": "GPT 5.1 Codex Max Medium (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-high": { + "name": "GPT 5.1 Codex Max High (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, "gpt-5.1-codex-max-xhigh": { "name": "GPT 5.1 Codex Max Extra High (OAuth)", "limit": { @@ -251,8 +299,9 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an **Global config**: `~/.config/opencode/opencode.json` 
**Project config**: `/.opencode.json` - This gives you 8 GPT 5.1 variants with different reasoning levels: + This gives you 12 GPT 5.1 variants with different reasoning levels: - **gpt-5.1-codex** (low/medium/high) - Latest Codex model presets + - **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets (`gpt-5.1-codex-max-low/medium/high/xhigh`) - **gpt-5.1-codex-mini** (medium/high) - Latest Codex mini tier presets - **gpt-5.1** (low/medium/high) - Latest general-purpose reasoning presets @@ -325,7 +374,7 @@ If using the full configuration, select from the model picker in opencode, or sp # Use different reasoning levels for gpt-5.1-codex opencode run "simple task" --model=openai/gpt-5.1-codex-low opencode run "complex task" --model=openai/gpt-5.1-codex-high -opencode run "large refactor" --model=openai/gpt-5.1-codex-max +opencode run "large refactor" --model=openai/gpt-5.1-codex-max-high opencode run "research-grade analysis" --model=openai/gpt-5.1-codex-max-xhigh # Use different reasoning levels for gpt-5.1 @@ -346,7 +395,9 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t | `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation | | `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks | | `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools | -| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | High | Long-horizon builds, large refactors | +| `gpt-5.1-codex-max-low` | GPT 5.1 Codex Max Low (OAuth) | Low | Fast exploratory large-context work | +| `gpt-5.1-codex-max-medium` | GPT 5.1 Codex Max Medium (OAuth) | Medium | Balanced large-context builds | +| `gpt-5.1-codex-max-high` | GPT 5.1 Codex Max High (OAuth) | High | Long-horizon builds, large refactors | | `gpt-5.1-codex-max-xhigh` | GPT 5.1 Codex Max Extra High (OAuth) | xHigh | Deep multi-hour agent loops, research/debug marathons | | `gpt-5.1-codex-mini-medium` | GPT 5.1 Codex Mini 
Medium (OAuth) | Medium | Latest Codex mini tier | | `gpt-5.1-codex-mini-high` | GPT 5.1 Codex Mini High (OAuth) | High | Codex Mini with maximum reasoning | @@ -359,7 +410,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t > **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output). > -> **Note**: Codex Max uses the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support plus `xhigh` reasoning. +> **Note**: Codex Max presets use the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick reasoning level (only `-xhigh` uses `xhigh` reasoning). > **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results. diff --git a/config/README.md b/config/README.md index f62ffc3..38706ef 100644 --- a/config/README.md +++ b/config/README.md @@ -14,7 +14,7 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json **Why this is required:** - GPT 5 models can be temperamental and need proper configuration -- Contains 10 verified GPT 5.1 model variants (Codex, Codex Max, Codex Mini, and general GPT 5.1) +- Contains 12+ verified GPT 5.1 model variants (Codex, Codex Max, Codex Mini, and general GPT 5.1 including `gpt-5.1-codex-max-low/medium/high/xhigh`) - Includes all required metadata for OpenCode features - Guaranteed to work reliably - Global options for all models + per-model configuration overrides diff --git a/config/full-opencode.json b/config/full-opencode.json index 2cba6f7..923db6f 100644 --- a/config/full-opencode.json +++ b/config/full-opencode.json @@ -67,7 +67,55 @@ "name": "GPT 5.1 Codex Max (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 + }, + "options": { + 
"reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-low": { + "name": "GPT 5.1 Codex Max Low (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "low", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-medium": { + "name": "GPT 5.1 Codex Max Medium (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": [ + "reasoning.encrypted_content" + ], + "store": false + } + }, + "gpt-5.1-codex-max-high": { + "name": "GPT 5.1 Codex Max High (OAuth)", + "limit": { + "context": 272000, + "output": 128000 }, "options": { "reasoningEffort": "high", @@ -83,7 +131,7 @@ "name": "GPT 5.1 Codex Max Extra High (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 }, "options": { "reasoningEffort": "xhigh", diff --git a/docs/getting-started.md b/docs/getting-started.md index 0bad194..8d4d738 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -108,6 +108,48 @@ Add this to `~/.config/opencode/opencode.json`: "store": false } }, + "gpt-5.1-codex-max-low": { + "name": "GPT 5.1 Codex Max Low (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "low", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + "store": false + } + }, + "gpt-5.1-codex-max-medium": { + "name": "GPT 5.1 Codex Max Medium (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "medium", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + 
"store": false + } + }, + "gpt-5.1-codex-max-high": { + "name": "GPT 5.1 Codex Max High (OAuth)", + "limit": { + "context": 272000, + "output": 128000 + }, + "options": { + "reasoningEffort": "high", + "reasoningSummary": "detailed", + "textVerbosity": "medium", + "include": ["reasoning.encrypted_content"], + "store": false + } + }, "gpt-5.1-codex-max-xhigh": { "name": "GPT 5.1 Codex Max Extra High (OAuth)", "limit": { @@ -200,7 +242,7 @@ Add this to `~/.config/opencode/opencode.json`: **What you get:** - ✅ GPT 5.1 Codex (Low/Medium/High reasoning) - - ✅ GPT 5.1 Codex Max (High/xHigh reasoning, larger outputs) + - ✅ GPT 5.1 Codex Max (Low/Medium/High/xHigh reasoning presets, larger outputs) - ✅ GPT 5.1 Codex Mini (Medium/High reasoning) - ✅ GPT 5.1 (Low/Medium/High reasoning) - ✅ 272k context + 128k output window for core presets (Codex Max expands output to ~400k) @@ -209,7 +251,7 @@ Add this to `~/.config/opencode/opencode.json`: > **Note**: All `gpt-5.1-codex-mini*` presets use 272k context / 128k output limits. > -> **Note**: Codex Max presets map to `gpt-5.1-codex-max` with 272k input and expanded ~400k output plus `xhigh` reasoning. +> **Note**: Codex Max presets map to the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning). Prompt caching is enabled out of the box: when OpenCode sends its session identifier as `prompt_cache_key`, the plugin forwards it untouched so multi-turn runs reuse prior work. The CODEX_MODE bridge prompt bundled with the plugin is kept in sync with the latest Codex CLI release, so the OpenCode UI and Codex share the same tool contract. If you hit your ChatGPT subscription limits, the plugin returns a friendly Codex-style message with the 5-hour and weekly usage windows so you know when capacity resets. 
diff --git a/docs/index.md b/docs/index.md index 002f7b2..7b5049c 100644 --- a/docs/index.md +++ b/docs/index.md @@ -82,7 +82,7 @@ opencode run "write hello world to test.txt" --model=openai/gpt-5-codex ## Features ✅ **OAuth Authentication** - Secure ChatGPT Plus/Pro login -✅ **GPT 5.1 Models** - gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini (10 pre-configured variants) +✅ **GPT 5.1 Models** - gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini (12 pre-configured variants including `gpt-5.1-codex-max-low/medium/high/xhigh`) ✅ **Per-Model Configuration** - Different reasoning effort, including new `xhigh` for Codex Max ✅ **Multi-Turn Conversations** - Full conversation history with stateless backend ✅ **Verified Configuration** - Use `config/full-opencode.json` for guaranteed compatibility From aeb51974a952b871821900c70be0fbb2573fc69c Mon Sep 17 00:00:00 2001 From: cau1k Date: Wed, 19 Nov 2025 23:42:41 -0500 Subject: [PATCH 10/13] fix: model normalization --- lib/request/request-transformer.ts | 31 +++++++++++++++++------------ test/request-transformer.test.ts | 32 ++++++++++++++++++++++++++++-- 2 files changed, 48 insertions(+), 15 deletions(-) diff --git a/lib/request/request-transformer.ts b/lib/request/request-transformer.ts index 4026bf1..2a67d03 100644 --- a/lib/request/request-transformer.ts +++ b/lib/request/request-transformer.ts @@ -121,23 +121,23 @@ export function getModelConfig( * @returns Reasoning configuration */ export function getReasoningConfig( - originalModel: string | undefined, + modelName: string | undefined, userConfig: ConfigOptions = {}, ): ReasoningConfig { - const normalizedOriginal = originalModel?.toLowerCase() ?? ""; + const normalizedName = modelName?.toLowerCase() ?? 
""; const isCodexMax = - normalizedOriginal.includes("codex-max") || - normalizedOriginal.includes("codex max"); + normalizedName.includes("codex-max") || + normalizedName.includes("codex max"); const isCodexMini = - normalizedOriginal.includes("codex-mini") || - normalizedOriginal.includes("codex mini") || - normalizedOriginal.includes("codex_mini") || - normalizedOriginal.includes("codex-mini-latest"); - const isCodex = normalizedOriginal.includes("codex") && !isCodexMini; + normalizedName.includes("codex-mini") || + normalizedName.includes("codex mini") || + normalizedName.includes("codex_mini") || + normalizedName.includes("codex-mini-latest"); + const isCodex = normalizedName.includes("codex") && !isCodexMini; const isLightweight = !isCodexMini && - (normalizedOriginal.includes("nano") || - normalizedOriginal.includes("mini")); + (normalizedName.includes("nano") || + normalizedName.includes("mini")); // Default based on model type (Codex CLI defaults) const defaultEffort: ReasoningConfig["effort"] = isCodexMini @@ -163,6 +163,11 @@ export function getReasoningConfig( } } + // For all non-Codex-Max models, downgrade unsupported xhigh to high + if (!isCodexMax && effort === "xhigh") { + effort = "high"; + } + // Normalize "minimal" to "low" for Codex families // Codex CLI presets are low/medium/high (or xhigh for Codex Max) if (isCodex && effort === "minimal") { @@ -437,8 +442,8 @@ export async function transformRequestBody( } } - // Configure reasoning (use model-specific config) - const reasoningConfig = getReasoningConfig(originalModel, modelConfig); + // Configure reasoning (use normalized model family + model-specific config) + const reasoningConfig = getReasoningConfig(normalizedModel, modelConfig); body.reasoning = { ...body.reasoning, ...reasoningConfig, diff --git a/test/request-transformer.test.ts b/test/request-transformer.test.ts index fd8b932..b3c176a 100644 --- a/test/request-transformer.test.ts +++ b/test/request-transformer.test.ts @@ -796,6 
+796,34 @@ describe('Request Transformer Module', () => { expect(result.reasoning?.summary).toBe('detailed'); }); + it('should downgrade xhigh to high for non-max codex', async () => { + const body: RequestBody = { + model: 'gpt-5.1-codex-high', + input: [], + }; + const userConfig: UserConfig = { + global: { reasoningEffort: 'xhigh' }, + models: {}, + }; + const result = await transformRequestBody(body, codexInstructions, userConfig); + expect(result.model).toBe('gpt-5.1-codex'); + expect(result.reasoning?.effort).toBe('high'); + }); + + it('should downgrade xhigh to high for non-max general models', async () => { + const body: RequestBody = { + model: 'gpt-5.1-high', + input: [], + }; + const userConfig: UserConfig = { + global: { reasoningEffort: 'xhigh' }, + models: {}, + }; + const result = await transformRequestBody(body, codexInstructions, userConfig); + expect(result.model).toBe('gpt-5.1'); + expect(result.reasoning?.effort).toBe('high'); + }); + it('should preserve minimal for non-codex models', async () => { const body: RequestBody = { model: 'gpt-5', @@ -815,7 +843,7 @@ describe('Request Transformer Module', () => { input: [], }; const result = await transformRequestBody(body, codexInstructions); - expect(result.reasoning?.effort).toBe('minimal'); + expect(result.reasoning?.effort).toBe('medium'); }); describe('CODEX_MODE parameter', () => { @@ -945,7 +973,7 @@ describe('Request Transformer Module', () => { const result = await transformRequestBody(body, codexInstructions); expect(result.model).toBe('gpt-5'); // Normalized - expect(result.reasoning?.effort).toBe('minimal'); // Lightweight default + expect(result.reasoning?.effort).toBe('medium'); // Default for normalized gpt-5 }); }); From 5b5e1e947b942c9756d289b68d317d3846b4da28 Mon Sep 17 00:00:00 2001 From: zero Date: Thu, 20 Nov 2025 17:04:46 -0500 Subject: [PATCH 11/13] Update README.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- README.md | 2 +- 1 file changed, 1 
insertion(+), 1 deletion(-) diff --git a/README.md b/README.md index 1e6782b..7479d65 100644 --- a/README.md +++ b/README.md @@ -455,7 +455,7 @@ These defaults match the official Codex CLI behavior and can be customized (see ### ⚠️ REQUIRED: Use Pre-Configured File **YOU MUST use [`config/full-opencode.json`](./config/full-opencode.json)** - this is the only officially supported configuration: -- 10 pre-configured GPT 5.1 model variants with verified settings +- 13 pre-configured GPT 5.1 model variants with verified settings - Optimal configuration for each reasoning level - All variants visible in the opencode model selector - Required metadata for OpenCode features to work properly From 9bdbc5fb0876d560d2b415c3da3f6954f7e007c3 Mon Sep 17 00:00:00 2001 From: zero Date: Thu, 20 Nov 2025 17:05:22 -0500 Subject: [PATCH 12/13] Update CHANGELOG.md Co-authored-by: Copilot <175728472+Copilot@users.noreply.github.com> --- CHANGELOG.md | 2 +- 1 file changed, 1 insertion(+), 1 deletion(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 248e108..89af968 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -9,7 +9,7 @@ All notable changes to this project are documented here. Dates use the ISO forma ### Changed - Codex Mini clamping now downgrades unsupported `xhigh` to `high` and guards against `none`/`minimal` inputs. -- Documentation, config guides, and validation scripts now reflect 10 verified GPT 5.1 variants including Codex Max. +- Documentation, config guides, and validation scripts now reflect 13 verified GPT 5.1 variants (3 codex, 5 codex-max, 2 codex-mini, 3 general), including Codex Max. See README for details on pre-configured variants. 
## [3.2.0] - 2025-11-14 ### Added From b58f7467e75ba0fbdd6b0308c9726e0d70a9705e Mon Sep 17 00:00:00 2001 From: cau1k Date: Thu, 20 Nov 2025 17:54:30 -0500 Subject: [PATCH 13/13] docs: remove stray 400k references --- CHANGELOG.md | 2 +- README.md | 13 +++++++------ config/README.md | 2 +- docs/configuration.md | 2 +- docs/getting-started.md | 12 ++++++------ 5 files changed, 16 insertions(+), 15 deletions(-) diff --git a/CHANGELOG.md b/CHANGELOG.md index 89af968..c1551de 100644 --- a/CHANGELOG.md +++ b/CHANGELOG.md @@ -4,7 +4,7 @@ All notable changes to this project are documented here. Dates use the ISO forma ## [3.3.0] - 2025-11-19 ### Added -- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with expanded output window and `xhigh` reasoning. +- GPT 5.1 Codex Max support: normalization, per-model defaults, and new presets (`gpt-5.1-codex-max`, `gpt-5.1-codex-max-xhigh`) with extended reasoning options (including `none`/`xhigh`) while keeping the 272k context / 128k output limits. - Typing and config support for new reasoning options (`none`/`xhigh`, summary `off`/`on`) plus updated test matrix entries. 
### Changed diff --git a/README.md b/README.md index 7479d65..38b0324 100644 --- a/README.md +++ b/README.md @@ -33,7 +33,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an ## Features - ✅ **ChatGPT Plus/Pro OAuth authentication** - Use your existing subscription -- ✅ **12 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including `gpt-5.1-codex-max-low/medium/high/xhigh`) +- ✅ **13 pre-configured GPT 5.1 variants** - GPT 5.1, GPT 5.1 Codex, GPT 5.1 Codex Max, and GPT 5.1 Codex Mini presets for common reasoning levels (including `gpt-5.1-codex-max` and `gpt-5.1-codex-max-low/medium/high/xhigh`) - ⚠️ **GPT 5.1 only** - Older GPT 5.0 models are deprecated and may not work reliably - ✅ **Zero external dependencies** - Lightweight with only @openauthjs/openauth - ✅ **Auto-refreshing tokens** - Handles token expiration automatically @@ -134,7 +134,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an "name": "GPT 5.1 Codex Max (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 }, "options": { "reasoningEffort": "high", @@ -198,7 +198,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an "name": "GPT 5.1 Codex Max Extra High (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 }, "options": { "reasoningEffort": "xhigh", @@ -299,7 +299,7 @@ Follow me on [X @nummanthinks](https://x.com/nummanthinks) for future updates an **Global config**: `~/.config/opencode/opencode.json` **Project config**: `/.opencode.json` - This gives you 12 GPT 5.1 variants with different reasoning levels: + This gives you 13 GPT 5.1 variants with different reasoning levels: - **gpt-5.1-codex** (low/medium/high) - Latest Codex model presets - **gpt-5.1-codex-max** (low/medium/high/xhigh) - Codex Max presets (`gpt-5.1-codex-max-low/medium/high/xhigh`) - 
**gpt-5.1-codex-mini** (medium/high) - Latest Codex mini tier presets @@ -395,6 +395,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t | `gpt-5.1-codex-low` | GPT 5.1 Codex Low (OAuth) | Low | Fast code generation | | `gpt-5.1-codex-medium` | GPT 5.1 Codex Medium (OAuth) | Medium | Balanced code tasks | | `gpt-5.1-codex-high` | GPT 5.1 Codex High (OAuth) | High | Complex code & tools | +| `gpt-5.1-codex-max` | GPT 5.1 Codex Max (OAuth) | High | Default Codex Max preset with large-context support | | `gpt-5.1-codex-max-low` | GPT 5.1 Codex Max Low (OAuth) | Low | Fast exploratory large-context work | | `gpt-5.1-codex-max-medium` | GPT 5.1 Codex Max Medium (OAuth) | Medium | Balanced large-context builds | | `gpt-5.1-codex-max-high` | GPT 5.1 Codex Max High (OAuth) | High | Long-horizon builds, large refactors | @@ -410,7 +411,7 @@ When using [`config/full-opencode.json`](./config/full-opencode.json), you get t > **Note**: All `gpt-5.1-codex-mini*` presets map directly to the `gpt-5.1-codex-mini` slug with standard Codex limits (272k context / 128k output). > -> **Note**: Codex Max presets use the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output support. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick reasoning level (only `-xhigh` uses `xhigh` reasoning). +> **Note**: Codex Max presets use the `gpt-5.1-codex-max` slug with 272k context and 128k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick reasoning level (only `-xhigh` uses `xhigh` reasoning). > **⚠️ Important**: GPT 5 models can be temperamental - some variants may work better than others, some may give errors, and behavior may vary. Stick to the presets above configured in `full-opencode.json` for best results. @@ -482,7 +483,7 @@ If you want to customize settings yourself, you can configure options at provide > **Notes**: > - `minimal` effort is auto-normalized to `low` for Codex models. 
> - Codex Mini clamps to `medium`/`high`; `xhigh` downgrades to `high`. -> - Codex Max supports `none`/`xhigh` plus expanded output limits (~400k). +> - Codex Max supports `none`/`xhigh` plus extended reasoning options while keeping the same 272k context / 128k output limits. #### Global Configuration Example diff --git a/config/README.md b/config/README.md index 38706ef..10f40e4 100644 --- a/config/README.md +++ b/config/README.md @@ -22,7 +22,7 @@ cp config/full-opencode.json ~/.config/opencode/opencode.json **What's included:** - All supported GPT 5.1 variants: gpt-5.1, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini - Proper reasoning effort settings for each variant (including new `xhigh` for Codex Max) -- Context limits (272k context / 128k output for core Codex; Codex Max allows larger outputs) +- Context limits (272k context / 128k output for all Codex families, including Codex Max) - Required options: `store: false`, `include: ["reasoning.encrypted_content"]` ### ❌ Other Configurations (NOT SUPPORTED) diff --git a/docs/configuration.md b/docs/configuration.md index 5462adb..3c5bbe9 100644 --- a/docs/configuration.md +++ b/docs/configuration.md @@ -394,7 +394,7 @@ CODEX_MODE=1 opencode run "task" # Temporarily enable ## Configuration Files **Provided Examples:** -- [config/full-opencode.json](../config/full-opencode.json) - Complete with 8 GPT 5.1 variants +- [config/full-opencode.json](../config/full-opencode.json) - Complete with 13 GPT 5.1 variants > **⚠️ REQUIRED:** You MUST use `full-opencode.json` - this is the ONLY officially supported configuration. Minimal configs are NOT supported for GPT 5 models and will fail unpredictably. OpenCode's auto-compaction and usage widgets also require the full config's per-model `limit` metadata. 
diff --git a/docs/getting-started.md b/docs/getting-started.md index 8d4d738..fecc14b 100644 --- a/docs/getting-started.md +++ b/docs/getting-started.md @@ -98,7 +98,7 @@ Add this to `~/.config/opencode/opencode.json`: "name": "GPT 5.1 Codex Max (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 }, "options": { "reasoningEffort": "high", @@ -154,7 +154,7 @@ Add this to `~/.config/opencode/opencode.json`: "name": "GPT 5.1 Codex Max Extra High (OAuth)", "limit": { "context": 272000, - "output": 400000 + "output": 128000 }, "options": { "reasoningEffort": "xhigh", @@ -242,16 +242,16 @@ Add this to `~/.config/opencode/opencode.json`: **What you get:** - ✅ GPT 5.1 Codex (Low/Medium/High reasoning) - - ✅ GPT 5.1 Codex Max (Low/Medium/High/xHigh reasoning presets, larger outputs) + - ✅ GPT 5.1 Codex Max (Low/Medium/High/xHigh reasoning presets) - ✅ GPT 5.1 Codex Mini (Medium/High reasoning) - ✅ GPT 5.1 (Low/Medium/High reasoning) - - ✅ 272k context + 128k output window for core presets (Codex Max expands output to ~400k) + - ✅ 272k context + 128k output window for all GPT 5.1 presets. - ✅ All visible in OpenCode model selector - ✅ Optimal settings for each reasoning level > **Note**: All `gpt-5.1-codex-mini*` presets use 272k context / 128k output limits. > -> **Note**: Codex Max presets map to the `gpt-5.1-codex-max` slug with 272k input and expanded ~400k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning). +> **Note**: Codex Max presets map to the `gpt-5.1-codex-max` slug with 272k context and 128k output. Use `gpt-5.1-codex-max-low/medium/high/xhigh` to pick the reasoning level (only `-xhigh` uses `xhigh` reasoning). Prompt caching is enabled out of the box: when OpenCode sends its session identifier as `prompt_cache_key`, the plugin forwards it untouched so multi-turn runs reuse prior work. 
The CODEX_MODE bridge prompt bundled with the plugin is kept in sync with the latest Codex CLI release, so the OpenCode UI and Codex share the same tool contract. If you hit your ChatGPT subscription limits, the plugin returns a friendly Codex-style message with the 5-hour and weekly usage windows so you know when capacity resets. @@ -299,7 +299,7 @@ opencode run "write hello world to test.txt" --model=openai/gpt-5.1-codex-medium opencode ``` -You'll see all 8 GPT 5.1 variants (5.1, 5.1 Codex, and 5.1 Codex Mini presets) in the model selector! +You'll see all 13 GPT 5.1 variants (Codex, Codex Max, Codex Mini, and GPT 5.1 presets) in the model selector! ---
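The reasoning-effort rules threaded through patches 10 and 13 (downgrade `xhigh` outside Codex Max, map `minimal` to `low` for Codex families, clamp Codex Mini to `medium`/`high`) can be condensed into a standalone sketch. The function name, family detection, and ordering below are illustrative assumptions, not the plugin's actual exported API:

```typescript
type Effort = "none" | "minimal" | "low" | "medium" | "high" | "xhigh";

// Hypothetical helper mirroring the normalization described in the patches.
function normalizeEffort(model: string, effort: Effort): Effort {
  const m = model.toLowerCase();
  const isMax = m.includes("codex-max") || m.includes("codex max");
  const isMini = !isMax && (m.includes("codex-mini") || m.includes("codex mini"));
  const isCodex = m.includes("codex");

  let e = effort;
  // xhigh is only supported by Codex Max; all other models downgrade to high.
  if (!isMax && e === "xhigh") e = "high";
  // Codex families do not accept "minimal"; map it to "low".
  if (isCodex && e === "minimal") e = "low";
  // Codex Mini clamps anything below medium up to medium (high stays high).
  if (isMini && (e === "none" || e === "low")) e = "medium";
  return e;
}
```

With these assumptions, `normalizeEffort("gpt-5.1-codex", "xhigh")` yields `"high"`, while the same effort on `gpt-5.1-codex-max` passes through untouched — matching the test expectations added in `test/request-transformer.test.ts`.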