
Conversation

@ben-vargas
Contributor

@ben-vargas ben-vargas commented Oct 3, 2025

Summary

This PR adds comprehensive user-configurable settings support to the plugin, allowing users to customize reasoning effort, reasoning summaries, text verbosity, and response fields for both gpt-5 and gpt-5-codex models through opencode's configuration system.

What's New

🎛️ User-Configurable Settings

Users can now customize model behavior via opencode.json configuration:

  • reasoningEffort: Control computational effort (minimal, low, medium, high)
  • reasoningSummary: Control summary verbosity (auto, detailed)
  • textVerbosity: Control output length (low, medium, high; gpt-5-codex supports medium only)
  • include: Additional response fields (default: ["reasoning.encrypted_content"] for stateless reasoning)

🎯 Flexible Configuration Patterns

The plugin supports three configuration patterns following Anthropic's configuration approach:

  1. Global options: Apply same settings to all GPT-5 models
  2. Per-model options: Different settings for gpt-5 vs gpt-5-codex
  3. Mixed configuration: Global defaults with per-model overrides

Per-model options take precedence over global options when both are specified.
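The precedence rule can be sketched as a simple object spread, with per-model options layered over global ones (a minimal sketch; the actual function names in the plugin may differ):

```javascript
// Hypothetical sketch of the global + per-model merge (names are illustrative).
function mergeModelOptions(providerConfig, modelId) {
  const globalOptions = providerConfig?.options ?? {};
  const modelOptions = providerConfig?.models?.[modelId]?.options ?? {};
  // Spread order makes per-model keys win over global keys.
  return { ...globalOptions, ...modelOptions };
}

const merged = mergeModelOptions(
  {
    options: { reasoningEffort: "medium", reasoningSummary: "auto" },
    models: { "gpt-5-codex": { options: { reasoningEffort: "high" } } },
  },
  "gpt-5-codex",
);
// merged is { reasoningEffort: "high", reasoningSummary: "auto" }
```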

📊 Updated Plugin Defaults

The plugin defaults have been updated to match Codex CLI behavior and provide better out-of-the-box performance:

New defaults (v1.0.4):

{
  "reasoningEffort": "medium",
  "reasoningSummary": "auto",
  "textVerbosity": "medium",
  "include": ["reasoning.encrypted_content"]
}

Previous defaults (v1.0.3):

{
  "reasoningEffort": "high",
  "reasoningSummary": "detailed",
  "textVerbosity": "medium"
}

Changes:

  • reasoningEffort: Changed from high → medium (more balanced performance/cost)
  • reasoningSummary: Changed from detailed → auto (matches Codex CLI default)
  • include: Added ["reasoning.encrypted_content"] (critical for stateless multi-turn conversations)

The include: ["reasoning.encrypted_content"] default is critical for stateless operation (store: false), allowing reasoning context to persist across turns without server-side storage.
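For illustration, a stateless request transformed with these defaults might carry fields along these lines (a sketch of the shape, not the exact wire format used by the plugin):

```json
{
  "model": "gpt-5-codex",
  "store": false,
  "include": ["reasoning.encrypted_content"],
  "reasoning": { "effort": "medium", "summary": "auto" },
  "text": { "verbosity": "medium" }
}
```

With store: false the backend keeps no conversation state, so the encrypted reasoning returned via include is what lets the next turn resume the model's prior reasoning.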

✅ Model-Specific Validation

The implementation includes model-specific restrictions based on comprehensive testing:

GPT-5 Model:

  • reasoningEffort: minimal, low, medium, high
  • reasoningSummary: auto, detailed
  • textVerbosity: low, medium, high

GPT-5-Codex Model:

  • reasoningEffort: minimal* (auto-normalized to low), low, medium, high
  • reasoningSummary: auto, detailed
  • textVerbosity: medium only

Both Models:

  • include: Array of strings (default: ["reasoning.encrypted_content"])

* The plugin automatically normalizes minimal → low for gpt-5-codex to match Codex CLI behavior.
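The normalization step described above amounts to a small guard (an illustrative sketch; the real helper in lib/request-transformer.mjs may be named differently):

```javascript
// Illustrative normalization (function name is hypothetical).
function normalizeReasoningEffort(model, effort) {
  // Codex CLI offers no "minimal" preset for gpt-5-codex (only low/medium/high),
  // so downgrade to the closest supported value instead of sending it through.
  if (model === "gpt-5-codex" && effort === "minimal") return "low";
  return effort;
}

console.log(normalizeReasoningEffort("gpt-5-codex", "minimal")); // → low
console.log(normalizeReasoningEffort("gpt-5", "minimal"));       // → minimal
```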

Implementation Details

Configuration Loader (index.mjs)

  • Extracts user configuration from opencode's provider options
  • Supports both provider.openai.options (global) and provider.openai.models.{model}.options (per-model)
  • Passes configuration to request transformer

Request Transformer (lib/request-transformer.mjs)

  • New getModelConfig(): Merges global + per-model options (per-model overrides)
  • Enhanced getReasoningConfig(): Accepts user config, applies defaults, normalizes values
  • Enhanced transformRequestBody(): Applies user-configured text verbosity, reasoning settings, and include fields
  • Automatic normalization of minimal → low for gpt-5-codex
  • Configurable include field with default for encrypted reasoning content
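Taken together, the transformer's job can be sketched as applying the merged options (with defaults) onto the outgoing request body; this is a hedged sketch under assumed field placement, not the plugin's exact code:

```javascript
// Hypothetical sketch of applying merged options to a request body;
// the real transformRequestBody() in lib/request-transformer.mjs may differ.
function applyOptions(body, options = {}) {
  return {
    ...body,
    reasoning: {
      effort: options.reasoningEffort ?? "medium",
      summary: options.reasoningSummary ?? "auto",
    },
    text: { verbosity: options.textVerbosity ?? "medium" },
    include: options.include ?? ["reasoning.encrypted_content"],
  };
}

const out = applyOptions({ model: "gpt-5-codex" }, { reasoningEffort: "high" });
// out.reasoning.effort is "high"; unset settings fall back to the v1.0.4 defaults.
```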

Testing (test-config.mjs)

  • Comprehensive unit tests for configuration parsing
  • Tests global options, per-model options, and mixed configurations
  • Validates merging logic and default values
  • Run with: node test-config.mjs

Documentation

Updated README.md with:

  • Clear tables showing supported values for each model (including include setting)
  • Three configuration examples (global, per-model, mixed)
  • Plugin defaults section explaining the importance of reasoning.encrypted_content
  • Model-specific restrictions and warnings

Example Configuration

{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      },
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed"
          }
        }
      }
    }
  }
}

To restore v1.0.3 behavior:

{
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "high",
        "reasoningSummary": "detailed"
      }
    }
  }
}

Testing

All configuration patterns have been tested with:

  • Request logging verification (ENABLE_PLUGIN_REQUEST_LOGGING=1)
  • Systematic testing of all value combinations for reasoning and text verbosity
  • Validation against ChatGPT backend API responses
  • Unit tests for configuration parsing and merging logic

Commits

  1. feat: Add user-configurable settings support - Core implementation
  2. test: Add configuration parsing tests - Unit tests for config loader
  3. docs: Update README with configuration documentation - Complete user documentation
  4. fix: Normalize 'minimal' to 'low' for gpt-5-codex - Match Codex CLI behavior
  5. chore: Bump version to 1.0.4 - Version update for new feature

Breaking Changes

Default behavior has changed:

Users upgrading from v1.0.3 to v1.0.4 will experience different default reasoning behavior:

  • Reasoning effort reduced from high → medium (more balanced dynamic behavior, recommended by OpenAI)
  • Reasoning summary changed from detailed → auto (adaptive verbosity)
  • Added encrypted reasoning content for better multi-turn conversations

Rationale: The new defaults match Codex CLI behavior and provide better out-of-the-box performance. The previous high effort setting diverged from both the Codex CLI default and OpenAI's recommendation of medium, which dynamically reasons longer or shorter as needed; the change also improves response speed for users.

To restore v1.0.3 behavior: Add the configuration shown in the example above to explicitly set reasoningEffort: "high" and reasoningSummary: "detailed".

- Add getModelConfig() to merge global and per-model options
- Update getReasoningConfig() to accept user configuration
  - Change defaults: effort from 'high' to 'medium', summary from 'detailed' to 'auto'
  - Matches Codex CLI defaults for ChatGPT backend API
- Update transformRequestBody() to use user configuration
  - Support configurable textVerbosity (default: 'medium')
  - Add include parameter for encrypted reasoning content
- Update index.mjs loader to extract provider configuration
  - Follows Anthropic pattern: uses 'openai' provider for both OAuth and API key
  - Supports both global options and per-model options
- Enhanced logging to track textVerbosity and include in requests

Configuration structure:
- Global options: provider.openai.options
- Per-model options: provider.openai.models['gpt-5-codex'].options
- Supported settings: reasoningEffort, reasoningSummary, textVerbosity, include

See SETTINGS.md for full implementation details and examples.
- Create test-config.json with example global and per-model options
- Add test-config.mjs to verify configuration merging logic
- All 5 tests pass:
  ✓ gpt-5-codex merges global + per-model options correctly
  ✓ gpt-5 merges global + per-model options correctly
  ✓ Reasoning config uses merged options
  ✓ Defaults work with empty config (medium/auto)
  ✓ Lightweight models get minimal effort by default
- Add Configuration section with all settings and examples
- Update Usage section to reflect new defaults (medium/auto instead of high/detailed)
- Add support for both gpt-5 and gpt-5-codex models
- Show global, per-model, and mixed configuration examples
- Update How It Works section with encrypted reasoning
- Remove outdated limitation about hardcoded text verbosity
- Link to SETTINGS.md for implementation details
Codex CLI does not provide a 'minimal' reasoning effort preset for gpt-5-codex
(only low/medium/high - see model_presets.rs:20-40). When users configure
reasoningEffort: 'minimal' for gpt-5-codex, normalize it to 'low' to prevent
unsupported configuration.

- Add validation in getReasoningConfig() to detect and normalize
- Add test case to verify normalization works correctly
- Prevents potential errors when backend receives unsupported effort level

Addresses feedback from GPT-5 Codex code review.
@numman-ali numman-ali requested a review from Copilot October 3, 2025 10:16
Contributor

Copilot AI left a comment


Pull Request Overview

This PR adds comprehensive user-configurable settings support to the OpenAI Codex authentication plugin, allowing users to customize reasoning effort, reasoning summaries, text verbosity, and response fields for both gpt-5 and gpt-5-codex models through opencode's configuration system.

Key changes:

  • Implementation of user-configurable settings with global and per-model options
  • Updated plugin defaults from high/detailed to medium/auto for better performance
  • Addition of encrypted reasoning content support for stateless multi-turn conversations

Reviewed Changes

Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.

Summary per file:

File                          Description
test-config.mjs               Test suite for configuration parsing and merging logic
test-config.json              Example configuration file for testing
package.json                  Version bump to 1.0.4
lib/request-transformer.mjs   Core implementation of configuration parsing and request transformation
index.mjs                     Configuration loader integration with provider options
README.md                     Comprehensive documentation of configuration options and examples


models: {
  "gpt-5-codex": {
    options: {
      reasoningSummary: "concise", // Override global

Copilot AI Oct 3, 2025


The value 'concise' is not a valid reasoningSummary option. According to the README documentation, only 'auto' and 'detailed' are supported values.

Suggested change
reasoningSummary: "concise", // Override global
reasoningSummary: "detailed", // Override global

Contributor Author


I'd probably leave this in for now. "concise" is reported as a valid value by the API's error message but isn't actually supported, so maybe they're going to add it, maybe not; it doesn't hurt to keep it in the test while the API claims it is valid.

@numman-ali
Owner

Thank you @ben-vargas !

Will review tonight

@ben-vargas
Contributor Author

Cool, thanks!

@numman-ali
Owner

@ben-vargas this looks good! I'm going to merge it in, and then follow up with some tweaks before publishing a new version

Should be only around half an hour

@numman-ali numman-ali merged commit 1b57022 into numman-ali:main Oct 3, 2025
@ben-vargas
Contributor Author

Awesome, sounds good - thanks!

@ben-vargas ben-vargas deleted the config-support branch October 4, 2025 06:31