feat: Add user-configurable settings for reasoning and text verbosity #6
Conversation
- Add getModelConfig() to merge global and per-model options
- Update getReasoningConfig() to accept user configuration
- Change defaults: effort from 'high' to 'medium', summary from 'detailed' to 'auto'; matches Codex CLI defaults for ChatGPT backend API
- Update transformRequestBody() to use user configuration
- Support configurable textVerbosity (default: 'medium')
- Add include parameter for encrypted reasoning content
- Update index.mjs loader to extract provider configuration; follows Anthropic pattern: uses 'openai' provider for both OAuth and API key
- Supports both global options and per-model options
- Enhanced logging to track textVerbosity and include in requests

Configuration structure:
- Global options: `provider.openai.options`
- Per-model options: `provider.openai.models['gpt-5-codex'].options`
- Supported settings: reasoningEffort, reasoningSummary, textVerbosity, include

See SETTINGS.md for full implementation details and examples.
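A rough sketch of the merging and request transformation this commit describes (the function names follow the commit notes, but the bodies and the exact request-body field layout are illustrative assumptions, not the actual plugin source):

```javascript
// Plugin defaults per this PR (v1.0.4).
const DEFAULTS = {
  reasoningEffort: "medium",
  reasoningSummary: "auto",
  textVerbosity: "medium",
  include: ["reasoning.encrypted_content"],
};

// Merge global provider options with per-model options;
// per-model values win, and defaults fill any gaps.
function getModelConfig(providerConfig = {}, modelId) {
  const globalOpts = providerConfig.options ?? {};
  const modelOpts = providerConfig.models?.[modelId]?.options ?? {};
  return { ...DEFAULTS, ...globalOpts, ...modelOpts };
}

// Apply the merged config to an outgoing request body
// (field layout here is an assumption for illustration).
function transformRequestBody(body, config) {
  return {
    ...body,
    reasoning: {
      effort: config.reasoningEffort,
      summary: config.reasoningSummary,
    },
    text: { verbosity: config.textVerbosity },
    include: config.include,
  };
}
```

For example, a global `reasoningEffort: "low"` combined with a per-model `reasoningSummary: "detailed"` yields a merged config of `low`/`detailed` with the remaining fields taken from the defaults.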
- Create test-config.json with example global and per-model options
- Add test-config.mjs to verify configuration merging logic

All 5 tests pass:
✓ gpt-5-codex merges global + per-model options correctly
✓ gpt-5 merges global + per-model options correctly
✓ Reasoning config uses merged options
✓ Defaults work with empty config (medium/auto)
✓ Lightweight models get minimal effort by default
- Add Configuration section with all settings and examples
- Update Usage section to reflect new defaults (medium/auto instead of high/detailed)
- Add support for both gpt-5 and gpt-5-codex models
- Show global, per-model, and mixed configuration examples
- Update How It Works section with encrypted reasoning
- Remove outdated limitation about hardcoded text verbosity
- Link to SETTINGS.md for implementation details
Codex CLI does not provide a 'minimal' reasoning effort preset for gpt-5-codex (only low/medium/high; see model_presets.rs:20-40). When users configure reasoningEffort: 'minimal' for gpt-5-codex, normalize it to 'low' to prevent an unsupported configuration.

- Add validation in getReasoningConfig() to detect and normalize
- Add test case to verify normalization works correctly
- Prevents potential errors when backend receives unsupported effort level

Addresses feedback from GPT-5 Codex code review.
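The normalization this commit adds could look roughly like the following (a sketch based on the commit message; the function name is an assumption, not the actual plugin source):

```javascript
// gpt-5-codex exposes only low/medium/high reasoning presets,
// so 'minimal' is coerced to the nearest supported value.
// Other models (e.g. gpt-5) keep 'minimal' untouched.
function normalizeReasoningEffort(effort, modelId) {
  if (modelId === "gpt-5-codex" && effort === "minimal") {
    return "low";
  }
  return effort;
}
```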
Pull Request Overview
This PR adds comprehensive user-configurable settings support to the OpenAI Codex authentication plugin, allowing users to customize reasoning effort, reasoning summaries, text verbosity, and response fields for both gpt-5 and gpt-5-codex models through opencode's configuration system.
Key changes:
- Implementation of user-configurable settings with global and per-model options
- Updated plugin defaults from high/detailed to medium/auto for better performance
- Addition of encrypted reasoning content support for stateless multi-turn conversations
Reviewed Changes
Copilot reviewed 6 out of 6 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| test-config.mjs | Test suite for configuration parsing and merging logic |
| test-config.json | Example configuration file for testing |
| package.json | Version bump to 1.0.4 |
| lib/request-transformer.mjs | Core implementation of configuration parsing and request transformation |
| index.mjs | Configuration loader integration with provider options |
| README.md | Comprehensive documentation of configuration options and examples |
```js
models: {
  "gpt-5-codex": {
    options: {
      reasoningSummary: "concise", // Override global
```
Copilot AI · Oct 3, 2025
The value 'concise' is not a valid reasoningSummary option. According to the README documentation, only 'auto' and 'detailed' are supported values.
Suggested change:

```diff
- reasoningSummary: "concise", // Override global
+ reasoningSummary: "detailed", // Override global
```
I'd probably leave this in for now. "concise" is returned by the error message as a valid value but not actually supported, so maybe they're going to add it, maybe not. It doesn't hurt anything to be in the test while the API reports it as valid.
Thank you @ben-vargas! Will review tonight

Cool, thanks!

@ben-vargas this looks good! I'm going to merge it in, and then follow up with some tweaks before publishing a new version. Should be only around half an hour.

Awesome, sounds good - thanks!
Summary
This PR adds comprehensive user-configurable settings support to the plugin, allowing users to customize reasoning effort, reasoning summaries, text verbosity, and response fields for both `gpt-5` and `gpt-5-codex` models through opencode's configuration system.

What's New
🎛️ User-Configurable Settings
Users can now customize model behavior via `opencode.json` configuration:

- `reasoningEffort`: Control computational effort (`minimal`, `low`, `medium`, `high`)
- `reasoningSummary`: Control summary verbosity (`auto`, `detailed`)
- `textVerbosity`: Control output length (`low`, `medium`, `high`; codex only supports `medium`)
- `include`: Additional response fields (default: `["reasoning.encrypted_content"]` for stateless reasoning)

🎯 Flexible Configuration Patterns
The plugin supports three configuration patterns, following Anthropic's configuration approach:

- Global options applied to every model (`provider.openai.options`)
- Per-model options, e.g. to configure `gpt-5` vs `gpt-5-codex` differently (`provider.openai.models.{model}.options`)
- A mix of both, with global defaults and per-model overrides

Per-model options take precedence over global options when both are specified.
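A minimal `opencode.json` fragment showing the precedence rule (the effort values here are illustrative):

```json
{
  "provider": {
    "openai": {
      "options": { "reasoningEffort": "medium" },
      "models": {
        "gpt-5-codex": {
          "options": { "reasoningEffort": "high" }
        }
      }
    }
  }
}
```

With this fragment, `gpt-5-codex` runs at `high` effort while any other OpenAI model falls back to the global `medium`.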
📊 Updated Plugin Defaults
The plugin defaults have been updated to match Codex CLI behavior and provide better out-of-the-box performance:
New defaults (v1.0.4):
```json
{
  "reasoningEffort": "medium",
  "reasoningSummary": "auto",
  "textVerbosity": "medium",
  "include": ["reasoning.encrypted_content"]
}
```

Previous defaults (v1.0.3):
```json
{
  "reasoningEffort": "high",
  "reasoningSummary": "detailed",
  "textVerbosity": "medium"
}
```

Changes:
- `reasoningEffort`: Changed from `high` → `medium` (more balanced performance/cost)
- `reasoningSummary`: Changed from `detailed` → `auto` (matches Codex CLI default)
- `include`: Added `["reasoning.encrypted_content"]` (critical for stateless multi-turn conversations)

The `include: ["reasoning.encrypted_content"]` default is critical for stateless operation (`store: false`), allowing reasoning context to persist across turns without server-side storage.

✅ Model-Specific Validation
The implementation includes model-specific restrictions based on comprehensive testing:
GPT-5 Model:
- `reasoningEffort`: `minimal`, `low`, `medium`, `high`
- `reasoningSummary`: `auto`, `detailed`
- `textVerbosity`: `low`, `medium`, `high`

GPT-5-Codex Model:
- `reasoningEffort`: `minimal`\* (auto-normalized to `low`), `low`, `medium`, `high`
- `reasoningSummary`: `auto`, `detailed`
- `textVerbosity`: `medium` only

Both Models:
- `include`: Array of strings (default: `["reasoning.encrypted_content"]`)

\* The plugin automatically normalizes `minimal` → `low` for `gpt-5-codex` to match Codex CLI behavior.

Implementation Details
Configuration Loader (`index.mjs`)

- Reads `provider.openai.options` (global) and `provider.openai.models.{model}.options` (per-model)

Request Transformer (`lib/request-transformer.mjs`)

- `getModelConfig()`: Merges global + per-model options (per-model overrides)
- `getReasoningConfig()`: Accepts user config, applies defaults, normalizes values
- `transformRequestBody()`: Applies user-configured text verbosity, reasoning settings, and include fields
- Normalizes `minimal` → `low` for `gpt-5-codex`
- Adds the `include` field with a default for encrypted reasoning content

Testing (`test-config.mjs`)

- Run with `node test-config.mjs`

Documentation
Updated `README.md` with:

- Documentation of all configuration settings and examples (including the new `include` setting)
- Notes on encrypted reasoning via `reasoning.encrypted_content`

Example Configuration
```json
{
  "$schema": "https://opencode.ai/config.json",
  "plugin": ["opencode-openai-codex-auth"],
  "model": "openai/gpt-5-codex",
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "medium",
        "reasoningSummary": "auto",
        "textVerbosity": "medium",
        "include": ["reasoning.encrypted_content"]
      },
      "models": {
        "gpt-5-codex": {
          "options": {
            "reasoningEffort": "high",
            "reasoningSummary": "detailed"
          }
        }
      }
    }
  }
}
```

To restore v1.0.3 behavior:
```json
{
  "provider": {
    "openai": {
      "options": {
        "reasoningEffort": "high",
        "reasoningSummary": "detailed"
      }
    }
  }
}
```

Testing
All configuration patterns have been tested with:
- Plugin request logging enabled (`ENABLE_PLUGIN_REQUEST_LOGGING=1`)

Commits
Breaking Changes
Default behavior has changed:
Users upgrading from v1.0.3 to v1.0.4 will experience different default reasoning behavior:
- `reasoningEffort`: `high` → `medium` (more balanced dynamic behavior, recommended by OpenAI)
- `reasoningSummary`: `detailed` → `auto` (adaptive verbosity)

Rationale: The new defaults match Codex CLI behavior and provide better out-of-the-box performance. The previous `high` effort setting departed from the default Codex CLI behavior and from OpenAI's recommendation of `medium`, which dynamically thinks longer or shorter as needed and also improves speed for users.

To restore v1.0.3 behavior: Add the configuration shown in the example above to explicitly set `reasoningEffort: "high"` and `reasoningSummary: "detailed"`.