Feature hasn't been suggested before.
Describe the enhancement you want to request
Problem Statement
When using custom OpenAI-compatible providers (e.g., MiniMax, DeepSeek, or other third-party APIs that implement the OpenAI /v1 interface), users are currently required to manually declare every single model in the config.json file under the "models" dictionary.
If a model is not explicitly declared, even though the underlying API endpoint supports it and returns it via GET /v1/models, OpenCode will throw errors such as:
- 404 Not Found
- ProviderModelNotFoundError
- "Model not support"
Example of Current Workaround Required
Users must manually add entries like this to their config:
"models": {
"MiniMax-M2.5": {
"name": "MiniMax M2.5"
},
"MiniMax-M2.7": {
"name": "MiniMax M2.7"
},
"MiniMax-M2.7-highspeed": {
"name": "MiniMax M2.7 highspeed"
}
}
Without these explicit declarations, the models cannot be used, even if they are fully functional via the OpenAI-compatible API.
Why This Is Problematic
- Poor User Experience: Users must manually look up and declare every model variant they want to use, which is tedious and error-prone.
- Contradicts OpenAI Protocol Standards: The OpenAI API standard includes a GET /v1/models endpoint specifically for dynamic model discovery. OpenCode should leverage this instead of requiring static declarations.
- Maintenance Burden: When providers add new models (e.g., MiniMax releases M2.8), users must wait for either:
  - A models.dev database update, OR
  - Manual addition of the model to their local config
- Inconsistent Behavior: Some providers work without explicit declaration, while others (especially custom OpenAI-compatible endpoints) do not, creating confusion.
Proposed Solution
Option 1: Dynamic Model Passthrough (Preferred)
When a user specifies a model that is not in the local models.dev database:
- If the provider is a custom OpenAI-compatible endpoint, pass the model name through directly to the configured baseUrl
- Do not block the request at the routing layer
- Allow the remote API to determine whether the model is valid
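A minimal sketch of what this fallback could look like (the `Provider` type, `resolveModel` function, and field names are hypothetical illustrations, not OpenCode's actual internals):

```typescript
// Hypothetical sketch of Option 1: when a model is not declared locally,
// pass it through for custom OpenAI-compatible providers instead of
// raising ProviderModelNotFoundError at the routing layer.
interface Provider {
  id: string;
  baseUrl?: string; // set for custom OpenAI-compatible endpoints
  knownModels: Set<string>; // models declared locally or via models.dev
}

function resolveModel(provider: Provider, model: string): string {
  if (provider.knownModels.has(model)) return model;
  // Unknown model on a custom endpoint: forward it as-is and let the
  // remote API decide whether the model is valid.
  if (provider.baseUrl) return model;
  throw new Error(`ProviderModelNotFoundError: ${model}`);
}

// Example: "MiniMax-M2.8" is not declared locally but is forwarded anyway.
const minimax: Provider = {
  id: "minimax",
  baseUrl: "https://api.example.com/v1",
  knownModels: new Set(["MiniMax-M2.5"]),
};
console.log(resolveModel(minimax, "MiniMax-M2.8")); // MiniMax-M2.8
```

The key property is that validation errors move from OpenCode's local routing layer to the remote API, which is the authoritative source of truth for which models it serves.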
Option 2: Add Configuration Flag
Add an optional configuration like:
"provider": {
"my-custom-provider": {
"baseUrl": "https://api.example.com/v1",
"allow_unlisted_models": true
}
}
When enabled, this would allow any model name to be passed through without local validation.
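The validation check itself would be a one-line change; a hedged sketch (the `ProviderConfig` shape and `validateModel` helper are illustrative, not existing OpenCode code):

```typescript
// Hypothetical sketch of Option 2: honor an allow_unlisted_models flag
// when validating a requested model name against the local declarations.
interface ProviderConfig {
  baseUrl: string;
  allow_unlisted_models?: boolean;
}

function validateModel(
  config: ProviderConfig,
  declaredModels: string[],
  model: string,
): boolean {
  if (declaredModels.includes(model)) return true;
  // Skip local validation entirely when the opt-in flag is enabled.
  return config.allow_unlisted_models === true;
}
```

Because the flag defaults to false, existing configs keep today's strict behavior, and users opt in per provider.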
Option 3: Automatic Model Discovery
On provider initialization, automatically call GET /v1/models to fetch available models and register them dynamically.
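Since the OpenAI `GET /v1/models` response is a standard `{ "object": "list", "data": [{ "id": ... }] }` envelope, discovery could be as simple as the sketch below (function names are hypothetical; only the response shape is taken from the OpenAI API):

```typescript
// Hypothetical sketch of Option 3: fetch GET /v1/models on provider
// initialization and register the returned model ids dynamically.
interface ModelsResponse {
  object: "list";
  data: { id: string; object: "model" }[];
}

// Pure helper: pull the model ids out of a /v1/models response body.
function extractModelIds(body: ModelsResponse): string[] {
  return body.data.map((m) => m.id);
}

async function discoverModels(
  baseUrl: string,
  apiKey: string,
): Promise<string[]> {
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  if (!res.ok) throw new Error(`model discovery failed: HTTP ${res.status}`);
  return extractModelIds((await res.json()) as ModelsResponse);
}
```

A provider whose endpoint does not implement `/v1/models` could simply fall back to the current static behavior, so discovery would be additive rather than breaking.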
Benefits
- Improved UX: Users can immediately use new models without config changes
- Better Standards Compliance: Aligns with OpenAI's dynamic model discovery protocol
- Reduced Maintenance: Less reliance on manual models.dev updates for every new model release
- Flexibility: Enables easier experimentation with emerging providers and custom fine-tuned models
Related Issues
This issue is related to several existing reports where users encounter ProviderModelNotFoundError or 404 errors when trying to use models that are available via their API but not declared locally:
- ProviderModelNotFoundError when using the iflowcn/minimax-m2 model in OpenCode (error status 435 "Model not support")
- #4305: Cannot use custom MiniMax models
Environment
- OpenCode Version: Latest (as of 2026-03-19)
- Affected Providers: Any custom OpenAI-compatible provider (MiniMax, DeepSeek, Azure, etc.)