
[Feature Request]: Allow dynamic model passthrough for custom OpenAI-compatible providers without explicit declaration #18219

@mameikagou

Description


Feature hasn't been suggested before.

  • I have verified this feature I'm about to request hasn't been suggested before.

Describe the enhancement you want to request

Problem Statement

When using custom OpenAI-compatible providers (e.g., MiniMax, DeepSeek, or other third-party APIs that implement the OpenAI /v1 interface), users are currently required to manually declare every single model in the config.json file under the "models" dictionary.

If a model is not explicitly declared, even though the underlying API endpoint supports it and returns it via GET /v1/models, OpenCode will throw errors such as:

  • 404 Not Found
  • ProviderModelNotFoundError
  • Model not support

Example of Current Workaround Required

Users must manually add entries like this to their config:

"models": {
  "MiniMax-M2.5": {
    "name": "MiniMax M2.5"
  },
  "MiniMax-M2.7": {
    "name": "MiniMax M2.7"
  },
  "MiniMax-M2.7-highspeed": {
    "name": "MiniMax M2.7 highspeed"
  }
}

Without these explicit declarations, the models cannot be used, even if they are fully functional via the OpenAI-compatible API.

Why This Is Problematic

  1. Poor User Experience: Users must manually look up and declare every model variant they want to use, which is tedious and error-prone.

  2. Contradicts OpenAI Protocol Standards: The OpenAI API standard includes a GET /v1/models endpoint specifically for dynamic model discovery. OpenCode should leverage this instead of requiring static declarations.

  3. Maintenance Burden: When providers add new models (e.g., MiniMax releases M2.8), users must either:

    • wait for a models.dev database update, OR
    • manually add the model to their local config

  4. Inconsistent Behavior: Some providers work without explicit declaration, while others (especially custom OpenAI-compatible endpoints) do not, creating confusion.

Proposed Solution

Option 1: Dynamic Model Passthrough (Preferred)

When a user specifies a model that is not in the local models.dev database:

  1. If the provider is a custom OpenAI-compatible endpoint, pass the requested model name through directly to the configured baseUrl
  2. Do not block the request at the routing layer
  3. Allow the remote API to determine if the model is valid
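
The fallback logic above could look roughly like this. This is a minimal sketch, not OpenCode's actual code: the names `ProviderConfig`, `resolveModel`, and the `isCustomOpenAICompatible` field are all illustrative assumptions.

```typescript
// Sketch of Option 1: instead of failing when a model is absent from the
// local registry, fall through to the raw model ID for custom
// OpenAI-compatible providers and let the remote API validate it.
// All names here are hypothetical, not OpenCode internals.

interface ProviderConfig {
  baseUrl: string;
  // Models explicitly declared in config.json (or via models.dev)
  models: Record<string, { name: string }>;
  // True for user-defined OpenAI-compatible endpoints
  isCustomOpenAICompatible: boolean;
}

function resolveModel(provider: ProviderConfig, requested: string): string {
  if (requested in provider.models) {
    return requested; // declared locally: current behavior
  }
  if (provider.isCustomOpenAICompatible) {
    // Passthrough: do not block at the routing layer; the remote
    // /v1/chat/completions call will reject a truly invalid model.
    return requested;
  }
  throw new Error(`ProviderModelNotFoundError: ${requested}`);
}
```

With this shape, a request for an undeclared `MiniMax-M2.8` against a custom provider would reach the remote API instead of erroring locally, while known providers keep their strict validation.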

Option 2: Add Configuration Flag

Add an optional configuration like:

"provider": {
  "my-custom-provider": {
    "baseUrl": "https://api.example.com/v1",
    "allow_unlisted_models": true
  }
}

When enabled, this would allow any model name to be passed through without local validation.
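
The gating check for such a flag could be a one-liner; the function name `validateModel` and the flag wiring are illustrative only:

```typescript
// Sketch of Option 2: a per-provider opt-in flag that bypasses local
// model validation. `validateModel` is a hypothetical helper name.
function validateModel(
  declared: Record<string, unknown>,
  requested: string,
  allowUnlisted: boolean,
): boolean {
  // With the flag on, any model name is accepted and forwarded as-is;
  // with it off, only explicitly declared models pass.
  return allowUnlisted || requested in declared;
}
```

Keeping the flag opt-in preserves current behavior for existing configs while giving custom-provider users an escape hatch.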

Option 3: Automatic Model Discovery

On provider initialization, automatically call GET /v1/models to fetch available models and register them dynamically.
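
A sketch of what that discovery step could look like, assuming the standard OpenAI models-list response shape (`{ "object": "list", "data": [{ "id": ... }] }`); the function names are hypothetical:

```typescript
// Sketch of Option 3: fetch GET /v1/models on provider init and build
// the same "models" map that users currently write by hand in config.json.
// registerFromModelsResponse and discoverModels are illustrative names.

interface ModelsResponse {
  object: string; // "list" in the OpenAI API
  data: Array<{ id: string; object: string }>;
}

// Pure helper: convert the /v1/models payload into a models map.
function registerFromModelsResponse(
  resp: ModelsResponse,
): Record<string, { name: string }> {
  const models: Record<string, { name: string }> = {};
  for (const m of resp.data) {
    models[m.id] = { name: m.id };
  }
  return models;
}

// Network wrapper: call the provider's OpenAI-compatible endpoint.
async function discoverModels(baseUrl: string, apiKey: string) {
  const res = await fetch(`${baseUrl}/models`, {
    headers: { Authorization: `Bearer ${apiKey}` },
  });
  return registerFromModelsResponse(await res.json());
}
```

Discovery could run once at startup with a cached result, so providers that don't implement `GET /v1/models` could still fall back to static declarations.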

Benefits

  • Improved UX: Users can immediately use new models without config changes
  • Better Standards Compliance: Aligns with OpenAI's dynamic model discovery protocol
  • Reduced Maintenance: Less reliance on manual models.dev updates for every new model release
  • Flexibility: Enables easier experimentation with emerging providers and custom fine-tuned models

Related Issues

This issue is related to several existing reports where users encounter ProviderModelNotFoundError or 404 errors when trying to use models that are available via their API but not declared locally.

Environment

  • OpenCode Version: Latest (as of 2026-03-19)
  • Affected Providers: Any custom OpenAI-compatible provider (MiniMax, DeepSeek, Azure, etc.)
