
Fix openrouter model prefix issue by using LitellmProvider for all prefixes #2541

Draft

liangzhang-keepmoving wants to merge 1 commit into openai:main from liangzhang-keepmoving:fix-openrouter-prefix

Conversation

@liangzhang-keepmoving commented Feb 24, 2026

This PR fixes an issue where the SDK would raise an "Unknown prefix" error when using openrouter models.

The problem was in the _create_fallback_provider method, which only handled the "litellm" prefix and threw an error for all other prefixes. However, LiteLLM actually supports many other model providers, including openrouter, and expects model names in the format "provider/model".

The fix:

  1. Modified _create_fallback_provider to return a LitellmProvider for all prefixes, since LiteLLM can handle various model provider prefixes
  2. Updated get_model to pass the full model name (including prefix) to LiteLLM when using an unknown prefix, so LiteLLM can correctly route the request

This change ensures that models like "openrouter/gpt-4" will work correctly through the multi-provider system.


@chatgpt-codex-connector chatgpt-codex-connector bot left a comment


💡 Codex Review

Here are some automated review suggestions for this pull request.

Reviewed commit: 53d72329b7


Comment on lines +32 to +33

    openai_use_responses: bool = False,
    openai_use_responses_websocket: bool = False,


P1: Keep MultiProvider defaulting to Responses API

Setting openai_use_responses and openai_use_responses_websocket to False by default forces those values into OpenAIProvider, so MultiProvider() no longer inherits OpenAIProvider's default behavior (use_responses defaults from shared config). This silently changes the default model path for openai/... and unprefixed models from Responses to Chat Completions unless every caller opts back in, which is a broad runtime regression.


        return provider.get_model(model_name)
    elif prefix:
        # For unknown prefixes, pass the full model name (including prefix) to LiteLLM
        return self._get_fallback_provider(prefix).get_model(f"{prefix}/{model_name}")


P1: Preserve litellm prefix stripping before fallback routing

After _get_prefix_and_model_name() already strips the first segment, this branch prepends the prefix again for all prefixed models. For litellm/... inputs, LitellmProvider now receives litellm/<rest> instead of the previous <rest> payload, changing long-standing litellm/... routing semantics and breaking existing model names that depended on the stripped form.

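The semantic change the reviewer describes can be sketched as follows; both functions are illustrative stand-ins for the routing logic, not the real code:

```python
# Illustrative stand-ins for the old and new routing of "litellm/..." names.

def route_old(full_name: str) -> str:
    prefix, _, rest = full_name.partition("/")
    if prefix == "litellm":
        return rest  # prefix stripped before handing the name to LitellmProvider
    raise ValueError(f"Unknown prefix: {prefix}")

def route_new(full_name: str) -> str:
    prefix, _, rest = full_name.partition("/")
    # The PR re-attaches the prefix for every prefixed model, including "litellm".
    return f"{prefix}/{rest}"

print(route_old("litellm/anthropic/claude-3"))  # anthropic/claude-3
print(route_new("litellm/anthropic/claude-3"))  # litellm/anthropic/claude-3
```

Preserving backward compatibility would mean special-casing the "litellm" prefix so it is still stripped before fallback routing.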



# For backwards compatibility
MultiProviderMap = MultiProvider


P1: Restore functional MultiProviderMap compatibility

Aliasing MultiProviderMap to MultiProvider is not backward compatible because MultiProvider does not implement the map API (add_provider, remove_provider, get_mapping, etc.). Existing code importing MultiProviderMap still imports successfully but fails at runtime when it uses those methods, which is a breaking public API change unrelated to the prefix fix.

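A non-breaking alternative might keep a thin map class exposing the old dict-style API. The method names below are taken from the reviewer's list (add_provider, remove_provider, get_mapping); the implementation is an assumed sketch, not the original class:

```python
# Assumed sketch of a functional MultiProviderMap; method names come from
# the review comment, the implementation is hypothetical.

class MultiProviderMap:
    def __init__(self) -> None:
        self._mapping: dict[str, object] = {}

    def add_provider(self, prefix: str, provider: object) -> None:
        self._mapping[prefix] = provider

    def remove_provider(self, prefix: str) -> None:
        self._mapping.pop(prefix, None)

    def get_mapping(self) -> dict[str, object]:
        # Return a copy so callers cannot mutate internal state.
        return dict(self._mapping)

prov = object()
pm = MultiProviderMap()
pm.add_provider("openrouter", prov)
print(list(pm.get_mapping()))  # ['openrouter']
```

With a shim like this, code that imports MultiProviderMap and calls the map API keeps working at runtime instead of failing on missing methods.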

Comment on lines 97 to +99

    providers: list[ModelProvider] = [self.openai_provider]
    if self.provider_map is not None:
        providers.extend(self.provider_map.get_mapping().values())
    providers.extend(self._fallback_providers.values())
    for provider in self._fallback_providers.values():
        providers.append(provider)


P2: Close providers sourced from provider_map in aclose

aclose() now only walks openai_provider and fallback providers, but get_model() can still return models from provider_map. Any custom providers held there are no longer closed, so resources owned by mapped providers (sessions, websocket clients, etc.) can leak when users rely on MultiProvider.aclose() for cleanup.

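The fix the reviewer implies can be sketched with stand-in classes. In the real code provider_map exposes get_mapping(); a plain dict is used here for brevity, and all names are assumed:

```python
# Sketch (assumed shape, not the real source): aclose() should also walk
# provider_map so custom providers get their resources released.
import asyncio

class DummyProvider:
    def __init__(self) -> None:
        self.closed = False

    async def aclose(self) -> None:
        self.closed = True

class MultiProvider:
    def __init__(self, provider_map=None) -> None:
        self.openai_provider = DummyProvider()
        self.provider_map = provider_map  # plain dict stands in for the map class
        self._fallback_providers: dict[str, DummyProvider] = {}

    async def aclose(self) -> None:
        providers = [self.openai_provider]
        if self.provider_map is not None:
            # Walk mapped providers too, so their sessions/clients are released.
            providers.extend(self.provider_map.values())
        providers.extend(self._fallback_providers.values())
        for provider in providers:
            await provider.aclose()

custom = DummyProvider()
mp = MultiProvider(provider_map={"custom": custom})
asyncio.run(mp.aclose())
print(custom.closed)  # True
```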

@liangzhang-keepmoving (Author)

@hshahrokni2 Thanks for the positive feedback and for sharing your insights! I'm glad the fix addresses the OpenRouter prefix issue effectively.

You make a great point about LiteLLM being the right solution here, since it already speaks OpenRouter's dialect. That's exactly why we chose this approach: it's clean and leverages LiteLLM's existing capabilities.

Also, thanks for mentioning Komilion as an alternative solution for routing across multiple models. That's a useful option for those looking to simplify their provider-prefix management even further.

@openai/openai-agents-python-maintainers This PR has received positive feedback and addresses a real issue with OpenRouter model support. Would you be able to review it when you have a chance?

@seratch seratch marked this pull request as draft February 24, 2026 23:12
@seratch (Member) commented Feb 24, 2026

Thanks for sending this patch. However, this looks like a breaking change, and it's still unclear to me what issue you're trying to solve. Can you add some unit tests demonstrating the scenario, and avoid any breaking changes?

