Hey everyone! I ran into this exact issue and found three working solutions.

The Problem: OpenRouter models DO support structured outputs, but LiteLLM's supports_response_schema() returns False for them, causing:
- CrewAI and other frameworks to skip sending response_format
- "Unknown parameter: response_format.response_schema" errors
- Broken structured output functionality

Root Cause: LiteLLM doesn't list openrouter as a provider that supports structured outputs, and the OpenRouter adapter strips/rewrites nested schemas.
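You can see the capability check fail on an affected install (assuming your litellm version exposes supports_response_schema() at the top level, as current releases do):

import litellm

# The capability check frameworks consult before attaching response_format.
# On affected versions this prints False for OpenRouter models, even when
# the underlying model does support structured outputs.
print(litellm.supports_response_schema(
    model="openrouter/mistralai/mistral-small-3.1-24b-instruct"
))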
Here's how I fixed it (pick what works for you):
Solution 1: Library Fix for OpenRouter Transformation
For the LiteLLM maintainers: The real issue is in the OpenRouter adapter, which strips the response_format. Fix it here:
# Inside OpenrouterConfig.map_openai_params(...) method:
def map_openai_params(self, non_default_params, optional_params, model, drop_params):
    # ...existing code...

    # Add this block to preserve response_format:
    if "response_format" in non_default_params:
        rf = non_default_params["response_format"]
        if isinstance(rf, dict):
            # Force LiteLLM to send your response_format exactly as given
            # (optional_params is the dict this method hands back to the caller)
            optional_params["response_format"] = rf

    # ...existing code...
AND add OpenRouter to the global support list:
PROVIDERS_GLOBALLY_SUPPORT_RESPONSE_SCHEMA = (
    "openai",
    "anthropic",
    "cohere",
    "ai21",
    "mistral",
    "openrouter",  # ← Add this line!
)
You need both changes - the global list fix alone won't work because the OpenRouter adapter still strips the schema.
Solution 2: No-Code Workaround (For Users Right Now)
Use extra_body to inject the proper OpenRouter format:
import json, litellm

# Your schema
output_schema = {
    "type": "object",
    "properties": {
        "title": {"type": "string"},
        "tags": {"type": "array", "items": {"type": "string"}},
        "description": {"type": "string"}
    },
    "required": ["title", "tags", "description"]
}

# This bypasses all the schema checking
response = litellm.completion(
    model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
    messages=[{"role": "user", "content": "Analyze this..."}],
    extra_body={
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "structured_response",
                "strict": True,
                "schema": output_schema
            }
        }
    }
)

# Parse the structured response
result = json.loads(response.choices[0].message.content)
print(result)
Works immediately with any OpenRouter model that supports structured outputs!
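If you already describe the output with Pydantic, roughly the same call works with the model's generated schema and you get validation for free. ResponseModel here is a hypothetical stand-in for your own model:

import litellm
from pydantic import BaseModel

# Hypothetical output model; substitute your own fields
class ResponseModel(BaseModel):
    title: str
    tags: list[str]
    description: str

response = litellm.completion(
    model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
    messages=[{"role": "user", "content": "Analyze this..."}],
    extra_body={
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "structured_response",
                "strict": True,
                # Pydantic v2 emits a JSON schema dict for the model
                "schema": ResponseModel.model_json_schema()
            }
        }
    }
)

# Validate the raw JSON straight into the typed model
result = ResponseModel.model_validate_json(response.choices[0].message.content)
print(result)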
Solution 3: Manual Library Patch (Advanced Users)
If you want to patch your local installation right now:
# Find the OpenrouterConfig.map_openai_params method and add:
if "response_format" in non_default_params:
    rf = non_default_params["response_format"]
    if isinstance(rf, dict):
        optional_params["response_format"] = rf
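If you'd rather not edit files under site-packages, a runtime monkey-patch along these lines should also work. The OpenrouterConfig import path below is an assumption and moves between LiteLLM versions, so check where it lives in yours:

import litellm
# Module path is an assumption; adjust it to your installed LiteLLM version
from litellm.llms.openrouter.chat.transformation import OpenrouterConfig

_original_map_openai_params = OpenrouterConfig.map_openai_params

def _patched_map_openai_params(self, non_default_params, optional_params, model, drop_params):
    # Run the stock mapping first, then re-attach the caller's response_format
    result = _original_map_openai_params(
        self, non_default_params, optional_params, model, drop_params
    )
    rf = non_default_params.get("response_format")
    if isinstance(rf, dict):
        result["response_format"] = rf
    return result

OpenrouterConfig.map_openai_params = _patched_map_openai_params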
For CrewAI Users Specifically:
Use Solution 2 with your existing setup:
# Your existing LLM setup
mistralLLM = BaseLLM(
    model="openrouter/mistralai/mistral-small-3.1-24b-instruct",
    base_url="https://openrouter.ai/api/v1",
    api_key=OPENROUTER_API_KEY,
    temperature=0.0,
    # Add this to force structured output:
    extra_body={
        "response_format": {
            "type": "json_schema",
            "json_schema": {
                "name": "your_schema_name",
                "strict": True,
                "schema": your_pydantic_model.model_json_schema()
            }
        }
    }
)
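Here your_pydantic_model is whatever Pydantic model you already use for the task output; a minimal hypothetical example:

from pydantic import BaseModel

# Hypothetical output model; substitute your own fields
class ArticleSummary(BaseModel):
    title: str
    tags: list[str]
    description: str

# ArticleSummary.model_json_schema() is what gets passed as "schema" above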
💡 LiteLLM Team: This needs both fixes!
- OpenRouter transformation fix - stop stripping response_format dicts
- Global support list - add "openrouter" so supports_response_schema() returns True
The transformation fix is the critical one - without it, even adding to the global list won't work because the schema gets stripped before sending to OpenRouter.
Hope this helps everyone dealing with this! The extra_body workaround is your best bet until the library gets patched. 🔥