Summary
The openai/gpt-oss-120b model entry in the local catalog has pricing that does not match the current official Groq documentation. Groq is listed as a provider for this model.
Source
packages/proxy/schema/model_list.json (line 1393)
Gap Details
| Field | Local catalog | Groq official docs |
|---|---|---|
| input_cost_per_mil_tokens | $0.10 | $0.15 |
| output_cost_per_mil_tokens | $0.50 | $0.60 |
| input_cache_read_cost_per_mil_tokens | $0.075 | Not published |
Current local entry (line 1393):
"openai/gpt-oss-120b": {
"format": "openai",
"flavor": "chat",
"input_cost_per_mil_tokens": 0.1,
"output_cost_per_mil_tokens": 0.5,
"input_cache_read_cost_per_mil_tokens": 0.075,
"displayName": "OpenAI GPT-OSS (120B)",
"reasoning": true,
"max_input_tokens": 131072,
"max_output_tokens": 32766,
"available_providers": ["groq", "together", ...]
}
The model has multiple providers (groq, together, etc.), so the catalog pricing may reflect a different provider's rates. However, the openai/ prefix on the openai/gpt-oss-120b model ID follows the Groq-specific naming convention, so the pricing should match Groq's published rates.
Why This Is High Confidence
The Groq documentation explicitly lists openai/gpt-oss-120b at $0.15/MTok input and $0.60/MTok output. The local catalog shows $0.10/$0.50, understating the input cost by 33% and the output cost by 17% relative to the official rates.
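The size of the discrepancy can be checked with a quick calculation using the values from the table above:

```python
# Official Groq rates vs. local catalog values (USD per million tokens).
official = {"input": 0.15, "output": 0.60}
local = {"input": 0.10, "output": 0.50}

for field in ("input", "output"):
    # Fraction by which the catalog value falls short of the official rate.
    shortfall = (official[field] - local[field]) / official[field]
    print(f"{field}: catalog understates by {shortfall:.0%}")
# input: catalog understates by 33%
# output: catalog understates by 17%
```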
Verification Notes
- Pricing verified from: https://console.groq.com/docs/models
- Token limits (131072 input, 32766 output) were not contradicted by Groq docs — the downstream fix job should verify these.
- Cache pricing is not published on Groq docs and should be verified separately.
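As a sketch of how the downstream fix job might verify the entry — assuming the catalog entry has been parsed into a dict (the `find_mismatches` helper and the `None` convention for unpublished fields are illustrative, not the actual job's logic):

```python
# Expected Groq rates; None marks fields Groq does not publish.
expected = {
    "input_cost_per_mil_tokens": 0.15,
    "output_cost_per_mil_tokens": 0.60,
    "input_cache_read_cost_per_mil_tokens": None,  # not published by Groq
}

def find_mismatches(entry: dict, expected: dict) -> list:
    """Return field names whose catalog value differs from the published rate.

    Fields with an expected value of None are skipped: they cannot be
    verified against the docs and need a separate check.
    """
    return [
        field
        for field, official in expected.items()
        if official is not None and entry.get(field) != official
    ]

# Current catalog values for openai/gpt-oss-120b (line 1393).
entry = {
    "input_cost_per_mil_tokens": 0.1,
    "output_cost_per_mil_tokens": 0.5,
    "input_cache_read_cost_per_mil_tokens": 0.075,
}
print(find_mismatches(entry, expected))
# ['input_cost_per_mil_tokens', 'output_cost_per_mil_tokens']
```

Both priced fields are flagged; the cache-read field is skipped because there is no published rate to compare against.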
Proposed fix payload:
{
"kind": "missing_model",
"provider": "groq",
"models": ["openai/gpt-oss-120b"],
"status": "active",
"model_specs": {
"openai/gpt-oss-120b": {
"format": "openai",
"flavor": "chat",
"input_cost_per_mil_tokens": 0.15,
"output_cost_per_mil_tokens": 0.60,
"available_providers": ["groq"],
"reasoning": true,
"max_input_tokens": 131072,
"max_output_tokens": 32766
}
},
"source_urls": [
"https://console.groq.com/docs/models"
]
}
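For context on how the corrected per-million-token fields translate into billed cost, a hypothetical calculation (the request sizes here are illustrative, chosen to match the model's max input and output limits):

```python
def request_cost(input_tokens, output_tokens, input_rate, output_rate):
    """Cost in USD for a single request, given per-million-token rates."""
    return input_tokens / 1e6 * input_rate + output_tokens / 1e6 * output_rate

# A maximal request: 131072 input tokens, 32766 output tokens.
old = request_cost(131072, 32766, 0.10, 0.50)   # stale catalog rates
new = request_cost(131072, 32766, 0.15, 0.60)   # corrected Groq rates
print(f"old: ${old:.4f}, new: ${new:.4f}")  # roughly 33% more per request
```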