## Summary
Daily model inventory check for 2026-05-12. The Copilot API returned HTTP 400 (not available), so only Anthropic, Gemini, and OpenAI were queried. Key findings: one multiplier entry gap (claude-sonnet-4-6 dash-notation), several new GPT-5.x sub-version aliases proposed, and all Gemini 3.x models are already covered by existing wildcard patterns.
- Providers queried: Anthropic, Gemini, OpenAI (Copilot: HTTP 400 — skipped)
- Total models found: ~172 (Anthropic: 9, Gemini: ~55, OpenAI: ~108)
- Proposed alias changes: 6 (new GPT-5.x version-pinning aliases)
- Multiplier gaps found: 1 missing model ID + several chat-latest aliases without entries
## Provider Model Counts

| Provider | Models Available | Status |
|---|---|---|
| anthropic | 9 | ✅ ok |
| copilot | 0 | ❌ HTTP 400 — skipped |
| gemini | ~55 | ✅ ok |
| openai | ~108 | ✅ ok |
## Raw API Fields Discovered

**Anthropic** (`/v1/models`): `id`, `display_name`, `created_at`, `type` — minimal metadata; no billing or capability fields in the public list endpoint.

**Gemini** (`/v1beta/models`): `id`, `display_name`, `description`, `input_token_limit`, `output_token_limit`, `supported_generation_methods`, `version`
- Context window fully present (`input_token_limit`, `output_token_limit`)
- Capability flags via `supported_generation_methods` (generateContent, bidiGenerateContent, embedContent, predict, predictLongRunning)
- No direct billing/pricing fields; tier must be inferred from limits and model name

**OpenAI** (`/v1/models`): `id`, `owned_by`, `created` — minimal metadata; no token limits or billing in the list endpoint. `owned_by` distinguishes `openai` / `openai-internal` / `system` (internal staging) models.
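Since the OpenAI list endpoint exposes only `id`, `owned_by`, and `created`, filtering out internal staging entries comes down to a check on `owned_by`. A minimal sketch — the sample payload is made up (shaped like the endpoint's response), and the `public_model_ids` helper is illustrative, not from the repo:

```python
from typing import Any

# Made-up sample shaped like the OpenAI /v1/models response; only the fields
# the live list endpoint actually returns (id, owned_by) are used here.
sample_payload: dict[str, Any] = {
    "data": [
        {"id": "gpt-5.1", "owned_by": "system"},
        {"id": "gpt-4o", "owned_by": "openai"},
        {"id": "crest-alpha-01", "owned_by": "openai-internal"},
    ]
}

def public_model_ids(payload: dict[str, Any]) -> list[str]:
    """Drop entries owned by openai-internal (internal staging)."""
    return sorted(
        m["id"] for m in payload["data"] if m["owned_by"] != "openai-internal"
    )

print(public_model_ids(sample_payload))  # ['gpt-4o', 'gpt-5.1']
```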
## Token Multiplier Analysis

### Missing from `model_multipliers.json`

| Model ID | Provider | Inferred Multiplier | Basis |
|---|---|---|---|
| `claude-sonnet-4-6` | anthropic | 9.0 | Docs table lists Claude Sonnet 4.6 at 9; only dot-notation `claude-sonnet-4.6` is stored |
| `gpt-5.3` | openai | 6.0 | No explicit entry; `gpt-5.3-codex` is stored (6.0), base `gpt-5.3` is not |
| `gpt-5-chat-latest` | openai | 1.0 | Alias to `gpt-5` family; no entry |
| `gpt-5.1-chat-latest` | openai | 3.0 | Alias to `gpt-5.1`; no entry |
| `gpt-5.2-chat-latest` | openai | 3.0 | Alias to `gpt-5.2`; no entry |
| `gpt-5.3-chat-latest` | openai | 6.0 | Alias to `gpt-5.3`; no entry |
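The practical effect of a missing entry can be sketched with a plain dictionary lookup. The excerpt table, `DEFAULT_MULTIPLIER`, and `multiplier_for` helper below are hypothetical stand-ins for however the real lookup works:

```python
# Hypothetical excerpt of model_multipliers.json plus an assumed default
# fallback; multiplier_for stands in for the real lookup logic.
MULTIPLIERS = {
    "claude-sonnet-4.6": 9.0,  # only the dot-notation ID is stored
    "gpt-5.3-codex": 6.0,
}
DEFAULT_MULTIPLIER = 1.0

def multiplier_for(model_id: str) -> float:
    """Return the stored multiplier, or the default when the ID has no entry."""
    return MULTIPLIERS.get(model_id, DEFAULT_MULTIPLIER)

print(multiplier_for("claude-sonnet-4.6"))  # 9.0 — dot notation hits
print(multiplier_for("claude-sonnet-4-6"))  # 1.0 — dash notation silently falls back
```

The dash-notation miss is silent: no error, just an underbilled default.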
### Stale entries (no longer returned by any API)

The following are in `model_multipliers.json` but absent from all live inventories (Anthropic, Gemini, OpenAI). They may still exist in Copilot (API unavailable today), so no removals are recommended without Copilot data:

- `claude-3-haiku`, `claude-3-sonnet`, `claude-3-opus` (Claude 3 generation)
- `claude-3-5-haiku`, `claude-3-5-sonnet`, `claude-3-7-sonnet` (Claude 3.x generation)
- `claude-haiku-4-5`, `claude-haiku-4.5` (base alias variants — `claude-haiku-4-5-20251001` is live, but the unnormalized forms are not returned)

By contrast, `gpt-4`, `gpt-4-turbo`, `gpt-4o`, and `gpt-4o-mini` are present in the OpenAI live API ✅ and are not stale.
### Inferred vs stored discrepancies

Multiplier values in the docs billing table all match `model_multipliers.json` with the new multipliers already applied. The only flags are the Claude Sonnet 4.6 dash-notation ID gap noted above and GPT-5.5, whose docs multiplier is still TBD.
| Model (docs) | Stored | Docs new multiplier | Match? |
|---|---|---|---|
| Claude Haiku 4.5 | 0.33 | 0.33 | ✅ |
| Claude Opus 4.5 | 15.0 | 15 | ✅ |
| Claude Opus 4.6 | 27.0 | 27 | ✅ |
| Claude Opus 4.7 | 27.0 | 27 | ✅ |
| Claude Sonnet 4.5 | 6.0 | 6 | ✅ |
| Claude Sonnet 4.6 | 9.0 (dot-notation only) | 9 | ⚠️ dash-notation missing |
| Gemini 2.5 Pro | 1.0 | 1 | ✅ |
| Gemini 3 Flash | 0.33 | 0.33 | ✅ |
| Gemini 3 Pro | 6.0 | 6 | ✅ |
| Gemini 3.1 Pro | 6.0 | 6 | ✅ |
| GPT-4o | 0.33 | 0.33 | ✅ |
| GPT-4.1 | 1.0 | 1 | ✅ |
| GPT-5.1 | 3.0 | 3 | ✅ |
| GPT-5.1-Codex | 3.0 | 3 | ✅ |
| GPT-5.1-Codex-Mini | 0.33 | 0.33 | ✅ |
| GPT-5.1-Codex-Max | 3.0 | 3 | ✅ |
| GPT-5.2 | 3.0 | 3 | ✅ |
| GPT-5.2-Codex | 3.0 | 3 | ✅ |
| GPT-5.3-Codex | 6.0 | 6 | ✅ |
| GPT-5.4 | 6.0 | 6 | ✅ |
| GPT-5.4 mini | 6.0 | 6 | ✅ |
| GPT-5.5 | 7.5 | TBD | ✅ (stored as 7.5) |
| Grok Code Fast 1 | 0.33 | 0.33 | ✅ |
| Raptor mini | 0.33 | 0.33 | ✅ |
## Proposed Alias Updates

### 1. Fix `claude-sonnet-4-6` multiplier gap

**What:** Add the dash-notation ID to `model_multipliers.json`.

**Why:** The live Anthropic API returns `claude-sonnet-4-6` (dashes), but only `claude-sonnet-4.6` (dot) is stored. Workflows using this model ID will fall back to the default multiplier.

This is a `model_multipliers.json` fix, not an alias change.
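One way to close the gap without duplicating entries would be to normalize dash-notation version suffixes to dot notation before the lookup. The `normalize_version` helper below is a hypothetical sketch, not the repo's actual code:

```python
import re

def normalize_version(model_id: str) -> str:
    """Rewrite a dash-separated version suffix ("-4-6") as dot notation ("-4.6")."""
    return re.sub(r"-(\d+)-(\d+)\b", r"-\1.\2", model_id)

print(normalize_version("claude-sonnet-4-6"))  # claude-sonnet-4.6
print(normalize_version("claude-sonnet-4.6"))  # unchanged: claude-sonnet-4.6
```

Caveat: dated IDs such as `claude-haiku-4-5-20251001` would also have their version segment rewritten, so simply storing both notations may be the safer fix.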
### 2. Add `gpt-5.1` through `gpt-5.5` version-pinning aliases

**What:** New aliases in `model_aliases.json` for GPT-5 sub-generations.

**Why:** The OpenAI live API has `gpt-5.1`, `gpt-5.2`, `gpt-5.3`, `gpt-5.4`, and `gpt-5.5` as distinct model generations with distinct multipliers (3 → 3 → 6 → 6 → 7.5). The current `gpt-5` alias uses `gpt-5*`, which resolves to whichever version comes first alphabetically, making version targeting impossible for workflow authors.

```json
{
  "gpt-5.1": ["copilot/gpt-5.1*", "openai/gpt-5.1*"],
  "gpt-5.2": ["copilot/gpt-5.2*", "openai/gpt-5.2*"],
  "gpt-5.3": ["copilot/gpt-5.3*", "openai/gpt-5.3*"],
  "gpt-5.4": ["copilot/gpt-5.4*", "openai/gpt-5.4*"],
  "gpt-5.5": ["copilot/gpt-5.5*", "openai/gpt-5.5*"]
}
```
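The resolution hazard described above can be demonstrated with `fnmatch` as a stand-in for the resolver's wildcard semantics (an assumption; the real matcher may differ):

```python
from fnmatch import fnmatch

# Subset of IDs taken from the OpenAI inventory later in this report
live_models = ["gpt-5", "gpt-5-mini", "gpt-5.1", "gpt-5.2", "gpt-5.3-codex", "gpt-5.5"]

matches = sorted(m for m in live_models if fnmatch(m, "gpt-5*"))
print(matches[0])  # gpt-5 — the broad pattern matches everything; first match wins

# A version-pinned pattern narrows the match set to one generation:
print([m for m in live_models if fnmatch(m, "gpt-5.2*")])  # ['gpt-5.2']
```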
### 3. Add `gemini-3-flash` and `gemini-3.1-flash` explicit aliases

**What:** Explicit aliases for the Gemini 3 Flash and 3.1 Flash families.

**Why:** Both are live in the Gemini API. They are technically covered by the `gemini-flash` wildcard (`gemini-*flash*`), but an explicit alias gives workflow authors a stable, intention-revealing name.

```json
{
  "gemini-3-flash": ["copilot/gemini-3*flash*", "google/gemini-3*flash*", "gemini/gemini-3*flash*"],
  "gemini-3.1-flash": ["copilot/gemini-3.1*flash*", "google/gemini-3.1*flash*", "gemini/gemini-3.1*flash*"],
  "gemini-3-pro": ["copilot/gemini-3*pro*", "google/gemini-3*pro*", "gemini/gemini-3*pro*"],
  "gemini-3.1-pro": ["copilot/gemini-3.1*pro*", "google/gemini-3.1*pro*", "gemini/gemini-3.1*pro*"]
}
```
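As a sanity check that the proposed patterns match the live IDs listed later in this report, again using `fnmatch` as an assumed stand-in for the resolver's glob semantics:

```python
from fnmatch import fnmatch

# Provider-prefixed IDs built from the Gemini inventory below (subset)
live = [
    "gemini/gemini-3-flash-preview",
    "gemini/gemini-3.1-flash-lite",
    "gemini/gemini-3.1-pro-preview",
]

print([m for m in live if fnmatch(m, "gemini/gemini-3*flash*")])
# both flash IDs — the gemini-3 pattern also covers 3.1
print([m for m in live if fnmatch(m, "gemini/gemini-3.1*flash*")])
# only the 3.1 flash ID
```

Note the overlap: `gemini-3*flash*` also matches the 3.1 flash IDs, so workflow authors who want 3.1 specifically should use the narrower `gemini-3.1-flash` alias.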
## Full Model Lists by Provider

### Anthropic (9 models)
- claude-haiku-4-5-20251001
- claude-opus-4-1-20250805
- claude-opus-4-20250514
- claude-opus-4-5-20251101
- claude-opus-4-6
- claude-opus-4-7
- claude-sonnet-4-20250514
- claude-sonnet-4-5-20250929
- claude-sonnet-4-6
### Gemini (selected text-generation models)
- aqa, deep-research-max-preview-04-2026, deep-research-preview-04-2026, deep-research-pro-preview-12-2025
- gemini-2.0-flash, gemini-2.0-flash-001, gemini-2.0-flash-lite, gemini-2.0-flash-lite-001
- gemini-2.5-computer-use-preview-10-2025, gemini-2.5-flash, gemini-2.5-flash-image
- gemini-2.5-flash-lite, gemini-2.5-flash-native-audio-latest, gemini-2.5-flash-native-audio-preview-09-2025
- gemini-2.5-flash-native-audio-preview-12-2025, gemini-2.5-flash-preview-tts, gemini-2.5-pro
- gemini-2.5-pro-preview-tts, gemini-3-flash-preview, gemini-3-pro-image-preview, gemini-3-pro-preview
- gemini-3.1-flash-image-preview, gemini-3.1-flash-lite, gemini-3.1-flash-lite-preview
- gemini-3.1-flash-live-preview, gemini-3.1-flash-tts-preview, gemini-3.1-pro-preview
- gemini-3.1-pro-preview-customtools, gemini-embedding-001, gemini-embedding-2, gemini-embedding-2-preview
- gemini-flash-latest, gemini-flash-lite-latest, gemini-pro-latest
- gemini-robotics-er-1.5-preview, gemini-robotics-er-1.6-preview
- gemma-4-26b-a4b-it, gemma-4-31b-it
- imagen-4.0-fast-generate-001, imagen-4.0-generate-001, imagen-4.0-ultra-generate-001
- lyria-3-clip-preview, lyria-3-pro-preview, nano-banana-pro-preview
- veo-2.0-generate-001, veo-3.0-fast-generate-001, veo-3.0-generate-001
- veo-3.1-fast-generate-preview, veo-3.1-generate-preview, veo-3.1-lite-generate-preview
### OpenAI (key text/reasoning models)
- gpt-4, gpt-4-0613, gpt-4-turbo, gpt-4-turbo-2024-04-09, gpt-4o, gpt-4o-mini
- gpt-4.1, gpt-4.1-2025-04-14, gpt-4.1-mini, gpt-4.1-mini-2025-04-14, gpt-4.1-nano, gpt-4.1-nano-2025-04-14
- gpt-5, gpt-5-2025-08-07, gpt-5-chat-latest, gpt-5-codex, gpt-5-mini, gpt-5-mini-2025-08-07
- gpt-5-nano, gpt-5-nano-2025-08-07, gpt-5-pro, gpt-5-pro-2025-10-06
- gpt-5.1, gpt-5.1-2025-11-13, gpt-5.1-chat-latest, gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
- gpt-5.2, gpt-5.2-2025-12-11, gpt-5.2-chat-latest, gpt-5.2-codex, gpt-5.2-pro
- gpt-5.3-chat-latest, gpt-5.3-codex
- gpt-5.4, gpt-5.4-2026-03-05, gpt-5.4-mini, gpt-5.4-nano, gpt-5.4-pro
- gpt-5.5, gpt-5.5-2026-04-23, gpt-5.5-pro
- o1, o1-2024-12-17, o1-pro, o1-pro-2025-03-19
- o3, o3-2025-04-16, o3-deep-research, o3-deep-research-2025-06-26, o3-mini, o3-mini-2025-01-31, o3-pro
- o4-mini, o4-mini-2025-04-16, o4-mini-deep-research, o4-mini-deep-research-2025-06-26
## Notes

- The Copilot API returned HTTP 400 — multiplier and alias analysis is based on the Anthropic/Gemini/OpenAI direct APIs and the official GitHub Copilot billing docs table only. Re-run when the Copilot API is available.
- Many OpenAI models in the live list are internal alpha/staging variants (e.g., `crest-alpha-*`, `glacier-alpha-*`, `willow-alpha-*`); these are excluded from alias and multiplier considerations.
- The Gemma 4 models (`gemma-4-26b-a4b-it`, `gemma-4-31b-it`) are already covered by `gemini/gemma*` in the `gemma` alias.
- Gemini 3.x models are already covered by the `gemini-flash`/`gemini-pro` wildcard aliases; the proposed Gemini 3 explicit aliases are optional quality-of-life improvements.
Generated by Daily Model Inventory Checker