## Summary
Daily model inventory check for 2026-05-05. Three providers returned data (Anthropic, OpenAI, Gemini); the Copilot API returned an HTTP 400 error. Several new model families and versions were found that warrant alias and multiplier updates.
- Providers queried: Anthropic, OpenAI, Gemini, Copilot
- Total models found: 257 (9 + 198 + 50 + 0)
- Proposed alias changes: 4
- Multiplier gaps found: 10
## Provider Model Counts

| Provider | Models Available | Status |
| --- | --- | --- |
| anthropic | 9 | ✅ ok |
| openai | 198 | ✅ ok |
| gemini | 50 | ✅ ok |
| copilot | 0 | ❌ HTTP 400 |
## Raw API Fields Discovered

### Anthropic (`/data[]` array)

- Fields: `id`, `display_name`, `created_at`, `type`, `max_input_tokens`, `max_tokens`
- `capabilities`: `batch`, `citations`, `code_execution`, `context_management`, `effort`, `image_input`, `pdf_input`, `structured_outputs`, `thinking` (with `types.enabled` / `types.adaptive`)
- Useful fields: `max_input_tokens` (context window), `capabilities.thinking.supported` (reasoning-model flag), `capabilities.image_input.supported` (vision flag), `capabilities.effort.supported` (extended reasoning)
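The useful fields above can be pulled out with a small helper. This is a sketch only; the helper name and the sample record are illustrative, and the field shapes follow what this inventory run observed rather than any documented schema.

```python
# Sketch: extract the useful Anthropic /v1/models fields noted above
# from a single entry. Sample record is hypothetical.
def summarize_anthropic_model(entry: dict) -> dict:
    caps = entry.get("capabilities", {})

    def supported(name: str) -> bool:
        # Each capability is assumed to be an object with a "supported" flag.
        return bool(caps.get(name, {}).get("supported", False))

    return {
        "id": entry["id"],
        "context_window": entry.get("max_input_tokens"),
        "reasoning": supported("thinking"),
        "vision": supported("image_input"),
        "extended_reasoning": supported("effort"),
    }

sample = {
    "id": "claude-sonnet-4-5-20250929",
    "max_input_tokens": 200000,
    "capabilities": {
        "thinking": {"supported": True},
        "image_input": {"supported": True},
        "effort": {"supported": False},
    },
}
print(summarize_anthropic_model(sample))
```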
### OpenAI (`/data[]` array)

- Fields: `id`, `object`, `created` (Unix timestamp), `owned_by`
- Very minimal: no token limits and no capability flags in the public list endpoint
- `owned_by` values: `openai`, `openai-internal`, `system` — useful for filtering internal/experimental models
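A minimal sketch of the `owned_by` filter described above, assuming `openai-internal` marks internal/experimental models (the sample list is hypothetical):

```python
# Sketch: drop internal models from an OpenAI /v1/models listing
# based on the owned_by field observed in this run.
def non_internal(models: list[dict]) -> list[str]:
    return [m["id"] for m in models if m.get("owned_by") != "openai-internal"]

models = [
    {"id": "gpt-5", "owned_by": "openai"},
    {"id": "gpt-5-codex-alpha", "owned_by": "openai-internal"},
    {"id": "gpt-5-mini", "owned_by": "system"},
]
print(non_internal(models))  # → ['gpt-5', 'gpt-5-mini']
```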
### Gemini (`/models[]` array)

- Fields: `name` (e.g. `models/gemini-2.5-pro`), `version`, `displayName`, `description`
- `inputTokenLimit`, `outputTokenLimit` — useful proxy for model tier
- `supportedGenerationMethods` — capability flags (`generateContent`, `generateAnswer`, `generateImage`, etc.)
- `thinking` boolean — flags reasoning/deep-think models
- Sampling defaults: `temperature`, `topP`, `topK`, `maxTemperature`
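The tier proxy can also be approximated from the model name itself, mirroring the flash-lite < flash < pro ordering this report uses later. The helper and tier labels below are illustrative, not an official API concept:

```python
# Sketch: classify a Gemini model into a coarse tier from its name.
# Check "flash-lite" before "flash" since the former contains the latter.
def gemini_tier(name: str) -> str:
    model_id = name.removeprefix("models/")
    if "flash-lite" in model_id:
        return "flash-lite"
    if "flash" in model_id:
        return "flash"
    if "pro" in model_id:
        return "pro"
    return "other"

print(gemini_tier("models/gemini-3.1-flash-lite-preview"))  # → flash-lite
print(gemini_tier("models/gemini-3-pro-preview"))           # → pro
```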
## Token Multiplier Analysis

### Missing from `model_multipliers.json`

| Model ID | Provider | Inferred Multiplier | Basis |
| --- | --- | --- | --- |
| gemini-3-flash-preview | gemini | ~0.33 | Billing docs: "Gemini 3 Flash" = 0.33 new mult |
| gemini-3-pro-preview | gemini | ~6.0 | Billing docs: "Gemini 3 Pro" = 6 new mult |
| gemini-3-pro-image-preview | gemini | ~6.0 | Same family as Gemini 3 Pro |
| gemini-3.1-pro-preview | gemini | ~6.0 | Billing docs: "Gemini 3.1 Pro" = 6 new mult |
| gemini-3.1-flash-lite-preview | gemini | ~0.1 | Flash-Lite tier heuristic |
| gemini-3.1-flash-image-preview | gemini | ~0.33 | Flash tier (Gemini 3 Flash) |
| gemini-2.5-flash-lite | gemini | ~0.1 | Same tier as gemini-2.5-flash-lite in billing |
| gemini-2.5-flash-image | gemini | ~0.2 | Same as gemini-2.5-flash |
| gpt-5.4-pro | openai | ~2.0 | Same as gpt-5-pro / gpt-5.5-pro pattern |
| gpt-5.2-pro | openai | ~2.0 | Same as gpt-5-pro pattern |
### Stale entries (no longer returned by any API)

The following entries in `model_multipliers.json` use the dotted Claude naming convention (e.g. `claude-opus-4.5`) or reference older Claude 3.x generations that the Anthropic API no longer returns. They may still be served under aliases but are not surfaced in the live model list:

- `claude-haiku-4.5`, `claude-sonnet-4.5`, `claude-sonnet-4.6`, `claude-opus-4.5`, `claude-opus-4.6` (dotted variants — likely alias IDs)
- `claude-3-5-haiku`, `claude-3-haiku`, `claude-3-sonnet`, `claude-3-5-sonnet`, `claude-3-7-sonnet`, `claude-3-opus`, `claude-3-5-opus`

These should be kept as defensive entries in case the models are still routed, even though the live Anthropic API no longer returns them.
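To match the dotted alias IDs against the dashed live-API IDs, a normalization step like the one below could be used. This is a sketch; the regex is illustrative and intended only for Claude IDs (a dot between digits becomes a dash):

```python
import re

# Sketch: normalize the dotted alias form (claude-opus-4.5) to the
# dashed live-API form (claude-opus-4-5) so stale multiplier entries
# can be compared with the live Anthropic model list.
def normalize_claude_id(model_id: str) -> str:
    # Replace a dot only when it sits between two digits.
    return re.sub(r"(?<=\d)\.(?=\d)", "-", model_id)

print(normalize_claude_id("claude-opus-4.5"))  # → claude-opus-4-5
```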
### Inferred vs stored discrepancies

Note: `model_multipliers.json` uses an internal ET normalization (reference = `claude-sonnet-4.5` = 1.0), while GitHub billing multipliers use a separate scale. Direct numeric comparison is not meaningful; the table below highlights relative tier mismatches only.

| Model ID | Stored ET Multiplier | GitHub Billing New Mult | Notes |
| --- | --- | --- | --- |
| claude-sonnet-4-5 | 1.0 | 6 | Billing shows significant increase; ET ratio unchanged vs reference |
| claude-opus-4-5 | 5.0 | 15 | Opus tier increasing 3×; ET relative ratio may need review |
| claude-opus-4-6 | 5.0 | 27 | Very large billing increase; should verify ET multiplier |
| claude-opus-4-7 | 5.0 | 27 | Same; needs verification |
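Since the two scales differ, only the opus/sonnet ratios are comparable. A quick check with the numbers from the table:

```python
# Sketch: compare relative tiers across the two scales. ET is normalized
# to sonnet = 1.0; billing uses its own scale, so we compare ratios only.
stored_et = {"claude-sonnet-4-5": 1.0, "claude-opus-4-5": 5.0, "claude-opus-4-6": 5.0}
billing = {"claude-sonnet-4-5": 6, "claude-opus-4-5": 15, "claude-opus-4-6": 27}

for model in ("claude-opus-4-5", "claude-opus-4-6"):
    et_ratio = stored_et[model] / stored_et["claude-sonnet-4-5"]
    billing_ratio = billing[model] / billing["claude-sonnet-4-5"]
    print(model, et_ratio, billing_ratio)
# ET keeps opus at 5x sonnet, while billing puts opus-4-5 at 2.5x and
# opus-4-6 at 4.5x; that relative shift is what the table flags.
```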
## Proposed Alias Updates

### 1. Extend `gemini-flash` to explicitly cover Gemini 3 Flash

What: Confirm that the `gemini-flash` alias covers Gemini 3 Flash; optionally add an explicit `google/gemini-3*flash*` pattern.
Why: Gemini 3 Flash (`gemini-3-flash-preview`, `gemini-3.1-flash-*-preview`) is a new generation. It already matches the existing glob, but making coverage explicit aids documentation.

```yaml
models:
  gemini-flash:
    - "copilot/gemini-*flash*"
    - "google/gemini-*flash*"  # already covers gemini-3-flash-preview ✅
```

The existing wildcard `google/gemini-*flash*` already covers all Gemini 3 Flash variants. No change needed; this item just confirms coverage.
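The coverage claim can be spot-checked with shell-style matching. This assumes the alias patterns are evaluated fnmatch-style, which is an assumption about the alias engine rather than a documented fact:

```python
from fnmatch import fnmatch

# Sketch: verify the existing glob against live Gemini model IDs.
pattern = "google/gemini-*flash*"
candidates = [
    "google/gemini-3-flash-preview",
    "google/gemini-3.1-flash-lite-preview",
    "google/gemini-3-pro-preview",
]
for model in candidates:
    print(model, fnmatch(model, pattern))
# The two flash variants match; the pro model does not.
```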
### 2. Extend `gemini-pro` to explicitly cover Gemini 3 Pro

What: The existing `google/gemini-*pro*` glob already covers `gemini-3-pro-preview` and `gemini-3.1-pro-preview`.
Why: Confirming the pattern is sufficient; no change needed.
### 3. New `deep-research` alias

What: New semantic alias for Gemini deep-research models.
Why: The `deep-research-*` models are a distinct family (specialized research agents with very large context) not covered by any existing alias. They appear in Gemini's model list.

```yaml
models:
  deep-research:
    - "google/deep-research*"
    - "copilot/deep-research*"
```
### 4. New `gpt-5-pro` alias

What: New alias targeting the GPT-5 Pro tier models.
Why: `gpt-5-pro`, `gpt-5.2-pro`, `gpt-5.4-pro`, and `gpt-5.5-pro` form a distinct high-capability tier above base `gpt-5`. They are returned by the OpenAI API and appear in the billing docs, but no alias targets them specifically.

```yaml
models:
  gpt-5-pro:
    - "copilot/gpt-5*pro*"
    - "openai/gpt-5*pro*"
```

Update the `large` meta-alias to include `gpt-5-pro`:

```yaml
models:
  large:
    - "sonnet"
    - "gpt-5-pro"
    - "gpt-5"
    - "gemini-pro"
```
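Resolution through the meta-alias can be sketched as below. The `ALIASES` table is a trimmed mirror of the YAML, and the recursive resolver is an illustrative assumption about how nested aliases might be expanded (again assuming fnmatch-style globs):

```python
from fnmatch import fnmatch

# Trimmed mirror of the alias config above; meta-alias entries that name
# another alias are expanded recursively, everything else is a glob.
ALIASES = {
    "gpt-5-pro": ["copilot/gpt-5*pro*", "openai/gpt-5*pro*"],
    "large": ["gpt-5-pro"],
}

def matches(alias: str, model_id: str) -> bool:
    for entry in ALIASES.get(alias, []):
        if entry in ALIASES:  # nested alias: recurse
            if matches(entry, model_id):
                return True
        elif fnmatch(model_id, entry):  # plain glob
            return True
    return False

print(matches("large", "openai/gpt-5.4-pro"))  # → True
```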
### 5. External vendor aliases (Grok, Raptor)

What: GitHub billing docs list "Grok Code Fast 1" and "Raptor mini" as Copilot-served models, suggesting new vendor support.
Why: No IDs for these were found in the live Copilot inventory (the HTTP 400 error prevented Copilot API inspection). Once Copilot API access is restored, consider:

```yaml
models:
  grok-fast:
    - "copilot/grok*fast*"
    - "xai/grok*fast*"
```

Deferred until the Copilot API issue is resolved.
## Full Model Lists by Provider

### Anthropic (9 models)
- claude-haiku-4-5-20251001
- claude-opus-4-1-20250805
- claude-opus-4-20250514
- claude-opus-4-5-20251101
- claude-opus-4-6
- claude-opus-4-7
- claude-sonnet-4-20250514
- claude-sonnet-4-5-20250929
- claude-sonnet-4-6
### Gemini (selected — 50 total)
- deep-research-max-preview-04-2026
- deep-research-preview-04-2026
- deep-research-pro-preview-12-2025
- gemini-2.0-flash, gemini-2.0-flash-001
- gemini-2.0-flash-lite, gemini-2.0-flash-lite-001
- gemini-2.5-computer-use-preview-10-2025
- gemini-2.5-flash, gemini-2.5-flash-image, gemini-2.5-flash-lite
- gemini-2.5-pro
- gemini-3-flash-preview
- gemini-3-pro-image-preview, gemini-3-pro-preview
- gemini-3.1-flash-image-preview, gemini-3.1-flash-lite-preview
- gemini-3.1-pro-preview, gemini-3.1-pro-preview-customtools
- gemini-flash-latest, gemini-flash-lite-latest, gemini-pro-latest
- nano-banana-pro-preview (experimental)
- (plus gemma-3/4, imagen-4, veo-2/3, lyria — non-text models)
### OpenAI (selected GPT-5 + reasoning — 198 total)
- gpt-5, gpt-5-2025-08-07, gpt-5-chat-latest
- gpt-5-codex, gpt-5-codex-alpha/beta/mini-alpha/mini-beta
- gpt-5-mini, gpt-5-mini-2025-08-07
- gpt-5-nano, gpt-5-nano-2025-08-07
- gpt-5-pro, gpt-5-pro-2025-10-06
- gpt-5.1, gpt-5.1-2025-11-13, gpt-5.1-chat-latest
- gpt-5.1-codex, gpt-5.1-codex-max, gpt-5.1-codex-mini
- gpt-5.2, gpt-5.2-2025-12-11, gpt-5.2-chat-latest
- gpt-5.2-codex, gpt-5.2-pro, gpt-5.2-pro-2025-12-11
- gpt-5.3-chat-latest, gpt-5.3-codex
- gpt-5.4, gpt-5.4-2026-03-05, gpt-5.4-mini, gpt-5.4-nano, gpt-5.4-pro
- gpt-5.5, gpt-5.5-2026-04-23, gpt-5.5-pro, gpt-5.5-pro-2026-04-23
- o1-pro, o3-2025-04-16, o3-mini, o3-pro, o4-mini, o4-mini-deep-research
## Notes

- Copilot API unavailable: The Copilot API returned HTTP 400 for this run, so Copilot-prefixed patterns (`copilot/*`) could not be validated. The analysis relies on direct vendor APIs only.
- Existing wildcard coverage: The current `gemini-*flash*` and `gemini-*pro*` globs already cover all Gemini 3.x and 3.1.x models returned by the live API. No urgent pattern changes are needed for these.
- GPT-5 codex variants: The `gpt-5-codex` alias uses `gpt-5*codex*`, which correctly covers `gpt-5.1-codex`, `gpt-5.2-codex`, `gpt-5.3-codex`, etc.
- Billing multiplier scale: The GitHub billing docs multipliers are independent of the internal ET multipliers in `model_multipliers.json`. The discrepancies table above flags relative tier changes worth reviewing, not direct numeric mismatches.
Generated by Daily Model Inventory Checker