# feat: let model providers own model discovery (#18950)
Merged
pakrym-oai approved these changes on Apr 24, 2026.
## Why

`codex-models-manager` had grown to own provider-specific concerns: constructing OpenAI-compatible `/models` requests, resolving provider auth, emitting request telemetry, and deciding how provider catalogs should be sourced. That made the manager harder to reuse for providers whose model catalog is not fetched from the OpenAI `/models` endpoint, such as Amazon Bedrock.

This change moves provider-specific model discovery behind provider-owned implementations, so the models manager can focus on refresh policy, cache behavior, picker ordering, and model metadata merging.
## What Changed

- Introduces a `ModelsManager` trait with separate `OpenAiModelsManager` and `StaticModelsManager` implementations.
- Extracts a `ModelsEndpointClient` so OpenAI-compatible HTTP fetching lives outside `codex-models-manager`.
- Moves `/models` request construction, provider auth resolution, timeout handling, and request telemetry into `codex-model-provider` via `OpenAiModelsEndpoint`.
- Updates `models_manager(...)` construction so configured OpenAI-compatible providers use `OpenAiModelsManager`, while static/catalog-backed providers can return `StaticModelsManager`.
- Exposes managers behind `Arc<dyn ModelsManager>`.
- Moves shared test helpers into `codex_models_manager::test_support`.

## Metadata References
The Bedrock catalog metadata is based on the official Amazon Bedrock OpenAI model documentation:

- `128,000`-token context window for `gpt-oss-20b` and `gpt-oss-120b`.
- The `gpt-oss-120b` model card lists the `bedrock-runtime` model ID `openai.gpt-oss-120b-1:0`, the `bedrock-mantle` model ID `openai.gpt-oss-120b`, text-only modalities, and a `128K` context window.
- The `gpt-oss-120b` model docs describe configurable reasoning effort with `low`, `medium`, and `high`, plus text input/output modality.

The display names, default reasoning effort, and priority ordering are Codex-local catalog choices.
## Test Plan

The response returned the Bedrock catalog with `openai.gpt-oss-120b-1:0` as the default model and `openai.gpt-oss-20b-1:0` as the second listed model, both text-only and supporting low/medium/high reasoning effort.