Remove legacy ModelInfo and merge it with ModelFamily #7748

aibrahim-oai merged 10 commits into main from

Conversation
@codex review this
💡 Codex Review
Here are some automated review suggestions for this pull request.
    let config = Self {
        model,
        review_model,
        model_context_window,
        model_auto_compact_token_limit,
        model_context_window: cfg.model_context_window,
        model_auto_compact_token_limit: cfg.model_auto_compact_token_limit,
Preserve derived defaults for model context limits
Loading config no longer derives model_context_window and model_auto_compact_token_limit from the selected model family; these fields now mirror only the raw TOML/profile values. In the provided fixtures (e.g., the o3, gpt3, and gpt5 profiles in create_test_fixture) those settings are omitted, so Config::load_from_base_config_with_overrides now returns None for both fields where the tests and callers previously received model-specific defaults (200_000/180_000 for o3, 16_385/14_746 for gpt‑3.5, 272_000/244_800 for gpt‑5.1). This will break the precedence tests around lines 2923–3235 and removes context-window metadata from telemetry for default profiles. Please restore deriving these defaults from the model family when the user has not overridden them.
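A minimal sketch of the fallback the review is asking to restore: an explicit TOML/profile value should win, and only when the user has not overridden it should the model-family default apply. All names here (`FamilyDefaults`, `resolve_context_window`) are illustrative, not the actual codex-rs API.

```rust
// Hypothetical per-family defaults (numbers taken from the review comment).
struct FamilyDefaults {
    context_window: Option<u64>,
    auto_compact_token_limit: Option<u64>,
}

// Prefer the user's explicit config value; otherwise fall back to the
// model-family default so telemetry still sees context-window metadata.
fn resolve_context_window(user_value: Option<u64>, family: &FamilyDefaults) -> Option<u64> {
    user_value.or(family.context_window)
}

fn main() {
    let o3 = FamilyDefaults {
        context_window: Some(200_000),
        auto_compact_token_limit: Some(180_000),
    };
    // No user override: the family default is used, not None.
    assert_eq!(resolve_context_window(None, &o3), Some(200_000));
    // An explicit override takes precedence over the family default.
    assert_eq!(resolve_context_window(Some(64_000), &o3), Some(64_000));
    println!("ok");
}
```

The same `user_value.or(default)` shape would apply to `model_auto_compact_token_limit`, keeping the precedence the existing tests expect.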
    (context_window * 9) / 10
    }

    fn finalize_auto_compact_default(mut self) -> Self {
Can you make auto_compact_token_limit a getter instead?
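A sketch of the getter being suggested, under the assumption that the limit defaults to 90% of the context window (the `(context_window * 9) / 10` expression in the diff above). Instead of finalizing a stored field at construction time, the value is computed on demand; field and method names are illustrative.

```rust
struct Config {
    model_context_window: Option<u64>,
    model_auto_compact_token_limit: Option<u64>,
}

impl Config {
    /// An explicit override wins; otherwise derive 90% of the context window.
    fn auto_compact_token_limit(&self) -> Option<u64> {
        self.model_auto_compact_token_limit
            .or_else(|| self.model_context_window.map(|cw| (cw * 9) / 10))
    }
}

fn main() {
    // No explicit limit: derived from the 272_000 context window.
    let cfg = Config {
        model_context_window: Some(272_000),
        model_auto_compact_token_limit: None,
    };
    assert_eq!(cfg.auto_compact_token_limit(), Some(244_800));

    // Explicit limit: returned unchanged.
    let overridden = Config {
        model_context_window: Some(272_000),
        model_auto_compact_token_limit: Some(100_000),
    };
    assert_eq!(overridden.auto_compact_token_limit(), Some(100_000));
    println!("ok");
}
```

A getter like this avoids a finalization step that can silently bake in a stale value if `model_context_window` changes after construction.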
    review_model: OPENAI_DEFAULT_REVIEW_MODEL.to_string(),
    model_context_window: Some(272_000),
    model_auto_compact_token_limit: Some(244_800),
    model_context_window: None,
Do we have coverage for this?
Mostly interested in ensuring model_auto_compact_token_limit is set correctly for well-known models.
codex/codex-rs/core/tests/suite/compact.rs, lines 454 to 466 in b519267
This should cover it, although it's not very precise.
This is a step towards removing the need to know `model` when constructing the config. First, we don't need to know `model_info`; we just respect it if the user has already set it. Next step: we won't need to know `model` at all unless the user explicitly set it in `config.toml`.