Bump Gemini models to 2.5 #5285
base: main
Conversation
Pull request overview
This PR updates Gemini model references from version 2.0 to version 2.5 across the codebase. The changes bump all 2.0 models to their 2.5 equivalents and replace preview model names (e.g., gemini-2.5-flash-preview-04-17) with their stable release names (e.g., gemini-2.5-flash).
Key Changes:
- Updated Gemini model versions from 2.0 to 2.5 across all test configurations
- Changed from preview model names to stable release names (removed the `-preview-04-17` suffix)
- Updated documentation and examples to reference the new model versions
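For context, the rename follows a consistent pattern across the config files listed below. A sketch of what one such entry looks like before and after, assuming TensorZero's usual `[models.…]` TOML layout (the keys shown here are illustrative, not copied verbatim from the diff):

```toml
# Before (preview-suffixed name, illustrative):
# [models."gemini-2.5-flash-preview-04-17"]

# After (stable release name):
[models."gemini-2.5-flash"]
routing = ["google_ai_studio_gemini"]

[models."gemini-2.5-flash".providers.google_ai_studio_gemini]
type = "google_ai_studio_gemini"
model_name = "gemini-2.5-flash"
```

Every test config, example, and doc snippet in the file list below applies the same substitution.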
Reviewed changes
Copilot reviewed 25 out of 26 changed files in this pull request and generated 1 comment.
| File | Description |
|---|---|
| ui/fixtures/config/tensorzero.toml | Updated variant model references from preview to stable 2.5-flash |
| tensorzero-optimizers/tests/common/gcp_vertex_gemini_sft.rs | Updated optimizer test model from 2.0-flash-lite-001 to 2.5-flash-lite |
| tensorzero-core/tests/e2e/providers/google_ai_studio_gemini.rs | Updated all test provider model names from 2.0 to 2.5 versions |
| tensorzero-core/tests/e2e/providers/gcp_vertex_gemini.rs | Updated test provider model names and shorthand URLs to 2.5 versions |
| tensorzero-core/tests/e2e/providers/common.rs | Updated model references in test configurations to 2.5 versions |
| tensorzero-core/tests/e2e/db/select_queries.rs | Updated query assertions to match new model names |
| tensorzero-core/tests/e2e/config/tensorzero.models.toml | Updated model definitions and provider configurations to 2.5 versions |
| tensorzero-core/tests/e2e/config/tensorzero.functions.weather_helper.toml | Updated function variant model references to 2.5 |
| tensorzero-core/tests/e2e/config/tensorzero.functions.toml | Updated evaluator model references to 2.5-flash |
| tensorzero-core/tests/e2e/config/tensorzero.functions.json_success.toml | Updated all variant model references to 2.5 versions |
| tensorzero-core/tests/e2e/config/tensorzero.functions.dynamic_json.toml | Updated variant model references to 2.5 versions |
| tensorzero-core/tests/e2e/config/tensorzero.functions.basic_test.toml | Updated variant model references and shorthand URLs to 2.5 |
| tensorzero-core/tests/e2e/best_of_n.rs | Updated test assertions and model name checks to 2.5-flash |
| tensorzero-core/src/providers/gcp_vertex_gemini/optimization.rs | Updated test mock data to reference 2.5-flash-tuned models |
| tensorzero-core/src/providers/gcp_vertex_gemini/mod.rs | Updated unit tests with new model names and shorthand URL examples |
| tensorzero-core/src/config/tests.rs | Updated model configuration test to use 2.5-flash |
| recipes/supervised_fine_tuning/gcp-vertex-gemini/gcp_vertex_gemini_nb.py | Updated default model name to 2.5-flash-lite |
| recipes/supervised_fine_tuning/gcp-vertex-gemini/gcp_vertex_gemini.ipynb | Updated notebook default model name to 2.5-flash-lite |
| examples/guides/providers/google-ai-studio-gemini/config/tensorzero.toml | Updated example configuration to use 2.5-flash-lite |
| examples/guides/providers/gcp-vertex-ai-gemini/config/tensorzero.toml | Updated example configuration to use 2.5-flash |
| docs/integrations/model-providers/google-ai-studio-gemini.mdx | Updated documentation examples to reference 2.5-flash-lite |
| docs/integrations/model-providers/gcp-vertex-ai-gemini.mdx | Updated documentation examples to reference 2.5-flash |
| docs/gateway/configure-models-and-providers.mdx | Updated shorthand example from 2.0-flash-exp to 2.5-flash |
| docs/gateway/configuration-reference.mdx | Updated configuration examples to use 2.5-flash |
| .github/workflows/batch-completion-cron.yml | Updated batch inference workflow to use 2.5-flash |
```diff
 supports_batch_inference: false,
 variant_name: "gcp-vertex-gemini-flash-lite-tuned".to_string(),
-model_name: "gemini-2.0-flash-lite-tuned".into(),
+model_name: "gemini-2.5-flash-lite-tuned".into(),
```
Copilot (AI) commented on Dec 18, 2025:
The model name was updated to "gemini-2.5-flash-lite-tuned" here, but the corresponding model definition in tensorzero.models.toml still uses the old name "gemini-2.0-flash-lite-tuned". This creates an inconsistency where tests will reference a model name that doesn't exist in the configuration. Either revert this change to keep using "gemini-2.0-flash-lite-tuned" (if the tuned model is intentionally based on 2.0), or update the model definition and all function variant references to use "gemini-2.5-flash-lite-tuned".
Suggested change:

```diff
-model_name: "gemini-2.5-flash-lite-tuned".into(),
+model_name: "gemini-2.0-flash-lite-tuned".into(),
```
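If the maintainers instead take the second option Copilot describes (updating the definition rather than reverting the test), the entry in tensorzero.models.toml would need a matching rename. A rough sketch of that alignment, with the surrounding provider configuration left untouched (the exact keys depend on the actual file, which this page does not show):

```toml
# Old key, which the updated test no longer matches:
# [models."gemini-2.0-flash-lite-tuned"]

# New key, so the e2e test's model_name resolves:
[models."gemini-2.5-flash-lite-tuned"]
# ...existing routing/provider configuration for the tuned model stays as-is...
```

Any function variants that reference the tuned model by name would need the same rename, as the comment notes.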