test(utils): support silent model fallbacks in TestRig #23570
Conversation
Support model fallbacks in integration tests by allowing a comma-separated list of models via the `GEMINI_MODEL` environment variable or the `model` option in `TestRig.setup()`. This will automatically configure a fallback chain using `dynamicModelConfiguration` with `silent` actions. Fixes #23568
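As a rough illustration of the parsing step this PR describes, splitting a comma-separated `GEMINI_MODEL` value into an ordered fallback chain might look like the sketch below. The function name `buildModelChain` is hypothetical, not the PR's actual API; only the comma-separated format comes from the description above.

```typescript
// Hypothetical sketch: turn a comma-separated GEMINI_MODEL value into an
// ordered list of models (first entry primary, the rest fallbacks).
function buildModelChain(envValue: string): string[] {
  return envValue
    .split(',')
    .map((m) => m.trim())
    .filter((m) => m.length > 0);
}

const chain = buildModelChain('gemini-2.5-pro, gemini-2.5-flash');
// chain[0] is the primary model; later entries are tried on failure.
```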
Code Review
This pull request enhances the TestRig in packages/test-utils to allow specifying a model or a comma-separated list of models for integration tests, either via setup options or the GEMINI_MODEL environment variable. When multiple models are provided, it sets up a dynamic model configuration with a fallback chain. However, the integration-test-model is currently defined with tier: 'auto', which will bypass the intended fallback chain. The tier property should be removed to ensure the fallback mechanism functions as designed.
```ts
modelSettings['modelConfigs'] = {
  modelDefinitions: {
    [chainName]: {
      tier: 'auto',
```
The `integration-test-model` is defined with `tier: 'auto'`, which will cause the `ModelRouterService` to treat it as an auto-classifying model and trigger the `ClassifierStrategy`. This will resolve to a single concrete model, bypassing the fallback chain you've defined and defeating the purpose of this change.

To ensure the chain is used as intended, the `integration-test-model` should not be classified as an auto tier model. Please remove the `tier` property from its definition.
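For illustration, a corrected definition would drop `tier` entirely. The shape below is an assumption based on the snippet shown above; the `fallbackChain` field name is hypothetical, not a verified part of `dynamicModelConfiguration`.

```typescript
// Illustrative config shape only; field names other than modelDefinitions
// and chainName are assumptions, not the package's verified schema.
const chainName = 'integration-test-model';

const modelConfigs = {
  modelDefinitions: {
    [chainName]: {
      // No `tier: 'auto'` here: an auto tier would route through the
      // classifier and bypass the fallback chain, per the review comment.
      fallbackChain: ['gemini-2.5-pro', 'gemini-2.5-flash'],
    },
  },
};
```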
Size Change: -4 B (0%) Total Size: 26.5 MB
Description
This PR adds support for automatically testing multiple models in `TestRig`.

Currently, integration tests can hang if they encounter a quota exhaustion error (or other persistent errors), as the test rig defaults to waiting for user input. This PR introduces a model fallback chain in the test configuration. When a model fails, the fallback mechanism silently switches to the next available model.

Changes included:

- Added a `model` option to `TestRig.setup()` options, accepting a comma-separated list of models.
- Automatically configures an `integration-test-model` chain when multiple models are specified, using `dynamicModelConfiguration`.
- Sets the `terminal`, `transient`, `not_found`, and `unknown` states to `silent` inside the fallback configuration to prevent tests from hanging on user prompts.
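The silent-fallback mapping in the last bullet can be sketched as a simple state-to-action table. The state names come from the PR description; the exact configuration schema is an assumption for illustration.

```typescript
// Hypothetical sketch: map every fallback state to 'silent' so integration
// tests never block on an interactive prompt. State names are from the PR
// description; the Record shape is illustrative, not the verified schema.
type FallbackState = 'terminal' | 'transient' | 'not_found' | 'unknown';

const fallbackActions: Record<FallbackState, 'silent'> = {
  terminal: 'silent',
  transient: 'silent',
  not_found: 'silent',
  unknown: 'silent',
};
```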
Fixes #23568
Testing
```sh
vitest run integration-tests/model-fallback.test.ts
```