[AI] Add configurable model generation for AI On-Device #8043
Conversation
Introduced `GenerationConfig`, `ModelConfig`, `ModelReleaseStage`, and `ModelPreference` to `firebase-ai-ondevice-interop` to allow for configurable model selection. Updated the internal `genaiPrompt` dependency to `1.0.0-beta2` to support the new configuration options. Deprecated the parameter-less `FirebaseAIOnDeviceGenerativeModelFactory.newGenerativeModel()` method in favor of a new overload that accepts a `GenerationConfig`.
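Based on the description, the new factory surface might look roughly like the sketch below. All class bodies here are hypothetical stand-ins for illustration only; the real definitions live in `firebase-ai-ondevice-interop` and may differ.

```kotlin
// Hypothetical stand-ins sketching the shape of the new interop surface;
// the real types in firebase-ai-ondevice-interop may differ.
enum class ModelReleaseStage { STABLE, BETA }
enum class ModelPreference { PREFER_STABLE, PREFER_LATEST }

class ModelConfig(
    val releaseStage: ModelReleaseStage,
    val preference: ModelPreference,
)

class GenerationConfig(val modelConfig: ModelConfig)

class GenerativeModel(val config: GenerationConfig?)

class FirebaseAIOnDeviceGenerativeModelFactory {
    @Deprecated("Use the overload that accepts a GenerationConfig.")
    fun newGenerativeModel(): GenerativeModel = GenerativeModel(config = null)

    // New overload: model selection is driven by the supplied configuration.
    fun newGenerativeModel(generationConfig: GenerationConfig): GenerativeModel =
        GenerativeModel(generationConfig)
}
```

Callers would construct a `GenerationConfig` and pass it to the new overload instead of relying on the deprecated zero-argument method.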
📝 PRs merging into main branch: Our main branch should always be in a releasable state. If you are working on a larger change, or if you don't want this change to see the light of day just yet, consider using a feature branch first, and only merge into the main branch when the code is complete and ready to be released.
/gemini review
Code Review
This pull request introduces configuration options for the on-device generative model by adding GenerationConfig and ModelConfig classes to the interop layer. It updates the FirebaseAIOnDeviceGenerativeModelFactory to support these configurations, deprecating the parameterless model creation method. Additionally, it includes converter functions to bridge interop and MLKit types and updates the genaiPrompt dependency version. Feedback was provided regarding incorrect ReplaceWith expressions in deprecation annotations and a minor inconsistency in KDoc documentation.
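On the `ReplaceWith` feedback: the expression inside `ReplaceWith` must compile when substituted at a call site, so it may reference only the deprecated member's own parameters and names resolvable there. A minimal, hypothetical illustration of a valid expression (the config class and factory here are stand-ins, not the PR's actual types):

```kotlin
// Stand-in config used only for this illustration.
class GenerationConfig(val label: String = "default")

class ModelFactory {
    // Valid ReplaceWith: the substitution references only names that are
    // resolvable at the call site (here, the GenerationConfig constructor).
    @Deprecated(
        message = "Use the overload that accepts a GenerationConfig.",
        replaceWith = ReplaceWith("newGenerativeModel(GenerationConfig())"),
    )
    fun newGenerativeModel(): String = newGenerativeModel(GenerationConfig())

    fun newGenerativeModel(config: GenerationConfig): String =
        "model(${config.label})"
}
```

If the replacement expression needs a type that call sites may not have imported, `ReplaceWith` also accepts import strings as additional arguments.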
…le/firebase/ai/ondevice/interop/FirebaseAIOnDeviceGenerativeModelFactory.kt Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…ase/ai/ondevice/FirebaseAIOnDeviceComponent.kt Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
…le/firebase/ai/ondevice/interop/FirebaseAIOnDeviceGenerativeModelFactory.kt Co-authored-by: gemini-code-assist[bot] <176961590+gemini-code-assist[bot]@users.noreply.github.com>
/gemini review
Code Review
This pull request introduces support for model selection and performance configuration (required for nano-v4) across the Firebase AI on-device modules. Key changes include the addition of GenerationConfig and ModelConfig in the interop layer, updates to the FirebaseAIOnDevice API to support these options during status checks and downloads, and integration into the high-level GenerativeModel configuration. Feedback focuses on reducing code duplication for the OnDeviceModelOption class, ensuring exhaustive when blocks for safety, implementing standard equals and hashCode methods for robustness, and utilizing Kotlin data classes to reduce boilerplate in configuration objects.
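On the data-class suggestion: declaring the configuration holders as Kotlin data classes yields structural `equals`, `hashCode`, `toString`, and `copy` for free, which addresses both the equals/hashCode and the boilerplate points at once. A hypothetical sketch (names mirror the PR; the fields are illustrative, not the actual definitions):

```kotlin
enum class ModelReleaseStage { STABLE, BETA }
enum class ModelPreference { PREFER_STABLE, PREFER_LATEST }

// Data classes generate equals/hashCode/toString/copy, so the config
// objects compare structurally without hand-written boilerplate.
data class ModelConfig(
    val releaseStage: ModelReleaseStage = ModelReleaseStage.STABLE,
    val preference: ModelPreference = ModelPreference.PREFER_STABLE,
)

data class GenerationConfig(val modelConfig: ModelConfig = ModelConfig())
```

With this shape, two configs built from the same values compare equal, and `copy` gives callers a concise way to tweak a single field.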
The public API surface has changed for the subprojects ai-logic_firebase-ai-ondevice and ai-logic_firebase-ai. Please update the api.txt files for the affected subprojects by running ./gradlew ${subproject}:generateApiTxtFile, and perform a major/minor bump accordingly.
Metalava check is complaining about major changes (public preview API) and minor changes (already a minor bump). Safe to ignore.
milaGGL left a comment
LGTM, just some minor questions