Conversation

@Nash0x7E2 Nash0x7E2 commented Nov 18, 2025

Summary by CodeRabbit

  • New Features

    • Added thinking level and media resolution configuration options for enhanced model control
  • Installation & Dependencies

    • Updated installation method; minimum google-genai version requirement increased to 1.51.0
  • Documentation

    • Added Gemini 3 migration guide with updated configuration best practices
  • Updates

    • Updated default model selection and configuration initialization

coderabbitai bot commented Nov 18, 2025

Walkthrough

Updates the Gemini plugin to support Gemini 3 models with version bumps, new configuration options (thinking_level, media_resolution), and a DEFAULT_MODEL constant. Documentation includes migration guidance, and example code adopts default LLM initialization.
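
For orientation, here is a minimal sketch of how the new options might be wired together, assuming the parameter names listed in this walkthrough (thinking_level, media_resolution, config) and the plugin's re-exported google.genai.types enums; the exact enum members shown are assumptions to verify against google-genai >= 1.51.0.

from google.genai.types import GenerateContentConfig
from vision_agents.plugins import gemini

# Uses DEFAULT_MODEL ("gemini-3-pro-preview") when no model is given.
# ThinkingLevel.HIGH and MediaResolution.MEDIA_RESOLUTION_MEDIUM are assumed
# member names; check the installed google-genai version before relying on them.
llm = gemini.LLM(
    thinking_level=gemini.ThinkingLevel.HIGH,
    media_resolution=gemini.MediaResolution.MEDIA_RESOLUTION_MEDIUM,
)

# Alternatively, pass a prebuilt GenerateContentConfig via the new config parameter.
llm_from_config = gemini.LLM(config=GenerateContentConfig(temperature=1.0))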

Changes

  • Example Code Updates (examples/01_simple_agent_example/simple_agent_example.py)
    Removed the explicit model identifier from LLM initialization; the example now uses the gemini.LLM() default instead of gemini.LLM("gemini-2.5-flash-lite").
  • Documentation & Migration (plugins/gemini/README.md)
    Installation command updated to uv add vision-agents[gemini], google-genai dependency bumped to >=1.51.0, and a new Gemini 3 migration section added with guidance on thinking_level, temperature, and token consumption changes.
  • Dependency Configuration (plugins/gemini/pyproject.toml)
    google-genai version requirement updated from >=1.29.0 to >=1.51.0.
  • API Exports (plugins/gemini/vision_agents/plugins/gemini/__init__.py)
    Added exports for ThinkingLevel and MediaResolution from google.genai.types to the public API.
  • Core LLM Implementation (plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py)
    Introduced a DEFAULT_MODEL constant ("gemini-3-pro-preview"), extended __init__ with thinking_level, media_resolution, and config parameters, added a _build_config helper for constructing GenerateContentConfig with system instructions and multimodal settings, and updated config handling throughout the streaming and tool-call flows.

Estimated code review effort

🎯 3 (Moderate) | ⏱️ ~20 minutes

  • Primary attention area: gemini_llm.py — review the _build_config logic, config propagation through streaming/tool-response paths, and kwargs handling to ensure all new parameters integrate correctly
  • Secondary attention: Verify DEFAULT_MODEL constant aligns with intended default behavior and that migration guidance in README is complete
  • Cross-check: Ensure dependency bump to google-genai>=1.51.0 is compatible with all usage patterns of ThinkingLevel and MediaResolution

Possibly related PRs

Suggested labels

plugins, dependencies, examples, docs, config

Suggested reviewers

  • dangusev
  • d3xvn

Poem

A bell jar of models descends—gemini-3-pro-preview
descends like mercury, thick with thinking,
configuration settles in the throat like pills.
We build and rebuild, config upon config,
a Möbius strip of migration, where
version numbers bloom and wilt,
and defaults swallow the old names whole.

Pre-merge checks and finishing touches

❌ Failed checks (1 inconclusive)
  • Title check: ❓ Inconclusive. The title 'Update for Gemini 3' is vague and generic, using non-descriptive language that doesn't convey specific details about the changeset. Resolution: consider a more specific title that highlights the primary change, such as 'Upgrade Gemini plugin to support Gemini 3 with thinking level and media resolution' or similar.
✅ Passed checks (2 passed)
  • Description Check: ✅ Passed. Check skipped because CodeRabbit's high-level summary is enabled.
  • Docstring Coverage: ✅ Passed. No functions found in the changed files to evaluate docstring coverage. Skipping docstring coverage check.
✨ Finishing touches
  • 📝 Generate docstrings
  • 🧪 Generate unit tests (beta)
    • Create PR with unit tests
    • Post copyable unit tests in a comment
    • Commit unit tests in branch feat/gemini-3

Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.

Comment @coderabbitai help to get the list of available commands and usage tips.

@coderabbitai coderabbitai bot left a comment

Actionable comments posted: 2

Caution

Some comments are outside the diff and can’t be posted inline due to platform limitations.

⚠️ Outside diff range comments (1)
examples/01_simple_agent_example/simple_agent_example.py (1)

21-21: Update outdated model reference in comment.

The comment references "gemini-2.5-flash-lite", but the code now uses the default model "gemini-3-pro-preview".

Apply this diff:

-- gemini-2.5-flash-lite for fast responses
+- gemini-3-pro-preview for fast responses
🧹 Nitpick comments (2)
plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (2)

86-92: Clarify config and kwargs precedence.

If both config and **kwargs are provided, kwargs are silently ignored. This could confuse users who expect kwargs to extend or override the config.

Consider either:

  1. Documenting this precedence clearly in the docstring
  2. Raising an error if both are provided
  3. Merging kwargs into the provided config

Example docstring addition:

             config: Optional[GenerateContentConfig] to use as base. Any kwargs will be passed
                 to GenerateContentConfig constructor if config is not provided.
+                Note: If config is provided, kwargs are ignored.
             **kwargs: Additional arguments passed to GenerateContentConfig constructor.
+                Only used if config is not provided.
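
If option 3 (merging) is preferred, one possible shape, given that GenerateContentConfig is a Pydantic model, is a small helper like the sketch below; the helper name and the use of model_copy(update=...) are illustrative assumptions, not code from this PR.

from typing import Optional

from google.genai.types import GenerateContentConfig

def _resolve_config(
    config: Optional[GenerateContentConfig] = None, **kwargs
) -> GenerateContentConfig:
    # No config supplied: build one directly from kwargs.
    if config is None:
        return GenerateContentConfig(**kwargs)
    # Config supplied plus kwargs: copy and apply kwargs as overrides,
    # leaving the caller's object untouched.
    if kwargs:
        return config.model_copy(update=kwargs)
    return config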

169-187: Simplify config handling logic for better maintainability.

The config handling has multiple conditional branches that are difficult to follow. The logic is functionally correct but could be clearer.

Consider consolidating the logic:

         # Add tools if available - Gemini uses GenerateContentConfig
         tools_spec = self.get_available_functions()
+        
+        # Get or build base config
+        cfg = kwargs.get("config")
+        if not isinstance(cfg, GenerateContentConfig):
+            cfg = self._build_config()
+        else:
+            cfg = self._build_config(base_config=cfg)
+        
+        # Add tools if available
         if tools_spec:
             conv_tools = self._convert_tools_to_provider_format(tools_spec)
-            cfg = kwargs.get("config")
-            if not isinstance(cfg, GenerateContentConfig):
-                cfg = self._build_config()
-            else:
-                cfg = self._build_config(base_config=cfg)
             cfg.tools = conv_tools  # type: ignore[assignment]
-            kwargs["config"] = cfg
-        else:
-            cfg = kwargs.get("config")
-            if cfg is None or not isinstance(cfg, GenerateContentConfig):
-                cfg = self._build_config()
-                kwargs["config"] = cfg
-            elif self.thinking_level or self.media_resolution:
-                cfg = self._build_config(base_config=cfg)
-                kwargs["config"] = cfg
+        
+        kwargs["config"] = cfg
📜 Review details

Configuration used: Path: .coderabbit.yaml

Review profile: CHILL

Plan: Pro

Disabled knowledge base sources:

  • Linear integration is disabled by default for public repositories

You can enable these sources in your CodeRabbit configuration.

📥 Commits

Reviewing files that changed from the base of the PR and between 7693813 and 26853e5.

⛔ Files ignored due to path filters (1)
  • uv.lock is excluded by !**/*.lock
📒 Files selected for processing (5)
  • examples/01_simple_agent_example/simple_agent_example.py (1 hunks)
  • plugins/gemini/README.md (2 hunks)
  • plugins/gemini/pyproject.toml (1 hunks)
  • plugins/gemini/vision_agents/plugins/gemini/__init__.py (1 hunks)
  • plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (4 hunks)
🧰 Additional context used
🧬 Code graph analysis (2)
examples/01_simple_agent_example/simple_agent_example.py (2)
plugins/gemini/tests/test_gemini_llm.py (1)
  • llm (31-34)
agents-core/vision_agents/core/llm/llm.py (1)
  • LLM (49-418)
plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (2)
agents-core/vision_agents/core/llm/llm.py (2)
  • get_available_functions (227-229)
  • _convert_tools_to_provider_format (122-136)
plugins/gemini/vision_agents/plugins/gemini/gemini_realtime.py (1)
  • _convert_tools_to_provider_format (448-470)
⏰ Context from checks skipped due to timeout of 90000ms. You can increase the timeout in your CodeRabbit configuration to a maximum of 15 minutes (900000ms). (6)
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Ruff
  • GitHub Check: unit / Mypy
  • GitHub Check: unit / Test "not integration"
  • GitHub Check: unit / Ruff
  • GitHub Check: unit / Mypy
🔇 Additional comments (9)
examples/01_simple_agent_example/simple_agent_example.py (1)

29-29: LGTM! Simplified initialization using defaults.

The change to use gemini.LLM() without explicit model specification leverages the new DEFAULT_MODEL constant, making the example cleaner.

plugins/gemini/vision_agents/plugins/gemini/__init__.py (1)

3-5: LGTM! API expansion for Gemini 3 configuration.

Exposing ThinkingLevel and MediaResolution enables users to configure these options when initializing GeminiLLM, as documented in the migration guide.

plugins/gemini/README.md (2)

8-8: LGTM! Updated installation instruction.

The installation command now uses the Vision Agents extras syntax, which is clearer and more maintainable.


113-120: LGTM! Helpful migration guidance for Gemini 3.

The migration section provides clear, actionable advice for users upgrading from Gemini 2.5, covering thinking level, temperature, PDF handling, and token consumption considerations.

plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py (4)

216-275: LGTM! Robust multi-hop tool calling with safeguards.

The implementation includes important safeguards:

  • MAX_ROUNDS limit prevents infinite loops
  • Deduplication prevents redundant tool executions
  • Output sanitization prevents oversized responses
  • Proper error handling in the execution flow
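
For readers unfamiliar with the pattern, the sketch below shows a generic multi-hop tool-calling loop with the same safeguards; the helper names (extract_calls, execute_tool, send) and the limits are hypothetical and do not mirror the plugin's actual implementation.

MAX_ROUNDS = 5
MAX_OUTPUT_CHARS = 8_000

async def tool_call_loop(send, extract_calls, execute_tool, first_response):
    response = first_response
    seen: set = set()
    for _ in range(MAX_ROUNDS):  # hard cap prevents infinite tool loops
        calls = extract_calls(response)
        if not calls:
            break
        results = []
        for call in calls:
            key = (call.name, repr(sorted(call.args.items())))
            if key in seen:  # skip duplicate tool invocations
                continue
            seen.add(key)
            try:
                output = await execute_tool(call.name, call.args)
            except Exception as exc:  # keep the loop alive on tool failures
                output = f"tool error: {exc}"
            results.append(str(output)[:MAX_OUTPUT_CHARS])  # bound oversized output
        if not results:
            break
        response = await send(results)  # hand tool results back to the model
    return response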

341-362: LGTM! Correct tool conversion to Gemini format.

The implementation properly converts ToolSchema objects to Gemini's function_declarations format, consistent with the pattern used in gemini_realtime.py.


364-428: LGTM! Defensive tool call extraction with fallbacks.

The implementation includes appropriate fallbacks and error handling:

  • Tries top-level function_calls first
  • Falls back to candidates.content.parts structure
  • Silently ignores extraction errors to prevent disruption
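
A rough sketch of that fallback order is shown below; the attribute names follow the google-genai response model (function_calls, candidates[].content.parts[].function_call), but this is an illustration rather than the plugin's exact extraction code.

def extract_function_calls(response):
    # Prefer the convenience accessor when the SDK populates it.
    calls = list(getattr(response, "function_calls", None) or [])
    if calls:
        return calls
    # Fall back to walking candidates -> content -> parts.
    try:
        for candidate in response.candidates or []:
            for part in candidate.content.parts or []:
                fc = getattr(part, "function_call", None)
                if fc is not None:
                    calls.append(fc)
    except (AttributeError, TypeError):
        # Mirror the "silently ignore extraction errors" behaviour noted above.
        pass
    return calls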

430-461: LGTM! Robust tool result handling with fallback.

The method properly handles various result formats and includes a fallback mechanism to create text parts if serialization fails.

plugins/gemini/pyproject.toml (1)

15-15: google-genai version 1.51.0 verified as available and secure.

Version 1.51.0 exists on PyPI and has no known security advisories in the GitHub vulnerability database, confirming the dependency specification is valid and poses no immediate security concerns.

Comment on lines +100 to +133
    def _build_config(
        self,
        system_instruction: Optional[str] = None,
        base_config: Optional[GenerateContentConfig] = None,
    ) -> GenerateContentConfig:
        """
        Build GenerateContentConfig with Gemini 3 features.

        Args:
            system_instruction: Optional system instruction to include
            base_config: Optional base config to extend (takes precedence over self._base_config)

        Returns:
            GenerateContentConfig with thinking_level and media_resolution if set
        """
        if base_config is not None:
            config = base_config
        elif self._base_config is not None:
            config = self._base_config
        else:
            config = GenerateContentConfig()

        if system_instruction:
            config.system_instruction = system_instruction

        if self.thinking_level:
            from google.genai.types import ThinkingConfig

            config.thinking_config = ThinkingConfig(thinking_level=self.thinking_level)

        if self.media_resolution:
            config.media_resolution = self.media_resolution

        return config

⚠️ Potential issue | 🟠 Major

🧩 Analysis chain

Config mutation risk: shared config objects may be unexpectedly modified.

The method mutates the base_config or self._base_config object in place. If a user passes a GenerateContentConfig to the constructor and calls send_message multiple times, the same config object is mutated repeatedly. This could lead to unexpected behavior if the config is shared or reused elsewhere.

Consider creating a copy of the config before mutating:

     def _build_config(
         self,
         system_instruction: Optional[str] = None,
         base_config: Optional[GenerateContentConfig] = None,
     ) -> GenerateContentConfig:
         """
         Build GenerateContentConfig with Gemini 3 features.
 
         Args:
             system_instruction: Optional system instruction to include
             base_config: Optional base config to extend (takes precedence over self._base_config)
 
         Returns:
             GenerateContentConfig with thinking_level and media_resolution if set
         """
         if base_config is not None:
-            config = base_config
+            # Create a copy to avoid mutating the caller's config
+            config = GenerateContentConfig(**base_config.__dict__)
         elif self._base_config is not None:
-            config = self._base_config
+            # Create a copy to avoid mutating stored config
+            config = GenerateContentConfig(**self._base_config.__dict__)
         else:
             config = GenerateContentConfig()

Note: Verify the appropriate copy mechanism for GenerateContentConfig - it may have a copy() method or require different handling.


🏁 Script executed:

#!/bin/bash
# Check if GenerateContentConfig has a copy method or how to properly copy it
ast-grep --pattern $'class GenerateContentConfig {
  $$$
}'

Length of output: 0


🏁 Script executed:

#!/bin/bash
# Find how GenerateContentConfig is imported and used throughout the codebase
rg -n 'GenerateContentConfig' --type=py -B2 -A2

Length of output: 5758


🏁 Script executed:

#!/bin/bash
# Check if there are any copy, clone, or deepcopy patterns used in the codebase
rg -n 'from copy import|import copy|\.copy\(\)|deepcopy' --type=py | head -20

Length of output: 898


🏁 Script executed:

#!/bin/bash
# Look for the __init__ method of the GeminiLLM class to see how base_config is stored
rg -n -A 15 'def __init__' plugins/gemini/vision_agents/plugins/gemini/gemini_llm.py

Length of output: 624


🌐 Web query:

google.genai GenerateContentConfig copy clone method

💡 Result:

Use the Pydantic copy API — GenerateContentConfig is a Pydantic model so call model_copy (or model_copy(deep=True) for a deep clone). Example (Python):

from google.genai import types
orig = types.GenerateContentConfig(temperature=0.5, candidate_count=2)
shallow = orig.model_copy()
deep = orig.model_copy(deep=True)

or with updates:

updated = orig.model_copy(update={"temperature": 0.1})

Sources: python-genai types docs and Pydantic model_copy documentation. [1][2]


Config mutation risk: Use model_copy() to avoid mutating shared config objects.

The method mutates base_config or self._base_config directly. If the same config object is passed or stored and _build_config() is called multiple times, the mutations accumulate on the same object. Related code in gemini_realtime.py already demonstrates copying patterns (lines 140, 316).

Since GenerateContentConfig is a Pydantic model, use model_copy() to create copies:

     def _build_config(
         self,
         system_instruction: Optional[str] = None,
         base_config: Optional[GenerateContentConfig] = None,
     ) -> GenerateContentConfig:
         """
         Build GenerateContentConfig with Gemini 3 features.
 
         Args:
             system_instruction: Optional system instruction to include
             base_config: Optional base config to extend (takes precedence over self._base_config)
 
         Returns:
             GenerateContentConfig with thinking_level and media_resolution if set
         """
         if base_config is not None:
-            config = base_config
+            config = base_config.model_copy()
         elif self._base_config is not None:
-            config = self._base_config
+            config = self._base_config.model_copy()
         else:
             config = GenerateContentConfig()
📝 Committable suggestion

‼️ IMPORTANT
Carefully review the code before committing. Ensure that it accurately replaces the highlighted code, contains no missing lines, and has no issues with indentation. Thoroughly test & benchmark the code to ensure it meets the requirements.

Suggested change
     def _build_config(
         self,
         system_instruction: Optional[str] = None,
         base_config: Optional[GenerateContentConfig] = None,
     ) -> GenerateContentConfig:
         """
         Build GenerateContentConfig with Gemini 3 features.

         Args:
             system_instruction: Optional system instruction to include
             base_config: Optional base config to extend (takes precedence over self._base_config)

         Returns:
             GenerateContentConfig with thinking_level and media_resolution if set
         """
         if base_config is not None:
-            config = base_config
+            config = base_config.model_copy()
         elif self._base_config is not None:
-            config = self._base_config
+            config = self._base_config.model_copy()
         else:
             config = GenerateContentConfig()

         if system_instruction:
             config.system_instruction = system_instruction

         if self.thinking_level:
             from google.genai.types import ThinkingConfig

             config.thinking_config = ThinkingConfig(thinking_level=self.thinking_level)

         if self.media_resolution:
             config.media_resolution = self.media_resolution

         return config
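
For completeness, a tiny standalone demonstration of the mutation issue and the model_copy() fix, using only the Pydantic API quoted in the analysis chain above (the values are illustrative):

from google.genai.types import GenerateContentConfig

user_cfg = GenerateContentConfig(temperature=0.5)

# Mutating in place: the caller's object now permanently carries this instruction.
user_cfg.system_instruction = "first call"

# Copying first keeps the caller's object untouched on subsequent calls.
per_call = user_cfg.model_copy()
per_call.system_instruction = "second call"

assert user_cfg.system_instruction == "first call"  # original unchanged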

@Nash0x7E2 Nash0x7E2 merged commit 55b2b5a into main Nov 18, 2025
8 checks passed
@Nash0x7E2 Nash0x7E2 deleted the feat/gemini-3 branch November 18, 2025 20:22
@coderabbitai coderabbitai bot mentioned this pull request Jan 6, 2026