
fix: resolve no matching overload errors in LLMGenerator #33243

Open
warren830 wants to merge 1 commit into langgenius:main from warren830:fix/issue-32494

Conversation

@warren830
Contributor

Description

Fixes #32494

The type checker reports no-matching-overload errors for invoke_llm calls in llm_generator.py because prompt message lists were inferred as list[UserPromptMessage] instead of list[PromptMessage].

Root Cause

When creating a list like [UserPromptMessage(content=prompt)], Python's type system infers it as list[UserPromptMessage]. The invoke_llm overload with stream: Literal[False] expects list[PromptMessage]. Since list is invariant in Python's type system, list[UserPromptMessage] doesn't match list[PromptMessage] even though UserPromptMessage is a subclass of PromptMessage.
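The invariance problem and the fix can be illustrated with a minimal, runnable sketch. The classes and function below are simplified stand-ins; only the names PromptMessage, UserPromptMessage, and invoke_llm come from the PR itself.

```python
# Simplified stand-ins for the real Dify classes; the actual
# PromptMessage hierarchy and invoke_llm signature are richer.
class PromptMessage:
    def __init__(self, content: str) -> None:
        self.content = content

class UserPromptMessage(PromptMessage):
    pass

def invoke_llm(prompt_messages: list[PromptMessage]) -> int:
    # Stand-in for ModelInstance.invoke_llm; just counts messages.
    return len(prompt_messages)

# Inferred as list[UserPromptMessage]: a strict type checker rejects
# passing this where list[PromptMessage] is expected, because list
# is invariant in its element type.
messages = [UserPromptMessage("hi")]

# The fix: an explicit annotation widens the element type at creation,
# so the checker sees list[PromptMessage] from the start.
typed_messages: list[PromptMessage] = [UserPromptMessage("hi")]
print(invoke_llm(typed_messages))  # -> 1
```

At runtime both lists behave identically; the annotation only changes what the type checker infers, which is why this is a behavior-neutral fix.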

Changes

  • Added explicit list[PromptMessage] type annotations to all prompt message list declarations in LLMGenerator
  • Removed unnecessary list() wrapping calls since the variables are already properly typed lists
  • Affects 10 call sites across generate_conversation_name, generate_suggested_questions_after_answer, generate_rule_config, generate_code, generate_structured_output, and __instruction_modify_common
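The before/after pattern of the change can be sketched as follows. This is a simplified model: the classes and invoke_llm below are minimal stand-ins, and the real method takes more parameters (model_parameters, stream overloads, etc.).

```python
# Minimal stand-ins for the real classes and method.
class PromptMessage:
    def __init__(self, content: str) -> None:
        self.content = content

class SystemPromptMessage(PromptMessage):
    pass

class UserPromptMessage(PromptMessage):
    pass

def invoke_llm(prompt_messages: list[PromptMessage], stream: bool = False) -> int:
    return len(prompt_messages)

# Before: inferred as list[UserPromptMessage]; wrapping it in list(...)
# at the call site copied the list but did not widen the element type,
# so the overload mismatch remained.
untyped = [UserPromptMessage("q")]
before = invoke_llm(prompt_messages=list(untyped), stream=False)

# After: annotate at the declaration and drop the redundant copy.
prompt_messages: list[PromptMessage] = [SystemPromptMessage("s"), UserPromptMessage("q")]
after = invoke_llm(prompt_messages=prompt_messages, stream=False)
print(before, after)  # -> 1 2
```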

Type of Change

  • Refactoring / chore (type annotation fix, no runtime behavior change)

…m_generator

Add explicit list[PromptMessage] type annotations to prompt message lists
in LLMGenerator to resolve type checker overload matching errors.

The issue was that lists containing UserPromptMessage or SystemPromptMessage
subclasses were inferred as list[UserPromptMessage] etc., which didn't match
the invoke_llm overload signature expecting list[PromptMessage].

Changes:
- Added explicit list[PromptMessage] type annotations to all prompt_messages
  variable declarations in llm_generator.py
- Removed unnecessary list() wrapping since the variables are already lists
  with the correct type annotation
@warren830 warren830 requested a review from QuantumGhost as a code owner March 11, 2026 01:59
@dosubot dosubot bot added the size:M This PR changes 30-99 lines, ignoring generated files. label Mar 11, 2026
@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request addresses a type checking issue in the LLMGenerator class where Python's type system incorrectly inferred prompt message lists, leading to no-matching-overload errors. The changes introduce explicit type hints for prompt message lists and remove redundant list conversions, ensuring type compatibility without altering runtime behavior. This refactoring improves the robustness of the type checking process for LLM invocation calls.

Highlights

  • Type Annotation Fix: Explicitly added list[PromptMessage] type annotations to prompt message list declarations within the LLMGenerator class to resolve no-matching-overload errors.
  • Redundant Code Removal: Removed unnecessary list() wrapping calls around variables that were already properly typed as lists, improving code clarity and efficiency.
  • Affected Call Sites: Applied these changes across 10 call sites in various functions including generate_conversation_name, generate_suggested_questions_after_answer, generate_rule_config, generate_code, generate_structured_output, and __instruction_modify_common.


Changelog
  • api/core/llm_generator/llm_generator.py
    • Added explicit list[PromptMessage] type annotations to prompt message list variables in generate_conversation_name, generate_suggested_questions_after_answer, generate_rule_config, generate_code, generate_structured_output, and __instruction_modify_common.
    • Removed redundant list() constructor calls when passing prompt message lists to model_instance.invoke_llm.

@dosubot dosubot bot added the refactor label Mar 11, 2026
Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request improves type safety by explicitly annotating prompt message lists as list[PromptMessage] and removing unnecessary list() calls in llm_generator.py. While these changes are beneficial for code maintainability, a high-severity Insecure Direct Object Reference (IDOR) vulnerability was identified in the instruction_modify methods. These methods fetch application and message data based on user-supplied IDs without verifying tenant ownership, which could lead to unauthorized data access. It is recommended to implement proper authorization checks in the data fetching logic.

    try:
        response: LLMResult = model_instance.invoke_llm(
-           prompt_messages=list(prompt_messages), model_parameters=model_parameters, stream=False
+           prompt_messages=prompt_messages, model_parameters=model_parameters, stream=False
security-high

The invoke_llm call on this line processes last_run data that is fetched in the calling methods (instruction_modify_workflow and instruction_modify_legacy) without proper authorization checks. Specifically, instruction_modify_workflow (line 434) and instruction_modify_legacy (line 392) fetch App and Message objects using only a user-supplied flow_id (App ID) without verifying that these objects belong to the provided tenant_id. This allows an authenticated user to potentially leak sensitive information (inputs, outputs, logs) from any workflow run by providing its ID.

To remediate this, update the calling methods to verify ownership before processing the data. For example, in instruction_modify_workflow, change the query to: session.query(App).where(App.id == flow_id, App.tenant_id == tenant_id).first().


Labels

refactor size:M This PR changes 30-99 lines, ignoring generated files.

Projects

None yet

Development

Successfully merging this pull request may close these issues.

[Refactor/Chore] No matching overload found for function

1 participant