
Conversation

@flyinglandlord (Collaborator) commented Oct 15, 2025

@gemini-code-assist

Summary of Changes

Hello @flyinglandlord, I'm Gemini Code Assist[1]! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly upgrades the system's capability to handle function and tool calls from various large language models. It brings the Tool Call API in line with the latest OpenAI standards, resolves issue #1062 (an error when running Mixtral-8x7B-v0.1), and introduces a more modular and robust parsing architecture. The changes ensure broader compatibility and improved performance when interacting with advanced models that use complex tool-calling mechanisms, particularly in streaming and parallel contexts.

Highlights

  • Tool Call API Compatibility: The Tool Call API has been updated to be compatible with the latest OpenAI version, addressing issue #1062 ([BUG] Error when I was trying to run Mixtral-8x7B-v0.1) and supporting new function call formats.
  • New Tool Call Parsers: Added support for several new tool call parsers, including DeepSeekV3, Kimi-K2, and Qwen models, enhancing the system's ability to interpret diverse model outputs.
  • Refactored Tool Call Parsing Logic: The underlying tool call parsing mechanism has been significantly refactored to improve handling of streaming and parallel tool calls, making it more robust and efficient.
  • MoE Layer Parameter Adjustment: Minor adjustments were made to the parameter order and optionality within the Mixture-of-Experts (MoE) layer functions.
  • Enhanced API Models: The API models now include support for parallel_tool_calls in chat completion requests and more flexible message structures with optional id and index for tool calls (see the sketch after this list).
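
To make the last two highlights concrete, here is a minimal, hypothetical sketch of what OpenAI-compatible request and tool-call models with these fields might look like. The class and field names below are illustrative assumptions (the PR's actual pydantic definitions may differ); only parallel_tool_calls and the optional id/index on tool calls are taken from the highlights above.

from typing import List, Optional

from pydantic import BaseModel


class FunctionCall(BaseModel):
    # Name may be absent in a streaming delta and filled in across chunks.
    name: Optional[str] = None
    # Arguments are carried as a JSON-encoded string, as in the OpenAI API.
    arguments: str = ""


class ToolCall(BaseModel):
    # id and index are optional so streamed chunks can omit or backfill them.
    id: Optional[str] = None
    index: Optional[int] = None
    type: str = "function"
    function: FunctionCall


class ChatCompletionRequest(BaseModel):
    model: str
    messages: List[dict]
    tools: Optional[List[dict]] = None
    tool_choice: Optional[str] = "auto"
    # OpenAI-compatible switch: allow the model to emit several tool calls
    # in a single assistant turn.
    parallel_tool_calls: Optional[bool] = True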
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page; here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

Feature | Command | Description
Code Review | /gemini review | Performs a code review for the current pull request in its current state.
Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state.
Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments.
Help | /gemini help | Displays a list of available commands.

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by creating and adding files to a .gemini/ folder in the root of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check it and use code with caution.

@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces significant improvements to the tool calling API, aligning it with the latest OpenAI version and adding support for new model formats like Kimi-K2 and DeepSeek. The refactoring of function_call_parser.py is a major step forward in terms of modularity and extensibility. The changes to the API models and the OpenAI API implementation are well-executed to support the new features, including robust streaming logic. I've identified one issue in function_call_parser.py where tool_index is not correctly assigned during non-streaming parsing, for which I've provided a suggestion. Overall, this is a high-quality contribution.

Comment on lines 123 to +168
 for act in action:
     name = act.get("name")
     if name and name in tool_indices:
         results.append(
             ToolCallItem(
-                tool_index=tool_indices[name],
+                tool_index=-1,  # Caller should update this based on the actual tools array called
                 name=name,
                 parameters=json.dumps(
                     act.get("parameters") or act.get("arguments", {}),
                     ensure_ascii=False,
                 ),
             )
         )
     else:
         logger.warning(f"Model attempted to call undefined function: {name}")


Severity: high

The tool_index is consistently set to -1 for all tool calls parsed in non-streaming mode. This is incorrect as tool_index should represent the index of the tool call within the list of calls in the assistant's message. A value of -1 will be passed to the ToolCall object, which is not ideal.

I suggest using enumerate to get the correct index for each tool call.

Suggested change
-for act in action:
+for i, act in enumerate(action):
     name = act.get("name")
     if name and name in tool_indices:
         results.append(
             ToolCallItem(
-                tool_index=-1,  # Caller should update this based on the actual tools array called
+                tool_index=i,
                 name=name,
                 parameters=json.dumps(
                     act.get("parameters") or act.get("arguments", {}),
                     ensure_ascii=False,
                 ),
             )
         )
     else:
         logger.warning(f"Model attempted to call undefined function: {name}")
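
As a usage note on the index semantics the review argues for, here is a hypothetical sketch of how a caller might serialize the parsed results into OpenAI-style tool_calls entries, with index taken as the call's position within the assistant message. The ToolCallItem fields (name, parameters) are assumed from the diff above; the function name and the uuid-based id generation are illustrative assumptions, not the PR's actual code.

import uuid


def to_openai_tool_calls(results):
    """Serialize parsed ToolCallItem objects into OpenAI-style tool_calls dicts."""
    tool_calls = []
    for i, item in enumerate(results):
        tool_calls.append(
            {
                "index": i,  # position of the call within the assistant message
                "id": f"call_{uuid.uuid4().hex[:24]}",
                "type": "function",
                "function": {
                    "name": item.name,
                    "arguments": item.parameters,  # already a JSON string
                },
            }
        )
    return tool_calls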

@shihaobai shihaobai merged commit 230d9d8 into ModelTC:main Oct 23, 2025
1 check passed
