
Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request introduces enhanced support for Qwen3 Omni models, specifically enabling them to process mixed data inputs including images, videos, and audio. The core changes involve refactoring the visual data handling into a shared utility function and integrating this, along with audio processing logic, into the Qwen3OmniMoeThinkerCausalLM's forward method. This ensures robust multimodal capabilities for these models. Additionally, a new test case has been added to validate the Qwen3Guard model.

Highlights

  • Mixed Data Handling Refactor: Extracted common logic for processing mixed image and video data in Qwen3 models into a dedicated helper function, _forward_qwen3_vl_or_qwen3_omni, to improve code organization and reusability.
  • Qwen3 Omni Mixed Data Support: Implemented compatibility for Qwen3 Omni models to handle mixed data inputs (images, videos, and audio) by overriding the model's forward pass through the _compat_qwen3_omni_mixed_data function.
  • New Test for Qwen3Guard: Added a new test case, test_qwen3_guard, to verify the functionality of the Qwen3Guard-Gen-0.6B model, ensuring its proper integration and behavior.
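The refactor described in the first two highlights follows a common pattern: both forward passes delegate their shared image/video handling to one helper, and the Omni variant layers audio on top. A minimal, purely illustrative sketch of that pattern (all names here are hypothetical stand-ins, not the actual model code):

```python
# Hypothetical sketch of the shared-helper pattern described above.
# "Embeddings" are modeled as plain lists so the control flow is visible;
# none of these functions exist in the real codebase under review.

def merge_visual_embeds(text_embeds, image_embeds, video_embeds):
    # Shared logic used by both model families: splice image and video
    # features into the text embeddings (here: simple concatenation).
    return text_embeds + image_embeds + video_embeds

def qwen3_vl_forward(text_embeds, image_embeds, video_embeds):
    # The VL model only needs the shared visual merge.
    return merge_visual_embeds(text_embeds, image_embeds, video_embeds)

def qwen3_omni_forward(text_embeds, image_embeds, video_embeds, audio_embeds):
    # The Omni model reuses the same helper, then folds in audio.
    merged = merge_visual_embeds(text_embeds, image_embeds, video_embeds)
    return merged + audio_embeds
```

The point of the extraction is that any fix to the visual merge now applies to both model families at once.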


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for mixed image and video data in Qwen3 Omni models. This is achieved by refactoring the visual data processing logic from the existing Qwen3 VL forward pass into a new shared function, _forward_qwen3_vl_or_qwen3_omni. A new compatibility patch, _compat_qwen3_omni_mixed_data, is added to handle the forward pass for Qwen3 Omni models, utilizing this new shared function. This is a good refactoring that promotes code reuse.

However, I've found a critical typo in the new _compat_qwen3_omni_mixed_data function that would cause visual embeddings to be discarded, breaking multimodal functionality. Please see the specific comment for details.

Comment on lines 1213 to 1215
nputs_embeds, visual_pos_masks, visual_embeds_multiscale = _forward_qwen3_vl_or_qwen3_omni(
self, processor, input_ids, inputs_embeds, pixel_values, pixel_values_videos, image_grid_thw,
video_grid_thw)

critical

There is a typo in the variable name. nputs_embeds should be inputs_embeds. Due to this typo, the visual embeddings computed by _forward_qwen3_vl_or_qwen3_omni are discarded. The subsequent audio embedding logic is then applied to the original text embeddings, leading to incorrect multimodal inputs where visual information is lost.

Suggested change:
- nputs_embeds, visual_pos_masks, visual_embeds_multiscale = _forward_qwen3_vl_or_qwen3_omni(
+ inputs_embeds, visual_pos_masks, visual_embeds_multiscale = _forward_qwen3_vl_or_qwen3_omni(
      self, processor, input_ids, inputs_embeds, pixel_values, pixel_values_videos, image_grid_thw,
      video_grid_thw)
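Why a one-character typo silently breaks this: Python creates variables on assignment, so the misspelled target becomes a brand-new name and the return value never reaches the real `inputs_embeds`. A minimal illustration (hypothetical toy code, not the actual model):

```python
# Minimal illustration of the bug class flagged above: a misspelled
# assignment target in Python does not raise an error; it just binds the
# result to a new variable, leaving the intended one unchanged.

def add_visual(embeds):
    # Stand-in for the helper that merges visual features into the embeddings.
    return embeds + ["<visual>"]

inputs_embeds = ["<text>"]
nputs_embeds = add_visual(inputs_embeds)  # typo: result lands in a new name

# Downstream code keeps reading `inputs_embeds`, which still holds only
# the original text embeddings -- the visual features are discarded.
```

This is exactly why the review rates the typo critical: there is no exception, just silently wrong multimodal inputs.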

@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for mixed image and video data in qwen3-omni models by refactoring and reusing logic from qwen3-vl. The changes are well-structured, extracting shared functionality into helper functions, which significantly improves code organization and maintainability. A new test for qwen3-guard is also included. My review includes a few suggestions to enhance code clarity and remove minor redundancies.

image_mask_joint = image_mask[visual_pos_masks]
video_mask_joint = video_mask[visual_pos_masks]
for img_embed, vid_embed in zip(deepstack_image_embeds, deepstack_video_embeds):
embed_joint = img_embed.new_zeros(visual_pos_masks.sum(), img_embed.shape[-1]).to(img_embed.device)

medium

The .to(img_embed.device) call here is redundant. According to PyTorch documentation, torch.Tensor.new_zeros creates a tensor on the same device as the original tensor (img_embed in this case). Removing this will make the code slightly cleaner.

Suggested change:
- embed_joint = img_embed.new_zeros(visual_pos_masks.sum(), img_embed.shape[-1]).to(img_embed.device)
+ embed_joint = img_embed.new_zeros(visual_pos_masks.sum(), img_embed.shape[-1])

video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
The temporal, height and width of feature shape of each video in LLM.
"""
) -> Union[tuple, output_cls]:

medium

The docstring for this forward function was removed. This function has a complex signature with several optional tensor arguments like image_grid_thw and video_grid_thw. Re-adding the docstring explaining these parameters would improve code readability and maintainability.
For example:

    ) -> Union[tuple, output_cls]:
        r"""
        image_grid_thw (`torch.LongTensor` of shape `(num_images, 3)`, *optional*):
            The temporal, height and width of feature shape of each image in LLM.
        video_grid_thw (`torch.LongTensor` of shape `(num_videos, 3)`, *optional*):
            The temporal, height and width of feature shape of each video in LLM.
        """

Comment on lines +1257 to +1258
if (cache_position is None or (cache_position is not None and cache_position[0] == 0)
or self.rope_deltas is None):

medium

The condition in this if statement is a bit complex and can be simplified for better readability. The sub-expression (cache_position is None or (cache_position is not None and cache_position[0] == 0)) is logically equivalent to cache_position is None or cache_position[0] == 0. Simplifying this would make the overall condition easier to understand.

            if cache_position is None or cache_position[0] == 0 or self.rope_deltas is None:
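The equivalence claimed above follows from the absorption law: inside the second disjunct, `cache_position is not None` is already guaranteed whenever the first disjunct is false, so it can be dropped. A quick exhaustive check over representative cases (toy harness, with `cache_position` modeled as `None` or a list and `rope_deltas` as `None` or any object):

```python
# Verify that the original and simplified conditions agree on all
# representative inputs. `original` mirrors the condition quoted above;
# `simplified` is the reviewer's proposed replacement.

def original(cache_position, rope_deltas):
    return (cache_position is None
            or (cache_position is not None and cache_position[0] == 0)
            or rope_deltas is None)

def simplified(cache_position, rope_deltas):
    return (cache_position is None
            or cache_position[0] == 0
            or rope_deltas is None)

# None / non-None cache positions, zero / non-zero first element,
# present / absent rope deltas.
cases = [
    (None, None), (None, object()),
    ([0], None), ([0], object()),
    ([5], None), ([5], object()),
]
agree = all(original(c, r) == simplified(c, r) for c, r in cases)
```

Short-circuiting of `or` also guarantees the simplified form never indexes a `None` value, so the rewrite is safe as well as equivalent.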

@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist (bot) left a comment


Code Review

This pull request introduces support for Qwen3 Omni models with mixed data types, including a new example script and necessary patches for DeepSpeed compatibility. The refactoring of media processing logic into a shared function is a good improvement for maintainability. A notable and important change is the bug fix in the multimodal optimizer, which now correctly handles parameter grouping when weight decay is zero. The overall changes are well-implemented and enhance the model's training capabilities.

Comment on lines +143 to 147
for use_wd, wd in zip([False, True], [0., args.weight_decay]):
if use_wd:
params = [p for n, p in parameters if n in decay_parameters]

high

This change fixes a subtle but important bug in how optimizer parameter groups are created. The previous logic would fail to correctly assign parameters to the weight decay group if args.weight_decay was 0.0. Specifically, it would add the non-decay parameters twice and never add the decay parameters. The new implementation using zip with a boolean flag use_wd correctly and more explicitly separates the parameters, ensuring the optimizer is configured correctly regardless of the weight_decay value.

Suggested change:
  for use_wd, wd in zip([False, True], [0., args.weight_decay]):
      if use_wd:
          params = [p for n, p in parameters if n in decay_parameters]
+     else:
+         params = [p for n, p in parameters if n not in decay_parameters]

@Jintao-Huang Jintao-Huang merged commit 26d845a into modelscope:main Oct 18, 2025
1 of 2 checks passed