
[megatron] Qwen3.5 supports larger num_query_groups (mcore 0.16) #8253

Merged
Jintao-Huang merged 4 commits into modelscope:main from Jintao-Huang:update_qwen3_5_tp
Mar 10, 2026
Conversation

@Jintao-Huang (Collaborator)

@gemini-code-assist (Contributor)

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request makes the Qwen3.5 model more flexible and scalable under Megatron-LM tensor parallelism by allowing more diverse query-group configurations. The changes address how query, key, and value heads are distributed across devices, particularly when the number of query groups is less than the tensor parallel size, which matters for the performance and resource utilization of large language models during distributed training.

Highlights

  • Enhanced Qwen3.5 Support: Enabled the Qwen3.5 model to support configurations where num_query_groups can be smaller than the tensor_model_parallel_size, leveraging capabilities from mcore 0.16 (a head-layout sketch follows this list).
  • Improved Tensor Parallelism for QKV: Introduced all_gather_last_dim_from_tensor_parallel_region and refined query tensor indexing to correctly partition query, key, and value heads across tensor parallel ranks, especially when tp_size exceeds num_kv_heads.
  • Relaxed Configuration Constraints: Removed a previous validation check that required num_query_groups to be a multiple of tensor_model_parallel_size, allowing for greater flexibility in model configuration.
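
To make the head arithmetic concrete, here is a minimal standalone sketch of how query and KV heads can map to ranks in the two regimes. The function and its return keys are hypothetical illustrations, not code from this PR:

```python
def head_layout(num_attention_heads: int, num_query_groups: int,
                tp_size: int, rank: int) -> dict:
    # Every rank owns an equal share of the query heads in both regimes.
    assert num_attention_heads % tp_size == 0
    q_heads_per_rank = num_attention_heads // tp_size

    if num_query_groups >= tp_size:
        # Classic case: KV groups are split across ranks.
        assert num_query_groups % tp_size == 0
        kv_groups_per_rank = num_query_groups // tp_size
        kv_group = rank * kv_groups_per_rank  # first local group index
    else:
        # New case (mcore 0.16): each KV group is replicated on a
        # contiguous block of tp_size // num_query_groups ranks.
        assert tp_size % num_query_groups == 0
        ranks_per_group = tp_size // num_query_groups
        kv_groups_per_rank = 1
        kv_group = rank // ranks_per_group

    return {
        "q_heads_per_rank": q_heads_per_rank,
        "kv_groups_per_rank": kv_groups_per_rank,
        "kv_group_for_rank": kv_group,
    }

# Example: GQA with 4 KV groups spread over 8 TP ranks.
print(head_layout(num_attention_heads=32, num_query_groups=4,
                  tp_size=8, rank=5))
# rank 5 owns 4 query heads and a replica of KV group 2
```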


Changelog
  • swift/megatron/model/gpts/qwen3_next.py
    • Imported get_tensor_model_parallel_rank and all_gather_last_dim_from_tensor_parallel_region for advanced tensor parallelism operations.
    • Implemented new logic within get_query_key_value_tensors to handle QKV partitioning when num_query_groups is less than world_size (see the sketch after this changelog).
    • Adjusted query tensor indexing to ensure correct query head distribution across tensor parallel ranks.
  • swift/megatron/model/model_config.py
    • Removed a validation check in __post_init__ that enforced num_query_groups to be a multiple of tensor_model_parallel_size.
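
A rough, self-contained sketch of the gather-and-slice pattern the new get_query_key_value_tensors logic follows. Concatenation stands in for all_gather_last_dim_from_tensor_parallel_region so the snippet runs without a distributed setup, and a flat [all Q | all K | all V] layout is assumed for readability (the real projection interleaves heads per group); all sizes are made up:

```python
import torch

# Hypothetical sizes. With num_query_groups < tp_size, a rank's
# column-parallel QKV shard is too narrow to contain a whole KV group,
# so the shards are gathered before slicing.
tp_size, num_heads, num_query_groups, head_dim = 4, 8, 2, 16
seq, batch = 3, 1
qkv_dim = (num_heads + 2 * num_query_groups) * head_dim

# Stand-in for the per-rank projection outputs (last dim sharded).
shards = [torch.randn(seq, batch, qkv_dim // tp_size) for _ in range(tp_size)]

# In the PR this gather is all_gather_last_dim_from_tensor_parallel_region.
mixed_qkv = torch.cat(shards, dim=-1)          # full QKV on every rank

rank = 1
q_heads_per_rank = num_heads // tp_size        # 2 query heads per rank
ranks_per_group = tp_size // num_query_groups  # 2 ranks share each KV group
kv_group = rank // ranks_per_group

# Each rank slices out only its own query heads...
q_start = rank * q_heads_per_rank * head_dim
query = mixed_qkv[..., q_start:q_start + q_heads_per_rank * head_dim]

# ...and the single KV group it replicates.
kv_base = num_heads * head_dim
k_start = kv_base + kv_group * head_dim
key = mixed_qkv[..., k_start:k_start + head_dim]

v_base = kv_base + num_query_groups * head_dim
v_start = v_base + kv_group * head_dim
value = mixed_qkv[..., v_start:v_start + head_dim]

print(query.shape, key.shape, value.shape)
# torch.Size([3, 1, 32]) torch.Size([3, 1, 16]) torch.Size([3, 1, 16])
```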

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request aims to add support for Grouped-Query Attention configurations where num_query_groups is smaller than the tensor parallel world size, by porting changes from an upstream Megatron-LM pull request. The changes introduce logic to gather and slice tensors across tensor parallel ranks. My review identified two critical potential issues in the new logic. First, a missing assertion for a divisibility requirement could lead to out-of-bounds memory access. Second, an incorrect calculation for slicing the query tensor could result in dropping query heads and incorrect attention computation. Both issues need to be addressed to ensure correctness.
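
For concreteness, a hypothetical sketch of the two guards the review asks for; the function name and signature are illustrative, not the PR's actual code:

```python
def query_slice(num_attention_heads: int, num_query_groups: int,
                tp_size: int, rank: int, head_dim: int) -> slice:
    """Slice of a gathered QKV tensor's last dim holding this rank's queries."""
    # Issue 1: without this guard, tp_size % num_query_groups != 0 would
    # later index KV groups out of bounds.
    assert tp_size % num_query_groups == 0, (
        "tensor parallel size must be a multiple of num_query_groups "
        "when num_query_groups < tp_size")

    # Issue 2: the slice must cover *all* local query heads; taking only
    # head_dim elements would silently drop heads_per_rank - 1 heads.
    heads_per_rank = num_attention_heads // tp_size
    start = rank * heads_per_rank * head_dim
    return slice(start, start + heads_per_rank * head_dim)
```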

Two comment threads on swift/megatron/model/gpts/qwen3_next.py (outdated)
@Jintao-Huang (Collaborator, Author)

/gemini review

@gemini-code-assist (Contributor) left a comment


Code Review

This pull request adds support for configurations where num_query_groups is smaller than the tensor parallel world size in Qwen3.5 models, aligning with an update in mcore 0.16. The implementation involves adding logic to correctly handle the distributed query, key, and value projections across multiple tensor parallel ranks. The corresponding validation check that prevented this configuration has been correctly removed. The logic appears sound, but there is a minor opportunity to improve code clarity by refactoring a repeated calculation.
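
The clarity suggestion amounts to hoisting a repeated subexpression into a named variable; a generic before/after sketch with made-up names:

```python
import torch

# Hypothetical setup mirroring the PR's slicing code.
num_heads, tp_size, head_dim, rank = 8, 4, 16, 1
mixed_qkv = torch.randn(3, 1, num_heads * head_dim)

# Before: the per-rank query width is recomputed inline at each use site.
query = mixed_qkv[..., rank * (num_heads // tp_size) * head_dim:
                  (rank + 1) * (num_heads // tp_size) * head_dim]

# After: compute it once and reuse it.
q_width = (num_heads // tp_size) * head_dim
query = mixed_qkv[..., rank * q_width:(rank + 1) * q_width]
```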

One comment thread on swift/megatron/model/gpts/qwen3_next.py
Jintao-Huang merged commit 5db695c into modelscope:main on Mar 10, 2026
2 of 3 checks passed
