Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist¹! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request primarily focuses on enhancing the efficiency of communication within the gpt_bridge module, which is crucial for large-scale distributed training involving expert and pipeline parallelism. By creating a unified communication group for both expert and pipeline ranks, the changes aim to reduce communication overhead and simplify the underlying distributed operations. Additionally, a minor adjustment was made to the checkpoint initialization logic in the export arguments to prevent unnecessary operations.

Highlights

  • Consolidated Communication Groups: Introduced a combined EP-PP-GROUP to unify communication for expert and pipeline parallelism within the gpt_bridge component, streamlining distributed operations.
  • Optimized Distributed Operations: Refactored all_reduce and broadcast operations in gpt_bridge to leverage the new combined communication group, reducing redundant calls and simplifying the logic for handling distributed tensors.
  • Conditional Checkpoint Initialization: Added a new method _init_ckpt_dir to ExportArguments that conditionally skips checkpoint directory initialization if to_cached_dataset is enabled.
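
The saving behind the combined group comes from EP and PP being orthogonal axes of the rank grid: one all_reduce over the merged EP-PP group produces the same result as a reduce along EP followed by a reduce along PP, in a single collective call. A minimal sketch in plain Python (no torch.distributed involved; the 2×2 rank grid and shard values are made up purely for illustration):

```python
# Hypothetical 2x2 rank grid: axis 0 = expert-parallel (EP), axis 1 = pipeline-parallel (PP).
# Each rank holds one shard; a full sync needs the sum over all shards.
shards = {(ep, pp): (ep + 1) * 10 + pp for ep in range(2) for pp in range(2)}

# Before: two collectives -- first reduce along the EP axis, then along PP.
after_ep = {pp: sum(shards[(ep, pp)] for ep in range(2)) for pp in range(2)}
two_step = sum(after_ep[pp] for pp in range(2))

# After: a single collective over the combined EP-PP group.
one_step = sum(shards.values())

assert one_step == two_step  # orthogonal axes compose; one call suffices
```

The identity holds for any reduction that is associative and commutative (sum, max, etc.), which is why collapsing the two groups into one is safe for these all_reduce calls.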
Using Gemini Code Assist

The full guide for Gemini Code Assist can be found on our documentation page, here are some quick tips.

Invoking Gemini

You can request assistance from Gemini at any point by creating a comment using either /gemini <command> or @gemini-code-assist <command>. Below is a summary of the supported commands on the current page.

| Feature | Command | Description |
| --- | --- | --- |
| Code Review | /gemini review | Performs a code review for the current pull request in its current state. |
| Pull Request Summary | /gemini summary | Provides a summary of the current pull request in its current state. |
| Comment | @gemini-code-assist | Responds in comments when explicitly tagged, both in pull request comments and review comments. |
| Help | /gemini help | Displays a list of available commands. |

Customization

To customize the Gemini Code Assist for GitHub experience, repository maintainers can create a configuration file and/or provide a custom code review style guide (such as PEP-8 for Python) by adding files to a .gemini/ folder in the base of the repository. Detailed instructions can be found here.

Limitations & Feedback

Gemini Code Assist may make mistakes. Please leave feedback on any instances where its feedback is incorrect or counterproductive. You can react with 👍 and 👎 on @gemini-code-assist comments. If you're interested in giving feedback about your experience with Gemini Code Assist for GitHub and other Google products, sign up here.

You can also get AI-powered code generation, chat, and code reviews directly in the IDE at no cost with the Gemini Code Assist IDE Extension.

Footnotes

  1. Review the Privacy Notices, Generative AI Prohibited Use Policy, Terms of Service, and learn how to configure Gemini Code Assist in GitHub here. Gemini can make mistakes, so double-check its output and use code with caution.

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request aims to optimize communication in the gpt_bridge by combining expert parallel (EP) and pipeline parallel (PP) communication into a single group (ep_pp_group). This is a sound optimization strategy. However, the implementation introduces a few critical issues. The new combined communication group is incorrectly used for non-expert layers in several places, which would lead to incorrect state synchronization. Additionally, there's a subtle issue in the weight broadcasting logic that could mask error conditions. I have provided detailed comments and suggestions to address these problems. The change in swift/llm/argument/export_args.py appears correct.

@Jintao-Huang
Collaborator Author

/gemini review

Contributor

@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces a significant optimization for communication in the gpt_bridge by creating and utilizing a combined expert-pipeline parallel (ep-pp) process group. This change effectively consolidates two separate collective communication operations into a single one for expert layers, which should lead to performance improvements. The modifications across _get_weight, _set_state_dict, and _set_mlp_state are consistent with this optimization goal. My review includes suggestions to refactor some duplicated logic to enhance code maintainability and readability.
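
For context on what such a combined group looks like: ranks are typically partitioned so that each contiguous EP×PP block sharing its remaining parallel coordinates forms one group. A hedged sketch of the rank bookkeeping (pure Python; the assumption that EP and PP are the fastest-varying axes of the rank grid is mine for illustration, not taken from the actual gpt_bridge code):

```python
def ep_pp_rank_groups(world_size, ep_size, pp_size):
    """Return the rank lists for combined EP-PP groups, assuming EP and PP
    are the fastest-varying axes of the rank grid (a simplifying assumption).
    In real code, each list would be passed to torch.distributed.new_group,
    which every rank must call collectively and in the same order."""
    group_size = ep_size * pp_size
    return [list(range(start, start + group_size))
            for start in range(0, world_size, group_size)]
```

For example, with world_size=8, ep_size=2, pp_size=2 this yields two groups of four ranks each; an all_reduce over one of them covers both the EP and PP dimensions in a single collective.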

Comment on lines +292 to 295
if is_expert and self.ep_pp_size > 1:
    dist.all_reduce(state, group=self.ep_pp_group)
elif not is_expert and self.pp_size > 1:
    dist.all_reduce(state, group=self.pp_group)
Contributor


medium

This if/elif block for the all_reduce operation is repeated in _set_mlp_state as well. To improve code maintainability and reduce duplication, this logic can be refactored. By first selecting the appropriate parallel group and size based on whether it's an expert layer, and then performing the communication, the code becomes more concise and easier to read.

Suggested change
- if is_expert and self.ep_pp_size > 1:
-     dist.all_reduce(state, group=self.ep_pp_group)
- elif not is_expert and self.pp_size > 1:
-     dist.all_reduce(state, group=self.pp_group)
+ pp_group = self.ep_pp_group if is_expert else self.pp_group
+ pp_size = self.ep_pp_size if is_expert else self.pp_size
+ if pp_size > 1:
+     dist.all_reduce(state, group=pp_group)
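
Since the reviewer notes that the same pattern recurs in _set_state_dict, _set_mlp_state, and the LoRA checks, one could go a step further and hoist it into a shared helper method. A sketch (class and method names are hypothetical; the collective is injected here so the shape of the refactor is visible without a live process group, whereas the real code would call dist.all_reduce directly):

```python
class BridgeCommSketch:
    """Stand-in for the bridge class; only the shared reduce helper is shown."""

    def __init__(self, ep_pp_group, ep_pp_size, pp_group, pp_size,
                 all_reduce=None):
        self.ep_pp_group = ep_pp_group
        self.ep_pp_size = ep_pp_size
        self.pp_group = pp_group
        self.pp_size = pp_size
        # Injected for illustration; the real code would use dist.all_reduce.
        self._all_reduce = all_reduce

    def _pp_all_reduce(self, tensor, is_expert):
        # Expert tensors sync over the combined EP-PP group; all other
        # tensors only need the plain PP group.
        group = self.ep_pp_group if is_expert else self.pp_group
        size = self.ep_pp_size if is_expert else self.pp_size
        if size > 1:
            self._all_reduce(tensor, group=group)
        return tensor
```

Each of the three call sites would then collapse to a single line such as self._pp_all_reduce(state, is_expert) (or is_lora in the LoRA checks), keeping the group-selection rule in one place.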

Comment on lines +646 to 649
if is_expert and self.ep_pp_size > 1:
    dist.all_reduce(is_lora, group=self.ep_pp_group)
elif not is_expert and self.pp_size > 1:
    dist.all_reduce(is_lora, group=self.pp_group)
Contributor


medium

Similar to the logic in _set_state_dict, this if/elif block for all_reduce can be refactored to improve readability and avoid code duplication. This will make the codebase more consistent and easier to maintain.

Suggested change
- if is_expert and self.ep_pp_size > 1:
-     dist.all_reduce(is_lora, group=self.ep_pp_group)
- elif not is_expert and self.pp_size > 1:
-     dist.all_reduce(is_lora, group=self.pp_group)
+ pp_group = self.ep_pp_group if is_expert else self.pp_group
+ pp_size = self.ep_pp_size if is_expert else self.pp_size
+ if pp_size > 1:
+     dist.all_reduce(is_lora, group=pp_group)

Comment on lines +798 to 801
if is_expert and self.ep_pp_size > 1:
    dist.all_reduce(is_lora, group=self.ep_pp_group)
elif not is_expert and self.pp_size > 1:
    dist.all_reduce(is_lora, group=self.pp_group)
Contributor


medium

This is another instance of the same communication logic. Applying the same refactoring here will further improve code consistency and maintainability across the file.

Suggested change
- if is_expert and self.ep_pp_size > 1:
-     dist.all_reduce(is_lora, group=self.ep_pp_group)
- elif not is_expert and self.pp_size > 1:
-     dist.all_reduce(is_lora, group=self.pp_group)
+ pp_group = self.ep_pp_group if is_expert else self.pp_group
+ pp_size = self.ep_pp_size if is_expert else self.pp_size
+ if pp_size > 1:
+     dist.all_reduce(is_lora, group=pp_group)

@Jintao-Huang Jintao-Huang merged commit 1174780 into modelscope:main Nov 18, 2025
1 of 2 checks passed
vx120 pushed a commit to vx120/ms-swift that referenced this pull request Nov 19, 2025
