
[megatron] support megatron tuner_type 'lora_llm' #8388

Merged
Jintao-Huang merged 11 commits into modelscope:main from Jintao-Huang:support_megatron_lora_llm on Mar 20, 2026

Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request significantly enhances the fine-tuning capabilities for Megatron models, particularly for multimodal scenarios, by introducing a dedicated lora_llm tuner type. This new mode allows for more granular control over which parts of a multimodal model are trained, enabling full training of visual and aligner components while applying LoRA to the language model. The changes also include important documentation updates and the removal of deprecated compatibility code, improving the clarity and maintainability of the codebase.

Highlights

  • New Tuner Type 'lora_llm': Introduced a new tuner_type option, 'lora_llm', specifically for Megatron models, enabling a specialized fine-tuning approach for multimodal architectures.
  • Documentation Updates: Updated various documentation files (READMEs, installation guides, command-line parameter descriptions) to reflect the new lora_llm tuner type and adjusted the recommended transformers library version from 5.3.0 to 5.2.0.
  • Deprecated train_type Removal: Removed the deprecated train_type argument and its associated compatibility logic from BaseArguments and MegatronTunerMixin classes, streamlining argument handling.
  • Conditional Freezing Logic for 'lora_llm': Implemented specific logic for the 'lora_llm' tuner type, ensuring that merge_lora is enabled and adjusting how freeze_llm, freeze_vit, and freeze_aligner parameters are applied during multimodal target module selection and model preparation.
  • Full ViT/Aligner Training for 'lora_llm': Added a new helper function _prepare_full_vit to explicitly enable gradient computation for visual and aligner modules when using the 'lora_llm' tuner type, allowing these components to be fully trained alongside LoRA applied to the LLM.
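The `_prepare_full_vit` helper is only described in prose above; a minimal, self-contained Python sketch of the intended behaviour (dummy classes and illustrative attribute names, not ms-swift's actual implementation) could look like this:

```python
# Hedged sketch: under 'lora_llm', the visual and aligner towers become
# fully trainable while the LLM stays frozen (its trainable weights live
# in LoRA adapters, not shown here). All names below are illustrative.

class Param:
    """Stand-in for a torch.nn.Parameter's requires_grad flag."""
    def __init__(self):
        self.requires_grad = False  # everything starts frozen


class DummyMultimodalModel:
    def __init__(self):
        self.visual = [Param() for _ in range(2)]   # ViT tower
        self.aligner = [Param() for _ in range(1)]  # projector/aligner
        self.llm = [Param() for _ in range(3)]      # language model


def prepare_full_vit(model):
    """Enable gradients on the visual and aligner modules only."""
    for p in model.visual + model.aligner:
        p.requires_grad = True
    # the LLM is deliberately left untouched: LoRA adapters carry its updates


model = DummyMultimodalModel()
prepare_full_vit(model)
print(all(p.requires_grad for p in model.visual + model.aligner))  # True
print(any(p.requires_grad for p in model.llm))                     # False
```

The point of the design is that the small ViT/aligner components get full-rank updates, while the much larger LLM is trained cheaply through LoRA.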




@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for a new tuner_type called lora_llm for Megatron, which is designed for multimodal models. This new type applies LoRA to the language model part while keeping the vision and aligner parts fully trainable. The changes include updating argument parsing, adding logic to configure LoRA target modules, and unfreezing the vision/aligner components. Additionally, backward compatibility for the deprecated train_type argument has been removed, which is a good cleanup. The changes are well-implemented and consistent with the feature's goal. I have one suggestion to improve code clarity.

@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for a new tuner_type called 'lora_llm' for Megatron, which enables applying LoRA to the LLM part of a multimodal model while fully fine-tuning the ViT and aligner components. The changes are well-implemented across argument parsing, model preparation, and documentation. I have one suggestion to enhance the robustness of the implementation.

@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist bot left a comment


Code Review

This pull request introduces support for a new tuner_type, lora_llm, for Megatron training. This feature allows for a hybrid fine-tuning approach on multimodal models, applying LoRA to the LLM component while fully fine-tuning the ViT and aligner components. The implementation includes updates to argument handling, validation logic, and model preparation routines to support this new mode. An example script demonstrating the usage of lora_llm is also included. Documentation has been updated accordingly, and the deprecated train_type argument has been removed. The changes appear to be consistent and correctly implement the intended functionality.
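The reviews above note that 'lora_llm' enables merge_lora and adjusts the freeze flags during argument handling. A small hedged sketch of what that validation step could look like (field names and defaults are assumptions for illustration, not the actual MegatronTunerMixin attributes):

```python
# Hypothetical sketch of 'lora_llm' argument validation. Assumed fields:
# merge_lora, freeze_llm, freeze_vit, freeze_aligner; defaults are guesses.
from dataclasses import dataclass


@dataclass
class TunerArgs:
    tuner_type: str = 'lora'
    merge_lora: bool = False
    freeze_llm: bool = False
    freeze_vit: bool = True
    freeze_aligner: bool = True

    def __post_init__(self):
        if self.tuner_type == 'lora_llm':
            # adapters must be merged back into the LLM weights
            self.merge_lora = True
            # base LLM weights stay frozen; LoRA carries their updates
            self.freeze_llm = True
            # ViT and aligner are fully trained under this mode
            self.freeze_vit = False
            self.freeze_aligner = False


args = TunerArgs(tuner_type='lora_llm')
print(args.merge_lora, args.freeze_llm, args.freeze_vit, args.freeze_aligner)
# True True False False
```

Centralizing the constraint in one validation hook keeps the downstream target-module selection and model-preparation code free of special cases.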

Jintao-Huang merged commit fe928a9 into modelscope:main on Mar 20, 2026. 2 of 3 checks passed.
