[megatron] support glm4_6 flash megatron#7172
Conversation
Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request extends the testing capabilities within the Megatron test suite.
Code Review
This pull request introduces support for the GLM-4.6V-Flash model within the Megatron framework by adding a new test case for it. Additionally, a test for the GLM-4-9B-0414 model is included. The changes are straightforward and mainly involve adding new test functions. My feedback includes a suggestion to rename one of the new test functions for better clarity and consistency within the test suite.
```python
def test_glm4():
    _test_model('ZhipuAI/GLM-4-9B-0414')
```
For improved clarity and consistency with other tests like test_glm4_5, it's recommended to rename this function to be more specific. Since it tests the ZhipuAI/GLM-4-9B-0414 model, which belongs to the glm4_0414 model type, a name like test_glm4_0414 would be more descriptive and prevent potential confusion with other glm4 models.
```diff
-def test_glm4():
+def test_glm4_0414():
     _test_model('ZhipuAI/GLM-4-9B-0414')
```
/gemini review
Code Review
This pull request adds support for glm4_1v models, including glm4.6v-flash, within the Megatron framework. The changes are comprehensive, covering model registration, configuration updates, and documentation. A notable improvement is the refactoring in swift/llm/template/template/glm.py, where a GLM4vPackingTemplateMixin has been introduced to abstract away common packing logic, reducing code duplication and enhancing maintainability. The addition of a new test case ensures the new model support is validated. My only suggestion is to remove a commented-out line in the test file to improve code clarity.
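The packing refactor described above can be sketched as follows. This is a hypothetical illustration of the mixin pattern the review mentions, not the actual `GLM4vPackingTemplateMixin` code from `swift/llm/template/template/glm.py`; the `pack` method, the sample dict layout, and the `cu_seqlens` field are assumptions for illustration:

```python
# Hypothetical sketch of a packing mixin; the real ms-swift implementation
# differs, and all names here are illustrative.

class PackingTemplateMixin:
    """Shared packing logic: concatenate several encoded samples into one
    sequence and record cumulative boundaries (cu_seqlens) so attention
    can later be restricted to each original sample."""

    def pack(self, samples):
        packed_ids = []
        cu_seqlens = [0]
        for sample in samples:
            packed_ids.extend(sample["input_ids"])
            # Each boundary marks where the next sample begins.
            cu_seqlens.append(len(packed_ids))
        return {"input_ids": packed_ids, "cu_seqlens": cu_seqlens}


class Glm4vTemplate(PackingTemplateMixin):
    # Model-specific template code would go here; packing is inherited.
    pass


template = Glm4vTemplate()
out = template.pack([{"input_ids": [1, 2, 3]}, {"input_ids": [4, 5]}])
print(out["cu_seqlens"])  # [0, 3, 5]
```

Hoisting the packing logic into a mixin lets every GLM-4V-family template share one tested code path instead of duplicating it per model class.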
```python
def test_glm4_6v_flash():
    _test_model('ZhipuAI/GLM-4.1V-9B-Thinking')
    # _test_model('ZhipuAI/GLM-4.6V-Flash')
```
/gemini review
Code Review
This pull request adds support for GLM-4.1V and GLM-4.6V models, including GLM-4.6V-Flash, within the Megatron framework. Key changes include updating documentation to reflect this new support, refactoring template code for better code reuse by introducing a GLM4vPackingTemplateMixin, and registering the new glm4_1v model type for Megatron. The configuration handling for rotary embeddings has been updated, and the rotary embedding logic itself has been adjusted to correctly support interleaved embeddings for these models. Additionally, new tests have been added to validate the changes. The code modifications are well-organized and improve maintainability.
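The interleaved rotary-embedding adjustment mentioned above can be illustrated with a minimal, generic sketch. This is not the Megatron or ms-swift implementation; the `rope` function, its `interleaved` flag, and the base of 10000 are assumptions for illustration. The point is that the two layouts pair dimensions differently:

```python
import math

def rope(x, pos, interleaved, base=10000.0):
    """Apply rotary position embedding to one head vector x (list of floats).

    interleaved=True rotates adjacent pairs (x0, x1), (x2, x3), ...;
    interleaved=False rotates half-split pairs (x_i, x_{i + d/2}).
    """
    d = len(x)
    out = [0.0] * d
    for i in range(d // 2):
        theta = pos / base ** (2 * i / d)
        c, s = math.cos(theta), math.sin(theta)
        if interleaved:
            a, b = x[2 * i], x[2 * i + 1]
            out[2 * i] = a * c - b * s
            out[2 * i + 1] = a * s + b * c
        else:
            a, b = x[i], x[i + d // 2]
            out[i] = a * c - b * s
            out[i + d // 2] = a * s + b * c
    return out
```

Both variants are norm-preserving rotations, but they are not interchangeable: applying the wrong pairing to a checkpoint trained with the other layout silently corrupts attention, which is why per-model handling of the interleaved flag matters when registering a model for Megatron.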
Can't train GLM-4.6V-Flash. MS-SWIFT training script:

```shell
# Set environment variables
export CUDA_VISIBLE_DEVICES=0,5
# Install dependencies (if needed)
pip install ms-swift[llm] -U
pip install transformers accelerate datasets
# Launch 3-GPU distributed training with torchrun
torchrun
echo "Training completed!"
```
No description provided.