
[megatron] support glm4_6 flash megatron #7172

Merged
Jintao-Huang merged 24 commits into modelscope:main from Jintao-Huang:support_glm4_6_flash_megatron
Dec 23, 2025

Conversation

@Jintao-Huang
Collaborator

No description provided.

@gemini-code-assist
Contributor

Summary of Changes

Hello @Jintao-Huang, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!

This pull request extends the testing capabilities within the megatron project by integrating new models from ZhipuAI. It specifically adds test cases for the GLM-4-9B-0414 large language model and the GLM-4.6V-Flash multi-modal large language model, ensuring their proper functionality and compatibility within the existing alignment testing suite. The changes primarily involve adding new test functions and updating the test execution flow.

Highlights

  • New LLM Test: Added a new test function test_glm4 to test_llm.py to validate the ZhipuAI/GLM-4-9B-0414 large language model.
  • New MLLM Test: Introduced a new test function test_glm4_6v_flash in test_mllm.py for the ZhipuAI/GLM-4.6V-Flash multi-modal large language model.
  • Test Execution Update: Modified the if __name__ == '__main__': block in test_mllm.py to enable direct execution of the test_glm4_6v_flash function, while commenting out test_llama4.



@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request introduces support for the GLM-4.6V-Flash model within the Megatron framework by adding a new test case for it. Additionally, a test for the GLM-4-9B-0414 model is included. The changes are straightforward and mainly involve adding new test functions. My feedback includes a suggestion to rename one of the new test functions for better clarity and consistency within the test suite.

Comment on lines +142 to +143
def test_glm4():
_test_model('ZhipuAI/GLM-4-9B-0414')

Severity: medium

For improved clarity and consistency with other tests like test_glm4_5, it's recommended to rename this function to be more specific. Since it tests the ZhipuAI/GLM-4-9B-0414 model, which belongs to the glm4_0414 model type, a name like test_glm4_0414 would be more descriptive and prevent potential confusion with other glm4 models.

Suggested change
def test_glm4():
_test_model('ZhipuAI/GLM-4-9B-0414')
def test_glm4_0414():
_test_model('ZhipuAI/GLM-4-9B-0414')

@Jintao-Huang Jintao-Huang linked an issue Dec 23, 2025 that may be closed by this pull request
@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for glm4_1v models, including glm4.6v-flash, within the Megatron framework. The changes are comprehensive, covering model registration, configuration updates, and documentation. A notable improvement is the refactoring in swift/llm/template/template/glm.py, where a GLM4vPackingTemplateMixin has been introduced to abstract away common packing logic, reducing code duplication and enhancing maintainability. The addition of a new test case ensures the new model support is validated. My only suggestion is to remove a commented-out line in the test file to improve code clarity.


def test_glm4_6v_flash():
_test_model('ZhipuAI/GLM-4.1V-9B-Thinking')
# _test_model('ZhipuAI/GLM-4.6V-Flash')

Severity: medium

To maintain code clarity, it's best to remove this commented-out test case. If it's intended for future use, consider adding a TODO with more context or creating a separate issue to track it.
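One way to apply this suggestion, sketched below with a stub standing in for the `_test_model` helper from test_mllm.py (the TODO wording and the idea of a tracking issue are hypothetical, not from the PR):

```python
def _test_model(model_id):
    # Stub standing in for the real helper defined in test_mllm.py.
    return model_id


def test_glm4_6v_flash():
    # TODO: GLM-4.6V-Flash coverage is tracked in a follow-up issue
    # rather than kept here as a commented-out call.
    return _test_model('ZhipuAI/GLM-4.1V-9B-Thinking')
```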

@Jintao-Huang
Collaborator Author

/gemini review


@gemini-code-assist gemini-code-assist bot left a comment


Code Review

This pull request adds support for GLM-4.1V and GLM-4.6V models, including GLM-4.6V-Flash, within the Megatron framework. Key changes include updating documentation to reflect this new support, refactoring template code for better code reuse by introducing a GLM4vPackingTemplateMixin, and registering the new glm4_1v model type for Megatron. The configuration handling for rotary embeddings has been updated, and the rotary embedding logic itself has been adjusted to correctly support interleaved embeddings for these models. Additionally, new tests have been added to validate the changes. The code modifications are well-organized and improve maintainability.
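For reviewers unfamiliar with the distinction: interleaved rotary embeddings rotate adjacent dimension pairs (0,1), (2,3), ..., whereas the non-interleaved ("half") layout pairs dimension i with i + d/2. A generic sketch of the interleaved variant (illustration only, not the Megatron or ms-swift implementation):

```python
import math


def rope_interleaved(x, pos, theta=10000.0):
    """Apply rotary position embedding with the interleaved layout:
    adjacent pairs (x[0], x[1]), (x[2], x[3]), ... are rotated by
    position-dependent angles."""
    d = len(x)
    out = [0.0] * d
    for i in range(0, d, 2):
        angle = pos / (theta ** (i / d))
        c, s = math.cos(angle), math.sin(angle)
        out[i] = x[i] * c - x[i + 1] * s
        out[i + 1] = x[i] * s + x[i + 1] * c
    return out
```

At position 0 every angle is zero, so the input passes through unchanged; for any position the rotation preserves the vector norm, which is a quick sanity check when validating either layout.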

@Jintao-Huang Jintao-Huang merged commit e1ac0cc into modelscope:main Dec 23, 2025
2 of 3 checks passed
@flag2fish

Can't train GLM-4.6V-Flash. Error log:
[rank1]: Traceback (most recent call last):
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/cli/_megatron/sft.py", line 5, in <module>
[rank1]:     megatron_sft_main()
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/megatron/train/sft.py", line 79, in megatron_sft_main
[rank1]:     return MegatronSft(args).main()
[rank1]:            ^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/megatron/train/sft.py", line 36, in __init__
[rank1]:     self.model, self.processor = args.get_model_processor(**kwargs)
[rank1]:                                  ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/llm/argument/base_args/base_args.py", line 317, in get_model_processor
[rank1]:     return get_model_tokenizer(**kwargs)
[rank1]:            ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/llm/model/register.py", line 752, in get_model_tokenizer
[rank1]:     model, processor = get_function(model_dir, model_info, model_kwargs, load_model, **kwargs)
[rank1]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/llm/model/model/glm.py", line 259, in get_model_tokenizer_glm4_1v
[rank1]:     model, processor = get_model_tokenizer_multimodal(*args, **kwargs)
[rank1]:                        ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/swift/llm/model/register.py", line 437, in get_model_tokenizer_multimodal
[rank1]:     kwargs['tokenizer'] = processor.tokenizer
[rank1]:                           ^^^^^^^^^^^^^^^^^^^
[rank1]:   File "/usr/local/lib/python3.11/site-packages/transformers/tokenization_utils_base.py", line 1127, in __getattr__
[rank1]:     raise AttributeError(f"{self.__class__.__name__} has no attribute {key}")
[rank1]: AttributeError: PreTrainedTokenizerFast has no attribute tokenizer. Did you mean: '_tokenizer'?
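The AttributeError occurs because get_model_tokenizer_multimodal unconditionally reads processor.tokenizer, but the object loaded here is itself a PreTrainedTokenizerFast, which has no such attribute. A minimal illustration of the failure mode with a defensive fallback (a sketch with stand-in classes, not the actual ms-swift fix):

```python
def extract_tokenizer(processor):
    # A multimodal processor exposes `.tokenizer`; if a bare tokenizer
    # was loaded instead, fall back to the object itself.
    return getattr(processor, "tokenizer", processor)


class MultimodalProcessor:   # stand-in for a real multimodal processor
    tokenizer = "inner tokenizer"


class BareTokenizer:         # stand-in for PreTrainedTokenizerFast
    pass


assert extract_tokenizer(MultimodalProcessor()) == "inner tokenizer"
bare = BareTokenizer()
assert extract_tokenizer(bare) is bare   # no AttributeError raised
```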
Training shell script:
#!/bin/bash

# MS-SWIFT training script

# Set environment variables
export CUDA_VISIBLE_DEVICES=0,5
export PYTHONPATH=$PYTHONPATH:.

# Install dependencies (if needed)
# pip install ms-swift[llm] -U
# pip install transformers accelerate datasets

# Launch distributed training with torchrun
torchrun \
    --nproc_per_node 2 \
    --master_port 28500 \
    $(which megatron) sft \
    --model ZhipuAI/GLM-4.6V-Flash \
    --dataset /workspace/output_bbox_new_2/swift_training_data_no_testpoints_20251226_183123.jsonl \
    --no_initialization false \
    --load_from_cache_file true \
    --tensor_model_parallel_size 2 \
    --sequence_parallel true \
    --packing true \
    --freeze_llm false \
    --freeze_vit true \
    --freeze_aligner true \
    --split_dataset_ratio 0.01 \
    --micro_batch_size 1 \
    --global_batch_size 4 \
    --recompute_granularity full \
    --recompute_method uniform \
    --recompute_num_layers 1 \
    --finetune true \
    --cross_entropy_loss_fusion true \
    --lr 1e-5 \
    --lr_warmup_fraction 0.05 \
    --min_lr 1e-6 \
    --max_epochs 1 \
    --save megatron_output/GLM-4.6V-Flash \
    --save_interval 200 \
    --vit_gradient_checkpointing false \
    --max_length 2048 \
    --num_workers 4 \
    --no_save_optim true \
    --no_save_rng true \
    --dataset_num_proc 8

echo "Training completed!"

meichangsu1 pushed a commit to tpx818/ms-swift that referenced this pull request Jan 22, 2026


Development

Successfully merging this pull request may close these issues.

[FEATURE] Support Megatron training for glm-4.6v

3 participants