add npu fsdp example #6697
Conversation
Summary of Changes

Hello @addsubmuldiv, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed. This pull request provides a comprehensive new example for fine-tuning the Qwen3-32B model using LoRA and Fully Sharded Data Parallel (FSDP) on Ascend NPU hardware. It aims to offer a clear, runnable demonstration of distributed training best practices, incorporating bfloat16 precision and Flash Attention 2 for efficient large language model training.
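As a rough orientation for reviewers, the pieces this summary names (bfloat16, Flash Attention 2, LoRA on Qwen3-32B) correspond to a setup along the following lines. This is a minimal sketch only: the LoRA hyperparameters (r, lora_alpha, target_modules) are illustrative assumptions, not values taken from this PR, which drives everything through swift's CLI instead.

import torch
from peft import LoraConfig, get_peft_model
from transformers import AutoModelForCausalLM

# Load the base model in bfloat16 with Flash Attention 2, as the example does.
model = AutoModelForCausalLM.from_pretrained(
    "Qwen/Qwen3-32B",
    torch_dtype=torch.bfloat16,
    attn_implementation="flash_attention_2",
)

# Attach LoRA adapters; these hyperparameters are assumed for illustration.
lora_config = LoraConfig(
    r=8,
    lora_alpha=32,
    target_modules=["q_proj", "k_proj", "v_proj", "o_proj"],
    task_type="CAUSAL_LM",
)
model = get_peft_model(model, lora_config)
model.print_trainable_parameters()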
Code Review
This pull request introduces a new example for training on NPU with FSDP. The changes consist of an accelerate configuration file and a training script. My review provides suggestions to improve file formatting and script conventions, such as removing trailing whitespace, adding missing newlines, and including a shebang in the shell script for better portability and consistency.
examples/ascend/train/qwen3_lora_fsdp/fsdp.json:

{
    "compute_environment": "LOCAL_MACHINE",
    "debug": false,
    "distributed_type": "FSDP",
    "downcast_bf16": "no",
    "mixed_precision": "bf16",
    "num_machines": 1,
    "num_processes": 8,
    "machine_rank": 0,
    "rdzv_backend": "static",
    "same_network": true,
    "use_cpu": false,
    "fsdp_config": {
        "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
        "fsdp_transformer_cls_names_to_wrap": "Qwen3DecoderLayer",
        "fsdp_sharding_strategy": "FULL_SHARD",
        "fsdp_backward_prefetch": "BACKWARD_PRE",
        "fsdp_forward_prefetch": true,
        "fsdp_limit_all_gathers": true,
        "fsdp_state_dict_type": "FULL_STATE_DICT",
        "fsdp_sync_module_states": true,
        "fsdp_cpu_ram_efficient_loading": true,
        "fsdp_use_orig_params": true,
        "fsdp_offload_params": false
    }
}
(No newline at end of file)
This JSON file has a couple of formatting issues that should be addressed for consistency:
- There is trailing whitespace on lines 6, 8, 19, 20, and 23.
- The file is missing a newline character at the end.

I've provided a suggestion to fix both issues.
{
    "compute_environment": "LOCAL_MACHINE",
    "debug": false,
    "distributed_type": "FSDP",
    "downcast_bf16": "no",
    "mixed_precision": "bf16",
    "num_machines": 1,
    "num_processes": 8,
    "machine_rank": 0,
    "rdzv_backend": "static",
    "same_network": true,
    "use_cpu": false,
    "fsdp_config": {
        "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
        "fsdp_transformer_cls_names_to_wrap": "Qwen3DecoderLayer",
        "fsdp_sharding_strategy": "FULL_SHARD",
        "fsdp_backward_prefetch": "BACKWARD_PRE",
        "fsdp_forward_prefetch": true,
        "fsdp_limit_all_gathers": true,
        "fsdp_state_dict_type": "FULL_STATE_DICT",
        "fsdp_sync_module_states": true,
        "fsdp_cpu_ram_efficient_loading": true,
        "fsdp_use_orig_params": true,
        "fsdp_offload_params": false
    }
}

examples/ascend/train/qwen3_lora_fsdp/train.sh:

# hardware: Atlas 900 A2 (910B1 8*64G)
export TASK_QUEUE_ENABLE=2
export CPU_AFFINITY_CONF=2
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
accelerate launch --config_file "./examples/ascend/train/qwen3_lora_fsdp/fsdp.json" \
swift/cli/sft.py \
--model 'Qwen/Qwen3-32B' \
--train_type lora \
--dataset 'swift/self-cognition#1000' \
--torch_dtype bfloat16 \
--per_device_train_batch_size 10 \
--gradient_accumulation_steps 2 \
--gradient_checkpointing true \
--gradient_checkpointing_kwargs '{"use_reentrant": false}' \
--max_length 1200 \
--num_train_epochs 2 \
--eval_strategy no \
--save_steps 500 \
--logging_steps 1 \
--dataloader_num_workers 8 \
--dataset_num_proc 8 \
--save_total_limit 2 \
--save_only_model true \
--output_dir output \
--attn_impl 'flash_attention_2' \
--packing true
(No newline at end of file)
To improve the script and adhere to standard practices, please consider the following:
- Add a shebang (e.g., #!/bin/bash) at the top of the script. This ensures it's executed with the correct interpreter.
- Add a newline character at the end of the file. This is a common convention for text files.
#!/bin/bash
# hardware: Atlas 900 A2 (910B1 8*64G)
export TASK_QUEUE_ENABLE=2
export CPU_AFFINITY_CONF=2
ASCEND_RT_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
accelerate launch --config_file "./examples/ascend/train/qwen3_lora_fsdp/fsdp.json" \
swift/cli/sft.py \
--model 'Qwen/Qwen3-32B' \
--train_type lora \
--dataset 'swift/self-cognition#1000' \
--torch_dtype bfloat16 \
--per_device_train_batch_size 10 \
--gradient_accumulation_steps 2 \
--gradient_checkpointing true \
--gradient_checkpointing_kwargs '{"use_reentrant": false}' \
--max_length 1200 \
--num_train_epochs 2 \
--eval_strategy no \
--save_steps 500 \
--logging_steps 1 \
--dataloader_num_workers 8 \
--dataset_num_proc 8 \
--save_total_limit 2 \
--save_only_model true \
--output_dir output \
--attn_impl 'flash_attention_2' \
--packing true
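As a side note, before launching train.sh it can be worth confirming that PyTorch actually sees the eight NPUs that num_processes in fsdp.json expects. A minimal sanity check, assuming the torch_npu adapter is installed (importing it registers the "npu" device type with PyTorch):

import torch
import torch_npu  # noqa: F401  -- registers the "npu" device with torch

# Fail fast if the Ascend runtime is not visible to this process.
assert torch.npu.is_available(), "No Ascend NPU detected"
print(f"NPU devices visible: {torch.npu.device_count()}")  # expect 8 here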
Pull Request Overview
This PR adds a new example demonstrating FSDP (Fully Sharded Data Parallel) training for the Qwen3-32B model with LoRA on Ascend NPU hardware (Atlas 900 A2 with 910B1 8*64G). It provides a complete setup for distributed training on Huawei's Ascend NPU platform.
Key Changes
- Adds FSDP configuration file for Qwen3 model training with 8-way data parallelism (see the sketch after this list)
- Provides training script with NPU-specific environment settings and optimized hyperparameters
- Enables flash attention and gradient checkpointing for memory efficiency
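For readers who want to see what the options in fsdp.json correspond to, here is a minimal plain-PyTorch sketch of the same FSDP setup. This is an illustration under assumptions, not how swift wires FSDP internally; it presumes torch.distributed is already initialized (e.g. by accelerate or torchrun) and that the installed transformers version exposes Qwen3DecoderLayer.

import functools

from torch.distributed.fsdp import (
    BackwardPrefetch,
    FullyShardedDataParallel as FSDP,
    ShardingStrategy,
)
from torch.distributed.fsdp.wrap import transformer_auto_wrap_policy
from transformers.models.qwen3.modeling_qwen3 import Qwen3DecoderLayer

def wrap_with_fsdp(model):
    # "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP" with
    # "fsdp_transformer_cls_names_to_wrap": "Qwen3DecoderLayer"
    auto_wrap_policy = functools.partial(
        transformer_auto_wrap_policy,
        transformer_layer_cls={Qwen3DecoderLayer},
    )
    return FSDP(
        model,
        auto_wrap_policy=auto_wrap_policy,
        sharding_strategy=ShardingStrategy.FULL_SHARD,    # "FULL_SHARD"
        backward_prefetch=BackwardPrefetch.BACKWARD_PRE,  # "BACKWARD_PRE"
        forward_prefetch=True,                            # "fsdp_forward_prefetch"
        limit_all_gathers=True,                           # "fsdp_limit_all_gathers"
        use_orig_params=True,  # keeps original params visible through FSDP's flat buffers
    )

The use_orig_params=true setting matters here because LoRA leaves most base weights frozen while training only the adapter parameters, a mix of requires_grad states that FSDP's flattened parameters otherwise reject.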
Reviewed Changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 5 comments.
| File | Description |
|---|---|
| examples/ascend/train/qwen3_lora_fsdp/fsdp.json | FSDP configuration with full sharding strategy, backward/forward prefetch optimizations, and Qwen3DecoderLayer wrapping policy |
| examples/ascend/train/qwen3_lora_fsdp/train.sh | Training script with NPU device visibility settings, accelerate launch configuration, and training hyperparameters for Qwen3-32B LoRA fine-tuning |
| "debug": false, | ||
| "distributed_type": "FSDP", | ||
| "downcast_bf16": "no", | ||
| "mixed_precision": "bf16", |
Copilot
AI
Nov 21, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove trailing whitespace at the end of this line. Trailing whitespace can cause issues with version control and code formatting tools.
| "mixed_precision": "bf16", | |
| "mixed_precision": "bf16", |
| "fsdp_sharding_strategy": "FULL_SHARD", | ||
| "fsdp_backward_prefetch": "BACKWARD_PRE", | ||
| "fsdp_forward_prefetch": true, | ||
| "fsdp_limit_all_gathers": true, |
Copilot
AI
Nov 21, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove trailing whitespace at the end of this line. Trailing whitespace can cause issues with version control and code formatting tools.
| "fsdp_limit_all_gathers": true, | |
| "fsdp_limit_all_gathers": true, |
| "fsdp_backward_prefetch": "BACKWARD_PRE", | ||
| "fsdp_forward_prefetch": true, | ||
| "fsdp_limit_all_gathers": true, | ||
| "fsdp_state_dict_type": "FULL_STATE_DICT", |
Copilot
AI
Nov 21, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove trailing whitespace at the end of this line. Trailing whitespace can cause issues with version control and code formatting tools.
| "fsdp_state_dict_type": "FULL_STATE_DICT", | |
| "fsdp_state_dict_type": "FULL_STATE_DICT", |
| "fsdp_state_dict_type": "FULL_STATE_DICT", | ||
| "fsdp_sync_module_states": true, | ||
| "fsdp_cpu_ram_efficient_loading": true, | ||
| "fsdp_use_orig_params": true, |
Copilot
AI
Nov 21, 2025
There was a problem hiding this comment.
Choose a reason for hiding this comment
The reason will be displayed to describe this comment to others. Learn more.
Remove trailing whitespace at the end of this line. Trailing whitespace can cause issues with version control and code formatting tools.
| "fsdp_use_orig_params": true, | |
| "fsdp_use_orig_params": true, |
Copilot AI

On train.sh, suggested simplifying the hardware comment:

- # hardware: Atlas 900 A2 (910B1 8*64G)
+ # hardware: Atlas 900 A2
The branch was force-pushed from f8d5c9a to ad9d4dc, then from ad9d4dc to d1ea330.
PR type
PR information
Write the detailed information belonging to this PR.
Experiment results
Paste your experiment results here (if needed).