update npu fsdp example #8308
Conversation
Summary of Changes
Hello, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed! This pull request updates the NPU FSDP example configuration for Qwen3 LoRA training. The primary goal is to enhance stability and prevent synchronization issues during model loading on NPU devices by adjusting a key FSDP parameter and providing clear guidance within the training script.
Code Review
This pull request updates an FSDP example for NPU training. The change disables fsdp_cpu_ram_efficient_loading in the JSON configuration and adds a comment to the corresponding shell script explaining that this is to avoid timeout issues on NPUs. The changes are localized to the specific example and appear to be a targeted fix for the described environment.
Pull request overview
Updates the Ascend/NPU Qwen3 LoRA+FSDP training example to avoid a known first-synchronization hang by disabling RAM-efficient FSDP loading and documenting the rationale in the training script.
Changes:
- Add guidance in `train.sh` recommending disabling FSDP CPU RAM efficient loading for Transformers v5+ on NPU to avoid first-sync timeouts.
- Flip `fsdp_cpu_ram_efficient_loading` from `true` to `false` in the example `fsdp.json` config.
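After the flip, the relevant part of the example config might look like the following minimal sketch. Only the `fsdp_cpu_ram_efficient_loading` key is confirmed by this PR; the surrounding keys are illustrative Accelerate FSDP options and may differ from the actual example file:

```json
{
  "fsdp_sharding_strategy": "FULL_SHARD",
  "fsdp_auto_wrap_policy": "TRANSFORMER_BASED_WRAP",
  "fsdp_sync_module_states": true,
  "fsdp_cpu_ram_efficient_loading": false
}
```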
Reviewed changes
Copilot reviewed 2 out of 2 changed files in this pull request and generated 2 comments.
| File | Description |
|---|---|
| examples/ascend/train/qwen3/qwen3_lora_fsdp/train.sh | Adds explanatory comment about disabling CPU RAM efficient loading to avoid NPU desync/timeouts. |
| examples/ascend/train/qwen3/qwen3_lora_fsdp/fsdp.json | Disables fsdp_cpu_ram_efficient_loading in the example FSDP config to match the recommended workaround. |
@@ -1,4 +1,7 @@
# hardware: Atlas 900 A2
# For NPU, in Transformers versions 5.0 and above, it is recommended to disable
# cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
The new header comment refers to disabling cpu_ram_efficient_loading, but the actual Accelerate/FSDP config key in fsdp.json is fsdp_cpu_ram_efficient_loading. Please align the comment with the exact key name to avoid confusion when users try to apply this workaround.
Suggested change:
- # cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
+ # fsdp_cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
@@ -1,4 +1,7 @@
# hardware: Atlas 900 A2
# For NPU, in Transformers versions 5.0 and above, it is recommended to disable
# cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
Trailing whitespace at the end of this comment line (after "first"). Please remove it to avoid churn in future diffs and keep formatting clean.
Suggested change (removes the trailing whitespace):
- # cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
+ # cpu_ram_efficient_loading in fsdp.json to avoid timeout issues at the first
thanks
* update npu fsdp example
* fix
PR type
PR information
Write the detailed information belonging to this PR.
Experiment results
Paste your experiment results here (if needed).