Conversation
Summary of Changes
Hello @kevssim, I'm Gemini Code Assist! I'm currently reviewing this pull request and will post my feedback shortly. In the meantime, here's a summary to help you and other reviewers quickly get up to speed!
Code Review
This pull request introduces a fix for running FSDP with MoE models, specifically allowing expert parallelism to be disabled. This is achieved by adding an enable_ep flag to the NativeFSDPStrategy. The changes also include bug fixes in the data_world_size calculation within DeviceMesh. A new example script for fine-tuning a Qwen3 MoE model with FSDP is added, likely to test this new capability.
My review has identified a couple of issues in the new example script:
- A critical issue with the gradient accumulation logic in the training loop, which will lead to incorrect model updates.
- A minor point of confusion in how the data processor is selected, which could be improved for clarity.
The core changes to enable FSDP without EP and the bug fixes in platform.py look correct.
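For context, here is a minimal sketch of how the new flag might be wired up when constructing the strategy. The import path and constructor signature are assumptions made for illustration, not the confirmed twinkle API; only the NativeFSDPStrategy name and the enable_ep flag come from this pull request.

# Hypothetical sketch: the import path and constructor signature are assumed;
# consult the actual twinkle API for the real usage.
from twinkle.strategies import NativeFSDPStrategy  # hypothetical import path

# Disable expert parallelism so the MoE model is sharded with plain FSDP only.
strategy = NativeFSDPStrategy(enable_ep=False)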
I am having trouble creating individual review comments, so my feedback is included inline below.
cookbook/sft/fsdp_qwen3_moe.py (83-98)
The gradient accumulation logic in this training loop is incorrect. Due to an issue in the underlying twinkle library's do_grad_sync method, the first optimizer update will use gradients from N+1 steps instead of N (where N is GRAD_ACCUM_STEPS). This will lead to incorrect training behavior. While the root cause is in the library, this script is directly affected and will not function as expected. The do_grad_sync method should be fixed to ensure updates happen after exactly N steps.
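To make the intended behavior concrete, here is a minimal, hedged sketch of the expected accumulation cadence: the optimizer steps after exactly GRAD_ACCUM_STEPS micro-batches, including the first update. The toy model, data, and optimizer below are illustrative stand-ins, not the actual cookbook script or the twinkle do_grad_sync implementation.

import torch

# Toy illustration of gradient accumulation: the optimizer updates after
# exactly GRAD_ACCUM_STEPS micro-batches (steps 4, 8, 12 here), never N+1.
GRAD_ACCUM_STEPS = 4
model = torch.nn.Linear(8, 1)
optimizer = torch.optim.SGD(model.parameters(), lr=1e-2)

for step in range(1, 13):  # 12 micro-batches -> 3 optimizer updates
    x, y = torch.randn(2, 8), torch.randn(2, 1)
    loss = torch.nn.functional.mse_loss(model(x), y) / GRAD_ACCUM_STEPS
    loss.backward()
    if step % GRAD_ACCUM_STEPS == 0:
        optimizer.step()
        optimizer.zero_grad()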
cookbook/sft/fsdp_qwen3_moe.py (60-62)
For readability and to avoid confusion, check and modify the processor variable directly rather than re-checking the original PROCESSOR_ID:
processor = PROCESSOR_ID
if processor.lower() == "alpaca":
    processor = "AlpacaProcessor"