How does DeepSpeed implement multi-machine model parallelism? #98
Hi there! DeepSpeed does not implement model parallelism itself, but it does support models that use it. It is up to the user to implement model parallelism (e.g., a user might adopt the model parallelism of a framework such as Megatron-LM). The difficulty of model parallelism was one major motivation for ZeRO. If you enable ZeRO, you can avoid the need for model parallelism in many cases. For an example of combining the two, the Megatron-LM tutorial uses Megatron's model parallelism together with ZeRO.
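To make "it's up to the user to implement model parallelism" concrete, here is a minimal, hypothetical sketch of user-written model parallelism in plain PyTorch: each stage of the model is placed on its own device and the user moves activations between stages. The `TwoStageModel` class and the layer sizes are invented for illustration; both stages run on CPU here for portability, whereas in practice they would live on different GPUs (e.g. `"cuda:0"` and `"cuda:1"`) or different machines.

```python
import torch
import torch.nn as nn

class TwoStageModel(nn.Module):
    """Toy user-implemented model parallelism: one stage per device."""

    def __init__(self, dev0="cpu", dev1="cpu"):
        super().__init__()
        self.dev0, self.dev1 = dev0, dev1
        # In a real setup these would be placed on distinct GPUs.
        self.stage0 = nn.Linear(16, 32).to(dev0)
        self.stage1 = nn.Linear(32, 4).to(dev1)

    def forward(self, x):
        # The user is responsible for moving activations between devices.
        h = torch.relu(self.stage0(x.to(self.dev0)))
        return self.stage1(h.to(self.dev1))

model = TwoStageModel()
out = model(torch.randn(8, 16))
print(out.shape)  # torch.Size([8, 4])
```

This is exactly the kind of per-model plumbing a library cannot provide generically, which is the point made above.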
@ShadenSmith Got it. Is it because model parallelism is not efficient or scalable that you studied memory optimization, i.e., ZeRO?
There are several trade-offs to consider, so for a full answer let me first recommend this excellent survey on parallelism in deep learning: https://arxiv.org/abs/1802.09941

From a library perspective, it is difficult to provide general model parallelism because it is specific to the user's model. Model parallelism certainly has its uses, such as being more memory-scalable than data parallelism (i.e., batch splitting). ZeRO is a set of complementary optimizations that improve scalability without users having to implement model parallelism. The key idea is that users can still provide a model that was not designed for parallelism, and DeepSpeed uses data parallelism and ZeRO to scale to large models and large degrees of parallelism. DeepSpeed has scaled to models with 6 billion parameters using only data parallelism and ZeRO on V100 GPUs. Adding model parallelism via Megatron-LM got DeepSpeed to 100B parameters.

I'd like to note that we are of course not anti-model parallelism. DeepSpeed is meant to work with model parallelism if the user has a model-parallel program. The Megatron tutorial touches on this in more depth.
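As a concrete sketch of the "no user-side model parallelism" path described above: enabling ZeRO is a configuration change rather than a model change. The following is an illustrative DeepSpeed configuration dict (the batch sizes and ZeRO stage are example values, not recommendations), which would be passed to `deepspeed.initialize` alongside an ordinary, non-model-parallel PyTorch model.

```python
# Illustrative DeepSpeed config enabling ZeRO; all values are examples.
ds_config = {
    "train_batch_size": 32,
    "train_micro_batch_size_per_gpu": 4,
    "fp16": {"enabled": True},
    "zero_optimization": {
        # Stage 1 partitions optimizer states across data-parallel ranks;
        # higher stages also partition gradients and parameters.
        "stage": 1,
    },
}

# In a training script (sketch, not run here):
# model_engine, optimizer, _, _ = deepspeed.initialize(
#     model=model, model_parameters=model.parameters(), config=ds_config
# )
```

The model itself stays unchanged; DeepSpeed handles the partitioning behind the data-parallel training loop.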
The link to the Megatron tutorial is a 404; here is a stable link:
Hi
How does DeepSpeed implement multi-machine model parallelism, given that PyTorch only supports single-machine model parallelism?
Are there any other docs about DeepSpeed's model parallelism?