
How does DeepSpeed implement multi-machine model parallelism? #98

Closed
hhwode opened this issue Feb 21, 2020 · 4 comments

hhwode commented Feb 21, 2020

Hi,
How does DeepSpeed implement multi-machine model parallelism, given that PyTorch only supports single-machine model parallelism?
Is there any other documentation about DeepSpeed's model parallelism?

ShadenSmith (Contributor) commented

Hi there! DeepSpeed does not implement model parallelism, but it does support models that use it. It's up to the user to implement model parallelism (e.g., a user might use some dist.XXX() communication routines to coordinate forward/backward passes). DeepSpeed just needs an mpu object at initialization so that it can query things like process ranks, groups, and world sizes during training.
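For illustration, here is a minimal sketch of such an mpu object, assuming torch.distributed is already initialized. The method names mirror the Megatron-style interface DeepSpeed queries; the consecutive-ranks group layout is just one illustrative choice, not a prescribed one:

```python
# Minimal sketch of an mpu object to pass as deepspeed.initialize(..., mpu=mpu).
# Assumes torch.distributed is already initialized and world_size is a
# multiple of model_parallel_size. The group layout is illustrative.
import torch.distributed as dist

class SimpleMPU:
    def __init__(self, model_parallel_size):
        world_size = dist.get_world_size()
        rank = dist.get_rank()
        self._mp_size = model_parallel_size
        # Consecutive ranks form a model-parallel group, e.g. with
        # world_size=4 and model_parallel_size=2: MP groups [0,1] and [2,3].
        for start in range(0, world_size, model_parallel_size):
            ranks = list(range(start, start + model_parallel_size))
            group = dist.new_group(ranks)  # every rank must call this
            if rank in ranks:
                self._mp_group = group
        # Strided ranks form a data-parallel group: DP groups [0,2] and [1,3].
        for offset in range(model_parallel_size):
            ranks = list(range(offset, world_size, model_parallel_size))
            group = dist.new_group(ranks)
            if rank in ranks:
                self._dp_group = group

    # The rank/size/group queries DeepSpeed makes during training:
    def get_model_parallel_rank(self):
        return dist.get_rank() % self._mp_size

    def get_model_parallel_world_size(self):
        return self._mp_size

    def get_model_parallel_group(self):
        return self._mp_group

    def get_data_parallel_rank(self):
        return dist.get_rank() // self._mp_size

    def get_data_parallel_world_size(self):
        return dist.get_world_size() // self._mp_size

    def get_data_parallel_group(self):
        return self._dp_group
```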

The difficulty of model parallelism was one major motivation for ZeRO. If you enable ZeRO, you can avoid the need for model parallelism in many cases. For example, the Megatron-LM tutorial combines Megatron's model parallelism with ZeRO.

hhwode (Author) commented Feb 22, 2020

@ShadenSmith Got it. Is it because model parallelism is not efficient or scalable that you studied memory optimization, i.e., ZeRO?
I'm new to this field, and there is very little material about model parallelism. May I ask why distributed model parallelism is hard, and in what respect: inter-machine communication, network splitting, or at the algorithm level?
Thank you.

ShadenSmith (Contributor) commented

There are several trade-offs to consider, so for a full answer let me first recommend this excellent survey on parallelism in deep learning: https://arxiv.org/abs/1802.09941

From a library perspective, it's difficult to provide general model parallelism because it is specific to the user's model. Model parallelism certainly has its uses, such as being more memory-scalable than data parallelism (i.e., batch splitting).

ZeRO is a set of complementary optimizations that improve scalability without requiring users to implement model parallelism. The key idea is that users can still provide a model that was not designed for parallelism, and DeepSpeed combines data parallelism with ZeRO to scale to large models and high degrees of parallelism. DeepSpeed has scaled to models with 6 billion parameters using only data parallelism and ZeRO on V100 GPUs. Adding model parallelism via Megatron-LM got DeepSpeed to 100B parameters.
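As a concrete sketch of that workflow: enabling ZeRO needs only a config entry, with no changes to the model itself. The batch size, precision, and ZeRO stage below are illustrative values, and `model` is assumed to be an ordinary PyTorch module:

```python
# Minimal sketch: enabling ZeRO on an unmodified PyTorch model.
# Config keys follow DeepSpeed's documented JSON schema; the values
# here are illustrative, not recommendations.
import deepspeed

ds_config = {
    "train_batch_size": 16,
    "fp16": {"enabled": True},
    "zero_optimization": {
        "stage": 1,  # partition optimizer states across data-parallel ranks
    },
}

# `model` is assumed to be an ordinary torch.nn.Module; no model-parallel
# code is required when relying on data parallelism plus ZeRO alone.
model_engine, optimizer, _, _ = deepspeed.initialize(
    model=model,
    model_parameters=model.parameters(),
    config=ds_config,
)
```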

I'd like to note that we are of course not anti-model parallelism. DeepSpeed is meant to work with model parallelism if the user has a model-parallel program. The Megatron tutorial touches on this in more depth.

stas00 (Collaborator) commented Jan 5, 2021

> The Megatron tutorial touches on this in more depth.

The link to the Megatron tutorial is now a 404; here is a stable link:
https://github.com/microsoft/DeepSpeed/blob/46d2e2872b64ebccb8bf4eb5c8a3a55f9adaaa6c/docs/_tutorials/megatron.md
