How does the model parallelize across GPUs? #10
Comments
You can use PyTorch Lightning instead. It automatically parallelizes model training across GPUs and also supports TPUs with a single argument.
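For reference, a minimal sketch of that Lightning setup. Only the `Trainer` arguments come from Lightning's API; `ParNetLitModule` and the dataloader are hypothetical stand-ins for a `LightningModule` wrapping ParNet:

```python
import pytorch_lightning as pl

trainer = pl.Trainer(
    accelerator="gpu",   # switch to "tpu" for TPU training
    devices=4,           # number of GPUs (or TPU cores) to use
    strategy="ddp",      # distributed data parallel across the devices
)
# Hypothetical module and dataloader standing in for the real ParNet code:
# trainer.fit(ParNetLitModule(), train_dataloader)
```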
We use the NCCL backend with PyTorch to parallelize the streams during inference (testing). For training, we use the usual distributed setup.
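A minimal sketch of the usual PyTorch distributed training setup with the NCCL backend, launched with `torchrun --nproc_per_node=<num_gpus> train.py`. The model here is a stand-in; the thread does not show the authors' training code:

```python
import os
import torch
import torch.distributed as dist
import torch.nn as nn
from torch.nn.parallel import DistributedDataParallel as DDP

def main():
    dist.init_process_group(backend="nccl")     # NCCL for GPU-to-GPU communication
    local_rank = int(os.environ["LOCAL_RANK"])  # set per process by torchrun
    torch.cuda.set_device(local_rank)

    # Stand-in for the actual ParNet model.
    model = nn.Sequential(nn.Conv2d(3, 16, 3), nn.ReLU()).cuda(local_rank)
    model = DDP(model, device_ids=[local_rank])
    # ... training loop; each rank should see a different data shard,
    # typically via torch.utils.data.DistributedSampler.

if __name__ == "__main__":
    main()
```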
So we need at least three GPUs to run inference on the three streams in ParNet?
Yes, if you want to do multi-GPU inference. Otherwise, you can also do single-GPU inference, but it will be slower.
For an edge device, using multiple GPUs for inference is expensive. What is your opinion?
Could you share more details on parallelizing across GPUs, e.g., how to implement it in PyTorch?
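The thread doesn't show the authors' implementation, but one common way to do this in plain PyTorch is to pin each independent branch to its own GPU and gather the outputs on one device for fusion. A minimal sketch, assuming three visible GPUs and hypothetical stand-in modules for the ParNet streams and fusion block:

```python
import torch
import torch.nn as nn

class ThreeStreamNet(nn.Module):
    def __init__(self):
        super().__init__()
        # Stand-in branches; each is pinned to its own GPU.
        self.streams = nn.ModuleList(
            [nn.Conv2d(3, 16, 3, padding=1).to(f"cuda:{i}") for i in range(3)]
        )
        # Stand-in fusion block; outputs are gathered and fused on GPU 0.
        self.fusion = nn.Conv2d(48, 16, 1).to("cuda:0")

    def forward(self, x):
        # CUDA kernel launches are asynchronous, and each device has its
        # own default stream, so the three branches can run concurrently.
        outs = [s(x.to(f"cuda:{i}")) for i, s in enumerate(self.streams)]
        outs = [o.to("cuda:0") for o in outs]  # gather on GPU 0
        return self.fusion(torch.cat(outs, dim=1))

model = ThreeStreamNet().eval()
with torch.no_grad():
    y = model(torch.randn(1, 3, 224, 224))
```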