Simple PyTorch Distributed Training (Multiple Nodes)

Preparations

Do the following steps on all nodes:

Clone Repo

git clone https://github.com/lambdal/pytorch_ddp
cd pytorch_ddp

Download the dataset on each node before starting distributed training

mkdir -p data
cd data
wget -c --quiet https://www.cs.toronto.edu/~kriz/cifar-10-python.tar.gz
tar -xvzf cifar-10-python.tar.gz
rm cifar-10-python.tar.gz
cd ..
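For reference, a minimal sketch of how the extracted data is typically consumed inside the training script (an assumption about the general pattern; the exact dataset code lives in resnet_ddp.py and may differ):

# Sketch (assumption): load the pre-downloaded CIFAR-10 from ./data without re-downloading
import torchvision
import torchvision.transforms as transforms

transform = transforms.Compose([transforms.ToTensor()])

# download=False: the archive was already fetched and extracted into ./data above
train_set = torchvision.datasets.CIFAR10(root="./data", train=True,
                                         download=False, transform=transform)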

Create a directory for saving models before starting distributed training

mkdir -p saved_models
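Inside the training script, checkpoints are typically written into saved_models by a single process only. A minimal sketch of that pattern (an assumption about how resnet_ddp.py handles saving; the function name and path here are illustrative):

# Sketch (assumption): only rank 0 writes the checkpoint so processes do not race on the same file
import os
import torch
import torch.distributed as dist

def save_checkpoint(ddp_model, path="saved_models/resnet_ddp.pth"):
    if dist.get_rank() == 0:
        os.makedirs(os.path.dirname(path), exist_ok=True)
        # ddp_model.module is the underlying model without the DDP wrapper
        torch.save(ddp_model.module.state_dict(), path)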

Run

Do the following steps to run distributed training across 2 nodes (3 GPUs per node). Replace xxx.xxx.xxx.xxx with the IP address of Node One (the master node) in both commands, and use the same port on both nodes. The NCCL_* environment variables are optional debugging/tuning settings: NCCL_DEBUG=INFO enables verbose NCCL logging, NCCL_ALGO=Ring forces the ring collective algorithm, and NCCL_NET_GDR_LEVEL controls when GPUDirect RDMA is used.

Node One

NCCL_DEBUG=INFO NCCL_ALGO=Ring NCCL_NET_GDR_LEVEL=4 python3 -m torch.distributed.launch \
--nproc_per_node=3 --nnodes=2 --node_rank=0 \
--master_addr="xxx.xxx.xxx.xxx" --master_port=1234 \
resnet_ddp.py \
--backend=nccl

Node Two

NCCL_DEBUG=INFO NCCL_ALGO=Ring NCCL_NET_GDR_LEVEL=4 python3 -m torch.distributed.launch \
--nproc_per_node=3 --nnodes=2 --node_rank=1 \
--master_addr="xxx.xxx.xxx.xxx" --master_port=1234 \
resnet_ddp.py \
--backend=nccl

Note: backend options are nccl, gloo, and mpi. "By default for Linux, the Gloo and NCCL backends are built and included in PyTorch distributed (NCCL only when building with CUDA). MPI is an optional backend that can only be included if you build PyTorch from source." More details can be found in the PyTorch distributed documentation.
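For context, torch.distributed.launch starts one worker process per GPU on each node (--nproc_per_node) and gives every process a --local_rank argument plus the MASTER_ADDR, MASTER_PORT, RANK, and WORLD_SIZE environment variables. Below is a minimal sketch of the setup a launched script performs with those values (an illustration of the standard DDP pattern, not the exact code in resnet_ddp.py):

# Illustrative DDP setup sketch; resnet_ddp.py's actual implementation may differ
import argparse
import torch
import torch.distributed as dist
import torchvision
from torch.nn.parallel import DistributedDataParallel as DDP

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)    # injected by torch.distributed.launch
parser.add_argument("--backend", type=str, default="nccl")  # nccl, gloo, or mpi
args = parser.parse_args()

# Join the global process group; the rank, world size, and master address
# come from the environment variables set by the launcher
dist.init_process_group(backend=args.backend)

# Pin this process to its own GPU on the local node
torch.cuda.set_device(args.local_rank)
device = torch.device("cuda", args.local_rank)

# Wrap the model so gradients are all-reduced across all 2 x 3 = 6 processes
model = torchvision.models.resnet18(num_classes=10).to(device)
ddp_model = DDP(model, device_ids=[args.local_rank], output_device=args.local_rank)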

Credit

The resnet_ddp.py script was written by Lei Mao.
