This repository facilitates multi-node, multi-GPU job submission in Slurm using torchrun.
It provides scripts that are helpful when running multi-node training with PyTorch's DistributedDataParallel (DDP) module under Slurm. It consists of master.sh and original.sh, where master.sh submits original.sh for each node. The example uses a total of 32 GPUs across 4 nodes, each with 8 GPUs. The scripts are color-coded for sanity checks and also include a memo feature.
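For orientation, here is a minimal sketch of that pattern (the repository's actual master.sh and original.sh may differ; the variable names, the node hostname, the port, and train.py are all illustrative placeholders):

```bash
#!/bin/bash
# master.sh -- hypothetical sketch of the pattern described above.
NNODES=4              # total number of nodes
GPUS_PER_NODE=8       # GPUs per node (4 x 8 = 32 GPUs in total)
MASTER_ADDR=node001   # assumed: hostname of the rendezvous (rank-0) node
MASTER_PORT=29500     # assumed: a free TCP port on that node

# Submit original.sh once per node, passing each node its rank.
for NODE_RANK in $(seq 0 $((NNODES - 1))); do
  sbatch --nodes=1 --gres=gpu:"${GPUS_PER_NODE}" \
    --export=ALL,NODE_RANK="${NODE_RANK}",NNODES="${NNODES}",GPUS_PER_NODE="${GPUS_PER_NODE}",MASTER_ADDR="${MASTER_ADDR}",MASTER_PORT="${MASTER_PORT}" \
    original.sh
done
```

```bash
#!/bin/bash
# original.sh -- hypothetical sketch: each node launches torchrun with its rank.
# train.py is a placeholder for your actual training script.
torchrun --nnodes="${NNODES}" \
  --nproc_per_node="${GPUS_PER_NODE}" \
  --node_rank="${NODE_RANK}" \
  --rdzv_backend=c10d \
  --rdzv_endpoint="${MASTER_ADDR}:${MASTER_PORT}" \
  train.py
```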
- Slurm version: 21.08.8-2
- PyTorch version: 1.8 or above recommended
- Ubuntu version: 20.04.5 LTS
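If you are unsure what is installed, the versions can be checked with standard commands:

```bash
$ sinfo --version                                      # Slurm
$ python -c "import torch; print(torch.__version__)"   # PyTorch
$ lsb_release -d                                       # Ubuntu
```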
Ensure proper socket naming by following the instructions below.
If you intend to use distributed launch, set the network interface appropriate for your environment. You can list the available interfaces with either of the following commands:
$ ifconfig
or
$ /sbin/ifconfig -a
The chosen Ethernet interface should have both inet6 and broadcast properties. Its name will be used later as the NCCL socket name, so note it carefully. For example:
en***: flags=****<UP,BROADCAST,RUNNING,MULTICAST> mtu 1500
inet <inet ip> netmask 255.255.255.0 broadcast <broadcast add>
inet6 f****::****:****:****:**** prefixlen 64 scopeid 0x20<link>
ether **:36:**:**:**:** txqueuelen 1000 (Ethernet)
RX packets 16632361209 bytes 24172178960947 (24.1 TB)
RX errors 0 dropped 43641438 overruns 0 frame 0
TX packets 16585505941 bytes 24290665224417 (24.2 TB)
TX errors 0 dropped 0 overruns 0 carrier 0 collisions 0
Then export the interface name so NCCL uses that socket:
export NCCL_SOCKET_IFNAME=en***
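To verify that NCCL actually picks up this interface, you can additionally enable NCCL's standard debug logging; at startup NCCL logs which interface it is using (e.g. a "NET/Socket : Using ..." line):

```bash
export NCCL_SOCKET_IFNAME=en***
export NCCL_DEBUG=INFO   # NCCL prints the network interface it selects at startup
```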
Feel free to enhance the scripts' design for improved clarity and aesthetics.