Description
🐛 Describe the bug
Not sure if I'm doing something dumb, but I couldn't find docs on this and even LLMs were puzzled: after creating a second NCCL process group with device_id, calling dist.barrier() on the default group segfaults (SIGSEGV), while running a barrier on the new group first avoids the crash.
Repro:
# CRASH=1 torchrun --nproc_per_node=8 try_async_pg.py
import os

import torch
import torch.distributed as dist

rank = int(os.environ["RANK"])
world_size = int(os.environ["WORLD_SIZE"])
device = torch.device("cuda", int(rank))
torch.cuda.set_device(device)

# Both the default group and the extra group are created with an explicit device_id.
dist.init_process_group(backend="nccl", device_id=device)
pg2 = torch.distributed.new_group(backend="nccl", device_id=device)

crash = bool(int(os.environ["CRASH"]))
if crash:
    # Barrier on the default group right after new_group: SIGSEGV.
    dist.barrier()
else:
    # Barrier on the new group first, then the default group: no crash.
    dist.barrier(group=pg2)
    dist.barrier()

dist.destroy_process_group()
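For comparison, here is the non-crashing ordering pulled out on its own (the CRASH=0 branch above). The assumption, which I haven't verified beyond this repro, is that the only relevant difference is that pg2 sees a collective before the default group is used again after new_group(..., device_id=...). The error below is from the CRASH=1 run.

# Sketch of the ordering that does not crash (CRASH=0 path above).
# Assumption: running a collective on the new group before the next
# default-group collective is what avoids the SIGSEGV.
import os

import torch
import torch.distributed as dist

rank = int(os.environ["RANK"])
device = torch.device("cuda", rank)
torch.cuda.set_device(device)

dist.init_process_group(backend="nccl", device_id=device)
pg2 = dist.new_group(backend="nccl", device_id=device)

dist.barrier(group=pg2)  # collective on the new group first ...
dist.barrier()           # ... then the default group completes fine
dist.destroy_process_group()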
Error:
(/home/xmfan/core/a/pytorch-env) [16:41:24] ~/core/a/modded-nanogpt (ca) > CRASH=1 torchrun --nproc_per_node=8 try_async_pg.py
WARNING:torch.distributed.run:
*****************************************
Setting OMP_NUM_THREADS environment variable for each process to be 1 in default, to avoid your system being overloaded, please further tune the variable for optimal performance in your application as needed.
*****************************************
NCCL version 2.25.1+cuda12.4
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620074 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620075 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620076 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620077 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620078 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620079 closing signal SIGTERM
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 2620081 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -11) local_rank: 6 (pid: 2620080) of binary: /home/xmfan/core/a/pytorch-env/bin/python
Traceback (most recent call last):
  File "/home/xmfan/core/a/pytorch-env/bin/torchrun", line 33, in <module>
    sys.exit(load_entry_point('torch', 'console_scripts', 'torchrun')())
             ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xmfan/core/a/pytorch/torch/distributed/elastic/multiprocessing/errors/__init__.py", line 355, in wrapper
    return f(*args, **kwargs)
           ^^^^^^^^^^^^^^^^^^
  File "/home/xmfan/core/a/pytorch/torch/distributed/run.py", line 892, in main
    run(args)
  File "/home/xmfan/core/a/pytorch/torch/distributed/run.py", line 883, in run
    elastic_launch(
  File "/home/xmfan/core/a/pytorch/torch/distributed/launcher/api.py", line 139, in __call__
    return launch_agent(self._config, self._entrypoint, list(args))
           ^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^^
  File "/home/xmfan/core/a/pytorch/torch/distributed/launcher/api.py", line 270, in launch_agent
    raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:
=========================================================
try_async_pg.py FAILED
---------------------------------------------------------
Failures:
<NO_OTHER_FAILURES>
---------------------------------------------------------
Root Cause (first observed failure):
[0]:
  time      : 2025-04-02_16:41:45
  host      : devvm062.dkl0.facebook.com
  rank      : 6 (local_rank: 6)
  exitcode  : -11 (pid: 2620080)
  error_file: <N/A>
  traceback : Signal 11 (SIGSEGV) received by PID 2620080
=========================================================
Versions
Collecting environment information...
PyTorch version: 2.8.0a0+git78300c8
Is debug build: False
CUDA used to build PyTorch: 12.4
ROCM used to build PyTorch: N/A
OS: CentOS Stream 9 (x86_64)
GCC version: (GCC) 11.5.0 20240719 (Red Hat 11.5.0-5)
Clang version: 19.1.7 (CentOS 19.1.7-1.el9)
CMake version: version 3.31.4
Libc version: glibc-2.34
Python version: 3.12.2 | packaged by conda-forge | (main, Feb 16 2024, 20:50:58) [GCC 12.3.0] (64-bit runtime)
Python platform: Linux-6.4.3-0_fbk14_hardened_2601_gcd42476b84e9-x86_64-with-glibc2.34
Is CUDA available: True
CUDA runtime version: 12.4.131
CUDA_MODULE_LOADING set to: LAZY
GPU models and configuration:
GPU 0: NVIDIA H100
GPU 1: NVIDIA H100
GPU 2: NVIDIA H100
GPU 3: NVIDIA H100
GPU 4: NVIDIA H100
GPU 5: NVIDIA H100
GPU 6: NVIDIA H100
GPU 7: NVIDIA H100
Nvidia driver version: 550.90.07
cuDNN version: Could not collect
HIP runtime version: N/A
MIOpen runtime version: N/A
Is XNNPACK available: True
CPU:
Architecture: x86_64
CPU op-mode(s): 32-bit, 64-bit
Address sizes: 52 bits physical, 57 bits virtual
Byte Order: Little Endian
CPU(s): 368
On-line CPU(s) list: 0-367
Vendor ID: AuthenticAMD
Model name: AMD EPYC 9654 96-Core Processor
CPU family: 25
Model: 17
Thread(s) per core: 1
Core(s) per socket: 368
Socket(s): 1
Stepping: 1
BogoMIPS: 4792.80
Flags: fpu vme de pse tsc msr pae mce cx8 apic sep mtrr pge mca cmov pat pse36 clflush mmx fxsr sse sse2 ht syscall nx mmxext fxsr_opt pdpe1gb rdtscp lm rep_good nopl cpuid extd_apicid tsc_known_freq pni pclmulqdq ssse3 fma cx16 pcid sse4_1 sse4_2 x2apic movbe popcnt tsc_deadline_timer aes xsave avx f16c rdrand hypervisor lahf_lm cmp_legacy svm cr8_legacy abm sse4a misalignsse 3dnowprefetch osvw topoext perfctr_core invpcid_single ssbd ibrs ibpb stibp ibrs_enhanced vmmcall fsgsbase tsc_adjust bmi1 avx2 smep bmi2 erms invpcid avx512f avx512dq rdseed adx smap avx512ifma clflushopt clwb avx512cd sha_ni avx512bw avx512vl xsaveopt xsavec xgetbv1 xsaves avx512_bf16 clzero xsaveerptr wbnoinvd arat npt lbrv nrip_save tsc_scale vmcb_clean pausefilter pfthreshold v_vmsave_vmload vgif vnmi avx512vbmi umip pku ospke avx512_vbmi2 gfni vaes vpclmulqdq avx512_vnni avx512_bitalg avx512_vpopcntdq rdpid fsrm flush_l1d arch_capabilities
Virtualization: AMD-V
Hypervisor vendor: KVM
Virtualization type: full
L1d cache: 23 MiB (368 instances)
L1i cache: 23 MiB (368 instances)
L2 cache: 184 MiB (368 instances)
L3 cache: 16 MiB (1 instance)
NUMA node(s): 1
NUMA node0 CPU(s): 0-367
Vulnerability Gather data sampling: Not affected
Vulnerability Itlb multihit: Not affected
Vulnerability L1tf: Not affected
Vulnerability Mds: Not affected
Vulnerability Meltdown: Not affected
Vulnerability Mmio stale data: Not affected
Vulnerability Retbleed: Not affected
Vulnerability Spec store bypass: Vulnerable
Vulnerability Spectre v1: Vulnerable: __user pointer sanitization and usercopy barriers only; no swapgs barriers
Vulnerability Spectre v2: Vulnerable, IBPB: disabled, STIBP: disabled, PBRSB-eIBRS: Not affected
Vulnerability Srbds: Not affected
Vulnerability Tsx async abort: Not affected
Versions of relevant libraries:
[pip3] bert_pytorch==0.0.1a4
[pip3] flake8==6.1.0
[pip3] flake8-bugbear==23.3.23
[pip3] flake8-comprehensions==3.15.0
[pip3] flake8-executable==2.1.3
[pip3] flake8-logging-format==0.9.0
[pip3] flake8-pyi==23.3.1
[pip3] flake8-simplify==0.19.3
[pip3] functorch==1.14.0a0+b71aa0b
[pip3] mypy==1.14.0
[pip3] mypy-extensions==1.0.0
[pip3] numpy==2.1.0
[pip3] onnx==1.17.0
[pip3] optree==0.13.0
[pip3] pytorch-labs-segment-anything-fast==0.2
[pip3] pytorch-triton==3.2.0+git4b3bb1f8
[pip3] torch==2.8.0a0+git78300c8
[pip3] torch_geometric==2.4.0
[pip3] torchao==0.8.0
[pip3] torchaudio==2.6.0a0+c670ad8
[pip3] torchdata==0.12.0a0+d155220
[pip3] torchmetrics==1.0.3
[pip3] torchmultimodal==0.1.0b0
[pip3] torchpippy==0.2.0+1bcb2bf
[pip3] torchrec==1.1.0
[pip3] torchtext==0.17.0a0+bde7ecd
[pip3] torchtitan==0.0.2
[pip3] torchvision==0.22.0a0+d462da2
[conda] bert-pytorch 0.0.1a4 dev_0
[conda] blas 1.0 mkl
[conda] functorch 1.14.0a0+b71aa0b pypi_0 pypi
[conda] magma-cuda116 2.6.1 1 pytorch
[conda] mkl 2023.1.0 h213fc3f_46344
[conda] mkl-include 2023.1.0 h06a4308_46344
[conda] mkl-service 2.4.0 py312h5eee18b_2
[conda] mkl_fft 1.3.11 py312h5eee18b_0
[conda] mkl_random 1.2.8 py312h526ad5a_0
[conda] numpy 2.1.0 pypi_0 pypi
[conda] optree 0.13.0 pypi_0 pypi
[conda] pytorch-labs-segment-anything-fast 0.2 pypi_0 pypi
[conda] pytorch-triton 3.2.0+git4b3bb1f8 pypi_0 pypi
[conda] torch 2.8.0a0+git78300c8 dev_0
[conda] torch-geometric 2.4.0 pypi_0 pypi
[conda] torchao 0.8.0 pypi_0 pypi
[conda] torchaudio 2.6.0a0+c670ad8 dev_0
[conda] torchdata 0.12.0a0+d155220 pypi_0 pypi
[conda] torchfix 0.4.0 pypi_0 pypi
[conda] torchmetrics 1.0.3 pypi_0 pypi
[conda] torchmultimodal 0.1.0b0 pypi_0 pypi
[conda] torchpippy 0.2.0+1bcb2bf pypi_0 pypi
[conda] torchrec 1.1.0 pypi_0 pypi
[conda] torchtext 0.17.0a0+bde7ecd dev_0
[conda] torchtitan 0.0.2 pypi_0 pypi
[conda] torchvision 0.22.0a0+d462da2 dev_0
cc @H-Huang @awgu @wanchaol @fegin @fduwjj @wz337 @wconstab @d4l3k