Related to NGC PyTorch Image 24.02 (and maybe later)
Describe the bug
I'm sorry if this is not the correct place to report this; I could not find a GitHub repo or contact address for the maintainers of the NVIDIA NGC PyTorch images.
The PyTorch version in the NGC images is compiled from source (I find the choice of building from the latest commit at build time, rather than from the latest stable release, problematic, but that is not the topic of this issue). Unfortunately, it is compiled with the PGNCCL_ENABLE_HASH flag enabled, so the NCCL process group prints a hash for every collective operation on every rank, making all log output practically unreadable during distributed training, especially with DeepSpeed ZeRO. There is no way to suppress this output other than recompiling PyTorch.
To Reproduce
1. Use nvcr.io/nvidia/pytorch:24.02-py3
2. Run a distributed training job using NCCL (see the sketch below)
3. Observe the excessive logging
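A minimal sketch of step 2 (assuming at least two GPUs and a `torchrun` launch; the script name and tensor size are arbitrary). Any NCCL collective should trigger the per-collective output:

```python
# repro.py - minimal sketch; launch with, e.g.:
#   torchrun --nproc_per_node=2 repro.py
import os

import torch
import torch.distributed as dist


def main():
    dist.init_process_group(backend="nccl")
    local_rank = int(os.environ["LOCAL_RANK"])  # set by torchrun
    torch.cuda.set_device(local_rank)

    x = torch.ones(1024, device="cuda")
    # Each collective emits a per-rank hash line when PyTorch
    # was built with PGNCCL_ENABLE_HASH, flooding the logs.
    for _ in range(10):
        dist.all_reduce(x)

    dist.destroy_process_group()


if __name__ == "__main__":
    main()
```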
Expected behavior
This debug logging is not necessary and should not be enabled.
Environment
Container version: pytorch:24.02-py3
GPUs in the system: any (more than one required)
CUDA driver version: 545.23