NCCL timeout problem on DDP #7481

Closed
1 task done
LegendSun0 opened this issue Apr 19, 2022 · 8 comments
Labels: question (Further information is requested), Stale

@LegendSun0

Search before asking

Question

I trained a custom dataset with the officially pulled Docker image and code. When using cache training with DDP, an NCCL timeout error occurs when the dataset is too large. Here is my log:

train: weights=./yolov5s.pt, cfg=models/yolov5s.yaml, data=data/dky_34label.yaml, hyp=data/hyps/hyp.scratch.yaml, epochs=300, batch_size=320, imgsz=640, rect=False, resume=False, nosave=False, noval=False, noautoanchor=False, evolve=None, bucket=, cache=ram, image_weights=False, device=0,1, multi_scale=False, single_cls=False, optimizer=SGD, sync_bn=False, workers=32, project=runs/train, name=0419_4.5w_dky_, exist_ok=False, quad=False, linear_lr=False, label_smoothing=0.0, patience=100, freeze=[0], save_period=1, local_rank=0, entity=None, upload_dataset=False, bbox_interval=-1, artifact_alias=latest
github: skipping check (not a git repository), for updates see https://github.com/ultralytics/yolov5
YOLOv5 🚀 2022-4-15 torch 1.10.1+cu102 CUDA:0 (Tesla V100-PCIE-32GB, 32510MiB)
CUDA:1 (Tesla V100-PCIE-32GB, 32510MiB)

Added key: store_based_barrier_key:1 to store for rank: 0
Rank 0: Completed store-based barrier for key:store_based_barrier_key:1 with 2 nodes.
hyperparameters: lr0=0.01, lrf=0.1, momentum=0.937, weight_decay=0.0005, warmup_epochs=3.0, warmup_momentum=0.8, warmup_bias_lr=0.1, box=0.05, cls=0.5, cls_pw=1.0, obj=1.0, obj_pw=1.0, iou_t=0.2, anchor_t=4.0, fl_gamma=0.0, hsv_h=0.015, hsv_s=0.7, hsv_v=0.4, degrees=0.0, translate=0.1, scale=0.5, shear=0.0, perspective=0.0, flipud=0.0, fliplr=0.5, mosaic=1.0, mixup=0.0, copy_paste=0.0
TensorBoard: Start with 'tensorboard --logdir runs/train', view at http://localhost:6006/

             from  n    params  module                                  arguments                     

0 -1 1 3520 models.common.Conv [3, 32, 6, 2, 2]
1 -1 1 18560 models.common.Conv [32, 64, 3, 2]
2 -1 1 18816 models.common.C3 [64, 64, 1]
3 -1 1 73984 models.common.Conv [64, 128, 3, 2]
4 -1 2 115712 models.common.C3 [128, 128, 2]
5 -1 1 295424 models.common.Conv [128, 256, 3, 2]
6 -1 3 625152 models.common.C3 [256, 256, 3]
7 -1 1 1180672 models.common.Conv [256, 512, 3, 2]
8 -1 1 1182720 models.common.C3 [512, 512, 1]
9 -1 1 656896 models.common.SPPF [512, 512, 5]
10 -1 1 131584 models.common.Conv [512, 256, 1, 1]
11 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
12 [-1, 6] 1 0 models.common.Concat [1]
13 -1 1 361984 models.common.C3 [512, 256, 1, False]
14 -1 1 33024 models.common.Conv [256, 128, 1, 1]
15 -1 1 0 torch.nn.modules.upsampling.Upsample [None, 2, 'nearest']
16 [-1, 4] 1 0 models.common.Concat [1]
17 -1 1 90880 models.common.C3 [256, 128, 1, False]
18 -1 1 147712 models.common.Conv [128, 128, 3, 2]
19 [-1, 14] 1 0 models.common.Concat [1]
20 -1 1 296448 models.common.C3 [256, 256, 1, False]
21 -1 1 590336 models.common.Conv [256, 256, 3, 2]
22 [-1, 10] 1 0 models.common.Concat [1]
23 -1 1 1182720 models.common.C3 [512, 512, 1, False]
24 [17, 20, 23] 1 105183 models.yolo.Detect [34, [[10, 13, 16, 30, 33, 23], [30, 61, 62, 45, 59, 119], [116, 90, 156, 198, 373, 326]], [128, 256, 512]]
Model Summary: 270 layers, 7111327 parameters, 7111327 gradients, 16.1 GFLOPs

Transferred 342/349 items from yolov5s.pt
Scaled weight_decay = 0.0025
optimizer: SGD with parameter groups 57 weight, 60 weight (no decay), 60 bias

train: Scanning '../4.5w/labels/train.cache' images and labels... 42000 found, 0 missing, 0 empty, 1 corrupted: 100%|██████████| 42000/42000 [00:00<?, ?it/s]
train: Scanning '../4.5w/labels/train.cache' images and labels... 42000 found, 0 missing, 0 empty, 1 corrupted: 100%|██████████| 42000/42000 [00:00<?, ?it/s]
train: WARNING: ../4.5w/images/train/02_08_03_292.jpg: ignoring corrupt image/label: negative label values [ -0.60509]
../4.5w/labels/train.cache
prefix train:

0%| | 0/41999 [00:00<?, ?it/s]
Progress output too long, omitted......
train: Caching images (31.2GB ram): 85%|████████▍ | 35525/41999 [29:51<08:15, 13.06it/s]
train: Caching images (31.2GB ram): 85%|████████▍ | 35528/41999 [29:52<09:18, 11.59it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35535/41999 [29:52<07:25, 14.50it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35538/41999 [29:52<07:44, 13.92it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35542/41999 [29:53<09:26, 11.39it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35546/41999 [29:53<08:25, 12.76it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35558/41999 [29:53<05:09, 20.81it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35561/41999 [29:53<06:43, 15.95it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35564/41999 [29:55<17:54, 5.99it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35587/41999 [29:56<07:13, 14.77it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35592/41999 [29:56<06:34, 16.24it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35595/41999 [29:57<08:39, 12.34it/s]
[E ProcessGroupNCCL.cpp:587] [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803905 milliseconds before timing out.

train: Caching images (31.3GB ram): 85%|████████▍ | 35598/41999 [29:58<17:34, 6.07it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35608/41999 [29:59<11:44, 9.07it/s]
[E ProcessGroupNCCL.cpp:341] Some NCCL operations have failed or timed out. Due to the asynchronous nature of CUDA kernels, subsequent GPU operations might run on corrupted/incomplete data. To avoid this inconsistency, we are taking the entire process down.
terminate called after throwing an instance of 'std::runtime_error'
what(): [Rank 1] Watchdog caught collective operation timeout: WorkNCCL(OpType=ALLREDUCE, Timeout(ms)=1800000) ran for 1803905 milliseconds before timing out.

train: Caching images (31.3GB ram): 85%|████████▍ | 35610/41999 [29:59<12:22, 8.61it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35617/41999 [29:59<09:11, 11.57it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35623/41999 [30:00<07:18, 14.54it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35626/41999 [30:00<06:39, 15.94it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35630/41999 [30:01<11:13, 9.46it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35639/41999 [30:01<06:50, 15.48it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35647/41999 [30:01<04:53, 21.62it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35652/41999 [30:01<05:37, 18.79it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35656/41999 [30:01<05:30, 19.21it/s]
train: Caching images (31.3GB ram): 85%|████████▍ | 35660/41999 [30:02<04:59, 21.18it/s]
WARNING:torch.distributed.elastic.multiprocessing.api:Sending process 27715 closing signal SIGTERM
ERROR:torch.distributed.elastic.multiprocessing.api:failed (exitcode: -6) local_rank: 1 (pid: 27716) of binary: /opt/conda/bin/python
Traceback (most recent call last):
File "/opt/conda/lib/python3.8/runpy.py", line 194, in _run_module_as_main
return _run_code(code, main_globals, None,
File "/opt/conda/lib/python3.8/runpy.py", line 87, in _run_code
exec(code, run_globals)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 193, in
main()
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 189, in main
launch(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launch.py", line 174, in launch
run(args)
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/run.py", line 710, in run
elastic_launch(
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 131, in call
return launch_agent(self._config, self._entrypoint, list(args))
File "/opt/conda/lib/python3.8/site-packages/torch/distributed/launcher/api.py", line 259, in launch_agent
raise ChildFailedError(
torch.distributed.elastic.multiprocessing.errors.ChildFailedError:

train.py FAILED

Failures:
<NO_OTHER_FAILURES>

Root Cause (first observed failure):
[0]:
time : 2022-04-19_08:53:33
host : f8e972db9fed
rank : 1 (local_rank: 1)
exitcode : -6 (pid: 27716)
error_file: <N/A>
traceback : Signal 6 (SIGABRT) received by PID 27716

Additional

torch 1.10.1+cu102 CUDA:0 (Tesla V100-PCIE-32GB, 32510MiB)
CUDA:1 (Tesla V100-PCIE-32GB, 32510MiB)

Run the command:
python -m torch.distributed.launch --nproc_per_node 2 train.py --batch-size 320 --data data/test.yaml --cfg models/yolov5s.yaml --weights yolov5s.pt --noval --workers 32

LegendSun0 added the question (Further information is requested) label on Apr 19, 2022
@github-actions
Contributor

github-actions bot commented Apr 19, 2022

👋 Hello @LegendSun0, thank you for your interest in YOLOv5 🚀! Please visit our ⭐️ Tutorials to get started, where you can find quickstart guides for simple tasks like Custom Data Training all the way to advanced concepts like Hyperparameter Evolution.

If this is a 🐛 Bug Report, please provide screenshots and minimum viable code to reproduce your issue, otherwise we can not help you.

If this is a custom training ❓ Question, please provide as much information as possible, including dataset images, training logs, screenshots, and a public link to online W&B logging if available.

For business inquiries or professional support requests please visit https://ultralytics.com or email support@ultralytics.com.

Requirements

Python>=3.7.0 with all requirements.txt installed including PyTorch>=1.7. To get started:

git clone https://github.com/ultralytics/yolov5  # clone
cd yolov5
pip install -r requirements.txt  # install

Environments

YOLOv5 may be run in any of the following up-to-date verified environments (with all dependencies including CUDA/CUDNN, Python and PyTorch preinstalled):

Status

CI CPU testing

If this badge is green, all YOLOv5 GitHub Actions Continuous Integration (CI) tests are currently passing. CI tests verify correct operation of YOLOv5 training (train.py), validation (val.py), inference (detect.py) and export (export.py) on macOS, Windows, and Ubuntu every 24 hours and on every commit.

@LegendSun0
Author

I also tried the suggestions in the link below, but the problem persists:
https://stackoverflow.com/questions/69693950/error-some-nccl-operations-have-failed-or-timed-out
I also looked at #4400 but did not find a solution.

@glenn-jocher
Member

glenn-jocher commented Apr 19, 2022

@LegendSun0 before you do anything else: torch.distributed.launch is deprecated, use torch.distributed.run, and use the latest Docker image for all DDP trainings, as the Multi-GPU Training tutorial already states:

YOLOv5 Tutorials

Good luck 🍀 and let us know if you have any other questions!

@LegendSun0
Author

LegendSun0 commented Apr 20, 2022

@LegendSun0 before you do anything else: torch.distributed.launch is deprecated, use torch.distributed.run, and use the latest Docker image for all DDP trainings, as the Multi-GPU Training tutorial already states:

YOLOv5 Tutorials

Good luck 🍀 and let us know if you have any other questions!

Thank you very much for your reply. I used torch.distributed.run but there is still a problem; the same error is reported.
Run the command:
python -m torch.distributed.run --nproc_per_node 2 train.py --batch-size 320 --data data/test.yaml --cfg models/yolov5s.yaml --weights yolov5s.pt --noval --workers 32 --cache

The NCCL timeout is 1800000 ms (30 minutes). Is there any way to finish caching within half an hour, or to increase the timeout?

@glenn-jocher
Member

@LegendSun0 there's a wealth of documentation available on the distributed process group init function that you can use to customize it to your needs:
https://pytorch.org/docs/stable/distributed.html#torch.distributed.init_process_group

Apply it in yolov5/train.py, line 560 in ab5b917:

dist.init_process_group(backend="nccl" if dist.is_nccl_available() else "gloo")
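
For example, a minimal sketch of raising the collective timeout, assuming the default 30-minute watchdog is what expires while rank 0 caches images; the three-hour value and placement are illustrative and not taken from this thread:

import datetime

import torch.distributed as dist

# Sketch only: intended to run inside a process launched by torch.distributed.run,
# which sets MASTER_ADDR/MASTER_PORT/RANK/WORLD_SIZE for init_process_group.
# A longer timeout keeps slow pre-training phases (e.g. RAM-caching a large dataset)
# from tripping the NCCL watchdog; 3 hours is an assumed example value.
dist.init_process_group(
    backend="nccl" if dist.is_nccl_available() else "gloo",
    timeout=datetime.timedelta(hours=3),
)

The timeout keyword argument is part of torch.distributed.init_process_group and is documented at the link above.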

@LegendSun0
Author

Thank you very much for your reply. I am still having problems with DDP, so for a quick run I used single-GPU python train.py with --cache and completed the training quickly and successfully.

Run the command:
python train.py --batch-size -1 --data data/test.yaml --cfg models/yolov5s.yaml --weights yolov5s.pt --workers 16 --cache

@github-actions
Contributor

github-actions bot commented May 21, 2022

👋 Hello, this issue has been automatically marked as stale because it has not had recent activity. Please note it will be closed if no further activity occurs.

Access additional YOLOv5 🚀 resources:

Access additional Ultralytics ⚡ resources:

Feel free to inform us of any other issues you discover or feature requests that come to mind in the future. Pull Requests (PRs) are also always welcomed!

Thank you for your contributions to YOLOv5 🚀 and Vision AI ⭐!

@glenn-jocher
Member

@LegendSun0 glad to hear you found a solution! Cache training can indeed significantly speed up the training process. If you have any more questions or run into any other issues, feel free to ask. Good luck with your training!
