Hello,
When I run the YOLOv3 baseline training script, the process gets stuck at:
index created!
Training YOLOv3 strong baseline!
loading pytorch ckpt... weights/darknet53_feature_mx.pth
using cuda
index created!
Training YOLOv3 strong baseline!
loading pytorch ckpt... weights/darknet53_feature_mx.pth
using cuda
loading pytorch ckpt... weights/darknet53_feature_mx.pth
loading pytorch ckpt... weights/darknet53_feature_mx.pth
using cuda
using cuda
loading pytorch ckpt... weights/darknet53_feature_mx.pth
using cuda
loading pytorch ckpt... weights/darknet53_feature_mx.pth
using cuda
I am using 8 2080 Ti GPUs, and the state of the GPUs is:

You may change --nproc_per_node=10 to --nproc_per_node=8 to match your 8 GPUs. This deadlock is usually caused by the PyTorch DataLoader or OpenCV, so you may need to restart training several times.
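For reference, the flag above is passed to torch.distributed.launch, so the fix amounts to launching with --nproc_per_node=8 for an 8-GPU machine. Independently of the process count, a common general mitigation for DataLoader/OpenCV deadlocks (not necessarily what this repo does) is to disable OpenCV's internal threading before workers are forked; a minimal sketch with a dummy dataset:

import cv2
import torch
from torch.utils.data import DataLoader, TensorDataset

# OpenCV's internal thread pool is a known source of deadlocks when the
# DataLoader forks worker processes; disabling it is a common workaround.
cv2.setNumThreads(0)

# Dummy dataset just to keep the example self-contained.
dataset = TensorDataset(torch.zeros(8, 3, 32, 32))

# num_workers=0 loads data in the main process, which also sidesteps
# fork-related deadlocks at the cost of slower data loading.
loader = DataLoader(dataset, batch_size=4, num_workers=0)
for (batch,) in loader:
    pass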
Thank you. It solved my problem.

Excuse me! I encountered an environment problem with 4 2080 Ti GPUs. Could you please tell me which versions of CUDA, cuDNN, Python, PyTorch, torchvision, and apex you used? Thanks very much.
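In case it helps while waiting for a reply, most of these versions can be printed directly from Python; a small sketch (apex is omitted because it does not expose a standard version attribute):

import sys
import torch
import torchvision

# Print the interpreter and library versions for comparison.
print("python     :", sys.version.split()[0])
print("pytorch    :", torch.__version__)
print("torchvision:", torchvision.__version__)
# CUDA version PyTorch was built against, and the cuDNN version it loads.
print("CUDA       :", torch.version.cuda)
print("cuDNN      :", torch.backends.cudnn.version())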