Question about using DistributedDataParallel #36
Comments
That depends on your CPU and IO; I get about a linear speed-up.
Please give more details: how large is the decline, plus both logs (via a link or sent to my email), etc.
Thank you for your reply.
It was my mistake; the training time did decrease a lot, from 28 h to 18 h. I checked the GPU utilization of the four 1080 Ti cards, and it is 0 most of the time. I think the IO speed of my mechanical hard disk may be the bottleneck, even though I increased the num_workers of the DataLoader. Did you put your dataset on an SSD?
My model is trained on the "car" class.
The logs have been sent to your email.
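A minimal sketch of the DataLoader tuning hinted at above; the function name and the concrete values are assumptions and would need profiling on the actual hardware:

```python
# Hedged sketch (not this repo's code): DataLoader settings that can help when
# a mechanical disk keeps GPU utilization near zero. Values are illustrative.
from torch.utils.data import DataLoader

def build_loader(dataset, batch_size, sampler=None):
    return DataLoader(
        dataset,
        batch_size=batch_size,
        sampler=sampler,
        shuffle=(sampler is None),  # sampler and shuffle are mutually exclusive
        num_workers=8,              # more worker processes to hide read latency
        pin_memory=True,            # page-locked buffers speed up host-to-GPU copies
        drop_last=True,
    )
```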
Yeah, I can see from the log that the data time is much larger than the forward time. I didn't use an SSD for training, but one should definitely help a lot in your setting. Can you tell me what specific changes you made to the code? What model is this, and how did you create this subset? Regardless, it seems both distributed and single-GPU training are bad for the car class. You should see something like this for car:
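For reference, a hedged sketch of how the data-time vs. forward-time split mentioned above can be measured in a plain PyTorch loop; the model interface here (taking a batch and returning a scalar loss) is an assumption, not this repository's API:

```python
# Hedged sketch for separating DataLoader wait time from forward time; the
# model is assumed to take a batch and a target and return a scalar loss,
# which may not match how this repo's models are actually called.
import time
import torch

def timed_epoch(model, loader, optimizer, device="cuda"):
    data_time = fwd_time = 0.0
    end = time.time()
    for batch, target in loader:
        data_time += time.time() - end            # time spent waiting on the DataLoader
        batch = batch.to(device, non_blocking=True)
        target = target.to(device, non_blocking=True)

        start = time.time()
        loss = model(batch, target)               # assumed to return a scalar loss
        torch.cuda.synchronize()                  # make the GPU timing meaningful
        fwd_time += time.time() - start

        optimizer.zero_grad()
        loss.backward()
        optimizer.step()
        end = time.time()
    print(f"data time {data_time:.1f}s, forward time {fwd_time:.1f}s")
```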
Well, I used PointPillars as my baseline, replaced the input with a manually created BEV, and replaced the backbone with a lighter 2D detection model.
I see. I can reproduce my result with 2/4/8 GPUs, so I don't think there is an issue with DistributedDataParallel. You may need to look at other parts of your code for the discrepancy.
Thank you very much, I will check my code.
After using DistributedDataParallel:
python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py CONFIG_PATH
There is a decline in detection performance compared with using a single GPU, and the training time did not decrease significantly. Has anyone encountered a similar situation, and how did you solve it?
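For context, a minimal sketch of the pieces that launch command usually assumes inside train.py: process-group initialization from the launcher's environment, a DistributedSampler so each GPU process sees a disjoint shard, and the DDP wrapper. The toy model, dataset, and hyperparameters below are stand-ins, not this repository's actual code.

```python
# Generic sketch of what a train.py launched with torch.distributed.launch
# typically does; the toy model and dataset are stand-ins, not this repo's code.
import argparse
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel as DDP
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

def main():
    parser = argparse.ArgumentParser()
    parser.add_argument("--local_rank", type=int, default=0)  # set by the launcher
    args = parser.parse_args()

    torch.cuda.set_device(args.local_rank)
    dist.init_process_group(backend="nccl", init_method="env://")

    model = torch.nn.Linear(16, 1).cuda()                 # toy stand-in model
    model = DDP(model, device_ids=[args.local_rank])
    optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

    dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 1))
    sampler = DistributedSampler(dataset)                 # disjoint shard per GPU process
    loader = DataLoader(dataset, batch_size=8, sampler=sampler,
                        num_workers=4, pin_memory=True)

    for epoch in range(2):                                # placeholder epoch count
        sampler.set_epoch(epoch)                          # new shuffle each epoch
        for x, y in loader:
            x = x.cuda(non_blocking=True)
            y = y.cuda(non_blocking=True)
            loss = F.mse_loss(model(x), y)
            optimizer.zero_grad()
            loss.backward()
            optimizer.step()

if __name__ == "__main__":
    main()
```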