
Question about using DistributedDataParallel #36

Closed

YaoHan404 opened this issue Sep 1, 2020 · 6 comments

@YaoHan404

After using DistributedDataParallel:
python -m torch.distributed.launch --nproc_per_node=4 ./tools/train.py CONFIG_PATH

Detection performance declines compared with training on a single GPU, and the training time did not decrease significantly. Has anyone encountered a similar situation, and how did you solve it?
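For reference, a minimal sketch of the DistributedDataParallel setup that sits behind a launch command like the one above. The linear model and random TensorDataset are placeholders, not this repo's train.py:

```python
# Hedged sketch of a DDP training script started with
# `python -m torch.distributed.launch --nproc_per_node=4 train.py`.
import argparse
import torch
import torch.distributed as dist
import torch.nn.functional as F
from torch.nn.parallel import DistributedDataParallel
from torch.utils.data import DataLoader, DistributedSampler, TensorDataset

parser = argparse.ArgumentParser()
parser.add_argument("--local_rank", type=int, default=0)   # injected by torch.distributed.launch
args = parser.parse_args()

dist.init_process_group(backend="nccl")                    # launch sets MASTER_ADDR/PORT, RANK, WORLD_SIZE
torch.cuda.set_device(args.local_rank)

model = torch.nn.Linear(16, 2).cuda(args.local_rank)       # stand-in for the detector
model = DistributedDataParallel(model, device_ids=[args.local_rank])
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

dataset = TensorDataset(torch.randn(1024, 16), torch.randn(1024, 2))
sampler = DistributedSampler(dataset)                      # each of the 4 processes gets a disjoint shard
loader = DataLoader(dataset, batch_size=8, sampler=sampler,
                    num_workers=4, pin_memory=True)

for epoch in range(2):
    sampler.set_epoch(epoch)                               # reshuffle the shards every epoch
    for x, y in loader:
        x = x.cuda(args.local_rank, non_blocking=True)
        y = y.cuda(args.local_rank, non_blocking=True)
        loss = F.mse_loss(model(x), y)
        optimizer.zero_grad()
        loss.backward()                                    # gradients are all-reduced across GPUs here
        optimizer.step()
```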

@tianweiy
Owner

tianweiy commented Sep 1, 2020

> And the training time did not decrease significantly.

That depends on your CPU and I/O. I get roughly linear speedup.

> Detection performance declines compared with training on a single GPU.

Please give more details: how large is the decline, both logs (via a link or sent to my email), etc.

@YaoHan404
Author

Thank you for your reply.

> That depends on your CPU and I/O. I get roughly linear speedup.

My mistake, the training time did decrease a lot, from 28h to 18h. I checked the GPU utilization of the four 1080 Tis, and it is 0 most of the time. I think the I/O speed of my mechanical hard disk may be the bottleneck, although I have increased the num_workers of the DataLoader. Do you put your dataset on an SSD?
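For reference, one hedged way to check whether the disk rather than the CPU is the limit is to iterate the DataLoader by itself and measure how many samples per second it yields; `loader_throughput` below is an illustrative helper, not part of this repo:

```python
import time
from torch.utils.data import DataLoader

def loader_throughput(dataset, num_workers, batch_size=4, max_batches=50):
    """Samples per second the DataLoader delivers with no GPU work attached."""
    loader = DataLoader(dataset, batch_size=batch_size,
                        num_workers=num_workers, pin_memory=True)
    count, start = 0, None
    for i, _ in enumerate(loader):
        if i == 0:
            start = time.time()        # skip worker start-up cost
            continue
        count += batch_size
        if i >= max_batches:
            break
    return count / (time.time() - start)

# Comparing e.g. num_workers=4 vs. 8: if throughput barely changes, the mechanical
# disk rather than the CPU is the bottleneck, and an SSD should help.
```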

> Please give more details: how large is the decline, both logs (via a link or sent to my email), etc.

My model is trained only on the "car" class.

| nuScenes dist AP@ | 0.5 | 1.0 | 2.0 | 4.0 |
| --- | --- | --- | --- | --- |
| DistributedDataParallel | 0.4100 | 0.6311 | 0.7190 | 0.7584 |
| Single GPU | 0.4923 | 0.6896 | 0.7537 | 0.7901 |

The logs have been sent to your email.

@tianweiy
Owner

tianweiy commented Sep 2, 2020

> My mistake, the training time did decrease a lot, from 28h to 18h. I checked the GPU utilization of the four 1080 Tis, and it is 0 most of the time. I think the I/O speed of my mechanical hard disk may be the bottleneck, although I have increased the num_workers of the DataLoader. Do you put your dataset on an SSD?

Yeah, you can see it in the log: the data time is much larger than the forward time. I didn't use an SSD for training, but it should definitely help a lot in your setting.
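For reference, a rough sketch of the kind of per-iteration timing that separates "data time" from "forward time" in a training log; the function and variable names are illustrative, not the repo's trainer:

```python
import time
import torch

def log_iteration_times(model, loader):
    """Print per-iteration data time vs. forward time for one pass over `loader`."""
    data_end = time.time()
    for batch in loader:
        data_time = time.time() - data_end       # time spent waiting on the DataLoader
        fwd_start = time.time()
        loss = model(batch)                      # placeholder forward pass
        torch.cuda.synchronize()                 # make the GPU timing meaningful
        forward_time = time.time() - fwd_start
        print(f"data {data_time:.3f}s  forward {forward_time:.3f}s")
        data_end = time.time()

# If data_time dominates forward_time, the input pipeline (disk + CPU workers),
# not the GPU, is what limits the training speed.
```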

Can you tell me what specific changes you made to the code? What model is this, and how did you create this subset? Regardless, both distributed and single-GPU training look bad for the car class. You should see something like this for car:

car Nusc dist AP@0.5, 1.0, 2.0, 4.0
72.88, 84.48, 87.77, 89.16 mean AP: 0.8357262620298936

@YaoHan404
Author

Well, I used PointPillars as my baseline. The input is replaced with a manually created BEV, and the backbone is replaced with a lighter 2D detection model.

@tianweiy
Owner

tianweiy commented Sep 3, 2020

I see. I can reproduce my results with 2/4/8 GPUs, so I don't think there is an issue with DistributedDataParallel. You may need to look at other parts of your code for the discrepancy.

@YaoHan404
Author

Thank you very much, I will check my code.
