
Can't train PlaneRCNN with multiple GPUs when I set batch_size > 1 in the dataloader #19

Open
ywcmaike opened this issue Aug 2, 2019 · 3 comments


ywcmaike commented Aug 2, 2019

Hello, I have 4 Titan Xp GPUs and I want to train PlaneRCNN. In train_planercnn.py the dataloader is created as
dataloader = DataLoader(dataset, batch_size=1, shuffle=True, num_workers=16)
and training only uses one GPU. When I set batch_size > 1 (for example 4), training fails with the following error:

return torch.stack(batch, 0, out=out)
RuntimeError: invalid argument 0: Sizes of tensors must match except in dimension 0,
I found a similar issue on the PyTorch forums (https://discuss.pytorch.org/t/problem-with-custom-dataset-invalid-argument-0-sizes-of-tensors-must-match-except-in-dimension-0/34792), but it didn't solve my problem.
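
For reference, the workaround usually suggested for this error is a custom collate_fn that keeps the samples in a plain list instead of stacking them. Below is a minimal, self-contained sketch of that idea; ToyPlaneDataset and list_collate are only illustrative stand-ins for the real PlaneRCNN dataset and training code, and batch_size=4 is arbitrary:

import torch
from torch.utils.data import Dataset, DataLoader

class ToyPlaneDataset(Dataset):
    # Stand-in for the real dataset: each sample has a different number of
    # plane masks, so the default collate_fn cannot torch.stack them.
    def __len__(self):
        return 8

    def __getitem__(self, idx):
        num_planes = idx % 3 + 1                 # varies per sample
        image = torch.rand(3, 64, 64)            # fixed-size field
        masks = torch.rand(num_planes, 64, 64)   # variable-size field
        return image, masks

def list_collate(batch):
    # Return the samples as a list instead of stacking each field across the
    # batch; the training loop then iterates over this list itself.
    return list(batch)

dataset = ToyPlaneDataset()
dataloader = DataLoader(dataset, batch_size=4, shuffle=True, collate_fn=list_collate)

for batch in dataloader:
    # batch is a list of (image, masks) tuples of length batch_size
    images = torch.stack([image for image, _ in batch])  # fixed-size fields can still be stacked
    print(images.shape, [masks.shape for _, masks in batch])

This only avoids the stacking error in collate; the rest of train_planercnn.py and the model presumably still need changes to consume more than one sample per step.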

Can you give me some tips? Or how do you train PlaneRCNN with multiple GPUs and a larger batch size? @tmbdev @vinodgro @mjgarland @dumerrill @sdalton1


ywcmaike commented Aug 2, 2019

@art-programmer @kihwan23

art-programmer (Contributor) commented

Sorry for the late response. I have no experience with multi-GPU training. Maybe you can find more information in the original Mask R-CNN repo: https://github.com/multimodallearning/pytorch-mask-rcnn


shsjxzh commented Oct 24, 2019

I found that in the original Mask R-CNN repo, the author mentions in the README that he uses a RoIAlign implementation from another project, https://github.com/longcw/RoIAlign.pytorch, which only supports single-GPU training. Maybe this information can help you.
