
Multi-GPU size mismatch #12

Closed
xubenben opened this issue Nov 9, 2017 · 0 comments

xubenben commented Nov 9, 2017

I tried to train the language model on multiple GPUs (3 GPUs in this case) by setting `model = torch.nn.DataParallel(model).cuda()` and changing `model.init_hidden` to `model.module.init_hidden`, but I then hit this error:

[screenshot: error traceback showing the size mismatch]

It seems that only one GPU's results were gathered; I can't explain what happened.
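
For context, here is a minimal sketch of the setup described above, assuming a word-level LSTM language model with an `init_hidden(batch_size)` method in the style of the PyTorch word-language-model example. The model definition, sizes, and names are illustrative, not the actual code of this repository; the comments only point out where the per-GPU input split and the full-batch hidden state can stop lining up.

```python
# Minimal sketch reproducing the described setup (illustrative names/sizes).
import torch
import torch.nn as nn


class RNNModel(nn.Module):
    def __init__(self, vocab_size=10000, emb_size=200, hidden_size=200, num_layers=2):
        super().__init__()
        self.encoder = nn.Embedding(vocab_size, emb_size)
        # batch_first=True puts the batch in dim 0, the dimension DataParallel
        # scatters inputs along by default.
        self.rnn = nn.LSTM(emb_size, hidden_size, num_layers, batch_first=True)
        self.decoder = nn.Linear(hidden_size, vocab_size)
        self.num_layers, self.hidden_size = num_layers, hidden_size

    def init_hidden(self, batch_size):
        weight = next(self.parameters())
        # LSTM state has shape (num_layers, batch, hidden): here the batch
        # sits in dim 1, not dim 0.
        return (weight.new_zeros(self.num_layers, batch_size, self.hidden_size),
                weight.new_zeros(self.num_layers, batch_size, self.hidden_size))

    def forward(self, x, hidden):
        out, hidden = self.rnn(self.encoder(x), hidden)
        return self.decoder(out), hidden


model = torch.nn.DataParallel(RNNModel()).cuda()
batch_size, seq_len = 60, 35
x = torch.randint(0, 10000, (batch_size, seq_len)).cuda()

# init_hidden is defined on the wrapped module, hence model.module.init_hidden.
hidden = model.module.init_hidden(batch_size)

# On N GPUs DataParallel chunks every tensor argument along dim 0, runs one
# replica per chunk, and gathers the outputs back along dim 0. For x, dim 0 is
# the batch, but for the hidden state dim 0 is num_layers, so the state handed
# to each replica no longer matches that replica's input chunk. This kind of
# disagreement between the split inputs and the full-batch hidden state is a
# typical source of the size mismatch reported above.
output, hidden = model(x, hidden)
```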

xubenben closed this as completed Nov 9, 2017