Data parallel failure when using add_module #3174
Comments
torch 0.2.0_3 with CUDA 8 and cudnn 6.0.20.
However, it works when I use ModuleList.
That is expected. Use `nn.ModuleList` instead.
`self.add_module` is one way to register submodules (and thereby their parameters), but it does not work with `DataParallel` here because it does not put the parameters into `__dict__` in the usual way. https://discuss.pytorch.org/t/discrepancy-between-manual-parameter-registration-vs-using-nn-modulelist-when-parallelizing/181055 pytorch/pytorch#3174
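The recommended fix above can be sketched as follows. This is a minimal, hypothetical model (the class name, layer sizes, and layer count are illustrative, not from the issue) showing the `nn.ModuleList` style that the comments say works with `DataParallel`:

```python
import torch
import torch.nn as nn

class ListNet(nn.Module):
    """Hypothetical model using nn.ModuleList for registration."""

    def __init__(self):
        super().__init__()
        # nn.ModuleList registers each submodule with the parent module,
        # so model.parameters() (and hence DataParallel's replication)
        # sees all of their parameters.
        self.layers = nn.ModuleList([nn.Linear(8, 8) for _ in range(3)])

    def forward(self, x):
        for layer in self.layers:
            x = layer(x)
        return x

model = ListNet()
# Each Linear(8, 8) contributes 8*8 weights + 8 biases = 72 parameters;
# with 3 layers, the module exposes 216 parameters in total.
total = sum(p.numel() for p in model.parameters())
print(total)  # 216
```

Wrapping such a model in `torch.nn.DataParallel(model)` then replicates all registered parameters across GPUs, which is the behavior the original poster was missing with manual `add_module` registration.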
code:
output: