multi-GPU for training #19
You can use "model['trans'] = nn.DataParallel(model['trans'])"
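The per-component wrapping suggested above can be sketched as follows. This is a minimal illustration, not the repository's actual code: the "trans" and "pose" keys and the nn.Linear layers are placeholder assumptions standing in for the project's real sub-models.

```python
import torch
import torch.nn as nn

# Hypothetical stand-ins for the project's model dict; the real components
# (e.g. "trans") come from the repository and are only assumed here.
model = {
    "trans": nn.Linear(16, 16),
    "pose": nn.Linear(16, 3),
}

# Wrap each sub-module individually. The dict itself is not an nn.Module,
# so only its values can be handed to nn.DataParallel.
for name in model:
    model[name] = nn.DataParallel(model[name])

# Forward passes work as before; with no visible GPU, DataParallel simply
# falls through to the wrapped module.
device = next(model["trans"].parameters()).device
x = torch.randn(4, 16, device=device)
out = model["trans"](x)
```

Because each value is wrapped separately, any code that iterates over the dict's keys (as main.py does) keeps working unchanged.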
Thank you a lot! It works when I run "python main.py". However, when I run refine, it fails and gives an error. I tried adding code like this, but it still doesn't work.
Maybe you can try torch==1.7.1
Thank you! Using torch==1.7.1 avoids that problem.
Hello, thank you for your awesome work. I have trouble using multiple GPUs for training:
I added "model = nn.DataParallel(model)" before main.py line 187 ("all_param = []"), but it doesn't work and gives an error:
Traceback (most recent call last):
File "main.py", line 190, in
for i_model in model:
TypeError: 'DataParallel' object is not iterable
Can you please tell me how to solve this problem? Thank you!
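The error above can be reproduced outside the project: nn.DataParallel expects a single nn.Module, and the resulting wrapper is not iterable, so replacing the whole model dict with one DataParallel object breaks the "for i_model in model:" loop. A minimal sketch, with nn.Linear standing in for the repository's actual sub-models:

```python
import torch.nn as nn

# Stand-in for one entry of the project's model dict (assumption: the real
# code keeps several nn.Modules in a plain dict and iterates over its keys).
wrapped = nn.DataParallel(nn.Linear(8, 8))

# Iterating the wrapper mirrors main.py's "for i_model in model:" loop
# after the dict has been replaced by a single DataParallel object.
try:
    for i_model in wrapped:
        pass
except TypeError as e:
    err = str(e)
    print(err)  # "'DataParallel' object is not iterable"

# The fix: keep the dict and wrap each value instead, so iterating over
# the dict's keys keeps working unchanged.
model = {"trans": nn.Linear(8, 8), "pose": nn.Linear(8, 3)}
model = {name: nn.DataParallel(m) for name, m in model.items()}
for i_model in model:  # iterates keys, as in the original main.py
    assert isinstance(model[i_model], nn.DataParallel)
```

This is why the accepted suggestion in this thread wraps "model['trans']" (a single module) rather than the dict itself.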