Multi-GPU is broken #53
Something like this would fix it, no? Pass gpuinfo in when model.cuda() is called in main.py; a bare model.cuda() always targets GPU 0.
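A minimal sketch of that idea, assuming a hypothetical `device_for` helper and a passed-in GPU id (neither is the project's actual API): build the device string explicitly instead of relying on the bare `model.cuda()` default.

```python
# Hypothetical sketch: instead of a bare model.cuda() (which always
# targets GPU 0), derive the device from a passed-in gpu id.
def device_for(gpu_id=None):
    """Return a torch-style device string for the requested GPU."""
    if gpu_id is None:
        return "cpu"
    return f"cuda:{gpu_id}"

# main.py could then do something like:
#   model.to(device_for(args.gpu))   # e.g. "cuda:1" instead of implicit GPU 0
```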
Is multi-GPU supposed to be supported?
Potentially related:
The following seems to give a solution:
Following in the hope this gets supported :)
No plans to support this; PRs welcome, though, if you can figure it out.
I'm on a 2x3090 instance on Runpod, using the Runpod notebook on their Stable Diffusion image. Running on GPU 0 works fine, but I can't train on GPUs 0 and 1 together, or even on GPU 1 separately.
Here is what happens when I am training on GPU 0 and try to start a separate training run on GPU 1. It seems GPU 0 is hardcoded somewhere.
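A common workaround when a script hardcodes GPU 0 is to mask the visible devices per process: each process only sees the GPUs listed in `CUDA_VISIBLE_DEVICES`, renumbered from 0, so a hardcoded `model.cuda()` lands on the masked card. A small demo of the masking mechanism (the actual training command is up to the script in question):

```shell
# The child process sees only the masked value; CUDA would likewise
# expose only physical GPU 1, presenting it as device 0.
CUDA_VISIBLE_DEVICES=1 python3 -c 'import os; print(os.environ["CUDA_VISIBLE_DEVICES"])'
# prints: 1
```

In practice you would launch each training run with its own mask (e.g. `CUDA_VISIBLE_DEVICES=1` before the training command), so the run pinned to physical GPU 1 still works with code that assumes device 0.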