Tensor needs to be moved from one gpu to the other. #114
When is this happening? Are you using gfpgan/esrgan as an option on generation, or in their standalone tabs?
Have you tried explicitly setting …
The problem seems to be only with GFPGAN. It throws the error both as an option on generation and in the standalone tab. esrgan is working correctly. Your assumption is correct: I tried the non-optimized variant as well, with the same result (esrgan works, gfpgan doesn't).
Thank you for sharing your interface, it helps a lot.
The traceback led me to … I guess the only way would be to manually patch …
Ok, fixed this way:
And everything is working as intended.
I've forked, merged that pull request into the fork, and updated the environment.yaml, so this should be fixed for all now, thanks!
Great feature to be able to run esrgan and gfpgan on another GPU; however, the tensor needs to be moved:

```
Expected all tensors to be on the same device, but found at least two devices, cuda:0 and cuda:1! (when checking argument for argument weight in method wrapper__cudnn_convolution)
```
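For context, this class of error occurs whenever a model's weights live on one GPU while its input tensor lives on another. A minimal sketch of the usual remedy, assuming PyTorch: move the model to the target device once, then move each input to the model's device before the forward pass. The `Conv2d` layer and device names below are placeholders (with a CPU fallback so the snippet runs without a second GPU), not the actual GFPGAN code.

```python
import torch

# Hypothetical target device: the second GPU from the issue if available,
# otherwise fall back to CPU so the sketch runs anywhere.
device = torch.device("cuda:1" if torch.cuda.device_count() > 1 else "cpu")

# Stand-in for the restoration model: a single convolution layer.
model = torch.nn.Conv2d(3, 8, kernel_size=3).to(device)  # weights now on `device`

# Inputs are typically created on the default device (CPU or cuda:0);
# moving them to the model's device avoids "Expected all tensors to be
# on the same device".
x = torch.randn(1, 3, 64, 64)
y = model(x.to(device))
```

The same `.to(device)` call works on both `nn.Module` objects (moves all parameters and buffers in place) and tensors (returns a copy on the target device), which is why a fix for this error usually touches both the model-loading code and the call site.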