RNN module uses cuda:0 even when moved to cuda:1 via to('cuda:1') #71400
Labels
module: cuda
Related to torch.cuda, and CUDA support in general
module: memory usage
PyTorch is using more memory than it should, or it is leaking memory
triaged
This issue has been looked at by a team member, triaged, and prioritized into an appropriate module
🐛 Describe the bug
When I use an RNN module such as LSTM or GRU on cuda:1, PyTorch internally uses cuda:0 even though I moved both the input and the module to cuda:1.
Code to reproduce:
You can observe that PyTorch allocates memory on both cuda:0 and cuda:1.
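The original reproduction snippet is not shown above; the following is a hedged sketch of what such a reproduction might look like (module sizes and tensor shapes are assumptions, not the reporter's code). Both the LSTM and its input are placed on cuda:1 only, yet memory usage can then be checked on both devices:

```python
import torch
import torch.nn as nn

def run_rnn(device: str):
    # Move both the RNN module and its input to the target device.
    rnn = nn.LSTM(input_size=10, hidden_size=20).to(device)
    x = torch.randn(5, 3, 10, device=device)
    out, (h, c) = rnn(x)
    return out

if torch.cuda.device_count() >= 2:
    out = run_rnn('cuda:1')
    # Reported bug: memory also shows up on cuda:0 even though
    # nothing was explicitly placed there.
    print('cuda:0 allocated:', torch.cuda.memory_allocated(0))
    print('cuda:1 allocated:', torch.cuda.memory_allocated(1))
```

On an affected setup, `torch.cuda.memory_allocated(0)` would report a nonzero value despite the module and input living entirely on cuda:1.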
Versions
I use 2 RTX 3090 GPUs.
The PyTorch version is 1.10.1+cu113.
cc @ngimel