Hi, thank you for releasing your code. When I run R-GSN I get the error "RuntimeError: CUDA out of memory. Tried to allocate 562.00 MiB (GPU 1; 10.76 GiB total capacity; 8.98 GiB already allocated; 470.56 MiB free; 9.19 GiB reserved in total by PyTorch)".
I tried reducing the batch size: batch_size has been reduced to 64 and test_batch_size to 4, but I still get the same error. I am using a GeForce RTX 2080. Can you tell me why this happens and how to fix it? Thanks a lot!
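For reference, this is roughly how I checked GPU memory usage around the failing step. It is only a sketch; `report_gpu_memory` and the commented-out `model(batch)` call are placeholders, not the actual R-GSN code:

```python
import torch

def report_gpu_memory(tag, device=0):
    # Print how much memory PyTorch has allocated vs. reserved on the given GPU.
    allocated = torch.cuda.memory_allocated(device) / 1024 ** 3
    reserved = torch.cuda.memory_reserved(device) / 1024 ** 3
    print(f"[{tag}] allocated: {allocated:.2f} GiB, reserved: {reserved:.2f} GiB")

report_gpu_memory("before forward")
# out = model(batch)               # the call that raises "CUDA out of memory"
# report_gpu_memory("after forward")
```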
@Sanchez2020 Hello, the GPU I used in the experiments was a GTX 1080 Ti with 11 GB, but I currently don't have a GPU on hand. If adjusting test_batch_size does not solve the problem, you may have to try a GPU with slightly more memory. I haven't thought of a better solution right now; sorry.
@xjtuwxliang, thank you for your reply.
The GPU I used has 10.76 GiB of total capacity, and no other programs were using it, so I am confused.
I'm looking for other approaches; thank you again.
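In case it helps anyone hitting the same error: one thing I am experimenting with is making sure evaluation runs without storing gradients and clearing the cached memory between training and testing. This is only a minimal sketch; `model` and `loader` are placeholders, not the actual R-GSN script:

```python
import torch

@torch.no_grad()                       # don't keep activations for backward during evaluation
def evaluate(model, loader, device):
    model.eval()
    correct = total = 0
    for x, y in loader:                # placeholder loader yielding (inputs, labels) mini-batches
        x, y = x.to(device), y.to(device)
        pred = model(x).argmax(dim=-1)
        correct += int((pred == y).sum())
        total += y.numel()
    return correct / total

torch.cuda.empty_cache()               # release cached blocks between training and evaluation
```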