Two processes cannot use the same GPU #3871
For a few weeks now (possibly longer), I can't have more than one Python script access the same GPU. For instance, if GPU 0 is used by one process (even one that is just holding memory on GPU 0 rather than actively computing), then I can't use GPU 0 at all: I have to find the process that is using it and kill it before I can run any other code on that GPU. That's very inconvenient; in the past I was able to run several scripts on the same GPU in parallel.

Is this expected behavior due to some recent changes, or is the problem on my side?
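One common cause of this symptom (unrelated to PyTorch itself) is the GPU being set to an exclusive compute mode at the driver level, which forbids more than one process per GPU. Below is a minimal, hedged sketch of how one might check this by parsing `nvidia-smi -q -d COMPUTE` output; the helper names `parse_compute_modes` and `gpu_compute_modes` are my own, and it assumes `nvidia-smi` is on the PATH:

```python
import re
import subprocess

def parse_compute_modes(nvidia_smi_output):
    """Extract the 'Compute Mode' value for each GPU from the text
    produced by `nvidia-smi -q -d COMPUTE`."""
    return re.findall(r"Compute Mode\s*:\s*(\S+)", nvidia_smi_output)

def gpu_compute_modes():
    """Run nvidia-smi and return the compute mode per GPU, or None
    if nvidia-smi is not available on this machine."""
    try:
        out = subprocess.run(
            ["nvidia-smi", "-q", "-d", "COMPUTE"],
            capture_output=True, text=True, check=True,
        ).stdout
    except (OSError, subprocess.CalledProcessError):
        return None
    return parse_compute_modes(out)
```

If the reported mode is something like `Exclusive_Process` rather than `Default`, that alone would explain the one-process-per-GPU behavior, and it can be reset by an administrator with `nvidia-smi -c DEFAULT`.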
And this is the error I get when I run:

```python
import torch
torch.FloatTensor(3).normal_().cuda()
```