Closed
Labels: enhancement (New feature or request)
Description
I am unable to switch the GPU used for training by setting device: cuda:1, which is inconvenient for tasks that train multiple models simultaneously. The current device selection is:
if torch.cuda.is_available():
    if args.distributed:
        device = 'cuda:%d' % args.local_rank
    else:
        device = 'cuda:0'
    torch.cuda.set_device(device)
else:
    device = 'cpu'
args.device = device
Is it possible to use a change similar to the one below?
if torch.cuda.is_available():
    if args.distributed:
        device = 'cuda:%d' % args.local_rank
    else:
        device = args.device if args.device else 'cuda:0'
    torch.cuda.set_device(device)
else:
    device = 'cpu'
device = torch.device(device)
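For illustration, the fallback in the proposal could be driven by a standard --device command-line flag. A minimal sketch of just the selection logic (the argument name and default are assumptions, and CUDA is not required to run it):

```python
import argparse

parser = argparse.ArgumentParser()
# Hypothetical flag: an empty string means "no explicit choice".
parser.add_argument('--device', default='', help="e.g. 'cuda:1'; empty falls back to 'cuda:0'")

# Simulate the user passing --device cuda:1 on the command line.
args = parser.parse_args(['--device', 'cuda:1'])

# The proposed fallback: honor an explicit device, else default to cuda:0.
device = args.device if args.device else 'cuda:0'
print(device)  # cuda:1
```

With no --device flag, the same expression yields 'cuda:0', matching the current behavior for non-distributed runs.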