Data parallel
Data parallelism lets you train on multiple GPUs by splitting each batch across devices.
Select a specific GPU with device indexing:

```python
model = load_model(model_path, device="cuda:1")
model.set_device("cuda:0")
```
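Under the hood, switching devices usually comes down to torch.nn.Module.to(). Below is a minimal sketch of what a set_device-style method might do; the Model wrapper and its net attribute are hypothetical, not the library's actual class:

```python
import torch
import torch.nn as nn

class Model:
    """Hypothetical wrapper used only to illustrate device switching."""

    def __init__(self, net: nn.Module, device: str = "cuda:0"):
        self.device = torch.device(device)
        self.net = net.to(self.device)

    def set_device(self, device: str) -> None:
        # Move all parameters and buffers to the requested device in place.
        self.device = torch.device(device)
        self.net.to(self.device)
```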
For multi-GPU training, pass a list of devices:

```python
params = {
    ...,
    'device': ['cuda:0', 'cuda:1'],
}
model = CnnFinetune(params)

model = load_model(model_path, device=["cuda:1", "cuda:0"])
model.set_device(["cuda:0", "cuda:1"])
```
Batch tensors are scattered along dim 0 across the listed devices. The first device in the list is where the output is gathered.
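A device list like this typically maps onto torch.nn.DataParallel, which replicates the module on each device, scatters the batch along dim 0, and gathers outputs on the first device. A minimal sketch under that assumption (the library's actual internals may differ):

```python
import torch
import torch.nn as nn

devices = ["cuda:0", "cuda:1"]

if torch.cuda.device_count() >= len(devices):
    net = nn.Linear(16, 4).to(devices[0])  # parameters live on the first device
    net = nn.DataParallel(
        net,
        device_ids=[torch.device(d) for d in devices],
        output_device=torch.device(devices[0]),  # outputs are gathered here
    )
    x = torch.randn(8, 16, device=devices[0])  # batch of 8 ...
    y = net(x)                                 # ... split 4 + 4 across the GPUs
    print(y.device)                            # cuda:0
```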
By default, device `"cuda"` means single-GPU training on `torch.cuda.current_device()`.
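For reference, this is how plain `"cuda"` resolves in PyTorch:

```python
import torch

if torch.cuda.is_available():
    # "cuda" with no index means the current device (0 unless changed
    # with torch.cuda.set_device or CUDA_VISIBLE_DEVICES).
    print(torch.cuda.current_device())  # e.g. 0
    t = torch.zeros(1, device="cuda")
    print(t.device)                     # cuda:0 -- index filled in automatically
```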