
Data parallel

Released by @lRomul on 20 Aug 13:46 · f94b34d

Data parallel for multi-GPU training.

Select a GPU with device indexing:

model = load_model(model_path, device="cuda:1")
model.set_device("cuda:0")

For multi-GPU training you can pass a list of devices:

params = {
    ...,
    'device': ['cuda:0', 'cuda:1']
}
model = CnnFinetune(params)

model = load_model(model_path, device=["cuda:1", "cuda:0"])
model.set_device(["cuda:0", "cuda:1"])

Batch tensors will be scattered along dim 0. The first device in the list is the location of the output.
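
These scatter/gather semantics match what torch.nn.DataParallel provides in plain PyTorch. A minimal sketch of the equivalent behavior (not the library's internal code, just an illustration using a toy nn.Linear model):

import torch
import torch.nn as nn

# Toy network for illustration; any nn.Module behaves the same way.
net = nn.Linear(128, 10).to("cuda:0")

# Replicas on cuda:0 and cuda:1, input batch scattered along dim 0,
# outputs gathered on cuda:0 (the first device in the list).
parallel_net = nn.DataParallel(net, device_ids=[0, 1], output_device=0)

batch = torch.randn(64, 128)   # CPU tensor; DataParallel scatters it to the GPUs
out = parallel_net(batch)      # out.device is cuda:0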

By default, device "cuda" means single-GPU training on torch.cuda.current_device().
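
For example, assuming CUDA device 0 is the current device, the following two calls are equivalent:

model = load_model(model_path, device="cuda")    # uses torch.cuda.current_device()
model = load_model(model_path, device="cuda:0")  # same, when the current device is 0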