
Running PyTorch models on multiple GPUs

This repo provides test code for running a PyTorch model on multiple GPUs.

You can find the environment setup for multiple GPUs in this repo.

How to make your code run on multiple GPUs

You only need to wrap your model with torch.nn.DataParallel:

model = nn.DataParallel(model)
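A minimal end-to-end sketch of the wrapping step (the toy model, layer sizes, and batch below are hypothetical and only for illustration):

import torch
import torch.nn as nn

# Toy model, used only to illustrate the wrapping step.
model = nn.Sequential(
    nn.Linear(128, 64),
    nn.ReLU(),
    nn.Linear(64, 10),
)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")

# Wrap the model so each batch is split across all visible GPUs.
if torch.cuda.device_count() > 1:
    model = nn.DataParallel(model)
model = model.to(device)

# Inputs only need to be on the primary device; DataParallel scatters them
# across the GPUs and gathers the outputs back on the primary device.
inputs = torch.randn(32, 128).to(device)
outputs = model(inputs)  # shape: (32, 10)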

You may check the code here to test your multi-GPU environment. The code is mainly adapted from this tutorial.
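As a quick sanity check of the environment (a short sketch, assuming PyTorch is installed with CUDA support):

import torch

# Report how many GPUs PyTorch can see; DataParallel only helps when this is > 1.
print(torch.cuda.is_available())   # True if CUDA is usable
print(torch.cuda.device_count())   # number of visible GPUs
for i in range(torch.cuda.device_count()):
    print(i, torch.cuda.get_device_name(i))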

Sample code for running a deep learning model is provided in this folder; it replicates the paper Maximum Classifier Discrepancy for Unsupervised Domain Adaptation.

Error: 'DataParallel' object has no attribute 'xxx'

Instead of model.xxx, access the model's attributes through model.module.xxx, as shown in the sketch below.

[ref: https://discuss.pytorch.org/t/how-to-reach-model-attributes-wrapped-by-nn-dataparallel/1373]
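A short sketch of the problem and the fix (the Net class and its hidden_size method are hypothetical, used only to show the pattern):

import torch.nn as nn

class Net(nn.Module):
    def __init__(self):
        super().__init__()
        self.fc = nn.Linear(10, 2)

    def hidden_size(self):
        return self.fc.out_features

model = nn.DataParallel(Net())

# model.hidden_size()              # raises: 'DataParallel' object has no attribute 'hidden_size'
print(model.module.hidden_size())  # works: .module is the original, unwrapped Net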
