[GitHub Showcase Status badge] [GitHub last commit badge] [Run on Gradient button]


PyTorch Tutorial: Data Parallelism

Learn how to use multiple GPUs with PyTorch.

Description

PyTorch only uses one GPU by default. In this tutorial by Soumith Chintala, one of the creators of PyTorch, you'll learn how to use multiple GPUs in PyTorch with the DataParallel class. DataParallel splits each mini-batch of samples into smaller mini-batches and runs the computation for each of them in parallel, one per GPU.
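A minimal sketch of the pattern described above (not taken from the notebook itself; the toy nn.Linear model, batch size, and tensor shapes are placeholders):

```python
import torch
import torch.nn as nn

# Toy stand-in model; the tutorial uses its own example module.
model = nn.Linear(10, 5)

if torch.cuda.device_count() > 1:
    # DataParallel scatters the input along the batch dimension, replicates
    # the model on each GPU, runs the forward passes in parallel, and
    # gathers the outputs back on the default device.
    model = nn.DataParallel(model)

device = torch.device("cuda" if torch.cuda.is_available() else "cpu")
model.to(device)

inputs = torch.randn(32, 10).to(device)  # the batch of 32 is split across GPUs
outputs = model(inputs)                  # shape: (32, 5)
```

Wrapping the module in nn.DataParallel is the only change; the forward call and the rest of the training loop stay the same.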

Tags

PyTorch, Educational

Launching Notebook

Clicking the Run on Gradient button above launches the contents of this repository in a Jupyter notebook on Paperspace Gradient.

Docs

Docs are available at docs.paperspace.com.

Be sure to read about how to create a notebook, or watch the video instead!
