A GPipe implementation in PyTorch

How to use

Prerequisites are:

  • Python 3.6+
  • PyTorch 1.0
  • Your nn.Sequential module

Install via PyPI:

$ pip install torchgpipe

Wrap your nn.Sequential module with torchgpipe.GPipe. You must specify balance to partition the module; each entry in balance is the number of layers assigned to a partition. You can also set the number of micro-batches with chunks:

from torch import nn

from torchgpipe import GPipe

# a, b, c, d are your model's layers, e.g. nn.Module instances.
model = nn.Sequential(a, b, c, d)
model = GPipe(model, balance=[1, 1, 1, 1], chunks=8)

for input in data_loader:
    output = model(input)
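To make the two parameters concrete, here is an illustrative pure-Python sketch (not torchgpipe's internals) of what balance and chunks describe: balance splits a sequence of layers into consecutive partitions, and chunks splits a batch into micro-batches that flow through those partitions in a pipeline. The helper names partition and micro_batches are hypothetical, chosen for this example.

```python
def partition(layers, balance):
    """Split `layers` into consecutive groups whose sizes match `balance`."""
    if sum(balance) != len(layers):
        raise ValueError("balance must sum to the number of layers")
    partitions, i = [], 0
    for size in balance:
        if size < 1:
            raise ValueError("every balance entry must be at least 1")
        partitions.append(layers[i:i + size])
        i += size
    return partitions

def micro_batches(batch, chunks):
    """Split `batch` (a list standing in for a tensor) into `chunks` pieces."""
    size = -(-len(batch) // chunks)  # ceiling division
    return [batch[i:i + size] for i in range(0, len(batch), size)]

# balance=[1, 1, 1, 1] puts one layer in each of four partitions:
print(partition(["a", "b", "c", "d"], [1, 1, 1, 1]))
# chunks=4 splits a batch of 8 samples into 4 micro-batches:
print(micro_batches(list(range(8)), chunks=4))  # [[0, 1], [2, 3], [4, 5], [6, 7]]
```

In the real library, each partition lives on its own device and the micro-batches are pipelined so that different partitions process different micro-batches concurrently.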

This project is still under development. Any public API may change without deprecation warnings until v0.1.0.
