Pre-release

@soumith released this Nov 18, 2016


What's new in Alpha-5?

Usability

  • keyword arguments, improved indexing for all torch and autograd functions!
  • Deterministic data loader even under multiple workers
  • LAPACK bindings with full CUDA support via MAGMA
  • Easier NumPy-to-Torch conversion with torch.from_numpy(x)
  • A lot more documentation
    • fully covered neural networks
    • fully covered optim package
    • partly covered torch documentation
  • Tutorials:
    • Increased depth, length and clarity of the tutorials

New Features and modules

  • PyTorch Vision: a package to hold common dataloaders, transforms and utilities for images and videos
    • Data loaders for: COCO (captioning and detection), Imagenet, CIFAR10/100, LSUN etc.
    • Image Transforms: commonly used data augmentation transforms such as random-cropping, normalization
      • Unit-tested
    • Utilities: saving Tensors as images, creating grids of images from a mini-batch of tensors.
  • Recurrent Neural Networks
    • A complete and robust implementation of efficient Stacked LSTMs, RNNs, GRUs (bidirectional and otherwise)
    • CuDNN is seamlessly integrated and used whenever possible for maximum performance
    • A complete word-level language modeling example on the PennTreeBank dataset
      • verification that the perplexity matches the reference Torch implementation
  • an example of Generative Adversarial Networks:
    • DCGAN example in < 250 lines (includes everything)
    • Verified the results to match reference implementations
    • Multi-GPU ready!
  • A redesigned Optim package with the following optimization methods:
    • SGD, AdaDelta, Adagrad, Adam, AdaMax, Averaged SGD, RProp, RMSProp
    • Fully unit tested against their reference implementations
    • Fully documented
  • Improved Multi-GPU performance (and more is coming)
    • Integrated NVIDIA NCCL for maximizing multi-GPU communication performance

Plans for Alpha-6

  • docstring support and finishing the torch and autograd documentation
  • Fully verifying the convergence of ResNet / Imagenet training
  • More examples around:
    • Reinforcement Learning / OpenAI Gym
    • Object Detection
    • Sequence to Sequence methods
    • WaveNet / ByteNet
    • More adversarial networks (text2image, etc.)
  • More gains in performance, and fully flesh out CuDNN integration
  • Half-precision training for GPUs
  • A Lua-Torch model loader, and improved legacy.nn support
  • A Lua bridge, to call your existing Lua code

Usability

Keyword arguments

All torch and autograd functions previously supported arguments only in positional order.
For example:

torch.clamp(x, -0.1, 0.1)

This is often unreadable, especially in LAPACK usage, where one passes booleans such as upper=True.

Now, one can simply do:

torch.clamp(x, min=-0.1, max=0.1)

We've also implemented ellipsis indexing, similar to NumPy.
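
For example (a small illustrative sketch; the tensor and values here are hypothetical):

    import torch

    x = torch.randn(4, 3, 2)

    # Ellipsis indexing, as in NumPy: take index 0 along the last dimension,
    # keeping all leading dimensions (the result has size 4 x 3).
    first = x[..., 0]

    # Keyword arguments also make LAPACK-style calls explicit, e.g. (illustrative):
    # torch.potrf(a, upper=True)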

Deterministic Data Loader

The data loader now generates indices on the main process and regardless of how many workers you use,
the order of data loading will remain consistent if you use the same random seed.
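
A minimal sketch of what this looks like in practice (assuming the torch.utils.data.DataLoader interface and a hypothetical dataset object; exact module paths may differ in this alpha):

    import torch
    from torch.utils.data import DataLoader

    torch.manual_seed(1234)

    # `dataset` is any dataset object, e.g. one from the vision package below.
    # Shuffling indices are generated on the main process, so the same seed
    # gives the same sample order whether num_workers is 0 or 4.
    loader = DataLoader(dataset, batch_size=64, shuffle=True, num_workers=4)

    for batch in loader:
        pass  # training step goes here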

Fully tested LAPACK bindings

Unit tests on both the CPU and CUDA side.
On the CPU we ship with MKL integration, and on the GPU, LAPACK is powered by MAGMA.
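
As an illustrative sketch (torch.svd and torch.inverse are used here as stand-ins for the LAPACK-backed routines; the exact set available may vary):

    import torch

    a = torch.randn(5, 5)

    # LAPACK-backed routines on CPU tensors (MKL where available)
    u, s, v = torch.svd(a)
    a_inv = torch.inverse(a)

    # The same calls on CUDA tensors dispatch to MAGMA
    if torch.cuda.is_available():
        a_gpu = a.cuda()
        u, s, v = torch.svd(a_gpu)
        a_inv = torch.inverse(a_gpu)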

Documentation

We are at a stage where we have converged to stable APIs.
Hence, documentation is progressing at a rapid pace, and we have covered:

  • nn
  • optim
  • part of torch / Tensors

As always, you can check out the documentation here: pytorch.org/api/latest/en/

Tutorials

We added one new tutorial: Creating extensions using numpy and scipy

  • This covers the case where you would want to quickly write some modules of your neural network using familiar scipy tools like scipy.sparse for example.
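
As a rough sketch of the idea (a hypothetical example using the old-style autograd.Function interface with instance forward/backward methods; the tutorial itself also covers scipy):

    import numpy as np
    import torch
    from torch.autograd import Function, Variable

    # Illustrative: exp() computed with NumPy inside an autograd Function
    class NumpyExp(Function):

        def forward(self, input):
            result = torch.from_numpy(np.exp(input.numpy()))
            self.save_for_backward(result)
            return result

        def backward(self, grad_output):
            result, = self.saved_tensors
            return grad_output * result

    x = Variable(torch.randn(5), requires_grad=True)
    y = NumpyExp()(x)
    y.backward(torch.ones(5))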

We also expanded the existing tutorials to cover more of the basics and improved their overall clarity.

New Features and modules

PyTorch Vision

A one-stop repository for all of your image (and soon, video) needs, whether that be data loaders, common neural network definitions (such as alexnet, inception, resnet etc.) or data augmentation routines.
Our plan is to put some serious engineering firepower into this module, with GPU loaders and augmentation routines, especially for video processing. Contributions welcome :)

So far, we have:

Data loaders

All the data loaders are fully documented, and share a basic interface.
They are fully compatible with torch.utils.DataLoader, so data fetching can be parallelized.

Common Image Transforms

  • Converters from PIL Images to Torch Tensors
  • Random Cropping, Scaling, Normalization transforms
    • Unit tested

The Imagenet example has been updated to use this package.
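
A hedged sketch of how the pieces fit together (dataset and transform names follow the descriptions above; exact argument names may differ in this alpha, and the CIFAR10 data is assumed to already be in ./data):

    import torchvision.datasets as dsets
    import torchvision.transforms as transforms
    from torch.utils.data import DataLoader

    # Compose common transforms: random crop, PIL Image -> Tensor, normalization
    transform = transforms.Compose([
        transforms.RandomCrop(32),
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),
    ])

    train_set = dsets.CIFAR10(root='./data', train=True, transform=transform)
    train_loader = DataLoader(train_set, batch_size=128,
                              shuffle=True, num_workers=2)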

Recurrent Neural Networks

One of the biggest strengths of PyTorch's new design is the ability to seamlessly share weights and build recurrent nets.
We've emphasized this, and also deeply integrated CuDNN in a way that, as a user, you do not notice a thing while getting its full power and speed.

nn.RNN, nn.LSTM and nn.GRU are the stacked recurrent network modules that you would want to use, and for generally crazy research, we've also provided implementations of the individual cells: nn.LSTMCell and nn.GRUCell.
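
A minimal usage sketch for the stacked LSTM (shapes follow the seq_len x batch x features convention; the sizes are arbitrary):

    import torch
    import torch.nn as nn
    from torch.autograd import Variable

    # 2-layer stacked LSTM: 10 input features, 20 hidden units per layer
    rnn = nn.LSTM(10, 20, 2)

    seq_len, batch = 5, 3
    input = Variable(torch.randn(seq_len, batch, 10))
    h0 = Variable(torch.zeros(2, batch, 20))
    c0 = Variable(torch.zeros(2, batch, 20))

    # output holds the top layer's hidden state at every time step;
    # (hn, cn) are the final hidden and cell states of every layer.
    output, (hn, cn) = rnn(input, (h0, c0))

    # When the module and inputs live on the GPU, CuDNN kernels are used
    # automatically, e.g.: rnn.cuda(); rnn(input.cuda(), (h0.cuda(), c0.cuda()))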

A fully tested and verified example is provided in https://github.com/pytorch/examples/tree/master/word_language_model
This example does word-level language modeling on the PennTreeBank dataset.

Adversarial Networks

A concise example of Generative Adversarial Networks for Image Generation is provided, integrating multiple datasets (showcasing the power of the vision package).
The example is < 250 lines of code and gives a lot more clarity on how to use PyTorch.
It showcases multiple data loader threads, checkpointing, saving generated images to disk, and much more.

A stable and fleshed out Optim package

It took us some time to design a good and stable Optim API, but now we have converged to a clean design.
The Optim package is fully Multi-GPU and Multi-device ready out of the box.
Now we've implemented and unit tested the following algorithms:

  • SGD, AdaDelta, Adagrad, Adam, AdaMax, Averaged SGD, RProp, RMSProp

Setting per-layer learning rates, or optimizing only part of your neural network, is now trivial.
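
A hedged sketch of per-parameter-group options (using a hypothetical model with features and classifier submodules, and a loss computed elsewhere):

    import torch.optim as optim

    # Two parameter groups: a smaller learning rate for the pretrained
    # feature extractor, a larger one for the freshly initialized classifier.
    optimizer = optim.SGD([
        {'params': model.features.parameters()},
        {'params': model.classifier.parameters(), 'lr': 1e-2},
    ], lr=1e-3, momentum=0.9)

    model.zero_grad()
    loss.backward()
    optimizer.step()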

It is fully documented here: http://pytorch.org/api/latest/en/#torch-optim
Its usage can be seen in both the DCGAN and Imagenet examples.

Improved Multi-GPU performance (and more is coming)

We've improved the Multi-GPU performance since alpha-4, and we are close to squeezing out full performance.
We are working closely with NVIDIA to squeeze out the last drops of performance and make PyTorch future-proof for the P100 and new cards.