@soumith released this on Feb 2, 2017

Our last release (v0.1.5) was on November 14th, 2016

We finished, froze and released (v0.1.6) on Jan 21st, 2017.

A lot has happened since 0.1.5.

Summary

  • PyTorch public release on 18th Jan, 2017.
  • An initial Model Zoo, several common Vision models can be initialized with pretrained weights downloaded from the zoo.
  • All of the 100+ torch.* functions except three (topk, mode and kthvalue) are GPU-ready, with performance improvements across the board for several existing ones.
  • All relevant neural network modules are now CuDNN bound.
  • Stochastic functions added to Autograd, for use in reinforcement learning
  • A functional interface of the nn library is added
  • GPU device initialization has been made lazy (improvement in CUDA initialization time on multi-GPU machines)
  • Pinned memory support, and leveraging it in DataLoader
  • Made error messages across the board more informative, especially around shape checks
  • A rich set of examples and tutorials added to pytorch/examples and pytorch/tutorials
  • API Reference at pytorch.org/docs
  • Multiprocessing support for CUDA (Python 3 only)
  • An initial version of CPU Sparse Tensors is added and used in nn.Embedding(sparse=True). More to come on this side.
  • Added a Lua reader to load existing .t7 files containing Torch models
  • Various bug-fixes.
  • Hooks can now return changed gradients

API Changes

  • Conv*d and *Pool*d layers now take a tuple of kernel sizes/strides/padding instead of kh/kw.
  • Unpooling* layers have a changed API
  • Variable.grad is now a Variable (was a Tensor)
  • nn.Container is deprecated and merged into nn.Module. Replace all instances of nn.Container in your code with nn.Module
  • torch.cat's API has changed: it now takes an iterable of tensors along with a dimension (previously varargs of Tensors). Its default dimension has also changed, and it is now an inverse of torch.split and torch.chunk (see the short example after this list).
  • Variable.no_grad has been renamed to Variable.detach
  • RMSProp's initialization of gradients changed from ones to zeros (#485)
  • Removed cmin, cmax and cinv (functionality of cmin, cmax split between max/min and clamp; cinv renamed to reciprocal)
  • register_hook API changed, names are removed. See: #446
  • torch.*(..., out=Tensor) is adopted for output arguments
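
For example, here is a minimal sketch of the new torch.cat API (the tensor shapes are arbitrary):

import torch

a = torch.randn(2, 3)
b = torch.randn(2, 3)
c = torch.cat([a, b], 0)       # an iterable of tensors plus a dimension -> 4 x 3
parts = torch.chunk(c, 2, 0)   # chunk/split are the inverse operations
d = torch.cat(parts, 0)        # same contents as c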

Model Zoo

A model zoo has been started with several pre-trained vision models available, such as AlexNet, ResNet50, etc. Downloading and using the models is seamless via a keyword argument.

import torchvision.models as models
models.alexnet(pretrained=True)

The models are hosted on Amazon S3, and we look forward to more models from the community.
Basic documentation is found here:

http://pytorch.org/docs/model_zoo.html
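
If you just want a serialized file, the model_zoo utility can download and cache it directly; a minimal sketch (the URL below is a placeholder, not an official one):

import torch.utils.model_zoo as model_zoo

state_dict = model_zoo.load_url('https://example.com/resnet18.pth')   # downloaded once, then cached locally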

You can find the specific models listed in the READMEs of torchvision and torchtext.

Stochastic Functions in Autograd

We introduced stochastic functions that need to be provided with a reward before their backward pass can run.
This feature was inspired by Gradient Estimation Using Stochastic Computation Graphs by Schulman et al. and is helpful for implementing reinforcement learning techniques.
Documentation is here: http://pytorch.org/docs/autograd.html#torch.autograd.Variable.reinforce
A showcase of using these nodes is in the REINFORCE example: https://github.com/pytorch/examples/blob/master/reinforcement_learning/reinforce.py#L70
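
In short, a stochastic node is given its reward via .reinforce() before calling backward; a minimal sketch modeled on that example (the shapes and the reward value are illustrative):

import torch
from torch import autograd
from torch.autograd import Variable

probs = Variable(torch.Tensor([[0.1, 0.2, 0.7]]), requires_grad=True)
action = probs.multinomial()           # stochastic node: samples an index from probs
reward = 1.0                           # reward observed from the environment
action.reinforce(reward)               # attach the reward needed for backward
autograd.backward([action], [None])    # estimates gradients w.r.t. probs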

Functional interface to nn

PyTorch neural networks have so far been modeled around nn.Module. However, for simple, parameter-free functions such as ReLU, using a module is a bit cumbersome.
To simplify this, we've introduced a functional interface to nn, and modified the tutorials to use this API where appropriate.

For example:

import torch.nn as nn
import torch.nn.functional as F

# module style
relu = nn.ReLU()
y = relu(x)

# functional style
y = F.relu(x)

The functional style is convenient when using non-parametric and non-learnable functions.
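
Inside a custom module, a common pattern is to keep parametric layers as submodules (so their weights are registered) and call the stateless operations from F in forward; a rough sketch (the layer sizes are arbitrary):

import torch.nn as nn
import torch.nn.functional as F

class Net(nn.Module):
    def __init__(self):
        super(Net, self).__init__()
        self.conv = nn.Conv2d(1, 10, 5)
        self.fc = nn.Linear(10 * 12 * 12, 10)

    def forward(self, x):
        x = F.max_pool2d(F.relu(self.conv(x)), 2)   # non-learnable ops via F
        x = x.view(x.size(0), -1)
        return F.log_softmax(self.fc(x))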

Documentation for these functions is here: http://pytorch.org/docs/nn.html#torch-nn-functional

Faster GPU code

The initialization of the GPU backend has been made lazy. This means that it will automatically be
imported and initialized when needed (and not beforehand). Doing this has improved startup times (especially on multi-GPU systems) and reduced boilerplate code.

We've also integrated support for pinned memory, which accelerates CPU to GPU transfers for specially marked buffers. Using this, we accelerated the multiprocessing data loaders.
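
As a rough sketch, pinning a CPU tensor page-locks its memory so the copy to the GPU is faster; the pin_memory flag on DataLoader (shown commented out) is how the data loaders leverage it, though treat the exact flag as an assumption for your version:

import torch

x = torch.randn(64, 3, 224, 224).pin_memory()   # page-locked CPU tensor
y = x.cuda()                                     # faster host-to-device copy

# loader = torch.utils.data.DataLoader(dataset, batch_size=64, pin_memory=True)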

A rich set of examples

With the help of some of you, we've added a rich set of examples from Image Super-resolution to Neural Machine Translation.
You can explore more here: https://github.com/pytorch/examples

API Reference and Notes

We've fleshed out an API reference that is mostly complete at docs.pytorch.org.
Contributions are welcome :)

We've also added notes such as CUDA Semantics, Extending PyTorch, etc.

Multiprocessing support for CUDA

Until now, Tensor sharing via multiprocessing only worked for CPU Tensors.
We've now enabled Tensor sharing for CUDA tensors when using Python 3.
You can read more notes here: http://pytorch.org/docs/notes/multiprocessing.html
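
A minimal sketch of sharing a CUDA tensor between processes (the 'spawn' start method is an assumption here; see the notes above for the exact requirements):

import torch
import torch.multiprocessing as mp

def worker(t):
    t.fill_(1)   # the child sees the same CUDA storage, not a copy

if __name__ == '__main__':
    mp.set_start_method('spawn')
    t = torch.zeros(10).cuda()
    p = mp.Process(target=worker, args=(t,))
    p.start()
    p.join()
    print(t)     # filled with ones by the child process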

Lua Reader

A "lua reader" has been integrated, that can load most LuaTorch .t7 files, including nn models.
nngraph models are not supported.

Example usage can be found here: https://discuss.pytorch.org/t/convert-import-torch-model-to-pytorch/37/2
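
As a rough sketch (assuming the load_lua helper in torch.utils.serialization; the file name is a placeholder):

from torch.utils.serialization import load_lua

model = load_lua('model.t7')   # returns the deserialized Tensor or nn model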