Proof of concept for a dynamic named tensor for PyTorch.


NamedTensor

Introduction

A proposal for a named tensor for PyTorch, described in the blog post:

http://nlp.seas.harvard.edu/NamedTensor

Currently the library targets the PyTorch ecosystem and Python >= 3.6.

Usage

from namedtensor import ntorch

Creation and manipulation:

x = ntorch.randn(dict(batch=10, h=10, w=20))
x = x.log()
x = x.float()
x = ntorch.exp(x)
x.shape

Transposition:

x = x.transpose("batch", "w", "h")

# or: unmentioned earlier dims stay in place

x = x.transpose("w", "h")
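The idea behind name-based transposition can be sketched in plain Python: resolve the requested names to an axis permutation, leaving unmentioned dims in their relative order at the front. This is an illustrative sketch, not the library's actual implementation; `transpose_perm` is a hypothetical helper.

```python
def transpose_perm(dims, order):
    """Compute an axis permutation from dim names (sketch only).

    dims:  current dim names, in axis order
    order: requested trailing order; dims not mentioned keep their
           relative positions at the front.
    """
    kept = [d for d in dims if d not in order]  # unmentioned dims stay first
    new_order = kept + list(order)
    return [dims.index(d) for d in new_order]

# transpose("w", "h") on a (batch, h, w) tensor: batch stays put
print(transpose_perm(["batch", "h", "w"], ["w", "h"]))  # [0, 2, 1]
```

The same permutation could then be passed to an ordinary `tensor.permute`.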

View replacements:

x = x.stack(stacked=("w", "h"))

# roundtrip back to the separate dims

x = x.split(stacked=("w", "h"), w=20)
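The shape bookkeeping behind `stack`/`split` can be sketched as: stacking merges several named dims into one whose size is the product of theirs, and splitting recovers them given all but one size. A minimal plain-Python sketch (hypothetical helper, not the library's code; the placement of the merged dim is an assumption):

```python
from collections import OrderedDict

def stack_dims(shape, new_name, names):
    """Merge the dims in `names` into one dim `new_name`.

    The merged dim's size is the product of the merged sizes;
    here it is appended at the end (placement is a sketch choice).
    """
    size = 1
    for n in names:
        size *= shape[n]
    out = OrderedDict((n, s) for n, s in shape.items() if n not in names)
    out[new_name] = size
    return out

shape = OrderedDict(batch=10, h=10, w=20)
stacked = stack_dims(shape, "stacked", ("w", "h"))
print(stacked)  # "w" and "h" replaced by "stacked" of size 200
```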

Dim replacements:

x = x.narrow("w", 0, 10)
x = x.softmax("w")

Reduction:

x = x.mean("w")
x, argmax = x.max("w")
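A named reduction simply removes the reduced name from the shape, so there is no positional axis to track. A plain-Python sketch of the shape effect (hypothetical helper, not the library's implementation):

```python
from collections import OrderedDict

def reduce_shape(shape, dim):
    """Shape after reducing over `dim`: that name disappears."""
    return OrderedDict((n, s) for n, s in shape.items() if n != dim)

shape = OrderedDict(batch=10, h=10, w=20)
print(reduce_shape(shape, "w"))  # "w" is gone; batch and h remain
```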

Matrix Operations / EinSum:

x = ntorch.randn(dict(batch=10, h=10, w=20))
y = ntorch.randn(dict(batch=10, w=20, c=30))
x.dot(y, "w")
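A named contraction like the `dot` above can be understood purely in terms of names: the contracted dim is summed out, shared names act as batch dims and must agree, and all remaining names from both operands survive. A plain-Python sketch of the resulting shape (hypothetical helper, not the library's code):

```python
from collections import OrderedDict

def dot_shape(a, b, dim):
    """Result shape of contracting `dim` between two named shapes."""
    assert a[dim] == b[dim], "contracted dim sizes must match"
    out = OrderedDict((n, s) for n, s in a.items() if n != dim)
    for n, s in b.items():
        if n == dim:
            continue
        if n in out:
            assert out[n] == s, "shared (batch) dims must agree"
        else:
            out[n] = s
    return out

x_shape = OrderedDict(batch=10, h=10, w=20)
y_shape = OrderedDict(batch=10, w=20, c=30)
print(dot_shape(x_shape, y_shape, "w"))  # batch=10, h=10, c=30
```

This mirrors einsum semantics, with dim names playing the role of einsum's index letters.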

Lifting Torch Functions

from torch import nn

linear = nn.Linear(20, 25)
x = x.op(linear)

# or, naming the new output dim

x = x.op(linear, wout="w")
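Lifting an unnamed module like `nn.Linear(20, 25)` changes the size of the dim it acts on (here `w`: 20 to 25), and a keyword can rename the resulting dim. A plain-Python sketch of that shape effect (hypothetical helper, not the library's actual `op` logic):

```python
from collections import OrderedDict

def op_shape(shape, in_dim, out_size, out_name=None):
    """Shape after lifting a function that maps the `in_dim` axis
    to `out_size` features, optionally renaming the result (sketch)."""
    out = OrderedDict()
    for n, s in shape.items():
        if n == in_dim:
            out[out_name or n] = out_size
        else:
            out[n] = s
    return out

shape = OrderedDict(batch=10, h=10, w=20)
print(op_shape(shape, "w", 25))            # w: 20 -> 25
print(op_shape(shape, "w", 25, "hidden"))  # "w" renamed to "hidden"
```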

Other Goodies

  • Named distributions library

Documentation

http://nlp.seas.harvard.edu/namedtensor/

Authors