Supporting Rapid Prototyping with a Deep Learning NLP Toolkit

PyTorch-NLP, or torchnlp for short, is a library of neural network layers, text processing modules and datasets designed to accelerate Natural Language Processing (NLP) research.

Join our community and add datasets and neural network layers! Chat with us on Gitter and join the Google Group; we're eager to collaborate with you.


Installation

Make sure you have Python 3.5+ and PyTorch 0.4 or newer. You can then install pytorch-nlp using pip:

pip install pytorch-nlp

Or install the latest code directly from the repository:

pip install git+https://github.com/PetrochukM/PyTorch-NLP.git

Docs 📖

The complete documentation for PyTorch-NLP is available via our ReadTheDocs website.

Basics

Add PyTorch-NLP to your project by following one of the common use cases:

Load a Dataset

Load the IMDB dataset, for example:

from torchnlp.datasets import imdb_dataset

# Load the IMDB training dataset
train = imdb_dataset(train=True)
train[0]  # RETURNS: {'text': 'For a movie that gets..', 'sentiment': 'pos'}
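
The loader can also return multiple splits at once. A minimal sketch, assuming the torchnlp convention that setting several split flags returns a tuple of datasets:

from torchnlp.datasets import imdb_dataset

# With both flags set, the loader is assumed to return a (train, test)
# tuple, following the convention used across `torchnlp.datasets`.
train, test = imdb_dataset(train=True, test=True)
len(train), len(test)  # RETURNS: (25000, 25000)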

Apply Neural Networks Layers

For example, from the neural network package, apply a Simple Recurrent Unit (SRU):

from torchnlp.nn import SRU
import torch

input_ = torch.randn(6, 3, 10)  # plain tensors suffice; Variable is deprecated since PyTorch 0.4
sru = SRU(10, 20)

# Apply a Simple Recurrent Unit to `input_`
sru(input_)
# RETURNS: (
#   output [torch.FloatTensor (6x3x20)],
#   hidden_state [torch.FloatTensor (2x3x20)]
# )
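
The hidden state above has a leading dimension of 2, which matches the upstream SRU implementation's default of two stacked layers. A short sketch of stacking more layers explicitly; the `num_layers` argument is an assumption carried over from that upstream implementation, so verify it against the docs:

from torchnlp.nn import SRU
import torch

# `num_layers` is assumed to follow the upstream SRU implementation.
sru = SRU(10, 20, num_layers=4)
output, hidden_state = sru(torch.randn(6, 3, 10))
output.shape        # RETURNS: torch.Size([6, 3, 20])
hidden_state.shape  # RETURNS: torch.Size([4, 3, 20]), one state per layer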

Encode Text

Tokenize and encode text as a tensor. For example, a WhitespaceEncoder breaks text into terms whenever it encounters a whitespace character.

from torchnlp.text_encoders import WhitespaceEncoder

# Create a `WhitespaceEncoder` with a corpus of text
encoder = WhitespaceEncoder(["now this ain't funny", "so don't you dare laugh"])

# Encode and decode phrases
encoder.encode("this ain't funny.") # RETURNS: torch.LongTensor([6, 7, 1])
encoder.decode(encoder.encode("this ain't funny.")) # RETURNS: "this ain't funny."
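
Under the hood, the encoder builds a vocabulary from the corpus, placing a handful of reserved tokens (padding, unknown, and so on) before the corpus terms. A brief sketch of inspecting it; `vocab` and `vocab_size` are assumed attribute names, so check them against the text_encoders documentation:

from torchnlp.text_encoders import WhitespaceEncoder

encoder = WhitespaceEncoder(["now this ain't funny", "so don't you dare laugh"])

# `vocab` and `vocab_size` are assumed attribute names; verify in the docs.
encoder.vocab_size  # reserved tokens plus one entry per unique corpus term
encoder.vocab[:2]   # e.g. ['<pad>', '<unk>'], reserved tokens come first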

Load Word Vectors

For example, load FastText, state-of-the-art English word vectors:

from torchnlp.word_to_vector import FastText

vectors = FastText()
# Load vectors for any word as a `torch.FloatTensor`
vectors['hello']  # RETURNS: [torch.FloatTensor of size 300]
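
The vectors compose with ordinary tensor operations. For example, a minimal sketch that embeds a whitespace-tokenized sentence as a (tokens x dimensions) matrix; the stacking here is plain PyTorch, not a torchnlp-specific API:

import torch
from torchnlp.word_to_vector import FastText

vectors = FastText()

# Stack per-token vectors into a (tokens x dimensions) matrix.
sentence = "the quick brown fox"
embedded = torch.stack([vectors[token] for token in sentence.split()])
embedded.shape  # RETURNS: torch.Size([4, 300])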

Compute Metrics

Finally, compute common metrics such as the BLEU score.

from torchnlp.metrics import get_moses_multi_bleu

hypotheses = ["The brown fox jumps over the dog 笑"]
references = ["The quick brown fox jumps over the lazy dog 笑"]

# Compute the BLEU score with the official Moses multi-bleu Perl script
get_moses_multi_bleu(hypotheses, references, lowercase=True)  # RETURNS: 47.9
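
The score is corpus-level, with hypotheses[i] compared against references[i], following the Moses multi-bleu convention. A small sketch reusing the function above; the output noted is illustrative, not a verified value:

from torchnlp.metrics import get_moses_multi_bleu

# Pairs are aligned by index; adding an exact match raises the corpus score.
hypotheses = ["The brown fox jumps over the dog 笑",
              "The quick brown fox jumps over the lazy dog 笑"]
references = ["The quick brown fox jumps over the lazy dog 笑",
              "The quick brown fox jumps over the lazy dog 笑"]

get_moses_multi_bleu(hypotheses, references, lowercase=True)  # RETURNS: a score above 47.9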

Help

For longer examples, take a look at examples/.

Need more help? We are happy to answer your questions via Gitter Chat.

Contributing

We've released PyTorch-NLP because we found a lack of basic toolkits for NLP in PyTorch. We hope that other organizations can benefit from the project. We are thankful for any contributions from the community.

Contributing Guide

Read our contributing guide to learn about our development process, how to propose bugfixes and improvements, and how to build and test your changes to PyTorch-NLP.

Related Work

torchtext

torchtext and PyTorch-NLP are similar in scope but differ in architecture and feature set: both provide pre-trained word vectors, datasets, iterators, and text encoders, while PyTorch-NLP also provides neural network modules and metrics. From an architecture standpoint, torchtext is object-oriented with external coupling, while PyTorch-NLP is object-oriented with low coupling.

AllenNLP

AllenNLP is designed to be a platform for research. PyTorch-NLP is designed to be a lightweight toolkit.

Authors

Michael Petrochuk

Citing

If you find PyTorch-NLP useful for an academic publication, then please use the following BibTeX to cite it:

@misc{pytorch-nlp,
  author = {Petrochuk, Michael},
  title = {PyTorch-NLP: Rapid Prototyping with PyTorch Natural Language Processing (NLP) Tools},
  year = {2018},
  publisher = {GitHub},
  journal = {GitHub repository},
  howpublished = {\url{https://github.com/PetrochukM/PyTorch-NLP}},
}