Facebook AI Research Sequence-to-Sequence Toolkit written in Python.

Introduction

Fairseq(-py) is a sequence modeling toolkit that allows researchers and developers to train custom models for translation, summarization, language modeling and other text generation tasks. It provides reference implementations of various sequence-to-sequence models, including convolutional and transformer architectures (see the pre-trained models below).

Fairseq features:

  • multi-GPU (distributed) training on one machine or across multiple machines
  • fast beam search generation on both CPU and GPU
  • large mini-batch training even on a single GPU via delayed updates (see the example command below)
  • fast half-precision floating point (FP16) training
  • extensible: easily register new models, criterions, and tasks
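
Delayed updates and FP16 training are both controlled from the train.py command line. The following is only a rough sketch: data-bin/my-dataset, the architecture and the hyperparameters are placeholders, and the exact flag set depends on your setup.

# Sketch: accumulate gradients over 16 mini-batches before each update (--update-freq)
# and train in half precision (--fp16); paths and hyperparameters are placeholders.
$ python train.py data-bin/my-dataset \
  --arch transformer_wmt_en_de --optimizer adam --lr 0.0005 \
  --max-tokens 4000 --update-freq 16 --fp16 \
  --save-dir checkpoints/my-model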

We also provide pre-trained models for several benchmark translation and language modeling datasets.

(Figure: fairseq model animation, fairseq.gif)

Requirements and Installation

Currently fairseq requires PyTorch version >= 0.4.0. Please follow the instructions here: https://github.com/pytorch/pytorch#installation.

If you use Docker, make sure to increase the shared memory size, either with --ipc=host or --shm-size as command line options to nvidia-docker run.
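
For example, a minimal sketch (pytorch/pytorch:latest is only a placeholder for whichever CUDA-enabled image you actually use):

# Sketch: share the host's shared memory segment with the container so that
# multi-worker data loading is not limited by the default /dev/shm size.
$ nvidia-docker run --ipc=host -it --rm pytorch/pytorch:latest bash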

After PyTorch is installed, you can install fairseq with:

pip install -r requirements.txt
python setup.py build develop

Getting Started

The full documentation contains instructions for getting started, training new models and extending fairseq with new model types and tasks.

Pre-trained Models

We provide the following pre-trained models and pre-processed, binarized test sets:

Translation

Description | Dataset | Model | Test set(s)
Convolutional (Gehring et al., 2017) | WMT14 English-French | download (.tar.bz2) | newstest2014: download (.tar.bz2); newstest2012/2013: download (.tar.bz2)
Convolutional (Gehring et al., 2017) | WMT14 English-German | download (.tar.bz2) | newstest2014: download (.tar.bz2)
Convolutional (Gehring et al., 2017) | WMT17 English-German | download (.tar.bz2) | newstest2014: download (.tar.bz2)
Transformer (Ott et al., 2018) | WMT14 English-French | download (.tar.bz2) | newstest2014 (shared vocab): download (.tar.bz2)
Transformer (Ott et al., 2018) | WMT16 English-German | download (.tar.bz2) | newstest2014 (shared vocab): download (.tar.bz2)
Transformer (Edunov et al., 2018; WMT'18 winner) | WMT'18 English-German | download (.tar.bz2) | See NOTE in the archive

Language models

Description | Dataset | Model | Test set(s)
Convolutional (Dauphin et al., 2017) | Google Billion Words | download (.tar.bz2) | download (.tar.bz2)
Convolutional (Dauphin et al., 2017) | WikiText-103 | download (.tar.bz2) | download (.tar.bz2)

Stories

Description | Dataset | Model | Test set(s)
Stories with Convolutional Model (Fan et al., 2018) | WritingPrompts | download (.tar.bz2) | download (.tar.bz2)

Usage

Generation with the binarized test sets can be run in batch mode as follows, e.g. for WMT 2014 English-French on a GTX-1080ti:

$ curl https://s3.amazonaws.com/fairseq-py/models/wmt14.v2.en-fr.fconv-py.tar.bz2 | tar xvjf - -C data-bin
$ curl https://s3.amazonaws.com/fairseq-py/data/wmt14.v2.en-fr.newstest2014.tar.bz2 | tar xvjf - -C data-bin
$ python generate.py data-bin/wmt14.en-fr.newstest2014  \
  --path data-bin/wmt14.en-fr.fconv-py/model.pt \
  --beam 5 --batch-size 128 --remove-bpe | tee /tmp/gen.out
...
| Translated 3003 sentences (96311 tokens) in 166.0s (580.04 tokens/s)
| Generate test with beam=5: BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)

# Scoring with score.py:
$ grep ^H /tmp/gen.out | cut -f3- > /tmp/gen.out.sys
$ grep ^T /tmp/gen.out | cut -f2- > /tmp/gen.out.ref
$ python score.py --sys /tmp/gen.out.sys --ref /tmp/gen.out.ref
BLEU4 = 40.83, 67.5/46.9/34.4/25.5 (BP=1.000, ratio=1.006, syslen=83262, reflen=82787)
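
Translations can also be produced interactively with interactive.py, which reads source sentences from standard input. A rough sketch using the same model and data as above (the input is expected to be tokenized and BPE-encoded the same way as the training data, and the flags simply mirror the batch example, so adjust as needed):

# Sketch: interactive generation with the pre-trained WMT14 En-Fr model downloaded above.
$ python interactive.py data-bin/wmt14.en-fr.newstest2014 \
  --path data-bin/wmt14.en-fr.fconv-py/model.pt \
  --beam 5 --remove-bpe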

Join the fairseq community

Citation

If you use the code in your paper, please cite it as:

@inproceedings{gehring2017convs2s,
  author    = {Gehring, Jonas and Auli, Michael and Grangier, David and Yarats, Denis and Dauphin, Yann N},
  title     = "{Convolutional Sequence to Sequence Learning}",
  booktitle = {Proc. of ICML},
  year      = 2017,
}

License

fairseq(-py) is BSD-licensed. The license applies to the pre-trained models as well. We also provide an additional patent grant.

Credits

This is a PyTorch version of fairseq, a sequence-to-sequence learning toolkit from Facebook AI Research. The original authors of this reimplementation are (in no particular order) Sergey Edunov, Myle Ott, and Sam Gross.