

mini seq2seq

Minimal Seq2Seq model with attention for neural machine translation in PyTorch.

This implementation focuses on the following features:

  • Modular structure, for reuse in other projects
  • Minimal code, for readability
  • Full utilization of batching and the GPU

This implementation relies on torchtext to minimize the dataset management and preprocessing code.
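The repository's own model code is in model.py; as a rough illustration of the attention mechanism such a seq2seq model uses, here is a hypothetical minimal sketch (not the repo's actual implementation) of batched dot-product attention: each decoder state is scored against every encoder output, the scores are softmax-normalized, and the weighted sum of encoder outputs becomes the context vector.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class DotProductAttention(nn.Module):
    """Minimal batched dot-product attention (illustrative sketch only)."""

    def forward(self, decoder_hidden, encoder_outputs):
        # decoder_hidden:  (batch, hidden)          current decoder state
        # encoder_outputs: (batch, src_len, hidden) all encoder states
        # Score each source position against the decoder state: (batch, src_len)
        scores = torch.bmm(encoder_outputs, decoder_hidden.unsqueeze(2)).squeeze(2)
        # Normalize scores into an attention distribution over source positions
        weights = F.softmax(scores, dim=1)
        # Context vector = attention-weighted sum of encoder outputs: (batch, hidden)
        context = torch.bmm(weights.unsqueeze(1), encoder_outputs).squeeze(1)
        return context, weights
```

Because every operation is a batched matrix multiply, the whole source sentence and the whole batch are attended to in one call, which is what makes GPU utilization efficient.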

Model description

Requirements

  • GPU & CUDA
  • Python3
  • PyTorch
  • torchtext
  • Spacy
  • numpy
  • Visdom (optional)

Download the Spacy tokenizer models:

sudo python3 -m spacy download de
sudo python3 -m spacy download en

References

Based on the following implementations
