Character-Level Neural Machine Translation

This is an implementation of the models described in the paper "A Character-Level Decoder without Explicit Segmentation for Neural Machine Translation" (http://arxiv.org/abs/1603.06147).

Dependencies:

Most of the scripts are written in pure Theano. The preprocessing pipeline has the following additional dependencies:
Python libraries: NLTK
MOSES: https://github.com/moses-smt/mosesdecoder
Subword-NMT (http://arxiv.org/abs/1508.07909): https://github.com/rsennrich/subword-nmt

This code is based on the dl4mt-tutorial library: https://github.com/nyu-dl/dl4mt-tutorial

Be sure to include the path to this library in your PYTHONPATH.
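For example (a sketch; the clone location ~/src is illustrative, so substitute wherever you actually cloned the repository):

```shell
# Add the dl4mt-tutorial checkout to PYTHONPATH so its modules can be imported.
# The path below is illustrative; replace it with your actual clone location.
export PYTHONPATH="$HOME/src/dl4mt-tutorial:$PYTHONPATH"
```

To make this persistent, add the export line to your shell startup file (e.g. ~/.bashrc).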

We recommend using the latest version of Theano. For exact reproduction of the results, however, use the following Theano commit.
commit hash: fdfbab37146ee475b3fd17d8d104fb09bf3a8d5c
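The pinned commit can be checked out as follows (a sketch; the editable-install step is an assumption, so adapt it to however you manage your Python environment):

```shell
# Clone Theano and pin it to the commit listed above.
git clone https://github.com/Theano/Theano.git
cd Theano
git checkout fdfbab37146ee475b3fd17d8d104fb09bf3a8d5c
# Editable install is an assumption -- adapt to your environment.
pip install -e .
```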

Preparing Text Corpora:

The original text corpora can be downloaded from http://www.statmt.org/wmt15/translation-task.html
Once the download is finished, use 'preprocess.sh' in the 'preprocess' directory to preprocess the text files. For the character-level decoders, preprocessing is not strictly necessary; however, in order to compare the results with subword-level decoders and other word-level approaches, we apply the same preprocessing to all of the target corpora. Finally, use 'build_dictionary_char.py' for the character-level case and 'build_dictionary_word.py' for the subword-level case to build the vocabularies.
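The steps above can be sketched as follows. The corpus file names and script arguments here are illustrative assumptions (only the script names come from this repository), so check each script's header for its actual usage:

```shell
# Illustrative preprocessing flow; file names are assumptions.
cd preprocess
./preprocess.sh                               # tokenize/normalize the downloaded WMT'15 corpora
python build_dictionary_char.py train.de      # vocabulary for the character-level case
python build_dictionary_word.py train.bpe.de  # vocabulary for the subword-level case
```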
Updating...