modlm: A toolkit for mixture of distributions language models
by Graham Neubig (firstname.lastname@example.org)
A Dockerfile is also provided that sets up modlm for training, with all dependencies included:

$ docker build -t modlm:latest .
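Once the image is built, one way to work with it is to start an interactive shell inside the container. The `--rm`, `-it`, and volume-mount options below are illustrative conveniences, and the host data path is an assumption, not something the Dockerfile requires:

```shell
# Start an interactive shell in the image built above (tag "modlm:latest").
docker run --rm -it modlm:latest bash

# Optionally mount a host directory with training data into the container
# (the host path "data" and container path "/data" are illustrative):
docker run --rm -it -v "$PWD/data:/data" modlm:latest bash
```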
First, in terms of standard libraries, you must have autotools, libtool, and Boost. If you are on Ubuntu/Debian Linux, you can install them as follows:

$ sudo apt-get install autoconf automake libtool libboost-all-dev
You must install Eigen and dynet separately. Follow the directions on the
dynet page, which also explain how to install Eigen.
Note that, according to the docs, you should use the version of dynet tagged
v2.0 (commit 1241cfc) and the Eigen changeset 346ecdb:
$ git clone https://github.com/clab/dynet
$ cd dynet; git checkout tags/v2.0; cd ..
$ hg clone https://bitbucket.org/eigen/eigen/ -r 346ecdb

NOTE: Compile dynet before proceeding with modlm.
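The dynet compile itself is not spelled out here; a minimal sketch of its standard cmake build, assuming the dynet and eigen checkouts sit side by side as produced by the clone commands above (see the dynet documentation for the authoritative steps):

```shell
# Build dynet with cmake, pointing it at the Eigen checkout.
# EIGEN3_INCLUDE_DIR should be an absolute path; $PWD makes it so here.
cd dynet
mkdir -p build
cd build
cmake .. -DEIGEN3_INCLUDE_DIR="$PWD/../../eigen"
make -j4
cd ../..
```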
Once these two packages are installed, run the following commands, specifying the correct paths for dynet and Eigen.
$ autoreconf -i
$ ./configure --with-dynet=/path/to/dynet --with-eigen=/path/to/eigen
$ make
The instructions below explain how to use modlm to train and apply language models.
More information about the method used in the toolkit can be found in the following paper:
Generalizing and Hybridizing Count-based and Neural Language Models. Graham Neubig and Chris Dyer. arXiv preprint.
You can find an example of how to run the toolkit in the
example directory, which will reproduce our
main experiments from the paper.
Our main experiments can be run by the following process:
- Enter the example directory
- Decompress the training data
- Run preproc.sh to train count-based language models
- Run process.sh to train neurally interpolated n-gram, standard LSTM language model, and neural/ngram hybrid models
Log files and models will be written out to the
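Under the assumption that the two scripts are invoked from inside the example directory (the exact decompression command for the training data is not shown in this README), the overall flow is roughly:

```shell
# Hypothetical end-to-end sketch of the example experiments; the script names
# preproc.sh and process.sh come from the steps above, invocation details assumed.
cd example
./preproc.sh     # train the count-based language models
./process.sh     # train neurally interpolated n-gram, LSTM, and hybrid models
```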
Further instructions about how to use the program are currently in preparation.