`langdist` is a Python project for experimenting with character-level multilingual language modeling: studying how learning a character-level language model in one language helps learning a character-level language model in another language. The project is still under development and offers only limited functionality.
- Download and preprocess multilingual parallel corpora (Multilingual Bible Parallel Corpus)
- Train a monolingual language model
  - A language model trained on a single language
- Train a bilingual language model
  - A language model trained on top of another language model (its parameters are initialized from the other model's parameters)
- Generate texts using a trained language model
- This repository runs on Ubuntu 14.04 LTS and Mac OS X 10.x (not tested on other OSs)
- Tested only on Python 3.5
`langdist` depends on NumPy and SciPy, Python packages for scientific computing. You may need to install them before installing `langdist`.
You can install `langdist` with:

```
pip install langdist
```

This installs the `langdist` package into your Python environment and adds the `langdist` command to your `PATH`.
`langdist` also depends on the `tensorflow` package. By default, it installs the CPU-only version of `tensorflow`. If you want to use a GPU, you need to install `tensorflow` with GPU support yourself (cf. Installing Tensorflow).
After installation, `langdist --help` prints usage information for the `langdist` command.
`langdist` provides a command to download and preprocess a corpus from the Multilingual Bible Parallel Corpus. The following command downloads an English corpus and saves it to `./en_corpus.pkl`:
```
langdist download-bible en en_corpus.pkl
```
Note that `en` here is the language code for English. Specifying an invalid language code raises an error whose message lists the valid language codes.
Before training a language model, you need to fit an encoder to the characters used in your corpora. The same encoder is reused when you train a new language model on top of another one (a multilingual language model), so you need to fit the encoder to all the corpora you will train multilingual language models on.
The following command fits an encoder to English, French, and Japanese corpora and saves it to `./en_fr_ja_encoder.pkl`:
```
langdist fit-encoder en_fr_ja_encoder.pkl en_corpus.pkl fr_corpus.pkl ja_corpus.pkl
```
Note that `xx_corpus.pkl` is a pickle file of a corpus, which can be generated by the `langdist download-bible` command. You can also create a list of texts yourself and save it as a pickle file. (Each element of the list corresponds to a segment such as a sentence, paragraph, or article, depending on your purpose.)
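For example, building such a custom corpus file is a one-liner with the standard `pickle` module, assuming (as described above) that a corpus is simply a pickled list of Python strings, one per segment (`my_corpus.pkl` is a made-up filename):

```python
import pickle

# Hypothetical custom corpus: one string per segment
# (sentence, paragraph, article, etc., depending on your purpose)
segments = [
    "The quick brown fox jumps over the lazy dog.",
    "Pack my box with five dozen liquor jugs.",
]

with open("my_corpus.pkl", "wb") as f:
    pickle.dump(segments, f)

# Round-trip check: the file loads back as the same list of strings
with open("my_corpus.pkl", "rb") as f:
    assert pickle.load(f) == segments
```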
The following command trains a French language model and saves it to the `./fr_model` directory:
```
langdist train fr_corpus.pkl en_fr_ja_encoder.pkl fr_model --patience=819200 --logpath=fr.log
```
Note that using an encoder that was not fit to the corpus will throw an exception. The `--patience` option specifies how many iterations to keep training, and the `--logpath` option specifies the path to a log file that records the progress of the training (no log file is created if you don't specify the option).
During training, various statistics are dumped to the `path_to_model_dir/tensorboard.log` directory. You can visualize them with TensorBoard via `tensorboard --logdir=path_to_model_dir/tensorboard.log`. The model is saved every time the validation perplexity is computed, so it can be used before training finishes.

Check the output of `langdist --help` to see what other options are available for training a language model.
The following command trains an English language model on top of the French language model trained above and saves it to the `fr2en_model` directory:
```
langdist retrain fr_model en_corpus.pkl fr2en_model --patience=819200 --logpath=langdist.log
```
Note that you don't have to specify a path to an encoder, because the model in `fr_model` includes one. If the encoder used when training `fr_model` was not fit to the characters in `en_corpus.pkl`, an exception is thrown.
As with `langdist train`, training statistics are dumped to the `path_to_model_dir/tensorboard.log` directory (viewable with `tensorboard --logdir=path_to_model_dir/tensorboard.log`), and the model is saved every time the validation perplexity is computed. Check the output of `langdist --help` to see what other options are available.
Once you have trained a language model, the following command will generate texts using the trained language model:
```
langdist generate fr2en_model --sample-num=50
```
The `--sample-num` option sets the number of texts to generate. Note that each text is independently generated (sampled) by the language model.

Check the output of `langdist --help` to see what other options are available for generating texts.
`langdist` can also be used as a normal Python package by importing `langdist`, which `pip install langdist` adds to your Python environment. Reading `langdist/cli.py` is a good way to figure out how to use the package.
The language model is implemented as a character-level multilayer LSTM. The architecture is roughly as follows:
- Character Embedding Layer
- 1st LSTM Layer
- 2nd LSTM Layer
- Fully Connected Layer
For the details, look at the `_build_graph()` method in `langdist/langmodel.py`, which implements the computational graph of this architecture in `tensorflow`.
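The data flow through these four layers can be sketched in plain NumPy. This is an illustrative toy, not the project's actual TensorFlow implementation: all dimensions and weights here are made up, and training is omitted entirely.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, embed_dim, hidden_dim = 10, 4, 8  # toy sizes, not the real ones

# Character Embedding Layer: a lookup table mapping char ids to vectors
embedding = rng.standard_normal((vocab_size, embed_dim))

def lstm_step(x, h, c, W, U, b):
    """One step of a standard LSTM cell (input/forget/output gates + candidate)."""
    z = x @ W + h @ U + b                 # all four gate pre-activations at once
    i, f, o, g = np.split(z, 4)
    i, f, o = (1 / (1 + np.exp(-v)) for v in (i, f, o))  # sigmoid gates
    c_new = f * c + i * np.tanh(g)        # update the cell state
    h_new = o * np.tanh(c_new)            # emit the hidden state
    return h_new, c_new

def make_lstm(in_dim):
    """Random weights for one LSTM layer: (W, U, b)."""
    return (rng.standard_normal((in_dim, 4 * hidden_dim)) * 0.1,
            rng.standard_normal((hidden_dim, 4 * hidden_dim)) * 0.1,
            np.zeros(4 * hidden_dim))

lstm1, lstm2 = make_lstm(embed_dim), make_lstm(hidden_dim)
W_out = rng.standard_normal((hidden_dim, vocab_size)) * 0.1

# Forward pass over a toy character-id sequence
char_ids = [1, 3, 5, 2]
h1 = c1 = h2 = c2 = np.zeros(hidden_dim)
for cid in char_ids:
    x = embedding[cid]                       # Character Embedding Layer
    h1, c1 = lstm_step(x, h1, c1, *lstm1)    # 1st LSTM Layer
    h2, c2 = lstm_step(h1, h2, c2, *lstm2)   # 2nd LSTM Layer
logits = h2 @ W_out                          # Fully Connected Layer
probs = np.exp(logits) / np.exp(logits).sum()  # next-character distribution
print(probs.shape)  # (10,) — one probability per character in the vocabulary
```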
TODO: Add a link to the blog post Bilingual Character-level Neural Language Modeling