PyTorch implementation of the paper:
Non-Monotonic Sequential Text Generation
Sean Welleck, Kianté Brantley, Hal Daumé III, Kyunghyun Cho
ICML 2019
We present code and data for training models described in the paper, and notebooks for evaluating pre-trained models.
Install the package with:

python setup.py develop
For downloading the datasets below, it may be helpful to use gdown.pl.
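For reference, gdown.pl takes a Google Drive URL and an output filename; a hypothetical invocation looks like the following (the URL and filename are placeholders, not actual dataset links):

```bash
# Placeholder URL and output name; use the Drive links below instead
./gdown.pl 'https://drive.google.com/uc?id=FILE_ID' dataset.zip
```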
Persona-Chat data (word reordering and unconditional generation):
- Google drive
- Put the `.jsonl` files into a directory `{PCHAT_DIR}` (see the sketch below).
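A minimal sketch of this step, assuming the `.jsonl` files were downloaded to the current directory (the path is a placeholder for `{PCHAT_DIR}`):

```bash
# Placeholder path standing in for {PCHAT_DIR}
mkdir -p /path/to/personachat
mv *.jsonl /path/to/personachat/
```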
IWSLT machine translation data:
- Google drive
- Unzip the dataset, e.g. to `/path/to/iwslt`. Then `{MT_DIR}` below will be `/path/to/iwslt/IWSLT/en-de/` (see the sketch below).
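A minimal sketch of this step, assuming the download is a single zip archive (the archive name is a placeholder):

```bash
unzip iwslt.zip -d /path/to/iwslt     # archive name is a placeholder
MT_DIR=/path/to/iwslt/IWSLT/en-de/    # use this value for {MT_DIR} below
```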
You can use and evaluate pre-trained models in one of the provided notebooks:
| Task | Models | Notebook |
|---|---|---|
| Word Reordering | Google drive | `notebooks/word_reorder_eval.ipynb` |
| Unconditional Generation | Google drive | `notebooks/unconditional_eval.ipynb` |
| Translation (Transformer) | Google drive | `notebooks/translation_eval.ipynb` |
The word-reordering and translation notebooks reproduce the evaluation metrics (e.g. BLEU) in the paper.
The unconditional notebook demonstrates the models via interactive sampling and tree completion.
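The notebooks can be opened with Jupyter, for example:

```bash
jupyter notebook notebooks/word_reorder_eval.ipynb
```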
To train models yourself, first download and unzip GloVe vectors into a directory `{GLOVE_DIR}` (they are used by the word-reordering and unconditional generation commands below).
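A sketch of one way to obtain them, using the standard Stanford NLP release of the 840B-token, 300-dimensional vectors referenced in the commands below:

```bash
wget http://nlp.stanford.edu/data/glove.840B.300d.zip
unzip glove.840B.300d.zip -d {GLOVE_DIR}
```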
Word reordering:

python tree_text_gen/binary/bagorder/train.py --glovepath {GLOVE_DIR}/glove.840B.300d.txt \
    --datadir {PCHAT_DIR}
Unconditional generation:

python tree_text_gen/binary/unconditional/train.py --glovepath {GLOVE_DIR}/glove.840B.300d.txt \
    --datadir {PCHAT_DIR}
Machine translation (Transformer):

python tree_text_gen/binary/translation/train_transformer.py --datadir {MT_DIR}

Use `--multigpu` for multi-GPU training.
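For example, to train on multiple GPUs:

```bash
python tree_text_gen/binary/translation/train_transformer.py --datadir {MT_DIR} --multigpu
```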
Machine translation:

python tree_text_gen/binary/translation/train.py --datadir {MT_DIR} --model-type translation \
    --beta-burnin 2 --beta-step 0.05 \
    --self-teach-beta-step 0.05
By default these commands train policies with the annealed oracle. See `tree_text_gen/binary/{bagorder, unconditional, translation}/args.py` for hyperparameters and arguments.
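Assuming the training scripts use argparse, the full set of arguments can also be listed from the command line:

```bash
python tree_text_gen/binary/translation/train.py --help
```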