# A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text

This is a PyTorch implementation of the following paper:

A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text
Bohan Li*, Junxian He*, Graham Neubig, Taylor Berg-Kirkpatrick, Yiming Yang
EMNLP 2019

Please contact bohanl1@cs.cmu.edu if you have any questions.

## Requirements

- Python >= 3.6
- PyTorch >= 1.0
- editdistance (`pip install editdistance`)

## Data

Datasets used in this paper can be downloaded with:

```
python prepare_data.py
```

## Usage

### Train an AE first

```
python text_beta.py \
    --dataset yahoo \
    --beta 0 \
    --lr 0.5
```
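Here `--beta 0` sets the weight on the KL term of the (negative) ELBO to zero, so training reduces to pure reconstruction, i.e. a plain autoencoder. A minimal sketch of that objective (the helper name and the numbers are illustrative assumptions, not the repo's actual code):

```python
def beta_elbo_loss(recon_nll, kl, beta):
    """Beta-weighted negative ELBO: reconstruction NLL + beta * KL
    (hypothetical helper for illustration, not the repo's code)."""
    return recon_nll + beta * kl

# With beta = 0 the KL term vanishes: the model trains as a plain AE.
print(beta_elbo_loss(80.0, 12.0, 0.0))   # 80.0
# With beta = 1 this is the standard (negative) ELBO.
print(beta_elbo_loss(80.0, 12.0, 1.0))   # 92.0
```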

### Train a VAE with our method

```
ae_exp_dir=exp_yahoo_beta/yahoos_lr0.5_beta0.0_drop0.5
python text_anneal_fb.py \
    --dataset yahoo \
    --load_path ${ae_exp_dir}/model.pt \
    --reset_dec \
    --kl_start 0 \
    --warm_up 10 \
    --target_kl 8 \
    --fb 2 \
    --lr 0.5
```
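The flags above combine two mechanisms: `--kl_start 0` with `--warm_up 10` anneals the KL weight linearly from 0 to 1, and `--target_kl 8` keeps the KL term from falling below 8 nats, a free-bits-style guard against posterior collapse. A rough sketch of these two pieces, using hypothetical helper names (the repo's actual `--fb 2` mode may differ in detail):

```python
def kl_annealing_weight(step, warm_up_steps):
    """Linearly anneal the KL weight from 0 to 1 over warm_up_steps
    (a sketch of --kl_start 0 with --warm_up; hypothetical helper)."""
    return min(1.0, step / warm_up_steps)

def free_bits_kl(kl, target_kl):
    """Free-bits-style thresholding (cf. --target_kl 8): the KL term
    is clamped from below at target_kl, so the optimizer gains nothing
    by collapsing the posterior onto the prior."""
    return max(kl, target_kl)

# Halfway through a 10-step warm-up, the KL weight is 0.5.
print(kl_annealing_weight(5, 10))   # 0.5
# A KL of 3 nats, below the 8-nat target, still contributes 8.
print(free_bits_kl(3.0, 8.0))       # 8.0
```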

Logs, models, and samples will be saved to the folder `exp`.

## Reference

```
@inproceedings{li2019emnlp,
    title = {A Surprisingly Effective Fix for Deep Latent Variable Modeling of Text},
    author = {Bohan Li and Junxian He and Graham Neubig and Taylor Berg-Kirkpatrick and Yiming Yang},
    booktitle = {Conference on Empirical Methods in Natural Language Processing (EMNLP)},
    address = {Hong Kong},
    month = {November},
    year = {2019}
}
```

## Acknowledgements

A large portion of this repo is borrowed from https://github.com/jxhe/vae-lagging-encoder.
