Unofficial DyNet implementation of the paper Deep Recurrent Generative Decoder for Abstractive Text Summarization (EMNLP 2017) [1]
- Python 3.6.0+
- DyNet 2.0+
- NumPy 1.12.1+
- scikit-learn 0.19.0+
- tqdm 4.15.0+
To get the preprocessed Gigaword corpus [3], run

```
sh download_gigaword_dataset.sh
```
- `--gpu`: GPU ID to use. For CPU, set `-1` [default: `0`]
- `--n_epochs`: Number of epochs [default: `3`]
- `--n_train`: Number of training examples (up to `3803957`) [default: `3803957`]
- `--n_valid`: Number of validation examples (up to `189651`) [default: `189651`]
- `--vocab_size`: Vocabulary size [default: `60000`]
- `--batch_size`: Mini-batch size [default: `32`]
- `--emb_dim`: Embedding size [default: `256`]
- `--hid_dim`: Hidden state size [default: `256`]
- `--lat_dim`: Latent state size [default: `256`]
- `--alloc_mem`: Amount of memory to allocate [MB] [default: `8192`]
Example:

```
python train.py --n_epochs 10
```
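The `--gpu` and `--alloc_mem` flags configure DyNet itself, which has to happen before `dynet` is imported. Below is a minimal sketch of how such flags can be forwarded through DyNet's `dynet_config` module; the argument parsing here is illustrative, not necessarily how `train.py` actually wires it up:

```python
import argparse

import dynet_config  # must be configured before importing dynet

parser = argparse.ArgumentParser()
parser.add_argument('--gpu', type=int, default=0)           # GPU ID; -1 for CPU
parser.add_argument('--alloc_mem', type=int, default=8192)  # memory [MB]
args, _ = parser.parse_known_args()

dynet_config.set(mem=args.alloc_mem)  # reserve memory for the computation graph
if args.gpu >= 0:
    dynet_config.set_gpu()            # run on GPU instead of CPU

import dynet as dy  # noqa: E402 (deliberately imported after configuration)
```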
- `--gpu`: GPU ID to use. For CPU, set `-1` [default: `0`]
- `--n_test`: Number of test examples [default: `189651`]
- `--beam_size`: Beam size [default: `5`]
- `--max_len`: Maximum length of decoding [default: `100`]
- `--model_file`: Trained model file path [default: `./model_e1`]
- `--input_file`: Test file path [default: `./data/valid.article.filter.txt`]
- `--output_file`: Output file path [default: `./pred_y.txt`]
- `--w2i_file`: Word2Index file path [default: `./w2i.dump`]
- `--i2w_file`: Index2Word file path [default: `./i2w.dump`]
- `--alloc_mem`: Amount of memory to allocate [MB] [default: `1024`]
Example:

```
python test.py --beam_size 10
```
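`--beam_size` and `--max_len` control the beam-search decoder. As a reference for what those two knobs do, here is a minimal, framework-agnostic sketch of beam search; the `step` callback standing in for one decoder step is hypothetical, and the real implementation lives in `test.py`:

```python
import numpy as np

def beam_search(step, bos_id, eos_id, beam_size=5, max_len=100):
    """step(prefix) -> log-probabilities over the vocabulary (1-D array)."""
    beams = [([bos_id], 0.0)]  # (token prefix, cumulative log-probability)
    finished = []
    for _ in range(max_len):
        pool = []
        for prefix, score in beams:
            log_probs = step(prefix)
            # Expand each live hypothesis with its top-k next tokens.
            for tok in np.argsort(log_probs)[-beam_size:]:
                pool.append((prefix + [int(tok)], score + float(log_probs[tok])))
        # Keep the best hypotheses; set aside those that emitted EOS.
        pool.sort(key=lambda c: c[1], reverse=True)
        beams = []
        for prefix, score in pool[:beam_size]:
            (finished if prefix[-1] == eos_id else beams).append((prefix, score))
        if not beams:  # every hypothesis has ended
            break
    finished.extend(beams)  # hypotheses truncated at max_len
    return max(finished, key=lambda c: c[1])[0]
```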
You can use pythonrouge [2] to measure the ROUGE scores; a usage sketch follows the table below.
|  | ROUGE-1 (F1) | ROUGE-2 (F1) | ROUGE-L (F1) |
| --- | --- | --- | --- |
| My implementation | 43.27 | 19.17 | 40.47 |
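Based on the pythonrouge README, scoring the decoder output might look like the sketch below; using `./data/valid.title.filter.txt` as the reference file is an assumption on my part:

```python
from pythonrouge.pythonrouge import Pythonrouge

# One system summary per line; each document is a list of sentences.
with open('./pred_y.txt') as f:
    summary = [[line.strip()] for line in f]
# Assumed reference file: one gold title per line, aligned with pred_y.txt.
with open('./data/valid.title.filter.txt') as f:
    reference = [[[line.strip()]] for line in f]

rouge = Pythonrouge(summary_file_exist=False,
                    summary=summary, reference=reference,
                    n_gram=2, ROUGE_SU4=False, ROUGE_L=True,
                    recall_only=False, stemming=True, stopwords=True,
                    word_level=True, length_limit=False, length=75,
                    use_cf=False, cf=95, scoring_formula='average',
                    resampling=True, samples=1000, favor=True, p=0.5)
print(rouge.calc_score())  # F1/recall/precision for ROUGE-1/2/L
```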
Work in progress.
To get the pretrained model, run

```
sh download_gigaword_pretrained_model.sh
```
- The ROUGE scores are much higher than the ones reported in the paper, but I don't know why. Please tell me if you know why!
- The original paper lacks some details and notations, and some points do not make sense, so this implementation may differ from the original one.
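For reference, `--lat_dim` sizes the recurrent latent variable at the heart of the paper: at each decoding step a Gaussian latent code is sampled via the standard VAE reparameterization trick, and a closed-form KL term is added to the loss. A schematic NumPy sketch of that one step; the weight names are mine, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(0)
hid_dim, lat_dim = 256, 256

# Hypothetical projections from the decoder hidden state to the posterior.
W_mu = rng.normal(scale=0.01, size=(lat_dim, hid_dim))
W_lv = rng.normal(scale=0.01, size=(lat_dim, hid_dim))

h = rng.normal(size=hid_dim)   # decoder hidden state at step t
mu = W_mu @ h                  # mean of q(z_t | h_t)
log_var = W_lv @ h             # log-variance of q(z_t | h_t)

# Reparameterization: z = mu + sigma * eps keeps sampling differentiable.
eps = rng.normal(size=lat_dim)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL(q(z_t) || N(0, I)) added to the training objective.
kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
```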
- [1] P. Li et al. 2017. Deep Recurrent Generative Decoder for Abstractive Text Summarization. In Proceedings of EMNLP 2017 [pdf]
- [2] pythonrouge: https://github.com/tagucci/pythonrouge
- [3] Gigaword/DUC2004 Corpus: https://github.com/harvardnlp/sent-summary