dialogue-utterance-rewriter-corpus

Dataset for the ACL 2019 paper "Improving Multi-turn Dialogue Modelling with Utterance ReWriter"

After another two months of human labeling, we are releasing a dataset of much higher quality (positive samples only) than the one used in our paper, to support further research. We hope it helps you achieve better results.

Description

The positive dataset contains 20,000 dialogs. Each line in corpus.txt consists of four utterances: two context utterances, the current utterance, and the rewritten utterance. Fields are tab-delimited (one tab) in the following format:

<A: context_1>\t<B: context_2>\t<A: current>\t<A: rewritten current>
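A line in this format can be parsed by splitting on tabs. The sketch below is illustrative (the example utterances are made up; `parse_line` is a hypothetical helper, not part of this repo):

```python
def parse_line(line):
    """Split one corpus.txt line into its four tab-separated utterances."""
    fields = line.rstrip("\n").split("\t")
    if len(fields) != 4:
        raise ValueError("expected 4 tab-separated fields, got %d" % len(fields))
    context_1, context_2, current, rewritten = fields
    return {
        "context": [context_1, context_2],
        "current": current,
        "rewritten": rewritten,
    }

# Made-up example line in the documented format:
example = "A: hello\tB: who is there\tA: it is me\tA: it is me, Alice"
parsed = parse_line(example)
```

The rewritten utterance is spoken by the same speaker as the current utterance, so the target for the model is the last field.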

LSTM-based Model

About the code

This code is based on the Pointer-Generator code.

Requirements

To run the source code, the following external packages are required:

  • Python 2.7
  • TensorFlow 1.4

Vocab file format (one word per line):

<word>\t<count>
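A vocab file in this `<word>\t<count>` format can be produced with a simple frequency count over the corpus. A minimal sketch, assuming whitespace tokenization (the actual tokenization used to build the released vocab is not specified here):

```python
from collections import Counter

def build_vocab_lines(corpus_lines, max_size=50000):
    """Count whitespace tokens across all tab-separated fields and
    return lines in the <word>\t<count> format, most frequent first."""
    counts = Counter()
    for line in corpus_lines:
        for field in line.rstrip("\n").split("\t"):
            counts.update(field.split())
    return ["%s\t%d" % (word, count) for word, count in counts.most_common(max_size)]

# Tiny made-up corpus: token "a" appears 3 times, "b" and "c" twice each.
vocab_lines = build_vocab_lines(["a b a\tb c", "a c"])
```

Writing `"\n".join(vocab_lines)` to a file then yields a vocab file in the expected format.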

Run training and Run (concurrent) eval

To train your model, run train.sh. You may also want to run a concurrent evaluation job that runs your model on the validation set and logs the loss; to do this, run val.sh:

sh train.sh
sh val.sh

Run beam search decoding

To run beam search decoding, first set restore_best_model=1 to restore the best model, then run:

sh train.sh
sh test.sh

Why can't you release the Transformer model? Due to company legal policy, we cannot release the Transformer code that has been used in our online environment. However, feel free to email us to discuss training and model details.

Citation

@inproceedings{su2019improving,
  title={Improving Multi-turn Dialogue Modelling with Utterance ReWriter},
  author={Su, Hui and Shen, Xiaoyu and Zhang, Rongzhi and Sun, Fei and Hu, Pengwei and Niu, Cheng and Zhou, Jie},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year={2019}
}