This is the PyTorch implementation of the paper: Multi-Domain Dialogue Acts and Response Co-Generation. We also release the human evaluation results for future research.
The model consists of three components, namely, a shared encoder, an act generator and a response generator.
Our dialogue act generator and response generator share the same encoder and input, but use different masking strategies.
We model act prediction as a sequence generation problem and train the act generator jointly with the response generator.
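Treating act prediction as sequence generation means the structured dialogue acts are flattened into a token sequence that a standard decoder can emit. A minimal sketch of such a serialization, assuming acts are (domain, act, slot) triples (the exact act format used in this repo may differ):

```python
# Hedged sketch: flatten structured dialogue acts into a token sequence
# so a standard sequence decoder can generate them.
# The (domain, act, slot) triple format is an illustrative assumption.
def serialize_acts(acts):
    """Flatten (domain, act, slot) triples into a flat token sequence."""
    tokens = []
    for domain, act, slot in acts:
        tokens.extend([domain, act, slot])
    return tokens

acts = [("restaurant", "inform", "name"), ("restaurant", "request", "area")]
print(serialize_acts(acts))
```

The flat sequence can then be decoded token by token with the same cross-entropy objective as the response, which is what makes joint training straightforward.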
The output of the act generator is then used to guide the response generation.
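The overall data flow among the three components can be sketched as follows. This is an illustrative outline only, with toy stand-in functions; it is not the repo's actual implementation:

```python
# Hedged sketch of the co-generation flow: a shared encoder feeds both
# generators, and the response generator also conditions on the generated
# act sequence. All function names here are illustrative assumptions.
def co_generate(context, encode, generate_acts, generate_response):
    hidden = encode(context)                        # shared encoder
    act_seq = generate_acts(hidden)                 # act generator
    response = generate_response(hidden, act_seq)   # response guided by acts
    return act_seq, response

# Toy stand-ins to show the data flow only.
acts, resp = co_generate(
    "i need a cheap restaurant",
    encode=lambda c: c.split(),
    generate_acts=lambda h: ["restaurant", "inform", "name"],
    generate_response=lambda h, a: "[restaurant_name] is a cheap option .",
)
print(acts, resp)
```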
The dataset is already preprocessed and placed in the data/ folder (train.json, val.json and test.json). We have also uploaded model checkpoints to the model/ folder for those who only want to test the performance.
CUDA_VISIBLE_DEVICES=0 python train_generator.py --option train --model model/ --batch_size 384 --max_seq_length 50 --act_source bert
Delexicalized Testing (entities are normalized into placeholders such as [restaurant_name])
CUDA_VISIBLE_DEVICES=0 python train_generator.py --option test --model model/MarCo_BERT --batch_size 384 --max_seq_length 50 --act_source bert