MarCo

This is the PyTorch implementation of the ACL 2020 paper "Multi-Domain Dialogue Acts and Response Co-Generation". We also release the human evaluation results for future research.

Model Architecture

The model consists of three components: a shared encoder, an act generator, and a response generator.

Shared Encoder

The dialogue act generator and the response generator share the same encoder and input, but use different masking strategies.
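
A minimal sketch of this idea, assuming a recent PyTorch (the pinned torch==1.0.1 predates the nn.Transformer modules used here); the module and tensor names are illustrative, not taken from this repository:

import torch
import torch.nn as nn

d_model, n_heads, n_layers, seq_len = 256, 4, 3, 10

# One encoder instance is shared by both generators.
shared_encoder = nn.TransformerEncoder(
    nn.TransformerEncoderLayer(d_model, n_heads, batch_first=True),
    num_layers=n_layers,
)

history = torch.randn(2, seq_len, d_model)  # embedded dialogue history (batch, seq, dim)

# Same input, different attention masks (True = position may not be attended).
act_mask = torch.zeros(seq_len, seq_len, dtype=torch.bool)                          # unrestricted
resp_mask = torch.triu(torch.ones(seq_len, seq_len, dtype=torch.bool), diagonal=1)  # causal

act_memory = shared_encoder(history, mask=act_mask)    # consumed by the act generator
resp_memory = shared_encoder(history, mask=resp_mask)  # consumed by the response generator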

Act Generator

We model act prediction as a sequence generation problem, and the act generator is trained jointly with the response generator.
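
For instance, joint training can be realized by summing the two generators' token-level cross-entropy losses and back-propagating once; the shapes, vocabulary sizes, and equal weighting below are illustrative assumptions rather than the repository's exact setup:

import torch
import torch.nn.functional as F

act_vocab, resp_vocab = 100, 5000

# Hypothetical decoder outputs: (batch, seq_len, vocab) logits from each generator.
act_logits = torch.randn(2, 8, act_vocab, requires_grad=True)
resp_logits = torch.randn(2, 20, resp_vocab, requires_grad=True)

# Gold act-token and response-token sequences.
act_targets = torch.randint(0, act_vocab, (2, 8))
resp_targets = torch.randint(0, resp_vocab, (2, 20))

act_loss = F.cross_entropy(act_logits.reshape(-1, act_vocab), act_targets.reshape(-1))
resp_loss = F.cross_entropy(resp_logits.reshape(-1, resp_vocab), resp_targets.reshape(-1))

# One backward pass updates both generators and the shared encoder.
loss = act_loss + resp_loss
loss.backward()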

Response Generator

This module generates the response under the control of the act generator's output.
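
One simple way to condition the response on the predicted acts, sketched below with hypothetical module names and again assuming a recent PyTorch, is to let the response decoder cross-attend to both the shared-encoder memory and the act generator's hidden states:

import torch
import torch.nn as nn

d_model = 256
response_decoder = nn.TransformerDecoder(
    nn.TransformerDecoderLayer(d_model, nhead=4, batch_first=True),
    num_layers=3,
)

encoder_memory = torch.randn(2, 10, d_model)   # output of the shared encoder
act_states = torch.randn(2, 8, d_model)        # hidden states from the act generator
resp_inputs = torch.randn(2, 20, d_model)      # embedded (shifted) response tokens

# Concatenate the act states with the encoder memory so every decoding step
# can attend to the predicted acts as well as the dialogue history.
memory = torch.cat([encoder_memory, act_states], dim=1)
causal = torch.triu(torch.ones(20, 20, dtype=torch.bool), diagonal=1)
response_hidden = response_decoder(resp_inputs, memory, tgt_mask=causal)  # (2, 20, d_model)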

Usage

The dataset is already preprocessed and placed in the data/ folder (train.json, val.json, and test.json). We have also uploaded model checkpoints to the model/ folder for those who only want to test the performance.
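
As a quick sanity check (the exact JSON schema is not assumed here), the three splits can be loaded directly:

import json

for split in ("train", "val", "test"):
    with open(f"data/{split}.json") as f:
        examples = json.load(f)
    print(split, len(examples))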

Training

CUDA_VISIBLE_DEVICES=0 python train_generator.py --option train --model model/ --batch_size 384 --max_seq_length 50 --act_source bert

Delexicalized Testing (entities are normalized into placeholders such as [restaurant_name])

CUDA_VISIBLE_DEVICES=0 python train_generator.py --option test --model model/MarCo_BERT --batch_size 384 --max_seq_length 50 --act_source bert
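
To illustrate what the delexicalized outputs look like, the snippet below fills placeholders back in with a simple substitution; the placeholder names and the value mapping are made up for this example and are not part of the released scripts:

import re

delexicalized = "i recommend [restaurant_name] , it is in the [restaurant_area] of town ."
values = {"restaurant_name": "pizza hut city centre", "restaurant_area": "centre"}

# Replace each [placeholder] with its value, leaving unknown placeholders untouched.
lexicalized = re.sub(r"\[(\w+)\]", lambda m: values.get(m.group(1), m.group(0)), delexicalized)
print(lexicalized)
# i recommend pizza hut city centre , it is in the centre of town .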

Requirements

  • torch==1.0.1
  • pytorch_pretrained_bert

Acknowledgements

We sincerely thank the MultiWOZ team for publishing such a great dataset. The code of this work is modified from HDSA-Dialog; we also thank its authors for developing it.
