SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking (ACL 2019)

Slot-Utterance Matching for Universal and Scalable Belief Tracking

This is the original PyTorch implementation of SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking by Hwaran Lee*, Jinsik Lee*, and Tae-Yoon Kim, ACL 2019 (Short).
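As the paper's title suggests, SUMBT's core step matches each slot against the utterance: a slot-name query attends over the encoded utterance words to extract a slot-specific vector. Below is a toy sketch of that matching step using single-head dot-product attention (the real model uses BERT encodings and multi-head attention; the vectors here are made-up toy values):

```python
import math

def attend(slot_query, word_vecs):
    """Single-head dot-product attention: weight the utterance word
    vectors by similarity to the slot query and return the weighted
    sum. Toy stand-in for SUMBT's multi-head slot-utterance matching."""
    scores = [sum(q * w for q, w in zip(slot_query, v)) for v in word_vecs]
    m = max(scores)                      # subtract max for numerical stability
    exps = [math.exp(s - m) for s in scores]
    total = sum(exps)
    weights = [e / total for e in exps]  # softmax over utterance positions
    dim = len(slot_query)
    return [sum(weights[i] * word_vecs[i][d] for i in range(len(word_vecs)))
            for d in range(dim)]

# toy 2-d "embeddings" for a 3-word utterance and one slot query
utterance = [[0.9, 0.1], [0.1, 0.9], [0.5, 0.5]]
slot_query = [1.0, 0.0]
print(attend(slot_query, utterance))
```

The attended vector is then fed into the tracker (the `--nbt rnn` option above) to predict the slot value turn by turn.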


Requirements

  • Python 3.6
  • PyTorch >= 1.0
  • Install Python packages:
    • pip install -r requirements.txt


Data preparation & pre-processing

  • Download corpus
    • WOZ2.0: download
    • MultiWOZ: download
      • Note: our experiments were conducted on the MultiWOZ 2.0 corpus
  • Pre-process corpus
    • The downloaded original corpus is located in data/$corpus/original
    • See data/$corpus/original/
    • The pre-processed data are located in data/$corpus/
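As a quick illustration of the layout described above, with $corpus set to woz (the same structure applies to multiwoz):

```python
from pathlib import Path

# Illustrative path handling only; file names inside each directory
# depend on the corpus and the pre-processing step.
corpus = "woz"
original_dir = Path("data") / corpus / "original"  # downloaded corpus files
processed_dir = Path("data") / corpus              # pre-processed output
print(original_dir.as_posix())
print(processed_dir.as_posix())
```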


Please see

  • Training and evaluation
python3 code/ --do_train --do_eval --data_dir data/woz \
  --bert_model bert-base-uncased --do_lower_case \
  --task_name bert-gru-slot_query_multi --nbt rnn \
  --output_dir exp-woz/model --target_slot all
  • Specifying slots (or domains) to train with the option --target_slots=$target_slots

    • e.g., For WOZ2.0, "0:1", "0:2", "1:2" (0=Area, 1=Food, 2=Pricerange)
    • e.g., For MultiWOZ, specify the domain name you want to exclude: "train" or "hotel"
    • If you want to train with all slots, use --target_slots=all
  • This code supports multi-GPU training

    • CUDA_VISIBLE_DEVICES=$cuda python3 code/
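The --target_slots values above are colon-separated slot indices (or a domain name for MultiWOZ). As a hypothetical sketch of how such a specification maps to slot names (parse_target_slots is not a function from this repo):

```python
def parse_target_slots(spec, all_slots):
    """Parse a --target_slots value like "0:2" into slot names.

    spec: "all" or colon-separated slot indices.
    all_slots: ordered slot names, e.g. ["Area", "Food", "Pricerange"]
               for WOZ2.0 (0=Area, 1=Food, 2=Pricerange, as above).
    """
    if spec == "all":
        return list(all_slots)
    return [all_slots[int(i)] for i in spec.split(":")]

# e.g. training on Area and Pricerange only:
print(parse_target_slots("0:2", ["Area", "Food", "Pricerange"]))  # → ['Area', 'Pricerange']
```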

Experiment results on MultiWOZ

  • Command
python3 code/ --do_train --do_eval --num_train_epochs 300 --data_dir data/multiwoz \
  --bert_model bert-base-uncased --do_lower_case --task_name bert-gru-sumbt --nbt rnn \
  --output_dir exp-multiwoz/model --target_slot all --warmup_proportion 0.1 \
  --learning_rate 1e-4 --train_batch_size 3 --eval_batch_size 16 \
  --distance_metric euclidean --patience 15 --tf_dir tensorboard --hidden_dim 300 \
  --max_label_length 32 --max_seq_length 64 --max_turn_length 22
  • Experiment result
| Hidden dim | Joint acc. | Slot acc. | Joint acc. (Restaurant) | Slot acc. (Restaurant) |
|------------|------------|-----------|-------------------------|------------------------|
| 300        | 0.48806    | 0.97329   | 0.82854                 | 0.96537                |
| 600        | 0.49064    | 0.97290   | 0.82840                 | 0.96475                |
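The table reports joint and slot accuracy. A minimal sketch of how these two metrics are commonly computed in dialogue state tracking (joint_and_slot_accuracy is a hypothetical helper, not the repo's evaluation code):

```python
def joint_and_slot_accuracy(preds, golds):
    """Joint acc. counts a turn correct only if *every* slot matches;
    slot acc. counts each slot independently.
    preds, golds: lists of per-turn dicts mapping slot name -> value."""
    joint = sum(p == g for p, g in zip(preds, golds)) / len(golds)
    total = correct = 0
    for p, g in zip(preds, golds):
        for slot, value in g.items():
            total += 1
            correct += p.get(slot) == value
    return joint, correct / total

golds = [{"area": "north", "food": "thai"}, {"area": "south", "food": "indian"}]
preds = [{"area": "north", "food": "thai"}, {"area": "south", "food": "korean"}]
print(joint_and_slot_accuracy(preds, golds))  # → (0.5, 0.75)
```

This is why joint accuracy is much lower than slot accuracy in the table: one wrong slot invalidates the whole turn.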

Notes and Acknowledgements

The code is developed based on the PyTorch implementation of BERT and The Annotated Transformer.


@inproceedings{lee2019sumbt,
  title={SUMBT: Slot-Utterance Matching for Universal and Scalable Belief Tracking},
  author={Lee, Hwaran and Lee, Jinsik and Kim, Tae-Yoon},
  booktitle={Proceedings of the 57th Annual Meeting of the Association for Computational Linguistics},
  year={2019}
}

Contact Information

Contact: Hwaran Lee, Jinsik Lee, Tae-Yoon Kim
