
ContextualSLU: Multi-Turn Spoken/Natural Language Understanding

A Keras implementation of the models described in Chen et al. (2016).

This model implements a memory network architecture for multi-turn understanding: history utterances are encoded as vectors and stored in memory cells, and the current utterance attends over these memories to improve slot tagging.
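The attention step described above can be sketched in plain NumPy. This is an illustration only: the utterance encoders are omitted, and the function name `memory_attention` is invented for this example, not taken from the repo.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def memory_attention(history_vecs, current_vec):
    """Attend over encoded history utterances (memory cells).

    history_vecs: (n_turns, d) array of history utterance encodings
    current_vec:  (d,) encoding of the current utterance
    Returns the attention weights and the knowledge vector that is
    combined with the current utterance for slot tagging.
    """
    scores = history_vecs @ current_vec   # inner-product match per memory
    weights = softmax(scores)             # attention distribution over turns
    knowledge = weights @ history_vecs    # weighted sum of memory vectors
    return weights, knowledge

rng = np.random.default_rng(0)
memory = rng.normal(size=(3, 8))   # 3 history turns, encoding dimension 8
query = rng.normal(size=8)         # current-utterance encoding
w, h = memory_attention(memory, query)
```

The knowledge vector `h` is what gets combined with the current utterance's representation before the tagging RNN.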



Requirements

  1. Python
  2. NumPy: pip install numpy
  3. Keras with a Theano or TensorFlow backend: pip install keras
  4. h5py: pip install h5py


Dataset

  1. Train/Test: word sequences with IOB slot tags and an indicator of the dialogue start point (1: starting point; 0: otherwise): data/cortana.communication.5.[train/dev/test].iob
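A hypothetical reader for data in this shape might look like the following. The column layout (tab-separated word sequence, tag sequence, and start flag) is an assumption to illustrate the fields described above; check sample.iob for the actual format, and note that the slot names in the toy example are invented.

```python
# Hypothetical reader for the IOB-formatted data described above.
# Assumed layout (verify against sample.iob): each line holds the
# word sequence, the IOB tag sequence, and the dialogue-start flag,
# separated by tabs.
def read_iob(lines):
    turns = []
    for line in lines:
        words, tags, start = line.rstrip("\n").split("\t")
        turns.append({
            "words": words.split(),
            "tags": tags.split(),
            "start": start == "1",   # 1 marks the first turn of a dialogue
        })
    return turns

sample = [
    "show me flights to boston\tO O O O B-city\t1",
    "what about denver\tO O B-city\t0",
]
turns = read_iob(sample)
```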

Getting Started

You can obtain ContextualSLU with the following commands:

  git clone --recursive
  cd ContextualSLU

You can run a sample tutorial with this command:

  bash script/ memn2n-c-gru theano 0 | sh

The predicted results can then be found in sample/rnn+emb_H-100_O-adam_A-tanh_WR-embedding.test.3.

Model Running

To reproduce the results described in the paper, you can run the baseline slot filling model (without contextual information) using a GRU:

  bash script/ gru theano 0 | sh
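For intuition about the baseline, here is a minimal NumPy sketch of a GRU emitting one hidden state per token, which is what a slot tagger consumes. It is an illustration only, under assumed weight shapes; the repo's actual model is built in Keras.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

class GRUCell:
    """Minimal GRU cell (illustration only, not the repo's Keras model)."""
    def __init__(self, input_dim, hidden_dim, seed=0):
        rng = np.random.default_rng(seed)
        s = 0.1
        self.Wz = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.Wr = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.Wh = rng.normal(scale=s, size=(hidden_dim, input_dim + hidden_dim))
        self.hidden_dim = hidden_dim

    def step(self, x, h):
        xh = np.concatenate([x, h])
        z = sigmoid(self.Wz @ xh)          # update gate
        r = sigmoid(self.Wr @ xh)          # reset gate
        h_tilde = np.tanh(self.Wh @ np.concatenate([x, r * h]))
        return (1 - z) * h + z * h_tilde   # interpolate old and candidate state

    def run(self, xs):
        h = np.zeros(self.hidden_dim)
        states = []
        for x in xs:            # one hidden state per token, as slot
            h = self.step(x, h) # tagging needs a label at every position
            states.append(h)
        return np.stack(states)

cell = GRUCell(input_dim=4, hidden_dim=6)
seq = np.random.default_rng(1).normal(size=(5, 4))  # 5 tokens, 4-dim embeddings
states = cell.run(seq)
```

In the actual tagger, each per-token hidden state is fed to a softmax layer over the IOB slot labels.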


Contact

Yun-Nung (Vivian) Chen,


Main papers to be cited

@inproceedings{chen2016memnet,
  author    = {Chen, Yun-Nung and Hakkani-Tur, Dilek and Tur, Gokhan and Gao, Jianfeng and Deng, Li},
  title     = {End-to-End Memory Networks with Knowledge Carryover for Multi-Turn Spoken Language Understanding},
  booktitle = {Proceedings of Interspeech},
  year      = {2016}
}