Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning

1. Introduction

This repository contains the source code and dataset for the paper "Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning" (AAAI'19). Please refer to Table 1 in the paper for an example.

2. Dataset

The processed dataset is provided in two versions:

  • Full data (compressed), for reproducing the model.
  • Toy data, a subset of the full training data, for quickly testing the model.

Data format: Python pickle files. Please load them with pickle.load(open(filename, 'rb')).
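As a minimal sketch of working with the pickled files (the filename and record contents below are illustrative placeholders, not the repo's actual data):

```python
import pickle

# Illustrative record; the real dataset's structure may differ.
sample = {"trigger": "Instagram.AnyNewPhotoByYou",
          "action": "Dropbox.AddFileFromURL"}

# Write a pickle file (placeholder name "data.pkl").
with open("data.pkl", "wb") as f:
    pickle.dump(sample, f)

# Load it back; binary mode ("rb") works under both Python 2 and 3.
with open("data.pkl", "rb") as f:
    data = pickle.load(f)
```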

Data source: the training set comes from (Ur et al., CHI'16) and the test set from (Quirk et al., ACL'15).

3. Code

All source code is under code/Hierarchical-SP.

Requirements:

  • Python 2.7
  • TensorFlow >= 1.4.0

3.1 Agent training/testing

To train the HRL agent:

python run.py --train --training_stage=0

To train the HRL_fixedOrder agent:

python run.py --train --training_stage=1
  • To evaluate either agent on the test set, replace --train with --test.
  • For a quick test on the toy dataset, append --toy_data.
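The flags above can be sketched with argparse. This is a hypothetical mirror of how run.py might parse them, using only the flag names shown in the commands above; the actual script may define them differently:

```python
import argparse

def build_parser():
    # Hypothetical parser mirroring the command-line flags shown above.
    parser = argparse.ArgumentParser(description="HRL agent training/testing")
    parser.add_argument("--train", action="store_true",
                        help="train the agent")
    parser.add_argument("--test", action="store_true",
                        help="evaluate the agent on the test set")
    parser.add_argument("--training_stage", type=int, default=0,
                        help="0 = HRL, 1 = HRL_fixedOrder")
    parser.add_argument("--toy_data", action="store_true",
                        help="run on the toy subset for a quick check")
    return parser

# Example: parse the HRL_fixedOrder training command.
args = build_parser().parse_args(["--train", "--training_stage=1"])
```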

To interactively test the four agents {HRL, HRL_fixedOrder, LAM_sup, LAM_rule}:

python interactive_test.py --level='VI-3' --user-name=yourname

3.2 User simulator

Please refer to the paper appendix for more details. The scripts for PPDB paraphrasing and for collecting data from users or from official function descriptions are included under code/Hierarchical-SP.

4. Citation

Please kindly cite the following paper if you use the code or the dataset in this repo:

@article{yao2018interactive,
  title={Interactive Semantic Parsing for If-Then Recipes via Hierarchical Reinforcement Learning},
  author={Yao, Ziyu and Li, Xiujun and Gao, Jianfeng and Sadler, Brian and Sun, Huan},
  journal={arXiv preprint arXiv:1808.06740},
  year={2018}
}