Merging Weak and Active Supervision for Semantic Parsing (WASSP)

This repository contains the experiment code for the AAAI 2020 paper, Merging Weak and Active Supervision for Semantic Parsing.

Note: Ansong Ni has moved from CMU to Yale, please see his new contact info here.

Memory Augmented Policy Optimization (MAPO)

The semantic parsing model we used in our paper is MAPO. If you are looking for more information about MAPO, please refer to this paper and repository.
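At its core, MAPO writes the expected return as a weighted combination of an expectation over high-reward programs stored in a memory buffer (enumerated exactly) and an expectation over programs sampled outside the buffer. A minimal sketch of that weighting (illustrative only, not the authors' implementation):

```python
import numpy as np

def mapo_gradient_weights(buffer_probs):
    """Sketch of MAPO's two-term weighting: the memory buffer's total
    probability pi_B under the current policy is computed exactly, and
    the remaining mass (1 - pi_B) goes to on-policy samples drawn from
    outside the buffer.

    buffer_probs: probabilities the current policy assigns to each
                  high-reward program stored in the memory buffer.
    """
    pi_b = np.sum(buffer_probs)   # exact enumeration over the buffer
    w_buffer = pi_b               # weight on the buffer expectation
    w_sample = 1.0 - pi_b         # weight on the outside-buffer expectation
    return w_buffer, w_sample

# Example: the buffer holds programs with total probability 0.3,
# so on-policy samples outside the buffer carry the remaining 0.7.
w_b, w_s = mapo_gradient_weights(np.array([0.1, 0.2]))
```

This is why MAPO has low variance without bias: the buffer term is computed exactly rather than sampled, and the sampling variance only comes from the (1 - pi_B)-weighted remainder.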


To run our code, you need to set up the environment with the following steps:

# Go to a convenient location and clone this repo
git clone
cd wassp

# Create the conda environment and install the requirements
conda create --name wassp python=2.7
source activate wassp
pip install -r requirements.txt

Then you need to download the data and the pretrained MAPO models (which we use as baselines) from here. Unzip the downloaded file and put the resulting data folder under the wassp directory so it looks like this:

    ├── data
        └── ...
    ├── images
    ├── nsm
    ├── nsm.egg-info
    └── table
        └── ...

Finally, install the nsm package in development mode so the dependencies are set correctly:

source activate wassp
cd wassp
python setup.py develop

Running experiments

Starting WikiSQL Experiment

source activate wassp
cd wassp/table/wikisql/
./active_learning your_experiment_name

Starting WikiTableQuestions Experiment

source activate wassp
cd wassp/table/wtq/
./active_learning your_experiment_name

Different settings for active learning

To change:

  • the active learning selection heuristic,
  • the form of extra supervision, or
  • the querying budget,

please see the relevant options described in the active_learning script.
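As an illustration of what a selection heuristic looks like (the function and field names below are made up for this sketch, not WASSP's actual API), a least-confidence heuristic ranks unlabeled examples by the policy's probability of its best program and queries the lowest-confidence ones up to the budget:

```python
def select_queries(examples, budget):
    """Least-confidence active-learning selection (illustrative only).

    examples: list of (example_id, best_program_prob) pairs, where
              best_program_prob is the policy's probability of the
              highest-scoring program found for that example.
    budget:   number of examples we may query for extra supervision.
    """
    # Lowest confidence first: these are the examples the parser is
    # least sure about, so extra supervision there helps the most.
    ranked = sorted(examples, key=lambda pair: pair[1])
    return [ex_id for ex_id, _ in ranked[:budget]]

picked = select_queries([("q1", 0.9), ("q2", 0.2), ("q3", 0.5)], budget=2)
# picked == ["q2", "q3"]
```

Swapping the sort key (e.g., for a margin- or failure-based score) changes the heuristic while keeping the same budgeted-query loop.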


Our experiments are run on a g3.4xlarge AWS instance, which has 16 vCPUs, 122 GiB of memory, and an M60 GPU with ~8 GiB of GPU memory. It takes ~10 hours to run the WikiSQL experiments and ~4 hours to run the WikiTableQuestions experiments.

If you are running the experiments on a machine with less CPU power or RAM, we recommend decreasing the n_actors parameter (default: 30).
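The reason n_actors drives memory use is that each actor is a separate process holding its own copy of the environment, so RAM grows roughly linearly with the actor count. A minimal sketch of the pattern (not WASSP's actual actor code; actor_rollout is a hypothetical stand-in for sampling a trajectory):

```python
import multiprocessing as mp

def actor_rollout(seed):
    # Each actor process holds its own copy of the environment/tables,
    # which is why memory use grows with the number of actors.
    return seed * seed  # stand-in for a sampled trajectory's return

def run_actors(n_actors):
    # One worker process per actor; fewer actors -> less RAM, slower sampling.
    pool = mp.Pool(processes=n_actors)
    try:
        return pool.map(actor_rollout, range(n_actors))
    finally:
        pool.close()
        pool.join()

returns = run_actors(4)  # on a smaller machine, lower this from the default 30
```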

Monitoring training process

You can monitor the training process with tensorboard, specifically:

source activate wassp
cd wassp/data/wikisql # or wtq, depending on which dataset you are using
tensorboard --logdir=output

To see the tensorboard, go to [your AWS public DNS]:6006; avg_return_1 is the main metric (accuracy).
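The reason avg_return_1 doubles as accuracy: under weak supervision the reward is binary (1.0 if the generated program executes to the correct denotation, 0.0 otherwise), so the average return over a batch equals the fraction answered correctly. A minimal sketch of that identity:

```python
def avg_return(rewards):
    # With binary rewards (1.0 = program produced the correct answer,
    # 0.0 = incorrect), the average return is exactly the accuracy.
    return sum(rewards) / float(len(rewards))

# 3 of 4 examples answered correctly -> accuracy 0.75.
acc = avg_return([1.0, 0.0, 1.0, 1.0])
```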

An example of our training process is shown in the screenshot below:


If you use the code in your research, please cite (entry keys below are ours, added so the BibTeX parses):

    @inproceedings{ni2020merging,
      title = {Merging Weak and Active Supervision for Semantic Parsing},
      author = {Ansong Ni and Pengcheng Yin and Graham Neubig},
      booktitle = {Thirty-Fourth AAAI Conference on Artificial Intelligence (AAAI)},
      address = {New York, USA},
      month = {February},
      year = {2020}
    }

    @inproceedings{liang_mapo,
      title = {Memory Augmented Policy Optimization for Program Synthesis and Semantic Parsing},
      author = {Liang, Chen and Norouzi, Mohammad and Berant, Jonathan and Le, Quoc V and Lao, Ni},
      booktitle = {Advances in Neural Information Processing Systems}
    }

    @inproceedings{liang_nsm,
      title = {Neural Symbolic Machines: Learning Semantic Parsers on Freebase with Weak Supervision},
      author = {Liang, Chen and Berant, Jonathan and Le, Quoc and Forbus, Kenneth D and Lao, Ni},
      booktitle = {Proceedings of the 55th Annual Meeting of the Association for Computational Linguistics (Volume 1: Long Papers)}
    }


This code was developed by Ansong Ni while he was at CMU; he is now at Yale. If you find issues running the code or would like to discuss some part of this work, feel free to contact Ansong at his new email address:
