Active Decision Boundary Annotation with Deep Generative Models


Miriam W. Huijser and Jan C. van Gemert
International Conference on Computer Vision 2017 (ICCV): spotlight presentation.

arXiv preprint:

We provide the code for the decision boundary annotation active learning algorithm. If you find our code useful for your research, please cite:

  @inproceedings{huijser2017active,
    title={Active Decision Boundary Annotation with Deep Generative Models},
    author={Huijser, Miriam W and van Gemert, Jan C},
    booktitle={International Conference on Computer Vision ({ICCV})},
    year={2017}
  }


Clone this repository recursively because of the adapted ALI submodule:
git clone --recursive

Then install requirements:

cd ActiveBoundary
pip install -r requirements.txt

Running the code

The code is compatible only with Python 2.7.
The training code can be run with the following command:

python <query_strategy> [--iterations ITERATIONS] [--enable_gpu] [--oracle_type {line_labeler,noisy_line_labeler,human_line_labeler}] [--dataset {shoebag,mnist08,svhn08}] [--save_path SAVE_PATH] [--percentage_labeled PERCENTAGE_LABELED] [--al_batch_size AL_BATCH_SIZE]

<query_strategy> should be one of the following:

  1. random - random sampling, also called "passive learning".
  2. uncertainty - uncertainty sampling.
  3. uncertainty-dense - uncertainty-dense sampling.
  4. clustercentroids - 5-cluster centroids.

If --enable_gpu is set, the query lines (and points) are generated and saved as *.pdf files in the save path.

For --oracle_type human_line_labeler, --enable_gpu is required. The user is shown an interface in which the decision boundary can be annotated.
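A query line of the kind shown in the interface can be thought of as a set of latent codes interpolated along a line segment, each of which the generative model decodes into an image. The sketch below only computes the interpolation; the function name, the 64-dimensional latent size, and the number of points are assumptions for illustration.

```python
import numpy as np

def latent_line(z_a, z_b, n_points=7):
    """Interpolate n_points latent codes along the segment from z_a to z_b.
    Decoding each row with a generative model would yield the images shown
    along a query line."""
    ts = np.linspace(0.0, 1.0, n_points)[:, None]  # shape (n_points, 1)
    return (1.0 - ts) * z_a + ts * z_b             # broadcasts to (n_points, dim)

z_a = np.zeros(64)  # hypothetical 64-d latent codes
z_b = np.ones(64)
line = latent_line(z_a, z_b)
print(line.shape)  # (7, 64)
```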

For more (hyper)parameters please refer to

The first time the code is run for a certain dataset, the required data (and model, if GPU is enabled) are downloaded automatically.


Any type of encoding-decoding network can be plugged into our system. See how to create your own class.

Our system currently uses the "Adversarially Learned Inference" model (Dumoulin et al., 2016). See for more information and the code.
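As a minimal sketch of what a pluggable encoding-decoding class might look like, the stand-in below pairs an encode method (data to latent space) with a decode method (latent space to data). The class and method names are illustrative assumptions, not the repository's actual interface; a real plug-in would wrap a trained model such as ALI.

```python
import numpy as np

class IdentityModel(object):
    """Trivial stand-in for an encoder-decoder model: both directions are
    the identity map, so decode(encode(x)) reconstructs x exactly."""

    def encode(self, x):
        # Map data to latent space (here: identity, for illustration only).
        return np.asarray(x)

    def decode(self, z):
        # Map latent codes back to data space (here: identity).
        return np.asarray(z)

model = IdentityModel()
x = np.random.rand(2, 4)
assert np.allclose(model.decode(model.encode(x)), x)
```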


We provide most encoded and non-encoded datasets from our paper:

  • MNIST 0 vs. 8
  • SVHN 0 vs. 8
  • ShoeBag

To download a dataset, either run the code with --dataset {mnist08,svhn08,shoebag} or run your own Python script with the last lines of code in

Please do not hesitate to contact me if you have any questions.