
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation

The Shape Workshop (ShapeShop) is an interactive system for visualizing and understanding the representations a neural network model has learned from images of simple shapes. It encourages model building, experimentation, and comparison to help users explore the robustness of image classifiers.

Read the paper.
View the poster.
Watch the teaser.



We suggest creating a new environment to run ShapeShop. If you are using Anaconda, create a new environment called shapeshop by running

conda create --name shapeshop python=3

Switch to this environment by running

source activate shapeshop

Requirements: Python (3.5)

From within the new environment install the following packages with the versions listed below:


For Keras, use our backend provided in keras.json. Your shapeshop environment's keras.json backend configuration is located at $HOME/.keras/keras.json. See the Keras backend documentation for more details.
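For reference, a keras.json file has roughly the following shape. The values below are illustrative only; use the backend and settings from the file provided in the repository:

```json
{
    "backend": "tensorflow",
    "image_data_format": "channels_last",
    "floatx": "float32",
    "epsilon": 1e-07
}
```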

Requirements: JavaScript

D3 4.0 (loaded via web, no installation needed)
jQuery 1.12.4 (loaded via web, no installation needed)

Download or Clone

Once the requirements have been met, simply download or clone the repository.

git clone


Running ShapeShop

Run the system from the shapeshop/ directory, then point your browser to http://localhost:5000.

Using ShapeShop

To use ShapeShop, follow the enumerated steps.

  1. Select Training Data: Choose the training data you want to include. The number of training images chosen corresponds to the number of classes the image classifier contains. You must select at least two; selecting exactly two corresponds to binary classification.
  2. Select Model: Choose which model you want to use. MLP corresponds to a multilayer perceptron and CNN corresponds to a convolutional neural network.
  3. Select Hyperparameters: Choose what hyperparameters you want for model training and the image generation process.
  4. Train and Visualize: Click the button to train the model and generate your results!

ShapeShop uses the class activation maximization visualization technique, maximizing each class's activation to produce N images, one per class. The system then presents all N resulting images, their correlation coefficients, and the original class image the user wanted to visualize, for visual inspection and comparison. This process then repeats: the user can select different images to train on, produce more images from new models, and compare them to the previous results.
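The generate-and-compare loop above can be sketched in plain Python. The real system backpropagates through the trained Keras model to maximize a class activation; here a toy quadratic "class score" and its gradient stand in for the network, and the names `pearson`, `maximize_score`, and `target` are illustrative, not part of ShapeShop's code:

```python
import math

def pearson(xs, ys):
    """Pearson correlation coefficient between two equal-length sequences."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

def maximize_score(score_grad, x, lr=0.1, steps=200):
    """Gradient ascent: repeatedly step x in the direction that raises the score."""
    for _ in range(steps):
        x = [xi + lr * g for xi, g in zip(x, score_grad(x))]
    return x

# Toy "class score": negative squared distance to a target pattern,
# a stand-in for a class activation in the trained model.
target = [0.0, 1.0, 1.0, 0.0]
grad = lambda x: [2 * (t - xi) for t, xi in zip(target, x)]  # d/dx of -(x - t)^2

generated = maximize_score(grad, [0.5] * 4)
print(round(pearson(generated, target), 3))  # → 1.0, generated image matches the target
```

The correlation coefficient is what ShapeShop reports alongside each generated image, so the user can judge how closely the visualization recovers the class's shape.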


ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Fred Hohman, Nathan Hodas, Duen Horng Chau.
Extended Abstracts, ACM Conference on Human Factors in Computing Systems (CHI). May 6-11, 2017. Denver, CO, USA.

Read the paper.
View the poster.
Watch the teaser: ACM, PoloClub.


MIT License. See


For questions and support contact Fred Hohman.