Towards Understanding Deep Learning Representations via Interactive Experimentation
The Shape Workshop (ShapeShop) is an interactive system for visualizing and understanding what representations a neural network model has learned in images consisting of simple shapes. It encourages model building, experimentation, and comparison to help users explore the robustness of image classifiers.
We suggest creating a new environment to run ShapeShop. If you are using Anaconda, create a new environment called shapeshop by running
conda create --name shapeshop python=3
Switch to this environment by running
source activate shapeshop
Requirements: Python (3.5)
From within the new environment install the following packages with the versions listed below:
For Keras, use the backend provided in the keras.json file, which is located at $HOME/.keras/keras.json. See keras.io/backend for more details.
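The keras.json file takes the following shape (the backend value shown here is illustrative; set it to whichever backend matches the package versions listed above):

```json
{
    "backend": "theano",
    "epsilon": 1e-07,
    "floatx": "float32",
    "image_dim_ordering": "th"
}
```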
D3 4.0 (loaded via web, no installation needed)
jQuery 1.12.4 (loaded via web, no installation needed)
Download or Clone
Once the requirements have been met, simply download or clone the repository.
git clone https://github.com/fredhohman/shapeshop.git
Run the system from the shapeshop/ directory and point your browser to the address the local server reports.
To use ShapeShop, follow the enumerated steps.
- Select Training Data: Choose what training data you want to include. The number of training images chosen corresponds to the number of classes the image classifier contains. You must select at least two (choosing two corresponds to binary classification)!
- Select Model: Choose which model you want to use. MLP corresponds to a multilayer perceptron and CNN corresponds to a convolutional neural network.
- Select Hyperparameters: Choose what hyperparameters you want for model training and the image generation process.
- Train and Visualize: Click the button to train the model and generate your results!
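The training images ShapeShop works with are simple shape patterns. A minimal NumPy sketch of generating such data is below (the exact shapes, image size, and encoding ShapeShop uses may differ; the function names here are illustrative):

```python
import numpy as np

def make_square(size=28, pad=6):
    """Filled square centered in a size x size binary image."""
    img = np.zeros((size, size), dtype=np.float32)
    img[pad:size - pad, pad:size - pad] = 1.0
    return img

def make_circle(size=28, radius=8):
    """Filled circle centered in a size x size binary image."""
    yy, xx = np.mgrid[:size, :size]
    center = (size - 1) / 2.0
    mask = (yy - center) ** 2 + (xx - center) ** 2 <= radius ** 2
    return mask.astype(np.float32)

# Two training images -> a binary classification problem,
# matching the "at least two classes" requirement above.
X = np.stack([make_square().ravel(), make_circle().ravel()])
y = np.array([0, 1])
```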
ShapeShop uses the class activation maximization visualization technique to produce N images, one per class, each maximizing the model's activation for that class. The system then presents all N resulting images, their correlation coefficients, and the original classes to be visualized back to the user for visual inspection and comparison. This process then repeats: the user can select different images to train on, produce more images from new models, and compare them to the previous results.
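The core of class activation maximization is gradient ascent on the input image with respect to a class score. The toy sketch below applies it to a single logistic unit rather than ShapeShop's actual MLP/CNN models, and uses a random stand-in for the training image, purely to illustrate the mechanics:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy "trained" classifier: one logistic unit over flattened pixels.
w = rng.normal(size=784)
b = 0.0

def class_score(x):
    """Sigmoid class probability for a flattened image x."""
    return 1.0 / (1.0 + np.exp(-(x @ w + b)))

# Gradient ascent on the input to maximize the class score.
x = rng.normal(scale=0.1, size=784)   # start from noise
initial_score = class_score(x)
lr = 0.1
for _ in range(200):
    p = class_score(x)
    grad = p * (1.0 - p) * w          # d sigmoid(x @ w + b) / d x
    x += lr * grad
final_score = class_score(x)

# ShapeShop also reports a correlation coefficient between each
# generated image and the original image for comparison; a
# random stand-in plays the role of the training image here.
reference = rng.normal(size=784)
corr = np.corrcoef(x, reference)[0, 1]
```

Each ascent step moves the input in the direction that most increases the class score, so the generated image gradually becomes the pattern the classifier associates with that class.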
ShapeShop: Towards Understanding Deep Learning Representations via Interactive Experimentation.
Fred Hohman, Nathan Hodas, Duen Horng Chau.
Extended Abstracts, ACM Conference on Human Factors in Computing Systems (CHI). May 6-11, 2017. Denver, CO, USA.
MIT License. See the repository's license file for details.
For questions and support contact Fred Hohman.