[CVPR'18] Interactive Classification for Deep Learning Interpretation


This is a fork of the accepted CVPR 2018 demo Interactive Classification for Deep Learning Interpretation. As one of its original developers, I'll be pushing experimental features and refinements here geared toward computer vision research.

Interactive Classification for Deep Learning Interpretation

We have designed and developed an interactive system that allows users to experiment with deep learning image classifiers and explore their robustness and sensitivity. Selected areas of an image can be removed in real time with classical computer vision inpainting algorithms, allowing users to ask a variety of "what if" questions by experimentally modifying images and seeing how the deep learning model reacts. The system also computes class activation maps for any selected class, which highlight the important semantic regions of an image the model uses for classification. The system runs fully in the browser using TensorFlow.js, React, and SqueezeNet. An advanced inpainting version is also available, using a server that runs the PatchMatch algorithm from the GIMP Resynthesizer plugin.
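For intuition, a class activation map is a weighted sum of the final convolutional feature maps, using the classifier weights of the selected class. The following numpy sketch illustrates the idea; the `features` tensor and `class_weights` vector are hypothetical placeholders, not variables from this app:

```python
import numpy as np

def class_activation_map(features, class_weights):
    """Weight each conv channel by the class's classifier weight and sum.

    features: (H, W, C) activations from the last conv layer.
    class_weights: (C,) weights connecting each channel to the chosen class.
    Returns an (H, W) map normalized to [0, 1].
    """
    cam = np.tensordot(features, class_weights, axes=([2], [0]))  # (H, W)
    cam = np.maximum(cam, 0)      # keep positive evidence only
    if cam.max() > 0:
        cam = cam / cam.max()     # normalize for display as a heatmap
    return cam

# Toy example: channel 0 fires in the top-left, channel 1 in the bottom-right.
features = np.zeros((4, 4, 2))
features[0, 0, 0] = 1.0
features[3, 3, 1] = 1.0
cam = class_activation_map(features, np.array([1.0, 0.0]))
```

Here the map lights up only where channel 0 (the channel the chosen class cares about) is active, which is exactly why CAMs highlight class-relevant regions.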

UI demo

NEW: upload custom images and square-crop for instant classification

custom upload

Example Scenario: Interpreting "Failed" Classification

The modified image (left), originally classified as dock, is misclassified as ocean liner when the masts of a couple of boats are removed from the original image (right). The top five classification scores are tabulated underneath each image.
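The per-image score tables are the model's softmax probabilities sorted in descending order. A small numpy sketch of that tabulation (the label names and logits here are illustrative, not SqueezeNet's actual outputs):

```python
import numpy as np

def top_k(logits, labels, k=5):
    """Softmax the logits and return the k highest-scoring (label, prob) pairs."""
    z = logits - np.max(logits)            # shift for numerical stability
    probs = np.exp(z) / np.exp(z).sum()
    order = np.argsort(probs)[::-1][:k]    # indices sorted by descending prob
    return [(labels[i], float(probs[i])) for i in order]

labels = ["dock", "ocean liner", "boathouse", "pier", "seashore", "lakeside"]
logits = np.array([1.0, 3.2, 0.5, 0.4, 0.1, -0.2])
ranking = top_k(logits, labels)
```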

Failed classification


Download or clone this repository:

git clone https://github.com/poloclub/interactive-classification.git

Within the cloned repo, install the required packages with yarn:

yarn install

To run, type:

yarn start

Advanced Inpainting

The following steps set up PatchMatch inpainting, which currently works only on Linux:

  1. Clone the Resynthesizer repository and follow its build instructions (stop after running make).
  2. Find the libresynthesizer.a static library in the generated lib folder and copy it to the inpaint folder in this repository.
  3. Run gcc resynth.c -L. -lresynthesizer -lm -lglib-2.0 -o prog (you may need to install glib-2.0 first) to generate the prog executable.
  4. You can now run python3 inpaint_server.py, and PatchMatch will be used as the inpainting algorithm when the React application is running with yarn start.
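PatchMatch synthesizes the masked region by matching and copying patches from elsewhere in the image, but the basic idea of inpainting (filling a hole from its surroundings) can be illustrated with a far simpler diffusion-style fill. This numpy sketch is an illustration only, not the Resynthesizer algorithm:

```python
import numpy as np

def diffusion_inpaint(image, mask, iters=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbors.

    image: 2-D float array; mask: boolean array, True where pixels are missing.
    """
    out = image.copy()
    out[mask] = 0.0
    for _ in range(iters):
        # Average of up/down/left/right neighbors (edges reuse themselves).
        padded = np.pad(out, 1, mode="edge")
        avg = (padded[:-2, 1:-1] + padded[2:, 1:-1] +
               padded[1:-1, :-2] + padded[1:-1, 2:]) / 4.0
        out[mask] = avg[mask]   # only masked pixels are updated
    return out

# Toy example: a constant image with a one-pixel hole in the middle.
img = np.full((7, 7), 5.0)
mask = np.zeros((7, 7), bool)
mask[3, 3] = True
filled = diffusion_inpaint(img, mask)
```

Diffusion only propagates smooth color, which is why the demo's advanced mode reaches for a patch-based method like PatchMatch when texture needs to be reconstructed.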


Interactive Classification for Deep Learning Interpretation
Angel Cabrera, Fred Hohman, Jason Lin, Duen Horng (Polo) Chau
Demo, Conference on Computer Vision and Pattern Recognition (CVPR). June 18, 2018. Salt Lake City, USA.


MIT License. See LICENSE.md.


For questions or support open an issue.