Interactive Classification for Deep Learning Interpretation

We have designed and developed an interactive system that allows users to experiment with deep learning image classifiers and explore their robustness and sensitivity. Selected areas of an image can be removed in real time with classical computer vision inpainting algorithms, allowing users to ask a variety of "what if" questions by experimentally modifying images and seeing how the deep learning model reacts. The system also computes class activation maps for any selected class, which highlight the important semantic regions of an image that the model uses for classification. The system runs entirely in the browser using TensorFlow.js, React, and SqueezeNet. An advanced inpainting version is also available, using a server that runs the PatchMatch algorithm from the GIMP Resynthesizer plugin.
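A class activation map is the weighted sum of the final convolutional feature maps, using the classifier weights of the selected class (Zhou et al., 2016). The sketch below is illustrative only, not the repository's code (which runs SqueezeNet in TensorFlow.js); the array shapes and names are assumptions for a minimal NumPy example:

```python
import numpy as np

def class_activation_map(features, weights, class_idx):
    """Compute a class activation map (CAM).

    features: (H, W, K) final convolutional feature maps.
    weights:  (K, C) classifier weights after global average pooling.
    class_idx: index of the class to visualize.
    Returns an (H, W) map normalized to [0, 1].
    """
    cam = features @ weights[:, class_idx]  # weighted sum over channels -> (H, W)
    cam -= cam.min()
    if cam.max() > 0:
        cam /= cam.max()
    return cam

# Toy example: 4x4 feature maps with 3 channels, 2 classes.
rng = np.random.default_rng(0)
feats = rng.random((4, 4, 3))
w = rng.random((3, 2))
cam = class_activation_map(feats, w, class_idx=1)
print(cam.shape)  # → (4, 4)
```

In the demo the resulting map is upsampled to the image resolution and overlaid as a heatmap, so users can see which regions drive the selected class score.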

YouTube video demo

This is the code repository for the CVPR 2018 demo Interactive Classification for Deep Learning Interpretation. Visit our research group's homepage, the Polo Club of Data Science at Georgia Tech, for more related research!

Example Scenario: Interpreting "Failed" Classification

The modified image (left), originally classified as dock, is misclassified as ocean liner when the masts of a couple of boats are removed from the original image (right). The top five classification scores are tabulated underneath each image.
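The score tables are simply the five highest softmax probabilities from the classifier. As a rough illustration (the labels and logits below are made up for the example, not the demo's actual values):

```python
import numpy as np

def top5(logits, labels):
    """Return the five highest-scoring (label, probability) pairs."""
    exp = np.exp(logits - np.max(logits))  # numerically stable softmax
    probs = exp / exp.sum()
    order = np.argsort(probs)[::-1][:5]    # indices sorted by descending probability
    return [(labels[i], float(probs[i])) for i in order]

# Hypothetical logits for the dock/ocean-liner scenario above.
labels = ["dock", "ocean liner", "container ship", "pier", "boathouse", "seashore"]
logits = np.array([2.1, 3.4, 1.0, 0.5, 0.2, 1.8])
results = top5(logits, labels)
for name, p in results:
    print(f"{name}: {p:.3f}")
```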

Failed classification
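The "removal" step is inpainting: masked pixels are refilled from their surroundings so the classifier sees a plausible image rather than a black hole. The toy diffusion fill below conveys the idea on a grayscale array; it is not the Telea or PatchMatch algorithms the system actually uses:

```python
import numpy as np

def diffuse_inpaint(image, mask, iters=200):
    """Fill masked pixels by repeatedly averaging their 4-neighbors.

    image: 2D float array (grayscale), values in [0, 1].
    mask:  boolean array, True = pixel to remove and refill.
    A crude diffusion-based fill standing in for Telea/PatchMatch.
    Note: np.roll wraps at the borders, which is fine for interior masks.
    """
    out = image.copy()
    out[mask] = out[~mask].mean()  # rough initial guess for the hole
    for _ in range(iters):
        up    = np.roll(out, -1, axis=0)
        down  = np.roll(out,  1, axis=0)
        left  = np.roll(out, -1, axis=1)
        right = np.roll(out,  1, axis=1)
        avg = (up + down + left + right) / 4.0
        out[mask] = avg[mask]      # only masked pixels change
    return out

img = np.ones((8, 8))
img[:, 4:] = 0.0                   # two flat regions with a vertical edge
mask = np.zeros_like(img, dtype=bool)
mask[3:5, 3:5] = True              # "remove" a patch straddling the edge
filled = diffuse_inpaint(img, mask)
```

Real inpainting algorithms propagate structure and texture as well as smooth color, which is why the refilled regions in the demo look seamless enough to probe the classifier meaningfully.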

Installation

Download or clone this repository:

git clone https://github.com/poloclub/interactive-classification.git

Within the cloned repo, install the required packages with yarn:

yarn

Usage

To run, type:

yarn start

Advanced Inpainting

The following steps set up PatchMatch inpainting, which currently works only on Linux:

  1. Clone the Resynthesizer repository and follow its build instructions (stop after running make)
  2. Find the libresynthesizer.a static library in the generated lib folder and copy it into the inpaint folder of this repository
  3. Run gcc resynth.c -L. -lresynthesizer -lm -lglib-2.0 -o prog (you may have to install GLib 2.0 first) to generate the prog executable
  4. You can now run python3 inpaint_server.py, and PatchMatch will be used as the inpainting algorithm when the React application is started with yarn start

Citation

Interactive Classification for Deep Learning Interpretation
Angel Cabrera, Fred Hohman, Jason Lin, Duen Horng (Polo) Chau
Demo, Conference on Computer Vision and Pattern Recognition (CVPR). June 18, 2018. Salt Lake City, USA.

@article{cabrera2018interactive,
  title={Interactive Classification for Deep Learning Interpretation},
  author={Cabrera, Angel and Hohman, Fred and Lin, Jason and Chau, Duen Horng},
  journal={Demo, Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition},
  year={2018},
  organization={IEEE}
}

Researchers

| Name | Affiliation |
|------|-------------|
| Angel Cabrera | Georgia Tech |
| Fred Hohman | Georgia Tech |
| Jason Lin | Georgia Tech |
| Duen Horng (Polo) Chau | Georgia Tech |

License

MIT License. See LICENSE.md.

Contact

For questions or support, open an issue.
