This repository implements the explainability techniques described in the paper "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps" (Simonyan et al., 2013).
It trains a ConvNet on the CIFAR-10 dataset and then applies the paper's visualisation techniques to understand what the network has learned.
To train the model, run:

python -m src.train

To perform the explainability techniques, run:
python -m src.explain
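
For reference, below is a minimal sketch of the image-specific class saliency map technique from the paper: the class score is backpropagated to the input pixels and the gradient is reduced to a single channel by taking the maximum absolute value over colour channels. It assumes a PyTorch classifier; the `model` and `image` names are hypothetical placeholders, not identifiers from this repository.

```python
# Sketch of an image-specific class saliency map (Simonyan et al., 2013),
# assuming a PyTorch model; `model` and `image` are illustrative placeholders.
import torch


def saliency_map(model: torch.nn.Module, image: torch.Tensor, target_class: int) -> torch.Tensor:
    """Return an HxW saliency map for `image` (shape 1x3xHxW) and `target_class`."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # unnormalised class score
    score.backward()                       # gradient of the score w.r.t. the input pixels
    # Maximum absolute gradient across colour channels gives the saliency map.
    return image.grad.detach().abs().amax(dim=1).squeeze(0)
```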
