This repository implements the following techniques for interpreting convolutional neural networks:
- Saliency maps [1]
- Guided Backpropagation [2]
- Class visualization [3]
- Grad-CAM [4]
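As a minimal illustration of the first technique, a vanilla saliency map [1] is just the gradient of the class score with respect to the input image, reduced over colour channels. The sketch below uses a tiny random CNN as a stand-in; the model, image size, and channel reduction are illustrative assumptions, not this repository's exact code.

```python
import torch
import torch.nn as nn

def saliency_map(model, image, target_class):
    """Gradient of the target class score w.r.t. the input pixels [1]."""
    model.eval()
    image = image.clone().requires_grad_(True)
    score = model(image)[0, target_class]  # unnormalized class score
    score.backward()
    # absolute gradient, max over colour channels -> (H, W) heat map
    return image.grad.abs().max(dim=1).values.squeeze(0)

# toy usage with a small random CNN (placeholder for a real classifier)
model = nn.Sequential(nn.Conv2d(3, 4, 3), nn.Flatten(), nn.LazyLinear(10))
img = torch.randn(1, 3, 8, 8)
smap = saliency_map(model, img, target_class=3)
print(smap.shape)  # torch.Size([8, 8])
```

The heat map has the spatial shape of the input, so it can be overlaid directly on the image to highlight influential pixels.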
Apart from these, the following technique is also implemented:
- Adversarial fooling (by backpropagating the gradient of the classification loss for the desired fooling class into the image) [5]
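The fooling idea above can be sketched as iterated gradient descent on the input: step the image along the negative gradient of the cross-entropy loss for the desired fooling class. The model, step size, and iteration count below are illustrative assumptions, not this repository's exact settings.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def fool(model, image, fooling_class, steps=10, lr=0.1):
    """Push `image` toward being classified as `fooling_class` [5]."""
    model.eval()
    adv = image.clone()
    for _ in range(steps):
        adv.requires_grad_(True)
        loss = F.cross_entropy(model(adv), torch.tensor([fooling_class]))
        loss.backward()
        # descend the fooling-class loss w.r.t. the pixels, not the weights
        adv = (adv - lr * adv.grad).detach()
    return adv

# toy usage with a random linear classifier (placeholder for a real CNN)
model = nn.Sequential(nn.Flatten(), nn.Linear(48, 10))
img = torch.randn(1, 3, 4, 4)
adv = fool(model, img, fooling_class=7)
```

In practice the perturbation is usually also constrained (e.g. clipped to a small norm ball) so the fooled image stays visually indistinguishable from the original; the sketch omits that for brevity.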
References:

[1] Simonyan, K. et al. "Deep Inside Convolutional Networks: Visualising Image Classification Models and Saliency Maps." CoRR abs/1312.6034 (2014).

[2] Springenberg, J. T. et al. "Striving for Simplicity: The All Convolutional Net." CoRR abs/1412.6806 (2015).

[3] Yosinski, J. et al. "Understanding Neural Networks Through Deep Visualization." arXiv abs/1506.06579 (2015).

[4] Selvaraju, R. R. et al. "Grad-CAM: Visual Explanations from Deep Networks via Gradient-Based Localization." International Journal of Computer Vision 128 (2019): 336-359.

[5] Szegedy, C. et al. "Intriguing properties of neural networks." CoRR abs/1312.6199 (2014).