tootorch

Implementation of XAI methods in Computer Vision (PyTorch)

Requirements

torch
opencv-python
pillow
h5py
tqdm

Installation

pip install tootorch

Interpretable Methods

Attribution Methods

  • DeconvNet [1]
  • Guided Backpropagation [2]
  • Integrated Gradients (IG) [3]
  • Grad-CAM [4] (see the sketch after this list)
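
Grad-CAM [4] weights the activations of a convolutional layer by its class-score gradients pooled over space, then sums the weighted maps into a coarse heatmap. The stand-alone sketch below illustrates that idea on a torchvision ResNet-18; the model, the chosen layer (model.layer4), and the hook bookkeeping are assumptions made for the example, not this package's API.

```python
# Minimal Grad-CAM sketch (not the tootorch API): hook a conv block and
# weight its activations by the spatially pooled gradients of the class score.
import torch
import torch.nn.functional as F
from torchvision import models

model = models.resnet18().eval()       # assumption: any CNN; load weights as needed
feats, grads = {}, {}

target_layer = model.layer4            # assumption: last convolutional block
target_layer.register_forward_hook(lambda m, i, o: feats.update(a=o))
target_layer.register_full_backward_hook(lambda m, gi, go: grads.update(a=go[0]))

def grad_cam(x, class_idx=None):
    """Return an (H, W) heatmap for one image tensor x of shape (1, 3, H, W)."""
    logits = model(x)
    if class_idx is None:
        class_idx = logits.argmax(dim=1).item()
    model.zero_grad()
    logits[0, class_idx].backward()
    weights = grads["a"].mean(dim=(2, 3), keepdim=True)   # global-average-pooled grads
    cam = F.relu((weights * feats["a"]).sum(dim=1, keepdim=True))
    cam = F.interpolate(cam, size=x.shape[2:], mode="bilinear", align_corners=False)
    return (cam / (cam.max() + 1e-8)).squeeze()
```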

Ensemble Methods

  • SmoothGrad (SG) [5] (see the sketch after this list)
  • SmoothGrad-Squared (SG-SQ) [6]
  • SmoothGrad-VAR (SG-VAR) [6]
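
All three ensembles follow the same recipe from [5] and [6]: perturb the input with Gaussian noise several times, compute a base attribution for each noisy copy, and aggregate by mean (SG), mean of squares (SG-SQ), or variance (SG-VAR). The helper below is a hedged sketch over plain input gradients, where model is assumed to be any classifier returning logits rather than an object from this package.

```python
# Hedged SmoothGrad / SG-SQ / SG-VAR sketch over plain input gradients.
import torch

def smooth_grad(model, x, class_idx, n_samples=25, sigma=0.15, mode="sg"):
    """x: (1, C, H, W) input; returns an attribution map of the same shape."""
    grads = []
    for _ in range(n_samples):
        noisy = (x + sigma * torch.randn_like(x)).detach().requires_grad_(True)
        score = model(noisy)[0, class_idx]
        g, = torch.autograd.grad(score, noisy)
        grads.append(g)
    g = torch.stack(grads)              # (n_samples, 1, C, H, W)
    if mode == "sg":                    # SmoothGrad: mean of gradients
        return g.mean(dim=0)
    if mode == "sg_sq":                 # SG-SQ: mean of squared gradients
        return (g ** 2).mean(dim=0)
    return g.var(dim=0)                 # SG-VAR: variance of gradients
```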

Evaluation

  • Coherence
  • Selectivity
  • Remove and Retrain (ROAR) [6] (see the masking sketch after this list)
  • Keep and Retrain (KAR) [6]
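
ROAR and KAR [6] both rank pixels by an attribution map, ablate them, and then retrain the model on the ablated data to measure how much accuracy drops. The retraining loop is omitted here; the masking step the two metrics share is sketched below, where the per-channel mean imputation follows the paper and everything else is an illustrative assumption.

```python
# Masking step shared by ROAR (remove top-k) and KAR (keep top-k);
# retraining on the masked dataset is left out of this sketch.
import torch

def ablate(x, attribution, fraction=0.1, keep=False):
    """x: (C, H, W) image; attribution: (H, W) importance map.
    keep=False -> ROAR: impute the top `fraction` most important pixels.
    keep=True  -> KAR: impute everything except those pixels."""
    flat = attribution.flatten()
    k = max(1, int(fraction * flat.numel()))
    top = torch.zeros_like(flat, dtype=torch.bool)
    top[flat.topk(k).indices] = True
    top = top.view_as(attribution)             # boolean (H, W) mask of top pixels
    drop = top if not keep else ~top           # pixels to replace with the mean
    mean = x.mean(dim=(1, 2), keepdim=True)    # per-channel mean value
    return torch.where(drop.unsqueeze(0), mean, x)
```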

Attention Methods

  • Residual Attention Network (RAN) [7]
  • Class Activation Mapping (CAM) [8]
  • Convolutional Block Attention Module (CBAM) [9] (see the sketch after this list)
  • Wide Attention Residual Network (WARN) [10]
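
Of the modules above, CBAM [9] is the easiest to show in isolation: a channel-attention gate (a shared MLP over average- and max-pooled channel descriptors) followed by a spatial-attention gate (a 7x7 convolution over pooled channel maps). The sketch below follows the paper's description and is not this repository's implementation.

```python
# Stand-alone CBAM block in the spirit of Woo et al. [9]; illustrative only.
import torch
import torch.nn as nn

class CBAM(nn.Module):
    def __init__(self, channels, reduction=16, spatial_kernel=7):
        super().__init__()
        # Channel attention: shared MLP applied to avg- and max-pooled descriptors.
        self.mlp = nn.Sequential(
            nn.Linear(channels, channels // reduction),
            nn.ReLU(inplace=True),
            nn.Linear(channels // reduction, channels),
        )
        # Spatial attention: 7x7 conv over avg/max maps pooled across channels.
        self.conv = nn.Conv2d(2, 1, spatial_kernel, padding=spatial_kernel // 2)

    def forward(self, x):
        b, c, _, _ = x.shape
        avg = self.mlp(x.mean(dim=(2, 3)))
        mx = self.mlp(x.amax(dim=(2, 3)))
        x = x * torch.sigmoid(avg + mx).view(b, c, 1, 1)       # channel gate
        spatial = torch.cat([x.mean(dim=1, keepdim=True),
                             x.amax(dim=1, keepdim=True)], dim=1)
        return x * torch.sigmoid(self.conv(spatial))           # spatial gate
```

Since the block preserves the input shape, CBAM(64)(torch.randn(2, 64, 32, 32)) returns a (2, 64, 32, 32) tensor, so it can be dropped in after any convolutional stage.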

Reference

  • [1] Zeiler, M. D., & Fergus, R. (2014, September). Visualizing and understanding convolutional networks. In European conference on computer vision (pp. 818-833). Springer, Cham. [Paper] [Korean version]

  • [2] Springenberg, J. T., Dosovitskiy, A., Brox, T., & Riedmiller, M. (2014). Striving for simplicity: The all convolutional net. arXiv preprint arXiv:1412.6806. [Paper]

  • [3] Sundararajan, M., Taly, A., & Yan, Q. (2017, August). Axiomatic attribution for deep networks. In Proceedings of the 34th International Conference on Machine Learning-Volume 70 (pp. 3319-3328). JMLR.org. [Paper]

  • [4] Selvaraju, R. R., Cogswell, M., Das, A., Vedantam, R., Parikh, D., & Batra, D. (2017). Grad-cam: Visual explanations from deep networks via gradient-based localization. In Proceedings of the IEEE International Conference on Computer Vision (pp. 618-626). [Paper] [Korean version]

  • [5] Smilkov, D., Thorat, N., Kim, B., Viégas, F., & Wattenberg, M. (2017). Smoothgrad: removing noise by adding noise. arXiv preprint arXiv:1706.03825. [Paper] [Korean version]

  • [6] Hooker, S., Erhan, D., Kindermans, P. J., & Kim, B. (2018). Evaluating feature importance estimates. arXiv preprint arXiv:1806.10758. [Paper] [Korean version]

  • [7] Wang, F., Jiang, M., Qian, C., Yang, S., Li, C., Zhang, H., ... & Tang, X. (2017). Residual attention network for image classification. In Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition (pp. 3156-3164). [Paper]

  • [8] Zhou, B., Khosla, A., Lapedriza, A., Oliva, A., & Torralba, A. (2016). Learning deep features for discriminative localization. In Proceedings of the IEEE conference on computer vision and pattern recognition (pp. 2921-2929). [Paper]

  • [9] Woo, S., Park, J., Lee, J. Y., & So Kweon, I. (2018). Cbam: Convolutional block attention module. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 3-19). [Paper]

  • [10] Rodríguez, P., Gonfaus, J. M., Cucurull, G., Xavier Roca, F., & Gonzalez, J. (2018). Attend and rectify: a gated attention mechanism for fine-grained recovery. In Proceedings of the European Conference on Computer Vision (ECCV) (pp. 349-364). [Paper]