# Saliency Evaluation

Python implementation of the evaluation metrics presented in *On the (In)fidelity and Sensitivity of Explanations* (NeurIPS 2019), for evaluating any saliency explanation.
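
The paper's two metrics can be summarized briefly: *infidelity* measures how well an explanation predicts the change in model output under perturbations of the input, and *max-sensitivity* measures how much the explanation itself changes under small input perturbations. Below is a minimal Monte Carlo sketch of both, assuming NumPy and a scalar-valued `model_fn`; the function names and perturbation choices are illustrative only, not this repository's API (see `infid_sen_utils.py` for the actual implementation):

```python
import numpy as np

def infidelity(model_fn, expl, x, n_samples=1000, sigma=0.1):
    """Monte Carlo estimate of infidelity:
    E_I[(I . expl - (f(x) - f(x - I)))^2], with the perturbation I
    drawn here as Gaussian noise (one of several choices in the paper)."""
    fx = model_fn(x)
    errs = []
    for _ in range(n_samples):
        I = np.random.normal(0.0, sigma, size=x.shape)    # random perturbation
        pred_drop = fx - model_fn(x - I)                  # actual change in model output
        expl_drop = float(np.sum(I * expl))               # change predicted by the explanation
        errs.append((expl_drop - pred_drop) ** 2)
    return float(np.mean(errs))

def max_sensitivity(expl_fn, x, radius=0.1, n_samples=50):
    """Monte Carlo lower bound on max-sensitivity:
    max over ||y - x|| <= radius of ||expl_fn(y) - expl_fn(x)||,
    sampled here with uniform perturbations in an l-infinity ball."""
    base = expl_fn(x)
    worst = 0.0
    for _ in range(n_samples):
        y = x + np.random.uniform(-radius, radius, size=x.shape)
        worst = max(worst, float(np.linalg.norm(expl_fn(y) - base)))
    return worst
```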

## Get Started

Run `vis_mnist.ipynb` to see examples of explanations on MNIST along with their sensitivity and infidelity scores.
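
As a quick sanity check outside the notebook, the sketch above can be exercised on a toy linear model, whose gradient explanation is exact: infidelity and max-sensitivity should both come out at (essentially) zero. This snippet is purely illustrative and does not use the repository's code:

```python
import numpy as np

rng = np.random.default_rng(0)
w = rng.normal(size=784)              # weights of a toy linear "model"

def model_fn(x):
    return float(w @ x)               # f(x) = <w, x>

def expl_fn(x):
    return w                          # the gradient of f is w everywhere

x = rng.normal(size=784)              # stand-in for a flattened MNIST image
print(infidelity(model_fn, expl_fn(x), x))   # ~0 (floating-point noise only)
print(max_sensitivity(expl_fn, x))           # exactly 0.0: gradient is constant
```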

## Acknowledgements

Our visualization tools build on code available in the following repositories:

  1. https://github.com/PAIR-code/saliency
  2. https://github.com/marcoancona/DeepExplain