Code corresponding to the paper "When and How to Fool Explainable Models (and Humans) with Adversarial Examples" by Jon Vadillo, Roberto Santana and Jose A. Lozano.
To reproduce the adversarial examples:
- Download the COVIDNet-CXR Small pretrained model into xray/COVID-Net/pretrained/COVIDNet-CXR_Small/
- Run xray/XRAY_AdversarialExamples.ipynb
- Download the ImageNet (ILSVRC2012) training set into ilsvrc/datasets/
- Download the blurred ImageNet validation set into ilsvrc/datasets/
- Run ilsvrc/ILSVRC_AdversarialExamples.ipynb
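The notebooks above contain the paper's actual attack code. Purely as a generic illustration of what an adversarial example is, the sketch below applies one Fast Gradient Sign Method (FGSM) step to a toy linear softmax classifier built with NumPy; the model, the `fgsm` helper, and all parameter values are illustrative assumptions, not code from this repository.

```python
import numpy as np

def fgsm(x, grad, eps):
    # One FGSM step: move x by eps in the sign of the loss gradient,
    # which increases the loss and can flip the predicted class.
    return x + eps * np.sign(grad)

def softmax(z):
    z = z - z.max()  # numerical stability
    e = np.exp(z)
    return e / e.sum()

# Toy linear classifier (illustrative): logits = W @ x, 3 classes, 8 features.
rng = np.random.default_rng(0)
W = rng.normal(size=(3, 8))
x = rng.normal(size=8)
y = 0  # assumed true class index

# Gradient of cross-entropy loss w.r.t. the input for a linear model:
# dL/dx = W.T @ (softmax(W @ x) - onehot(y))
p = softmax(W @ x)
grad_x = W.T @ (p - np.eye(3)[y])

eps = 0.5
x_adv = fgsm(x, grad_x, eps)
print("clean prediction:", np.argmax(W @ x))
print("adversarial prediction:", np.argmax(W @ x_adv))
```

The perturbation is bounded in the infinity norm by `eps`, which is why FGSM examples stay visually close to the original input; the notebooks apply the paper's (stronger) attacks to the COVIDNet and ImageNet models instead of this toy.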