This repository is the official implementation of XOOD: Extreme Value Based Out-Of-Distribution Detection For Image Classification.
- Python 3.8
- NumPy, scikit-learn, Matplotlib, SciPy, scikit-image, pandas, PyTorch, torchvision, TensorFlow
conda create -n xood python=3.8 pytorch numpy matplotlib scipy scikit-learn pandas scikit-image tensorflow-gpu torchvision cudatoolkit=11.3 -c pytorch
XOOD was tested on CIFAR-10, CIFAR-100 and SVHN with a standard set of pretrained models. They can be downloaded from the deep Mahalanobis detector repository: https://github.com/pokaxpoka/deep_Mahalanobis_detector/
Download these models and save each model's state dictionary as models/<dataset_name>/<model_name>/state_dict.pt.
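This conversion can be scripted; the following is a minimal sketch (the helper name save_state_dict is ours, not part of the repository) that writes a loaded PyTorch model's state dict to the expected location:

```python
import os

import torch
import torch.nn as nn


def save_state_dict(model: nn.Module, dataset_name: str, model_name: str) -> str:
    """Write the model's state dict to models/<dataset_name>/<model_name>/state_dict.pt."""
    out_dir = os.path.join("models", dataset_name, model_name)
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, "state_dict.pt")
    torch.save(model.state_dict(), path)
    return path
```

If a downloaded checkpoint was saved as a whole model object rather than a state dict, load it first with torch.load and call .state_dict() on the result before saving.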
Download our collection of out-of-distribution datasets (https://drive.google.com/file/d/1Pdm3aJXDiwkfadZwSQOQFweIjTmy9Yk2/view?usp=sharing) and place them in the datasets folder. These are the same datasets shared by ODIN (https://github.com/facebookresearch/odin), but with zero padding removed, greyscale images filtered out, and the images saved to DataFrames.
Download SVHN (train_32x32.mat, test_32x32.mat) from http://ufldl.stanford.edu/housenumbers/ and put them in datasets/svhn.
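The SVHN .mat files store the images as a (32, 32, 3, N) array X and the labels as y with values 1 to 10, where 10 encodes the digit 0. A minimal loader sketch (the function name load_svhn is ours):

```python
import numpy as np
from scipy.io import loadmat


def load_svhn(path):
    """Load an SVHN .mat file into (images, labels) with images shaped (N, 32, 32, 3)."""
    data = loadmat(path)
    images = np.transpose(data["X"], (3, 0, 1, 2))  # (32, 32, 3, N) -> (N, 32, 32, 3)
    labels = data["y"].flatten().astype(np.int64)
    labels[labels == 10] = 0  # SVHN encodes the digit 0 as class 10
    return images, labels
```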
To fit and evaluate XOOD on the small-image datasets, run:
python test_ood.py
Prepare the datasets:
- Download the ImageNet-1000 validation set and prepare the files using imagenet/create_validation_subfolders.py.
- Download iNaturalist, SUN, Places and Textures using the links provided by ReAct: https://github.com/deeplearning-wisc/react
- Save these files to the directory imagenet.
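The validation-set preparation amounts to moving each image into a subfolder named after its class, so the directory can be read with torchvision's ImageFolder. A sketch of that step, assuming you have a filename-to-class mapping from the ImageNet devkit (the helper name organize_validation is illustrative, not the repo's actual code):

```python
import os
import shutil


def organize_validation(val_dir, label_map):
    """Move each validation image into a subfolder named by its class id.

    label_map maps an image filename to its class id (e.g. a WNID such as
    n01440764); how you build it depends on the ImageNet devkit files you have.
    """
    for fname, class_id in label_map.items():
        class_dir = os.path.join(val_dir, class_id)
        os.makedirs(class_dir, exist_ok=True)
        src = os.path.join(val_dir, fname)
        if os.path.isfile(src):
            shutil.move(src, os.path.join(class_dir, fname))
```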
ImageNet models are downloaded automatically by torchvision when the code is executed.
To fit and evaluate XOOD on ImageNet-1000, run:
python test_imagenet.py
XOOD achieves the following performance on CIFAR-10, CIFAR-100 and SVHN:
To reproduce these tables, download the models and datasets as described in the section "Small Image Datasets" and run:
python test_ood.py
python gram_table.py