PyTorch Out-of-Distribution Detection

Out-of-Distribution (OOD) Detection with Deep Neural Networks based on PyTorch.

The library provides:

  • Out-of-Distribution Detection Methods
  • Loss Functions
  • Datasets
  • Neural Network Architectures as well as pretrained weights
  • Useful Utilities

and is designed to be compatible with frameworks such as pytorch-lightning and segmentation-models-pytorch. The library also covers some methods from closely related fields, such as Open-Set Recognition, Novelty Detection, Confidence Estimation, and Anomaly Detection.

📚 Documentation

The documentation is available here.

NOTE: An important convention adopted in pytorch-ood is that OOD detectors predict outlier scores that should be larger for outliers than for inliers. If the scores predicted by a detector do not match the formulas in the corresponding publication, they may have been multiplied by -1 to comply with this convention.
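
Because of this convention, a simple thresholding rule always reads the same way, regardless of the detector (the scores and threshold below are made-up values for illustration):

import torch

# Outlier scores from some detector: larger means "more likely OOD" (illustrative values)
scores = torch.tensor([0.1, 2.5, -0.3, 4.2])

# With a chosen threshold tau, everything above it is flagged as OOD
tau = 1.0
is_ood = scores > tau  # tensor([False, True, False, True])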

⏳ Quick Start

Load a model pre-trained on CIFAR-10 with the Energy-Bounded Learning Loss [1], predict on some dataset data_loader using Energy-based Out-of-Distribution Detection [2], and calculate the common OOD detection metrics:

from pytorch_ood.model import WideResNet
from pytorch_ood.detector import EnergyBased
from pytorch_ood.utils import OODMetrics

# Create Neural Network
model = WideResNet(num_classes=10, pretrained="er-cifar10-tune").eval().cuda()

# Create detector
detector = EnergyBased(model)

# Evaluate
metrics = OODMetrics()

# OOD samples in data_loader are expected to carry labels < 0
for x, y in data_loader:
    metrics.update(detector(x.cuda()), y)

print(metrics.compute())

You can find more examples in the documentation.

🛠️ Installation

The package can be installed via PyPI:

pip install pytorch-ood

Dependencies

  • torch
  • torchvision
  • scipy
  • torchmetrics

Optional Dependencies

  • scikit-learn for ViM
  • gdown to download some datasets and model weights
  • pandas for the examples
  • segmentation-models-pytorch to run the examples for anomaly segmentation
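
The optional dependencies are not installed automatically; if you need them, install them separately, for example:

pip install scikit-learn gdown pandas segmentation-models-pytorch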

📦 Implemented

Detectors:

Detector Description Year Ref
OpenMax Implementation of the OpenMax Layer as proposed in the paper Towards Open Set Deep Networks. 2016 3
Monte Carlo Dropout Implements Monte Carlo Dropout. 2016 4
Maximum Softmax Probability Implements the Softmax Baseline for OOD and Error detection. 2017 5
ODIN ODIN is a preprocessing method for inputs that aims to increase the discriminability of the softmax outputs for In- and Out-of-Distribution data. 2018 6
Mahalanobis Implements the Mahalanobis Method. 2018 7
Energy-Based OOD Detection Implements the Energy Score of Energy-based Out-of-distribution Detection. 2020 8
Entropy Uses entropy to detect OOD inputs. 2021 9
Maximum Logit Implements the MaxLogit method. 2022 10
KL-Matching Implements the KL-Matching method for Multi-Class classification. 2022 11
ViM Implements Virtual Logit Matching. 2022 12
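
Note that some detectors (e.g., Mahalanobis, ViM, KL-Matching, OpenMax) have to be fitted on in-distribution data before they can score new inputs, unlike the purely post-hoc EnergyBased detector from the Quick Start. A minimal sketch, assuming the detector exposes a fit() method; feature_extractor, fit_loader, and test_loader are placeholders you provide (check the documentation for exact signatures):

from pytorch_ood.detector import Mahalanobis
from pytorch_ood.utils import OODMetrics

# feature_extractor: any module mapping inputs to feature vectors (placeholder)
detector = Mahalanobis(feature_extractor)

# Estimate class-conditional statistics from in-distribution data
detector.fit(fit_loader)

metrics = OODMetrics()
for x, y in test_loader:
    metrics.update(detector(x.cuda()), y)

print(metrics.compute())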

Objective Functions:

Objective Function Description Year Ref
Objectosphere Implementation of the paper Reducing Network Agnostophobia. 2018 13
Center Loss Generalized version of the Center Loss from the paper A Discriminative Feature Learning Approach for Deep Face Recognition. 2016 14
Outlier Exposure Implementation of the paper Deep Anomaly Detection With Outlier Exposure. 2018 15
Deep SVDD Implementation of the Deep Support Vector Data Description from the paper Deep One-Class Classification. 2018 16
Energy Regularization Adds a regularization term to the cross-entropy that aims to increase the energy gap between in-distribution and OOD samples. 2020 17
CAC Loss Class Anchor Clustering Loss from the paper Class Anchor Clustering: a Distance-based Loss for Training Open Set Classifiers. 2021 18
Entropy Maximization Entropy maximization and meta classification for OOD detection in semantic segmentation. 2021 19
II Loss Implementation of the II Loss function from Learning a neural network-based representation for open set recognition. 2020 20
MCHAD Loss Implementation of the MCHAD Loss from the paper Multi Class Hypersphere Anomaly Detection. 2022 21
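
Many of these objective functions can handle batches that mix in-distribution and outlier samples, where outliers carry negative target labels. Below is a minimal training sketch with Outlier Exposure, assuming OutlierExposureLoss accepts logits and such mixed targets; model, train_loader_in, and train_loader_out are placeholders you provide:

import torch
from pytorch_ood.loss import OutlierExposureLoss

criterion = OutlierExposureLoss()
optimizer = torch.optim.SGD(model.parameters(), lr=0.01)

for (x_in, y_in), (x_out, _) in zip(train_loader_in, train_loader_out):
    x = torch.cat([x_in, x_out]).cuda()
    # outliers are marked with the label -1
    y = torch.cat([y_in, -torch.ones(len(x_out), dtype=torch.long)]).cuda()

    loss = criterion(model(x), y)
    optimizer.zero_grad()
    loss.backward()
    optimizer.step()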

Image Datasets:

Dataset Description Year Ref
TinyImages The TinyImages dataset is often used as auxiliary OOD training data; however, its use is discouraged. 2012 22
Textures Textures dataset, also known as DTD, often used as OOD Examples. 2013 23
FoolingImages OOD Images Generated to fool certain Deep Neural Networks. 2014 24
TinyImages300k A cleaned version of the TinyImages dataset with 300,000 images, often used as auxiliary OOD training data. 2018 25
MNIST-C Corrupted version of MNIST. 2019 26
CIFAR10-C Corrupted version of CIFAR-10. 2019 27
CIFAR100-C Corrupted version of CIFAR-100. 2019 28
ImageNet-C Corrupted version of ImageNet. 2019 29
ImageNet - A, O, R Different outlier variants of ImageNet. 2019 30
MVTec-AD MVTec Anomaly Segmentation Dataset 2021 31
StreetHazards Anomaly Segmentation Dataset 2022 32
PixMix PixMix image augmentation method 2022 33
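
For evaluation, an OOD dataset is typically concatenated with the in-distribution test set, with its targets re-mapped to the "unknown" (negative) label. A sketch, assuming Textures and ToUnknown behave as indicated; the root paths are placeholders:

from torch.utils.data import ConcatDataset, DataLoader
from torchvision.datasets import CIFAR10
from torchvision.transforms import Compose, Resize, ToTensor
from pytorch_ood.dataset.img import Textures
from pytorch_ood.utils import ToUnknown

trans = Compose([Resize((32, 32)), ToTensor()])

# In-distribution test data ...
dataset_in = CIFAR10(root="data", train=False, transform=trans, download=True)

# ... and OOD data; ToUnknown() replaces all targets with negative labels
dataset_out = Textures(root="data", transform=trans, target_transform=ToUnknown(), download=True)

data_loader = DataLoader(ConcatDataset([dataset_in, dataset_out]), batch_size=128)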

Text Datasets:

Dataset Description Year Ref
Multi30k Multi-30k dataset, as used by Hendrycks et al. in the OOD baseline paper. 2016 34
WikiText2 Texts from Wikipedia, often used as auxiliary OOD training data. 2016 35
WikiText103 Texts from Wikipedia, often used as auxiliary OOD training data. 2016 36
NewsGroup20 Texts from different newsgroups, as used by Hendrycks et al. in the OOD baseline paper.

🤝 Contributing

We encourage everyone to contribute to this project by adding implementations of OOD detection methods, datasets, etc., or by checking the existing implementations for bugs.

📝 Citing

pytorch-ood was presented at a CVPR Workshop in 2022. If you use it in a scientific publication, please consider citing:

@InProceedings{kirchheim2022pytorch,
    author    = {Kirchheim, Konstantin and Filax, Marco and Ortmeier, Frank},
    title     = {PyTorch-OOD: A Library for Out-of-Distribution Detection Based on PyTorch},
    booktitle = {Proceedings of the IEEE/CVF Conference on Computer Vision and Pattern Recognition (CVPR) Workshops},
    month     = {June},
    year      = {2022},
    pages     = {4351-4360}
}

🛡️ License

The code is licensed under Apache 2.0. We have taken care to make sure any third party code included or adapted has compatible (permissive) licenses such as MIT, BSD, etc. The legal implications of using pre-trained models in commercial services are, to our knowledge, not fully understood.


🔗 References


  1. Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.

  2. Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.

  3. Bendale, A., & Boult, T. E. (2016). Towards open set deep networks. CVPR.

  4. Gal, Y., & Ghahramani, Z. (2016). Dropout as a bayesian approximation: Representing model uncertainty in deep learning. ICML.

  5. Hendrycks, D., & Gimpel, K. (2016). A baseline for detecting misclassified and out-of-distribution examples in neural networks. ICLR.

  6. Liang, S., Li, Y., & Srikant, R. (2017). Enhancing the reliability of out-of-distribution image detection in neural networks. ICLR.

  7. Lee, K., Lee, K., Lee, H., & Shin, J. (2018). A simple unified framework for detecting out-of-distribution samples and adversarial attacks. NeurIPS.

  8. Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.

  9. Chan, R., et al. (2021). Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation. CVPR.

  10. Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.

  11. Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.

  12. Wang, H., Li, Z., Feng, L., & Zhang, W. (2022). ViM: Out-of-distribution with virtual-logit matching. CVPR.

  13. Dhamija, A. R., Günther, M., & Boult, T. (2018). Reducing network agnostophobia. NeurIPS.

  14. Wen, Y., Zhang, K., Li, Z., & Qiao, Y. (2016). A discriminative feature learning approach for deep face recognition. ECCV.

  15. Hendrycks, D., Mazeika, M., & Dietterich, T. (2018). Deep anomaly detection with outlier exposure. ICLR.

  16. Ruff, L., et al. (2018). Deep one-class classification. ICML.

  17. Liu, W., Wang, X., Owens, J., & Li, Y. (2020). Energy-based out-of-distribution detection. NeurIPS.

  18. Miller, D., Sunderhauf, N., Milford, M., & Dayoub, F. (2021). Class anchor clustering: A loss for distance-based open set recognition. WACV.

  19. Chan, R., et al. (2021). Entropy maximization and meta classification for out-of-distribution detection in semantic segmentation. CVPR.

  20. Hassen, M., & Chan, P. K. (2020). Learning a neural-network-based representation for open set recognition. SDM.

  21. Kirchheim, K., Filax, M., & Ortmeier, F. (2022). Multi-class hypersphere anomaly detection. ICPR.

  22. Torralba, A., Fergus, R., & Freeman, W. T. (2008). 80 million tiny images: A large dataset for non-parametric object and scene recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence.

  23. Cimpoi, M., Maji, S., Kokkinos, I., Mohamed, S., & Vedaldi, A. (2014). Describing textures in the wild. CVPR.

  24. Nguyen, A., Yosinski, J., & Clune, J. (2015). Deep neural networks are easily fooled: High confidence predictions for unrecognizable images. CVPR.

  25. Hendrycks, D., Mazeika, M., & Dietterich, T. (2018). Deep anomaly detection with outlier exposure. ICLR.

  26. Mu, N., & Gilmer, J. (2019). MNIST-C: A robustness benchmark for computer vision. ICLR Workshop.

  27. Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. ICLR.

  28. Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. ICLR.

  29. Hendrycks, D., & Dietterich, T. (2019). Benchmarking neural network robustness to common corruptions and perturbations. ICLR.

  30. Hendrycks, D., Zhao, K., Basart, S., Steinhardt, J., & Song, D. (2021). Natural adversarial examples. CVPR.

  31. Bergmann, P., Batzner, K., et al. (2021). The MVTec Anomaly Detection Dataset: A comprehensive real-world dataset for unsupervised anomaly detection. IJCV.

  32. Hendrycks, D., Basart, S., Mazeika, M., Mostajabi, M., Steinhardt, J., & Song, D. (2022). Scaling out-of-distribution detection for real-world settings. ICML.

  33. Hendrycks, D., Zou, A., et al. (2022). PixMix: Dreamlike pictures comprehensively improve safety measures. CVPR.

  34. Elliott, D., Frank, S., Sima'an, K., & Specia, L. (2016). Multi30k: Multilingual english-german image descriptions. Proceedings of the 5th Workshop on Vision and Language.

  35. Merity, S., Xiong, C., Bradbury, J., & Socher, R. (2016). Pointer sentinel mixture models. arXiv.

  36. Merity, S., Xiong, C., Bradbury, J., & Socher, R. (2016). Pointer sentinel mixture models. arXiv.