TorchUncertainty

TorchUncertainty is a package designed to help you leverage uncertainty quantification techniques and make your deep neural networks more reliable. It aims to be collaborative and to include as many methods as possible, so reach out to add yours!

🚧 TorchUncertainty is in early development 🚧 - expect changes, but reach out and contribute if you are interested in the project! Please raise an issue if you encounter any bugs or difficulties, and join the Discord server.

Our webpage and documentation are available here: torch-uncertainty.github.io.


This package provides a multi-level API, including:

  • easy-to-use ⚡️ Lightning uncertainty-aware training & evaluation routines for four tasks: classification, probabilistic regression, pointwise regression, and segmentation (a minimal example follows this list).
  • ready-to-train baselines on research datasets, such as ImageNet and CIFAR.
  • pretrained weights for these baselines on ImageNet and CIFAR (work in progress 🚧).
  • layers, models, metrics, & losses available for use in your networks.
  • scikit-learn-style post-processing methods such as Temperature Scaling.
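For instance, the routine-level API lets you wrap any classifier in a Lightning training loop. Below is a minimal sketch assuming the documented ClassificationRoutine interface (model, num_classes, and loss arguments); argument names may differ slightly between versions, and the backbone and datamodule are placeholders:

from torch import nn
from lightning.pytorch import Trainer

from torch_uncertainty.routines import ClassificationRoutine

# Placeholder backbone: any torch.nn.Module producing class logits works.
model = nn.Sequential(nn.Flatten(), nn.Linear(28 * 28, 10))

# The routine is a LightningModule bundling the uncertainty-aware
# training step and the evaluation metrics for the classification task.
routine = ClassificationRoutine(
    model=model,
    num_classes=10,
    loss=nn.CrossEntropyLoss(),
)

trainer = Trainer(max_epochs=1)
# `datamodule` stands in for one of the package's Lightning datamodules:
# trainer.fit(routine, datamodule=datamodule)

The regression and segmentation routines are used analogously.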

Have a look at the Reference page or the API reference for a more exhaustive list of the implemented methods, datasets, metrics, etc.

⚙️ Installation

TorchUncertainty requires Python 3.10 or greater. Install the desired PyTorch version in your environment. Then, install the package from PyPI:

pip install torch-uncertainty

The installation procedure for contributors is different: have a look at the contribution page.

🐎 Quickstart

We make a quickstart available at torch-uncertainty.github.io/quickstart.

📚 Implemented methods

TorchUncertainty currently supports classification, probabilistic and pointwise regression, and segmentation tasks.

Baselines

To date, the following deep learning baselines have been implemented:

  • Deep Ensembles
  • MC-Dropout - Tutorial (sketched in plain PyTorch after this list)
  • BatchEnsemble
  • Masksembles
  • MIMO
  • Packed-Ensembles (see Blog post) - Tutorial
  • Bayesian Neural Networks 🚧 Work in progress 🚧 - Tutorial
  • Regression with Beta Gaussian NLL Loss
  • Deep Evidential Classification & Regression - Tutorial
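Most of these baselines ship as ready-made models or wrappers in the package. As an illustration of the underlying idea, here is MC-Dropout in plain PyTorch: a generic sketch of the technique, not TorchUncertainty's wrapper:

import torch

def mc_dropout_predict(model, x, num_samples=20):
    """Monte Carlo Dropout: keep dropout active at inference and
    average the softmax outputs over several stochastic forward passes."""
    model.eval()
    # Re-enable plain dropout layers only, leaving e.g. batch norm in eval mode.
    for module in model.modules():
        if isinstance(module, torch.nn.Dropout):
            module.train()
    with torch.no_grad():
        probs = torch.stack(
            [torch.softmax(model(x), dim=-1) for _ in range(num_samples)]
        )
    return probs.mean(dim=0), probs.std(dim=0)  # predictive mean and spread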

Augmentation methods

The following data augmentation methods have been implemented:

  • Mixup, MixupIO, RegMixup, WarpingMixup (vanilla Mixup is sketched below)
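
As a reminder of what these variants build on, here is vanilla Mixup in plain PyTorch; a generic sketch of the technique, not the package's implementation:

import torch

def mixup(x, y, num_classes, alpha=1.0):
    """Vanilla Mixup: blend each sample (and its one-hot label)
    with a randomly paired sample from the same batch."""
    lam = torch.distributions.Beta(alpha, alpha).sample().item()
    perm = torch.randperm(x.size(0))
    x_mix = lam * x + (1 - lam) * x[perm]
    y_onehot = torch.nn.functional.one_hot(y, num_classes).float()
    y_mix = lam * y_onehot + (1 - lam) * y_onehot[perm]
    return x_mix, y_mix

Training then simply uses a loss that accepts soft targets, such as cross-entropy on class probabilities.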

Post-processing methods

To date, the following post-processing methods have been implemented:

  • Temperature, Vector, & Matrix scaling - Tutorial (temperature scaling is sketched after this list)
  • Monte Carlo Batch Normalization - Tutorial
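Temperature scaling itself fits in a few lines; the following plain-PyTorch sketch illustrates the idea behind the package's scikit-learn-style interface (a generic illustration, not TorchUncertainty's exact API):

import torch

def fit_temperature(logits, labels, max_iter=100):
    """Learn a single temperature T > 0 that minimizes the NLL of
    softmax(logits / T) on held-out calibration data."""
    log_t = torch.zeros(1, requires_grad=True)  # optimize log T so T stays positive
    optimizer = torch.optim.LBFGS([log_t], max_iter=max_iter)

    def closure():
        optimizer.zero_grad()
        loss = torch.nn.functional.cross_entropy(logits / log_t.exp(), labels)
        loss.backward()
        return loss

    optimizer.step(closure)
    return log_t.exp().item()  # divide test-time logits by this temperature

The fitted temperature is then applied to all test-time logits before the softmax, which leaves accuracy unchanged while improving calibration.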

Tutorials

Our documentation gathers tutorials for the methods above, including MC-Dropout, Packed-Ensembles, Bayesian Neural Networks, Deep Evidential Classification & Regression, the scaling methods, and Monte Carlo Batch Normalization; follow the Tutorial links in the corresponding lists.

Other References

This package also contains the official implementation of Packed-Ensembles.

If you find the corresponding models interesting, please consider citing our paper:

@inproceedings{laurent2023packed,
    title={Packed-Ensembles for Efficient Uncertainty Estimation},
    author={Laurent, Olivier and Lafage, Adrien and Tartaglione, Enzo and Daniel, Geoffrey and Martinez, Jean-Marc and Bursuc, Andrei and Franchi, Gianni},
    booktitle={ICLR},
    year={2023}
}