OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection

❗ If you use OpenMIBOOD in your research, please cite our OpenMIBOOD paper along with both OpenOOD benchmark papers (versions 1 and 1.5), from which this evaluation framework is forked.

Summary of all utilized medical datasets, separated into ID, cs-ID, near-OOD, and far-OOD, together with their corresponding underlying domain shifts.

❗ The table below lists all medical imaging datasets used in this framework. At a minimum, each dataset you use must be cited; in addition, ensure compliance with each dataset's specific citation requirements to properly acknowledge the researchers' contributions.

Supported Medical Imaging Benchmarks (3)

This section lists all benchmarks and their associated dataset structure.

Medical Imaging Benchmarks
  • MIDOG

    ID: Domain 1a;
    cs-ID: Domain 1b, Domain 1c;
    near-OOD: Domain 2, Domain 3, Domain 4, Domain 5, Domain 6a, Domain 6b, Domain 7;
    far-OOD: CCAgT, FNAC2019;

  • PhaKIR

    ID: Video 01–05, Video 07 (without frames containing smoke);
    cs-ID: Video 01–05, Video 07 (only frames containing smoke);
    near-OOD: Cholec80, EndoSeg15, EndoSeg18;
    far-OOD: Kvasir-SEG, CATARACTS;

  • OASIS-3

    ID: T1-weighted MRI (without scans from Siemens MAGNETOM Vision devices);
    cs-ID: T2-weighted MRI, T1-weighted MRI (only scans from Siemens MAGNETOM Vision devices);
    near-OOD: ATLAS, BraTS-2023 Glioma, OASIS-3 CT;
    far-OOD: MSD-H, CHAOS;

❗ The PhaKIR dataset is not yet publicly available (expected release: early summer). Until then, we offer to evaluate post-hoc methods for this benchmark and provide the results.

The three Medical Imaging Benchmarks from OpenMIBOOD were evaluated using the following 24 post-hoc methods. While other postprocessors contained in this repository may also be compatible with these benchmarks, they have not been tested yet.

The evaluated methods include: ASH, DICE, Dropout, EBO, fDBD, GEN, KLM, KNN, MDS, MDS Ensemble, MLS, MSP, NNGuide, ODIN, OpenMax, RankFeat, ReAct, Relation, Residual, RMDS, SCALE, SHE, TempScale, ViM.

Summary of the main results covering all introduced Medical Imaging Benchmarks.

To reproduce our results, run the scripts eval_ood_midog.py, eval_ood_phakir.py (not yet released), and eval_ood_oasis3.py from the scripts directory, specifying the corresponding postprocessor method name as a parameter.
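For example, a MIDOG evaluation with a single postprocessor could look like the following; how the method name is passed is an assumption here, so check the script's help output for the exact interface.

# Evaluate the MIDOG benchmark with the MSP postprocessor
# (--postprocessor is an assumed argument name; confirm via the script's --help)
python scripts/eval_ood_midog.py --postprocessor msp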

Datasets (14)

For each dataset, a corresponding script is provided under scripts/download/OpenMIBOOD that either downloads and prepares the dataset directly or gives instructions on how to proceed. For datasets that require slightly more complex access, we have prepared additional instructions under instructions/[dataset].

Dataset | Associated Publications | Homepage
MIDOG | https://doi.org/10.1038/s41597-023-02327-4 | https://github.com/DeepMicroscopy/MIDOGpp
CCAgT | https://doi.org/10.1016/j.compmedimag.2021.101934, https://doi.org/10.1109/CBMS49503.2020.00110 | https://github.com/johnnv1/CCAgT-utils
FNAC 2019 | https://doi.org/10.1016/j.tice.2019.02.001 | https://1drv.ms/u/s!Al-T6d-_ENf6axsEbvhbEc2gUFs
PhaKIR | https://phakir.re-mic.de/, Smoke Annotations | https://phakir.re-mic.de/
Cholec80 | https://doi.org/10.1109/TMI.2016.2593957 | Cropped single instrument frames from Cholec80, https://camma.unistra.fr/datasets/
EndoSeg15 | https://doi.org/10.48550/arXiv.1805.02475 | https://endovissub-instrument.grand-challenge.org/
EndoSeg18 | https://doi.org/10.48550/arXiv.2001.11190 | https://endovissub2018-roboticscenesegmentation.grand-challenge.org/
Kvasir-SEG | https://doi.org/10.1007/978-3-030-37734-2_37 | https://datasets.simula.no/kvasir-seg/
CATARACTS | https://doi.org/10.1016/j.media.2018.11.008 | Cleaned subset of the first five CATARACTS test videos, https://dx.doi.org/10.21227/ac97-8m18
OASIS-3 | https://doi.org/10.1101/2019.12.13.19014902 | https://sites.wustl.edu/oasisbrains/home/oasis-3/
ATLAS | https://doi.org/10.1038/s41597-022-01401-7 | https://fcon_1000.projects.nitrc.org/indi/retro/atlas.html
BraTS-Glioma | https://doi.org/10.48550/arXiv.2107.02314 | https://www.synapse.org/Synapse:syn51156910/wiki/621282
MSD-H | https://doi.org/10.1038/s41467-022-30695-9 | http://medicaldecathlon.com/
CHAOS | https://doi.org/10.1016/j.media.2020.101950 | https://chaos.grand-challenge.org/Combined_Healthy_Abdominal_Organ_Segmentation/, https://doi.org/10.5281/zenodo.3362844

Updates

  • 14 Mar, 2025: The repository corresponding to OpenMIBOOD is released on GitHub.
  • 26 Feb, 2025: The OpenMIBOOD full paper is accepted at CVPR 2025. Check the report here.

Contributing

We appreciate all contributions to improve OpenMIBOOD. However, we emphasize that this repository is merely an extension of the underlying OpenOOD framework; therefore, contributions may be more appropriately directed to the original OpenOOD repository.

Get Started

Installation

git clone https://github.com/remic-othr/OpenMIBOOD
cd OpenMIBOOD
pip install -e .

Data

To get all required datasets, you can use the provided download scripts in scripts/download/OpenMIBOOD. After all datasets for a benchmark are prepared using those scripts, you can use the evaluation scripts scripts/eval_ood_[benchmark].py.
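A sketch of that workflow is shown below; the per-dataset script name is hypothetical, so list the directory to see the scripts that are actually provided.

# List the provided download/preparation scripts
ls scripts/download/OpenMIBOOD

# Run the script for the dataset you need (file name is hypothetical)
python scripts/download/OpenMIBOOD/download_midog.py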

Pre-trained checkpoints

OpenMIBOOD uses three ID datasets and we release pre-trained models accordingly at https://doi.org/10.5281/zenodo.14982267. However, for ease of access, you can use the download script download_classifiers.py to automatically download and move the models to the correct folder.
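A minimal sketch of this step is given below; the path to download_classifiers.py is an assumption, so adjust it to wherever the script resides in your checkout.

# Fetch the pre-trained classifiers from Zenodo and place them under ./results/
# (script location is assumed; locate download_classifiers.py in your checkout)
python scripts/download/download_classifiers.py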

Our codebase accesses the datasets from ./data/ and pretrained models from ./results/[benchmark]/ by default.

├── ...
├── data
│   ├── benchmark_imglist
│   ├── midog
│   ├── phakir
│   └── oasis
├── openood
├── results
│   ├── midog
│   ├── phakir
│   ├── oasis3
│   └── ...
├── scripts

Evaluation scripts

We provide evaluation scripts for all the methods we support in the scripts folder: eval_ood_midog.py, eval_ood_phakir.py, eval_ood_oasis3.py.


Citation

If you find our repository useful for your research, please consider citing our CVPR 2025 paper along with the original OpenOOD publications listed under Citation. Depending on which benchmarks/datasets you use, also give appropriate citations and credit to those researchers as outlined under Datasets.

# OpenMIBOOD
@InProceedings{gutbrod2025openmibood,
  author    = {Gutbrod, Max and Rauber, David and Nunes, Danilo Weber and Palm, Christoph},
  title     = {OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection},
  booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
  month     = {June},
  year      = {2025},
  pages     = {25874-25886}
}
