| ❗ If you use OpenMIBOOD in your research, please cite our OpenMIBOOD paper along with both OpenOOD benchmark papers (versions 1 and 1.5), from which this evaluation framework is forked. |
|---|
| ❗ The overview below lists all medical imaging datasets used in this framework. At a minimum, cite each dataset you use, and make sure to comply with that dataset's specific citation requirements to properly acknowledge the researchers' contributions. |
|---|
This section lists all benchmarks and their associated dataset structure.

Medical Imaging Benchmarks

- MIDOG
  - ID: Domain 1a
  - cs-ID: Domain 1b, Domain 1c
  - near-OOD: Domain 2, Domain 3, Domain 4, Domain 5, Domain 6a, Domain 6b, Domain 7
  - far-OOD: CCAgT, FNAC 2019
- PhaKIR
  - ID: Video 01–05, Video 07 (without frames containing smoke)
  - cs-ID: Video 01–05, Video 07 (only frames containing smoke)
  - near-OOD: Cholec80, EndoSeg15, EndoSeg18
  - far-OOD: Kvasir-SEG, CATARACTS
- OASIS-3
  - ID: T1-weighted MRI (without scans from Siemens MAGNETOM Vision devices)
  - cs-ID: T2-weighted MRI; T1-weighted MRI (only scans from Siemens MAGNETOM Vision devices)
  - near-OOD: ATLAS, BraTS-2023 Glioma, OASIS-3 CT
  - far-OOD: MSD-H, CHAOS
| ❗ The PhaKIR dataset is not yet publicly available (expected release: early summer). Until then, we offer to run post-hoc methods on this benchmark on your behalf and provide the results. |
|---|
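For orientation, the role each dataset plays in a benchmark can also be written down as plain data. The following sketch is purely illustrative (it is not a configuration format used by this repository) and mirrors the MIDOG splits from the overview above:

```python
# Illustrative only: the MIDOG benchmark's split roles as plain data,
# mirroring the overview above; not a config format used by this repository.
MIDOG_SPLITS = {
    "ID":       ["Domain 1a"],
    "cs-ID":    ["Domain 1b", "Domain 1c"],
    "near-OOD": ["Domain 2", "Domain 3", "Domain 4", "Domain 5",
                 "Domain 6a", "Domain 6b", "Domain 7"],
    "far-OOD":  ["CCAgT", "FNAC 2019"],
}
```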
The three Medical Imaging Benchmarks from OpenMIBOOD were evaluated using the following 24 post-hoc methods. Other postprocessors contained in this repository may also be compatible with these benchmarks, but they have not been tested yet.
The evaluated methods are: ASH, DICE, Dropout, EBO, fDBD, GEN, KLM, KNN, MDS, MDS Ensemble, MLS, MSP, NNGuide, ODIN, OpenMax, RankFeat, ReAct, Relation, Residual, RMDS, SCALE, SHE, TempScale, and ViM.
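As a concrete illustration of what "post-hoc" means here, the following minimal sketch scores samples with MSP (maximum softmax probability), using only the outputs of an already-trained classifier and no retraining. This is an illustration of the general idea only; the postprocessors in this repository follow the OpenOOD interface and are more elaborate.

```python
import torch
import torch.nn.functional as F

# Minimal sketch of one post-hoc method, MSP (maximum softmax probability):
# the OOD score is derived purely from a trained classifier's outputs.
@torch.no_grad()
def msp_score(net: torch.nn.Module, x: torch.Tensor) -> torch.Tensor:
    logits = net(x)                 # shape: (batch, num_classes)
    probs = F.softmax(logits, dim=1)
    return probs.max(dim=1).values  # higher score -> more likely ID
```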
To reproduce our results, run the scripts eval_ood_midog.py, eval_ood_phakir.py (not yet released), and eval_ood_oasis3.py from the scripts directory, passing the name of the desired postprocessor as a parameter.
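If you prefer to drive the evaluation from your own code, something along the following lines should work, assuming the OpenOOD v1.5 evaluation API is available in this fork. The benchmark identifier "midog" and the ResNet-50 stand-in are assumptions for illustration only; in practice, load the pre-trained classifier released on Zenodo for the chosen benchmark.

```python
# Sketch assuming the OpenOOD v1.5 evaluation API, which this fork builds on.
# "midog" as id_name and the ResNet-50 stand-in are unverified assumptions.
import torchvision
from openood.evaluation_api import Evaluator

net = torchvision.models.resnet50()  # placeholder network, not the released model
net.eval()

evaluator = Evaluator(
    net,
    id_name="midog",            # assumed benchmark identifier
    data_root="./data",
    postprocessor_name="msp",   # any of the 24 method names listed above
)
metrics = evaluator.eval_ood()
print(metrics)
```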
For each dataset, a corresponding script under scripts/download/OpenMIBOOD either downloads and prepares the dataset directly or explains how to proceed. For datasets that require slightly more complex access, we provide additional instructions under instructions/[dataset].
- 14 Mar, 2025: Repository corresponding to OpenMIBOOD released on GitHub.
- 26 Feb, 2025: The full OpenMIBOOD paper was accepted at CVPR 2025. Check the report here.
We appreciate all contributions to improve OpenMIBOOD. However, we emphasize that this repository is merely an extension of the underlying OpenOOD framework; therefore, contributions may be more appropriately directed to the original OpenOOD repository.
```bash
git clone https://github.com/remic-othr/OpenMIBOOD
cd OpenMIBOOD
pip install -e .
```
To obtain all required datasets, use the provided download scripts in scripts/download/OpenMIBOOD.
Once all datasets for a benchmark have been prepared with those scripts, run the corresponding evaluation script scripts/eval_ood_[benchmark].py.
OpenMIBOOD uses three ID datasets, and we release corresponding pre-trained models at https://doi.org/10.5281/zenodo.14982267.
For ease of access, the download script download_classifiers.py automatically downloads the models and moves them to the correct folder.
Our codebase accesses the datasets from ./data/ and pretrained models from ./results/[benchmark]/ by default.
```
├── ...
├── data
│   ├── benchmark_imglist
│   ├── midog
│   ├── phakir
│   └── oasis
├── openood
├── results
│   ├── midog
│   ├── phakir
│   ├── oasis3
│   └── ...
├── scripts
```
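As a quick way to verify that everything landed in the expected locations, a small check such as the following can help. It is purely illustrative, with paths taken from the tree above:

```python
# Illustrative sanity check that datasets and pre-trained models sit where
# the codebase expects them by default (paths taken from the tree above).
from pathlib import Path

for path in ["data/benchmark_imglist", "data/midog", "results/midog"]:
    print(f"{path}: {'ok' if Path(path).is_dir() else 'MISSING'}")
```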
We provide evaluation scripts for all the methods we support in the scripts folder: eval_ood_midog.py, eval_ood_phakir.py, and eval_ood_oasis3.py.
If you find our repository useful for your research, please consider citing our CVPR 2025 paper along with the original OpenOOD publications listed under Citation. Depending on which benchmarks/datasets you use, also give appropriate citations and credit to those researchers, as outlined under Datasets.
```bibtex
# OpenMIBOOD
@InProceedings{gutbrod2025openmibood,
    author    = {Gutbrod, Max and Rauber, David and Nunes, Danilo Weber and Palm, Christoph},
    title     = {OpenMIBOOD: Open Medical Imaging Benchmarks for Out-Of-Distribution Detection},
    booktitle = {Proceedings of the Computer Vision and Pattern Recognition Conference (CVPR)},
    month     = {June},
    year      = {2025},
    pages     = {25874-25886}
}
```
