Sparse-BagNet validation: An Inherently Interpretable AI model improves Screening Speed and Accuracy for Early Diabetic Retinopathy

This repository contains the official implementation of the paper An Inherently Interpretable AI model improves Screening Speed and Accuracy for Early Diabetic Retinopathy.

Model's architecture

Development dataset

Dependencies

All packages required to run the code in this repository are listed in requirements.txt.

Data & Preprocessing

Data

The code in this repository uses the publicly available Kaggle dataset from the Diabetic Retinopathy Detection challenge.

Preprocessing

The images were preprocessed by tightly cropping a circular mask around the retinal fundus and resizing to 512 x 512 pixels. The code is available here: fundus preprocessing
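The crop-and-resize step can be sketched as follows; this is a minimal NumPy-only illustration (the background threshold and the nearest-neighbour resizing are assumptions for brevity, not the repository's exact implementation, which is linked above):

```python
import numpy as np

def tight_crop_and_resize(img, threshold=10, size=512):
    """Tightly crop the circular fundus region and resize to size x size.

    `img` is an H x W x 3 uint8 array. `threshold` separates the dark
    background from the fundus (an assumed value). Resizing here is
    nearest-neighbour for simplicity.
    """
    gray = img.mean(axis=2)
    mask = gray > threshold                  # True inside the fundus circle
    rows = np.where(mask.any(axis=1))[0]     # rows touching the fundus
    cols = np.where(mask.any(axis=0))[0]     # columns touching the fundus
    cropped = img[rows[0]:rows[-1] + 1, cols[0]:cols[-1] + 1]
    h, w = cropped.shape[:2]
    ri = np.arange(size) * h // size         # nearest-neighbour row indices
    ci = np.arange(size) * w // size         # nearest-neighbour col indices
    return cropped[np.ix_(ri, ci)]
```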

An ensemble of EfficientNets trained on the ISBI 2020 challenge dataset was used to filter out low-quality images. The resulting dataset (CSV files) used to train the model and for internal evaluation is as follows:

The image names used for figures are provided in images.txt
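The ensemble-based quality filtering described above can be sketched like this, assuming each model in the ensemble emits a probability that an image is of gradable quality (the function name, input format, and 0.5 cut-off are illustrative, not taken from the repository):

```python
def filter_gradable(image_scores, threshold=0.5):
    """Keep images whose mean ensemble 'gradable' probability passes threshold.

    `image_scores` maps image name -> list of per-model probabilities
    that the image is of gradable quality.
    """
    gradable = {}
    for name, probs in image_scores.items():
        mean_prob = sum(probs) / len(probs)  # average the ensemble's votes
        if mean_prob >= threshold:
            gradable[name] = mean_prob
    return gradable
```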

How to use: training

1. Organize the dataset as follows:

├── main_folder
    ├── Kaggle_data
        ├── Images
        ├── kaggle_gradable_train.csv
        ├── kaggle_gradable_test.csv
        ├── kaggle_gradable_val.csv 
    ├── Outputs
    ├── configs
    ├── data
    ├── files
    ├── utils
    ├── modules  
    ├── main.py
    ├── train.py

Adjust the dataset paths in configs/paths.yaml. Replace the value of

  • root with Kaggle_data/
  • img_dir with Images/

Adjust the paths in configs/default.yaml. Replace the value of

  • save_paths with the directory where the log files and model weights will be saved during training
  • paths.model_dir with the same directory
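For illustration, the relevant entries in the two config files might look like this (all paths are placeholders; the exact keys and structure come from the repository's own configs):

```yaml
# configs/paths.yaml (placeholder values)
root: /path/to/main_folder/Kaggle_data/
img_dir: Images/

# configs/default.yaml (only the path-related keys shown)
save_paths: /path/to/output_dir   # logs and model weights
paths:
  model_dir: /path/to/output_dir
```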

2. Update the training configurations and hyperparameters

All experiments are fully specified by the configuration file located at ./configs/default.yaml.

Training configurations, including hyperparameter tuning, can be set in this main config file.

3. Run to train

  • Create a virtual environment and install the dependencies
$ pip install -r requirements.txt
  • Run the model with the previously defined parameters
$ python main.py

4. Monitor the training step

Monitor the training progress at http://127.0.0.1:6006 by running:

$ tensorboard --logdir=/path/to/your/log --port=6006

Reproducibility

Figures and annotations

  • Code for the figures is available

Retrospective reader study

  • The CSV file of the 180 images with ground-truth levels used for the grading tasks is available at grading dataset
  • The CSV file containing the outcomes of the grading tasks (including the model output and the ophthalmologists' performance, such as decision time, confidence, and grade) is available at grading dataset
  • The CSV file of the 65 images with ground-truth levels used for the annotation task is available at annotation dataset
  • The annotation masks from clinicians used to evaluate the model's performance at localizing DR-related lesions on the internal set are available at annotation dataset. The annotations only include microaneurysms (MA), hemorrhages (HE), hard exudates (EX), and soft exudates (SE).

Model weights

The final models with the best validation weights used for all experiments are as follows:

Acknowledgments

Citation

  @inproceedings{djoumessi2024inherently,
  title={An Inherently Interpretable AI model improves Screening Speed and Accuracy for Early Diabetic Retinopathy},
  author={Djoumessi, Kerol and Huang, Ziwei and Rickmann, Annekatrin and Simon, Natalia and K{\"u}hlewein, Laura and Koch, Lisa M and Berens, Philipp},
  booktitle={xx},
  year={2024}
}

This work includes code adaptations from Sparse BagNet (Djoumessi et al., 2023):

  @inproceedings{donteu2023sparse,
  title={Sparse Activations for Interpretable Disease Grading},
  author={Donteu, Kerol R Djoumessi and Ilanchezian, Indu and K{\"u}hlewein, Laura and Faber, Hanna and Baumgartner, Christian F and Bah, Bubacarr and Berens, Philipp and Koch, Lisa M},
  booktitle={Medical Imaging with Deep Learning},
  year={2023}
}
