
Official PyTorch implementation of CASS, from the following papers:

CASS: Cross Architectural Self-Supervision for Medical Image Analysis.

(Accepted at the NeurIPS 2022 Workshop: Self-Supervised Learning - Theory and Practice)

Pranav Singh, Elena Sizikova, Jacopo Cirrone

New York University.

A longer version of the above was accepted at the Machine Learning for Healthcare Conference, New York, USA:

Efficient Representation Learning for Healthcare with Cross-Architectural Self-Supervision

Pranav Singh, Jacopo Cirrone. Proceedings of the 8th Machine Learning for Healthcare Conference, PMLR 219:691-711, 2023.


Description and Assumptions

CASS stands for Cross-Architectural Self-Supervised Learning. Its primary aim is to work robustly with small batch sizes and limited computational resources, making self-supervised learning more accessible. We have tested CASS across label fractions (1%, 10%, and 100%), three medical-imaging modalities (brain MRI classification, autoimmune biopsy cell classification, and skin lesion classification), various dataset sizes (198, 7k, and 25k samples), and both multi-class and multi-label classification. The pipeline is also compatible with binary classification.
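As an illustration of how a label fraction could be drawn for downstream fine-tuning, the sketch below subsamples a labelled CSV per class. The helper and the "label" column name are assumptions for illustration, not part of the repository.

```python
# Minimal sketch (not from the repository): keep only a fraction of the
# labelled rows for downstream fine-tuning, sampled per class so the class
# balance is preserved. The column name "label" is assumed for illustration.
import pandas as pd

def sample_label_fraction(csv_path, fraction, seed=42):
    df = pd.read_csv(csv_path)
    return (
        df.groupby("label", group_keys=False)
          .apply(lambda g: g.sample(frac=fraction, random_state=seed))
          .reset_index(drop=True)
    )

# e.g. a 10% label fraction: train_10 = sample_label_fraction("train.csv", 0.10)
```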

For a detailed description of the datasets and tasks, refer to the CASS: Cross Architectural Self-Supervision for Medical Image Analysis paper.

We do not use any metadata; the pipeline operates purely on an image-label mapping. Labels are not required during self-supervised training but are required for the downstream supervised training.
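Because the pipeline relies only on an image-label mapping, a dataset wrapper along the following lines is all that is needed. This is a minimal sketch assuming a CSV with "image" (path) and "label" columns, not the exact class used in CASS.ipynb.

```python
# Minimal sketch of an image-label dataset assuming a CSV with "image" (path)
# and "label" columns; CASS.ipynb may organise this differently.
import pandas as pd
from PIL import Image
from torch.utils.data import Dataset

class ImageLabelDataset(Dataset):
    def __init__(self, csv_path, transform=None, use_labels=True):
        self.df = pd.read_csv(csv_path)
        self.transform = transform
        self.use_labels = use_labels  # False for the self-supervised stage

    def __len__(self):
        return len(self.df)

    def __getitem__(self, idx):
        row = self.df.iloc[idx]
        img = Image.open(row["image"]).convert("RGB")
        if self.transform is not None:
            img = self.transform(img)
        if self.use_labels:
            return img, row["label"]
        return img  # labels are only needed for downstream fine-tuning
```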

Analysis of complexity (time & space)

Compared to existing state-of-the-art methods, CASS is twice as computationally efficient. The size of CASS-trained models is the same as that of models trained with other state-of-the-art self-supervised methods.

Datasets

Dermatomyositis Autoimmunity Dataset

This is a private dataset [Van Buren et al.] [1] of autoimmunity biopsies with 198 samples. This is a multi-label classification task. For train/validation/test splits we follow an 80/10/10 split.

DERMOFIT Dataset

This is a paid dataset sourced from the University of Edinburgh; it contains 1,300 high-quality skin lesion samples. [2]

Brain MRI Classification

Courtesy of [Cheng, Jun] [3], this dataset contains 7k brain MRI samples with different tumour-related diseases. We perform multi-class classification in this context. Train/validation/test splits have already been provided by the dataset curator.

SIIM-ISIC 2019 Dataset

This is a collection of skin lesion images contributed to the [2019 SIIM-ISIC challenge] [4]. It contains 25k samples and is a multi-class classification problem. For train/validation/test splits we follow an 80/10/10 split.

Specification of dependencies


pip install torchmetrics

pip install torchcontrib

pip install pytorch-lightning

pip install timm

Note: The code has been tested with PyTorch version 1.11. From PyTorch version 1.12, the GELU function has an added parameter, which might not work with older versions of timm models and may raise errors.
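If in doubt, a quick runtime check such as the one sketched below (an assumption for convenience, not part of the repository) can flag the PyTorch/timm combination that the note above warns about.

```python
# Illustrative sanity check (not part of the repository): the code was tested
# with PyTorch 1.11, so warn when a newer torch is installed, since the GELU
# signature changed in 1.12 and may raise errors with older timm models.
import warnings
import torch
import timm
from packaging import version

torch_v = version.parse(torch.__version__.split("+")[0])
if torch_v >= version.parse("1.12"):
    warnings.warn(
        f"Detected torch {torch.__version__} with timm {timm.__version__}; "
        "the repository was tested with PyTorch 1.11, and the GELU change in "
        "1.12 may not work with older timm models."
    )
```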

We provide the list of dependencies in requirements.txt and an extensive list used on our development system in full_reqs.txt.

Pre-processing

We assume that a train.csv containing the image paths and corresponding labels is present for each dataset. Similarly, a test.csv containing the image paths and labels for the test set is also present. We split the dataset in a ratio of 70/10/20 for training, validation, and testing, except for brain tumor MRI classification, where the curators had already split the dataset into training and testing sets.
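For datasets without a predefined split, a 70/10/20 division can be produced along the following lines. This is an illustrative sketch that assumes "image" and "label" column names, not the exact code used in EDA.ipynb.

```python
# Illustrative 70/10/20 split of a labelled CSV (column names are assumed);
# brain tumor MRI classification keeps the curators' predefined split instead.
import pandas as pd
from sklearn.model_selection import train_test_split

df = pd.read_csv("train.csv")

# 70% train, 30% held out, stratified on the label column
train_df, holdout_df = train_test_split(
    df, test_size=0.30, stratify=df["label"], random_state=42
)
# split the held-out 30% into 10% validation and 20% test
val_df, test_df = train_test_split(
    holdout_df, test_size=2 / 3, stratify=holdout_df["label"], random_state=42
)

train_df.to_csv("train_split.csv", index=False)
val_df.to_csv("val_split.csv", index=False)
test_df.to_csv("test_split.csv", index=False)
```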

Furthermore, to create the class weights for the focal loss used during downstream fine-tuning, we use the normalize function in EDA.ipynb. Since it is a Jupyter notebook, executing the cells sequentially is recommended.
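The repository derives these weights in EDA.ipynb; the sketch below shows one equivalent way (inverse-frequency weights normalised to sum to one, with an assumed "label" column) to obtain them, rather than the exact recipe used there.

```python
# One illustrative way to build normalised class weights from the label
# distribution; EDA.ipynb is the reference implementation, so treat this as
# an equivalent sketch rather than the exact recipe.
import pandas as pd
import torch

df = pd.read_csv("train.csv")
counts = df["label"].value_counts().sort_index()

weights = 1.0 / counts.values          # rarer classes get larger weights
weights = weights / weights.sum()      # normalise so the weights sum to 1
class_weights = torch.tensor(weights, dtype=torch.float32)
print(class_weights)
```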

If the accompanying CSV is not present, it can be created with EDA.ipynb; we assume that the dataset is stored at /scratch/Dermofit/.
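If the images are stored as one sub-folder per class (an assumed layout used only for this illustration), the CSV can be generated roughly as follows; EDA.ipynb remains the reference implementation.

```python
# Illustrative CSV generation assuming one sub-folder per class under
# /scratch/Dermofit/; EDA.ipynb is the reference implementation.
from pathlib import Path
import pandas as pd

root = Path("/scratch/Dermofit")
rows = [
    {"image": str(p), "label": p.parent.name}
    for p in sorted(root.rglob("*"))
    if p.suffix.lower() in {".png", ".jpg", ".jpeg"}
]
pd.DataFrame(rows).to_csv("train.csv", index=False)
```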

Explore files

-- CASS/CASS.ipynb : contains the code for self-supervised training and downstream supervised fine-tuning. For supervised fine-tuning we use focal loss to address class imbalance, so the class-wise distribution of the dataset is required (a hedged focal-loss sketch follows this file list).

Running the notebooks sequentially should produce the required results. See TRAINING.md for training and fine-tuning instructions.

-- CASS/eval.ipynb contains the evaluation code for the trained and saved model.

-- CASS/Examples/MedMNIST/MNIST Get-started-CASS.ipynb contains the required preprocessing to get started with CASS downstream labelled training.

-- CASS/Examples/MedMNIST/CASS.ipynb : once we have the preprocessed data from MNIST Get-started-CASS.ipynb, we can start self-supervised training and supervised downstream fine-tuning.
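For reference, a commonly used form of focal loss with per-class weights looks like the sketch below. This is the generic multi-class formulation (Lin et al.'s focal loss with class weighting), not necessarily the exact variant implemented in CASS.ipynb.

```python
# Generic multi-class focal loss with per-class weights (a common formulation,
# not necessarily the exact variant implemented in CASS.ipynb).
import torch
import torch.nn as nn
import torch.nn.functional as F

class FocalLoss(nn.Module):
    def __init__(self, class_weights=None, gamma=2.0):
        super().__init__()
        self.class_weights = class_weights  # tensor of shape [num_classes] or None
        self.gamma = gamma

    def forward(self, logits, targets):
        # probability of the true class, from the unweighted cross-entropy
        pt = torch.exp(-F.cross_entropy(logits, targets, reduction="none"))
        # class-weighted cross-entropy per sample
        ce = F.cross_entropy(logits, targets, weight=self.class_weights,
                             reduction="none")
        # down-weight easy examples by (1 - pt) ** gamma
        return ((1.0 - pt) ** self.gamma * ce).mean()

# Usage sketch (class_weights as computed in the pre-processing step):
# criterion = FocalLoss(class_weights=class_weights.to(device), gamma=2.0)
# loss = criterion(model(images), labels)
```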

Updates

July 24, 2022

  • Added example and support for MedMNIST datasets. This includes 12 2D datasets of different sizes (ranging from 780 to 236,386 samples) and modalities.

June 23, 2022

  • Initial Code release for CASS

Citation

If you find this repository helpful, please consider citing:

@InProceedings{pmlr-v219-singh23a,
  title = 	 {Efficient Representation Learning for Healthcare with Cross-Architectural Self-Supervision},
  author =       {Singh, Pranav and Cirrone, Jacopo},
  booktitle = 	 {Proceedings of the 8th Machine Learning for Healthcare Conference},
  pages = 	 {691--711},
  year = 	 {2023},
  editor = 	 {Deshpande, Kaivalya and Fiterau, Madalina and Joshi, Shalmali and Lipton, Zachary and Ranganath, Rajesh and Urteaga, Iñigo and Yeung, Serene},
  volume = 	 {219},
  series = 	 {Proceedings of Machine Learning Research},
  month = 	 {11--12 Aug},
  publisher =    {PMLR},
  pdf = 	 {https://proceedings.mlr.press/v219/singh23a/singh23a.pdf},
  url = 	 {https://proceedings.mlr.press/v219/singh23a.html}
}

References

  • [1]: https://www.sciencedirect.com/science/article/abs/pii/S0022175922000205

  • [2]: https://licensing.edinburgh-innovations.ed.ac.uk/product/dermofit-image-library

  • [3]: https://figshare.com/articles/dataset/brain_tumor_dataset/1512427 & https://www.hindawi.com/journals/cin/2022/3236305/

  • [4]:https://challenge.isic-archive.com/data/#2019
