
Intermediate Layers Matter - SSL

The official repository for the paper "Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning" [pdf] (NeurIPS 2021).


Summary of the paper

  1. Bringing the intermediate layers’ representations of two augmented versions of an image closer together improves the momentum contrastive (MoCo) method.
  2. We show this improvement for two loss functions applied between the intermediate-layer representations, the mean squared error (MSE) and Barlow Twins’ loss (a minimal sketch of both follows this list), and for three datasets: NIH Chest X-rays, Breast Cancer Histopathology, and Diabetic Retinopathy.
  3. The improved MoCo yields large gains (~5%) in performance, especially in the low-label regime (when only 1% of the data is labeled).
  4. The improved MoCo learns meaningful features earlier in the model and also shows high feature reuse.
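
The core addition is a loss term between intermediate-layer representations of the two augmented views, computed on top of the standard MoCo objective. Below is a minimal PyTorch sketch of the two losses named above; the function names, the feature lists `feats_q`/`feats_k`, and the weight `lam` are illustrative assumptions, not the repository's actual API.

```python
import torch
import torch.nn.functional as F

def intermediate_mse_loss(feats_q, feats_k):
    # feats_q / feats_k: lists of intermediate feature maps from the query
    # encoder and the momentum (key) encoder for two augmentations of the
    # same image (hypothetical names; one tensor per chosen layer).
    loss = 0.0
    for fq, fk in zip(feats_q, feats_k):
        # gradients flow only through the query branch, as in MoCo
        loss = loss + F.mse_loss(fq, fk.detach())
    return loss / len(feats_q)

def barlow_twins_loss(z_a, z_b, lambd=5e-3):
    # z_a, z_b: (N, D) flattened intermediate representations of the two views
    n = z_a.shape[0]
    # standardize each feature dimension across the batch
    z_a = (z_a - z_a.mean(0)) / (z_a.std(0) + 1e-6)
    z_b = (z_b - z_b.mean(0)) / (z_b.std(0) + 1e-6)
    c = (z_a.T @ z_b) / n  # D x D cross-correlation matrix
    on_diag = (torch.diagonal(c) - 1).pow(2).sum()               # push diagonal toward 1
    off_diag = (c - torch.diag(torch.diagonal(c))).pow(2).sum()  # decorrelate the rest
    return on_diag + lambd * off_diag

# Hypothetical training step: add the intermediate-layer term, weighted by
# a hyperparameter `lam`, to the usual MoCo InfoNCE loss:
# total_loss = moco_infonce_loss + lam * intermediate_mse_loss(feats_q, feats_k)
```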

Datasets


The data can be downloaded from kaggle.com:

  • NIH Chest X-ray dataset: https://www.kaggle.com/nih-chest-xrays/data
  • Breast cancer histopathology dataset: https://www.kaggle.com/paultimothymooney/breast-histopathology-images
  • Diabetic Retinopathy dataset: https://www.kaggle.com/c/diabetic-retinopathy-detection/data

Code for each dataset

Please read the README for each dataset to execute the code and reproduce the results.

License and Contributing

  • This README is formatted based on paperswithcode.
  • Feel free to post issues via GitHub.

Reference

For technical details and full experimental results, please check our paper.

@article{kaku2021intermediate,
  title={Intermediate Layers Matter in Momentum Contrastive Self Supervised Learning},
  author={Kaku, Aakash and Upadhya, Sahana and Razavian, Narges},
  journal={Advances in Neural Information Processing Systems},
  volume={34},
  year={2021}
}

Contact

Please contact ark576@nyu.edu if you have any questions about the code.