GCMAE

Global Contrast-Masked Autoencoders Are Powerful Pathological Representation Learners

[Paper] [BibTeX]

Hao Quan*, Xingyu Li*, Weixing Chen, Qun Bai, Mingchen Zou, Ruijie Yang, Tingting Zheng, Ruiqun Qi, Xinghua Gao, Xiaoyu Cui (*Equal Contribution)

📢 News

July 2024

  • 🎉🎉🎉 Article Acceptance: Our GCMAE paper has been accepted for publication in Pattern Recognition.

May 2022

  • Initial Model and Code Release: The initial release of the GCMAE model and its code is now available.

Abstract

Based on the digital whole-slide scanning technique, artificial intelligence algorithms represented by deep learning have achieved remarkable results in computational pathology. Compared with other medical images such as Computed Tomography (CT) or Magnetic Resonance Imaging (MRI), pathological images are far more difficult to annotate, so datasets suitable for supervised learning are extremely scarce. In this study, we propose a self-supervised learning (SSL) model, Global Contrast-Masked Autoencoders (GCMAE), which can represent both global and local domain-specific features of whole slide images (WSIs) and has excellent cross-dataset transfer ability. The Camelyon16 and NCTCRC datasets are used to evaluate the performance of our model. On transfer learning tasks across datasets, GCMAE achieves better linear classification accuracy than MAE, reaching 81.10% and 89.22%, respectively. Our method outperforms the previous state-of-the-art algorithm and even surpasses supervised learning (a 3.86% improvement on the NCTCRC dataset).
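
For intuition, the sketch below shows how a masked patch-reconstruction objective (MAE-style) can be combined with a global contrastive (InfoNCE-style) objective over image-level embeddings, which is the general idea behind GCMAE. It is a minimal illustration, not the code in this repo: the function name `gcmae_style_loss`, the tensor shapes, and the equal weighting of the two terms are assumptions.

```python
import torch
import torch.nn.functional as F


def gcmae_style_loss(pred, target, mask, z_a, z_b, temperature=0.07):
    """Illustrative loss combining masked patch reconstruction (MAE-style)
    with a global InfoNCE contrast over image-level embeddings.

    Shapes and names are assumptions for illustration, not this repo's API:
      pred, target : (B, N, D)  per-patch predictions / pixel targets
      mask         : (B, N)     1 for masked patches, 0 for visible ones
      z_a, z_b     : (B, C)     global embeddings of two views of each image
    """
    # Reconstruction term: mean squared error, averaged over masked patches only.
    per_patch = ((pred - target) ** 2).mean(dim=-1)            # (B, N)
    rec_loss = (per_patch * mask).sum() / mask.sum().clamp(min=1)

    # Global contrast term: embeddings of the same image are positives,
    # all other images in the batch are negatives (standard InfoNCE).
    q = F.normalize(z_a, dim=-1)
    k = F.normalize(z_b, dim=-1)
    logits = q @ k.t() / temperature                           # (B, B)
    labels = torch.arange(q.size(0), device=q.device)
    contrast_loss = F.cross_entropy(logits, labels)

    # Equal weighting of the two terms is an assumption here; balancing the
    # reconstruction and contrastive objectives is part of the method's design.
    return rec_loss + contrast_loss
```

In practice the contrastive branch of such methods often uses a momentum encoder or a memory bank of negatives rather than in-batch negatives; the form above is only the simplest variant, shown to convey the idea.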

Installation

This repo is a modification of the MAE repo. Installation and preparation follow that repo.
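
As a quick sanity check before running anything, the snippet below assumes the same dependency pins as the upstream MAE repo (e.g. timm==0.3.2 alongside a compatible PyTorch); verify the exact versions against that repo, since they are assumptions here.

```python
# Hypothetical environment check; the expected dependency versions follow the
# upstream MAE repo's pins (e.g. timm==0.3.2) and should be verified there.
import torch
import timm

print("torch:", torch.__version__)
print("timm:", timm.__version__)
print("CUDA available:", torch.cuda.is_available())
```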

Usage

Dataset

License

Distributed under the CC-BY-NC 4.0 License. See LICENSE for more information.

Citation

If you find GCMAE useful for your research and applications, please cite using this BibTeX:

@article{quan2024global,
  title={Global Contrast-Masked Autoencoders Are Powerful Pathological Representation Learners},
  author={Quan, Hao and Li, Xingyu and Chen, Weixing and Bai, Qun and Zou, Mingchen and Yang, Ruijie and Zheng, Tingting and Qi, Ruiqun and Gao, Xinghua and Cui, Xiaoyu},
  journal={Pattern Recognition},
  pages={110745},
  year={2024},
  publisher={Elsevier}
}
