SAMAug: Augmenting Medical Images with Segmentation Foundation Model

Oct. 12th 2023 Update:

SAMAug was presented at the MICCAI 2023 1st International Workshop on Foundation Models for General Medical AI (MedAGI). See https://medagi.github.io/#/program for more details and other excellent papers.

Oct. 5th 2023 Update:

Uploaded Python scripts (model training and testing with SAMAug) for the polyp segmentation experiments.

We introduce SAMAug, an efficient method that uses a segmentation foundation model (SAM) to augment medical images and thereby improve medical image segmentation. The augmented images are then used to train and test a task-specific medical image segmentation model (e.g., a U-Net model for cell segmentation). SAMAug requires no fine-tuning of the foundation model. Please see below for an overview of the proposed method.
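
For a concrete picture of the augmentation step before reading SAMAug.py, below is a minimal, hedged sketch: run SAM's automatic mask generator on a raw image, collapse the resulting masks into a segmentation prior map and a boundary prior map, and stack them onto the raw image as extra input channels. The helper name build_samaug_input and the exact channel/fusion layout are illustrative assumptions; SAMAug.py in this repository is the authoritative implementation.

```python
# Illustrative sketch of SAM-based input augmentation (not the official SAMAug.py).
# Requires the `segment-anything` package and a downloaded SAM checkpoint.
import numpy as np
from segment_anything import sam_model_registry, SamAutomaticMaskGenerator
from skimage.segmentation import find_boundaries

def build_samaug_input(image_rgb: np.ndarray,
                       sam_checkpoint: str = "sam_vit_h_4b8939.pth") -> np.ndarray:
    """Return the raw image with SAM-derived prior maps stacked as extra channels.

    image_rgb: HxWx3 uint8 RGB array.
    Returns an HxWx5 float32 array in [0, 1]: 3 raw channels, a segmentation
    prior map, and a boundary prior map (the channel layout is an assumption).
    """
    # Loading SAM here keeps the sketch self-contained; in practice, reuse the
    # mask generator across images, since model loading is expensive.
    sam = sam_model_registry["vit_h"](checkpoint=sam_checkpoint)
    mask_generator = SamAutomaticMaskGenerator(sam)
    masks = mask_generator.generate(image_rgb)  # list of dicts with a boolean 'segmentation'

    h, w = image_rgb.shape[:2]
    seg_prior = np.zeros((h, w), dtype=np.float32)
    boundary_prior = np.zeros((h, w), dtype=np.float32)
    for m in masks:
        seg = m["segmentation"]
        # Accumulate SAM segments into a segmentation prior map ...
        seg_prior = np.maximum(seg_prior, seg.astype(np.float32))
        # ... and mark their boundaries in a separate boundary prior map.
        boundary = find_boundaries(seg, mode="inner").astype(np.float32)
        boundary_prior = np.maximum(boundary_prior, boundary)

    return np.concatenate(
        [image_rgb.astype(np.float32) / 255.0,
         seg_prior[..., None],
         boundary_prior[..., None]],
        axis=-1,
    )
```

The downstream segmentation model then takes this augmented input in place of the raw image; SAM itself is used as-is, with no fine-tuning.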

Examples of the SAM-augmented images:

More technical details can be found in this technical report:

Yizhe Zhang, Tao Zhou, Peixian Liang, Danny Z. Chen, Input Augmentation with SAM: Boosting Medical Image Segmentation with Segmentation Foundation Model, arXiv preprint arXiv:2304.11332.

Link: https://arxiv.org/abs/2304.11332

Below we highlight some experimental results.

Experiments and Results

Polyp Segmentation in Endoscopic Images (https://github.com/DengPingFan/PraNet):

CVC-ClinicDB:

| Model      | SAMAug | meanDic | meanIoU | Sm   |
|------------|--------|---------|---------|------|
| PraNet [1] | ✗      | 85.8    | 80.0    | 90.6 |
| PraNet [1] | ✓      | 89.1    | 83.9    | 93.1 |

CVC-300:

| Model      | SAMAug | meanDic | meanIoU | Sm   |
|------------|--------|---------|---------|------|
| PraNet [1] | ✗      | 87.7    | 80.2    | 92.6 |
| PraNet [1] | ✓      | 87.9    | 80.6    | 92.8 |

CVC-ColonDB:

| Model      | SAMAug | meanDic | meanIoU | Sm   |
|------------|--------|---------|---------|------|
| PraNet [1] | ✗      | 67.3    | 59.8    | 79.4 |
| PraNet [1] | ✓      | 70.6    | 63.2    | 81.9 |

ETIS-LaribPolypDB:

| Model      | SAMAug | meanDic | meanIoU | Sm   |
|------------|--------|---------|---------|------|
| PraNet [1] | ✗      | 57.6    | 50.8    | 76.1 |
| PraNet [1] | ✓      | 64.0    | 57.2    | 79.4 |

Kvasir:

| Model      | SAMAug | meanDic | meanIoU | Sm   |
|------------|--------|---------|---------|------|
| PraNet [1] | ✗      | 85.4    | 78.8    | 88.0 |
| PraNet [1] | ✓      | 89.7    | 83.7    | 91.2 |

Cell Segmentation in Histology Images:

MoNuSeg (https://monuseg.grand-challenge.org/):

| Model             | SAMAug | AJI   | Pixel F-score |
|-------------------|--------|-------|---------------|
| U-Net [2]         | ✗      | 58.36 | 75.70         |
| U-Net [2]         | ✓      | 64.30 | 82.56         |
| P-Net [3]         | ✗      | 59.46 | 77.09         |
| P-Net [3]         | ✓      | 63.98 | 82.56         |
| Attention Net [4] | ✗      | 58.76 | 75.43         |
| Attention Net [4] | ✓      | 63.15 | 81.49         |

[1] Fan, Deng-Ping, et al. "PraNet: Parallel reverse attention network for polyp segmentation." MICCAI, 2020.

[2] Ronneberger, Olaf, Philipp Fischer, and Thomas Brox. "U-Net: Convolutional networks for biomedical image segmentation." MICCAI, 2015.

[3] Wang, Guotai, et al. "DeepIGeoS: A deep interactive geodesic framework for medical image segmentation." IEEE TPAMI, 2018.

[4] Oktay, Ozan, et al. "Attention U-Net: Learning where to look for the pancreas." Medical Imaging with Deep Learning (MIDL), 2018.

You can refer to the script SAMAug.py for generating SAM-augmented images for your own medical image data; a usage sketch follows below.
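
As a hedged usage sketch (the data layout and the build_samaug_input helper from the snippet above are assumptions), offline augmentation of a dataset might look like this; the saved arrays can then replace the raw images when training or testing your task-specific model.

```python
# Illustrative offline augmentation loop (file paths and helper name are assumptions).
import glob
import numpy as np
from skimage.io import imread

for path in glob.glob("data/images/*.png"):
    image = imread(path)                   # HxWx3 uint8 RGB image
    augmented = build_samaug_input(image)  # raw channels + SAM prior maps
    np.save(path.replace(".png", "_samaug.npy"), augmented)
```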

Questions and comments are welcome! We believe there is room for further improvement. Please consider sharing your experience using SAMAug. Thank you.
