
MatchSeg: Towards Better Segmentation via Reference Image Matching

This is the official PyTorch implementation of our paper "MatchSeg: Towards Better Segmentation via Reference Image Matching".

Abstract

Automated medical image segmentation methods heavily rely on large annotated datasets, which are costly and time-consuming to acquire. Inspired by few-shot learning, we introduce MatchSeg, a novel framework that enhances medical image segmentation through strategic reference image matching.
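As a rough illustration of the matching idea (not the paper's exact method — the embedding source and top-k value here are assumptions), reference selection can be sketched as cosine-similarity nearest neighbours over image embeddings, e.g. produced by a CLIP image encoder:

```python
import numpy as np

def select_references(query_emb, support_embs, k=3):
    """Illustrative sketch: pick the k support images whose embeddings
    are most cosine-similar to the query embedding.

    query_emb:    (d,) embedding of the query image (e.g. from CLIP)
    support_embs: (n, d) embeddings of the candidate reference images
    Returns the indices of the k best-matching support images.
    """
    # L2-normalize so the dot product equals cosine similarity
    q = query_emb / np.linalg.norm(query_emb)
    s = support_embs / np.linalg.norm(support_embs, axis=1, keepdims=True)
    sims = s @ q
    # Sort by descending similarity and keep the top k
    return np.argsort(-sims)[:k]
```

The selected references would then be fed to the segmentation network alongside the query image; see the repository code for the actual selection logic.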

Installation

Requirements

  • Linux, CUDA >= 11.0, PyTorch >= 1.8.1 (results were tested with PyTorch 1.10.1)
  • Other requirements
    pip install open-clip-torch
    pip install scikit-learn
    pip install opencv-python
    pip install albumentations
    pip install matplotlib
    pip install tensorboard
    # pip uninstall setuptools
    # pip install setuptools==59.5.0
    pip install einops
    pip install pydantic

Prepare datasets for MatchSeg

We train our model on the HAM10000, GlaS, Breast Ultrasound Dataset B (BUS), and BUSI datasets.

We provide the datasets in a preprocessed, training-ready form; download them from this dataset link (Password: yp9e) and save them to $ROOT/datasets.

  • HAM10000 Dataset Preparation
    • The HAM10000 dataset is released without separating its seven categories: actinic keratoses and intraepithelial carcinoma (AKIEC), basal cell carcinoma (BCC), benign keratosis-like lesions (BKL), dermatofibroma (DF), melanoma (MEL), melanocytic nevi (NV), and vascular lesions (VASC).
    • Run the following script to split the images and masks into per-category folders:
      python datasets/HAM10000_spilt.py
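A minimal sketch of what such a split might look like, assuming the standard HAM10000 metadata CSV with `image_id` and `dx` columns (the repo's `datasets/HAM10000_spilt.py` is authoritative; file naming below is an assumption based on the public release):

```python
import csv
import shutil
from pathlib import Path

# The seven diagnosis codes used in the HAM10000 metadata `dx` column
CATEGORIES = {"akiec", "bcc", "bkl", "df", "mel", "nv", "vasc"}

def split_by_category(metadata_csv, image_dir, mask_dir, out_root):
    """Copy each image/mask pair into <out_root>/<DX>/images|masks."""
    image_dir, mask_dir, out_root = Path(image_dir), Path(mask_dir), Path(out_root)
    with open(metadata_csv, newline="") as f:
        for row in csv.DictReader(f):
            dx = row["dx"].lower()
            if dx not in CATEGORIES:
                continue
            dst = out_root / dx.upper()
            (dst / "images").mkdir(parents=True, exist_ok=True)
            (dst / "masks").mkdir(parents=True, exist_ok=True)
            # Assumed naming: JPEG images, PNG segmentation masks
            img = image_dir / f"{row['image_id']}.jpg"
            msk = mask_dir / f"{row['image_id']}_segmentation.png"
            if img.exists():
                shutil.copy(img, dst / "images" / img.name)
            if msk.exists():
                shutil.copy(msk, dst / "masks" / msk.name)
```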

After processing, the datasets should be organized as follows:

├─bus
│  ├─images
│  └─masks
├─busi
│  ├─images
│  └─masks
├─glas
│  ├─images
│  └─masks
└─HAM10000
    ├─AKIEC
    │  ├─images
    │  └─masks
    ├─BCC
    │  ├─images
    │  └─masks
    ├─BKL
    │  ├─images
    │  └─masks
    ├─DF
    │  ├─images
    │  └─masks
    ├─MEL
    │  ├─images
    │  └─masks
    ├─NV
    │  ├─images
    │  └─masks
    └─VASC
        ├─images
        └─masks
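An illustrative sanity check that the prepared `$ROOT/datasets` folder matches this tree (directory names are taken from the listing above; this helper is not part of the repo):

```python
from pathlib import Path

FLAT = ["bus", "busi", "glas"]
HAM_CLASSES = ["AKIEC", "BCC", "BKL", "DF", "MEL", "NV", "VASC"]

def check_layout(root):
    """Return the list of expected dataset directories that are missing."""
    expected = [Path(root, d, sub) for d in FLAT for sub in ("images", "masks")]
    expected += [
        Path(root, "HAM10000", c, sub)
        for c in HAM_CLASSES
        for sub in ("images", "masks")
    ]
    return [str(p) for p in expected if not p.is_dir()]
```

An empty return value means the layout matches; otherwise the missing paths are listed.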

Training

We train on a single RTX 3090/4090 GPU:

bash train_single_domain.sh
bash train_cross_domain.sh

Results

Quantitative results on BUSI, BUS, and GlaS datasets.

| Model | Params (M) | BUSI DSC% | BUSI IoU% | BUS DSC% | BUS IoU% | GlaS DSC% | GlaS IoU% |
|---|---|---|---|---|---|---|---|
| UNet | 34.53 | 79.29 | 71.14 | 89.08 | 81.73 | 89.58 | 82.16 |
| AttentionUNet | 34.88 | 79.35 | 71.28 | 88.12 | 80.45 | 90.00 | 82.80 |
| ACC-UNet | 16.77 | 74.59 | 65.39 | 71.16 | 58.98 | 77.82 | 64.88 |
| UNext | 1.47 | 69.50 | 58.80 | 70.74 | 58.68 | 80.96 | 69.08 |
| UCTransNet | 66.34 | 78.17 | 69.02 | 87.96 | 79.86 | 88.53 | 80.15 |
| PA-Net | 8.94 | 77.02 | 67.63 | 81.19 | 70.45 | 84.89 | 74.54 |
| ASG-Net | 34.06 | 50.60 | 42.40 | 81.14 | 70.46 | 73.51 | 59.28 |
| UniverSeg | 1.18 | 52.14 | 40.21 | 59.85 | 48.42 | 60.79 | 44.79 |
| MatchSeg (ours) | 7.79 | **81.03** | **72.56** | **90.58** | **83.23** | **90.92** | **84.04** |

Cross-domain segmentation results on the BUSI and BUS datasets. A → B indicates training on A and testing on B.

| Models | Params (M) | BUSI → BUS DSC% | BUSI → BUS IoU% | BUS → BUSI DSC% | BUS → BUSI IoU% |
|---|---|---|---|---|---|
| UNet | 34.53 | 74.72 | 67.16 | 56.85 | 49.36 |
| AttentionUNet | 34.88 | 76.18 | 67.66 | 56.85 | 49.32 |
| ACC-UNet | 16.77 | 61.62 | 53.55 | 42.81 | 33.79 |
| UNext | 1.47 | 56.37 | 47.30 | 39.93 | 30.92 |
| UCTransNet | 66.34 | 67.52 | 58.21 | 54.91 | 47.16 |
| PA-Net | 8.94 | 70.99 | 61.80 | 49.15 | 40.28 |
| ASG-Net | 34.06 | 47.23 | 34.96 | 49.70 | 40.44 |
| UniverSeg | 1.18 | 51.75 | 40.91 | 35.19 | 26.05 |
| MatchSeg (ours) | 7.79 | **78.07** | **68.86** | **59.27** | **51.53** |

Cross-domain segmentation results on the HAM10000 dataset. Note that all models were trained on the NV lesion type.

| Models | Params (M) | AKIEC | BCC | BKL | DF | MEL | VASC | Average |
|---|---|---|---|---|---|---|---|---|
| UNet | 34.53 | 75.16 | 68.07 | 84.67 | 83.50 | 91.20 | 81.99 | 83.19 |
| AttentionUNet | 34.88 | 76.72 | 69.61 | 85.04 | 83.00 | 92.14 | **83.71** | 84.08 |
| ACC-UNet | 16.77 | 72.77 | 64.71 | 84.15 | 80.39 | 90.45 | 82.91 | 82.96 |
| UNext | 1.47 | 79.75 | 70.83 | 85.73 | 83.17 | 91.10 | 79.39 | 84.26 |
| UCTransNet | 66.34 | 80.31 | 71.91 | 86.18 | **83.82** | **92.23** | 82.72 | 85.19 |
| PA-Net | 8.94 | 72.22 | 68.02 | 83.76 | 80.70 | 89.34 | 80.53 | 81.83 |
| ASG-Net | 34.06 | 51.39 | 40.26 | 45.33 | 48.45 | 50.27 | 31.55 | 46.42 |
| UniverSeg | 1.18 | 74.36 | 66.78 | 77.59 | 77.37 | 82.99 | 74.39 | 77.17 |
| MatchSeg (ours) | 7.79 | **80.82** | **73.53** | **86.44** | 82.97 | 91.64 | 82.28 | **85.33** |

Note: Best is bold.
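For reference, the DSC and IoU numbers reported above can be computed from binary masks as follows (a minimal NumPy sketch, not the repo's evaluation code):

```python
import numpy as np

def dice_and_iou(pred, target, eps=1e-7):
    """Dice similarity coefficient (DSC) and IoU for binary masks.

    pred, target: arrays of 0/1 values of the same shape.
    eps avoids division by zero when both masks are empty.
    """
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    union = np.logical_or(pred, target).sum()
    dice = (2 * inter + eps) / (pred.sum() + target.sum() + eps)
    iou = (inter + eps) / (union + eps)
    return dice, iou
```

Multiply by 100 to get the DSC% and IoU% values used in the tables.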

The improved segmentation performance is also reflected in the example predictions as shown below.

[Figures: example predictions for single-domain, NV, and cross-domain settings]

To do

  • Environment Settings
  • Dataset Links
  • CLIP-guided Selection
  • Training Pipeline
  • Citation Link

Citation

If this code is helpful for your study, please cite:

Acknowledgement

open_clip, UniverSeg
