U-SAM

This repo holds the PyTorch implementation of U-SAM:

Tuning Vision Foundation Models for Rectal Cancer Segmentation from CT Scans: Development and Validation of U-SAM

Model

  • We provide our implementation of U-SAM.
  • The dataloaders for CARE and WORD are also available in the dataset folder (a usage sketch follows below).
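
As a quick orientation, here is a minimal sketch of wrapping one of the provided dataloaders in a standard PyTorch DataLoader. The class name RectumDataset, its import path, and its constructor arguments are illustrative assumptions rather than the repo's exact API; see the dataset folder for the real definitions.

# Hypothetical usage sketch; RectumDataset and its arguments are placeholders.
from torch.utils.data import DataLoader
from dataset.rectum import RectumDataset  # assumed import path

train_set = RectumDataset(root="data/CARE", split="train")  # assumed arguments
train_loader = DataLoader(train_set, batch_size=24, shuffle=True, num_workers=4)

for images, masks in train_loader:
    ...  # CT slices and their segmentation masks, ready for training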

Datasets

The following figures show sample cases from the CARE dataset.

[Figures: CARE example images]

We conducted our experiments on CARE and WORD; public links to both datasets are provided below.

CARE: [paper] [dataset]

WORD: [paper] [dataset]

Get Started

Main Requirements

  • python==3.9.12

  • torch==1.11.0

  • torchvision==0.12.0

  • numpy==1.21.5

  • matplotlib==3.5.2
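
The packages can be installed with pip, for example (using the pinned versions above; the Python version itself comes from your interpreter or conda environment):

pip install torch==1.11.0 torchvision==0.12.0 numpy==1.21.5 matplotlib==3.5.2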

Pre-trained Weights

We use SAM-ViT-B in our model; the pre-trained weights should be placed in the weight folder.

Pre-trained weights are available here, or you can directly download them via the following link.
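
If you work with the official segment_anything package (from the Segment Anything repo acknowledged below), the checkpoint can be loaded as in the sketch here. The filename assumes the official SAM-ViT-B release; how u-sam.py actually consumes the weights may differ.

# Sketch: loading SAM-ViT-B with the official segment_anything API.
from segment_anything import sam_model_registry

# sam_vit_b_01ec64.pth is the official SAM-ViT-B checkpoint name; place it under weight/.
sam = sam_model_registry["vit_b"](checkpoint="weight/sam_vit_b_01ec64.pth")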

Training

Train for 100 epochs on CARE on a single GPU:

python u-sam.py --epochs 100 --batch_size 24 --dataset rectum

Train for 100 epochs on CARE with multiple GPUs via DDP (8 GPUs in this example):

CUDA_LAUNCH_BLOCKING=1 PYTHONUNBUFFERED=1 CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 \
python -m torch.distributed.launch \
--master_port 29666 \
--nproc_per_node=8 \
--use_env u-sam.py \
--num_workers 4 \
--epochs 100 \
--batch_size 24 \
--dataset rectum
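
Since the launcher is invoked with --use_env, each spawned process is expected to read its rank from environment variables. The following is a minimal sketch of the per-process DDP setup a script like u-sam.py typically performs under these flags; build_model() is a hypothetical placeholder, and this is illustrative rather than the repo's exact code.

# Sketch of DDP initialization under torch.distributed.launch --use_env.
import os
import torch
import torch.distributed as dist
from torch.nn.parallel import DistributedDataParallel as DDP

local_rank = int(os.environ["LOCAL_RANK"])  # set by the launcher when --use_env is passed
dist.init_process_group(backend="nccl")     # rank/world size are read from the environment
torch.cuda.set_device(local_rank)

model = build_model().cuda(local_rank)      # build_model() is a hypothetical placeholder
model = DDP(model, device_ids=[local_rank])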

For convenience, you can use our default bash file:

bash train_sam.sh

Evaluation

Evaluate on CARE with a single GPU:

python u-sam.py --dataset rectum --eval --resume chkpt/best.pth

The model checkpoint for evaluation should be specified via --resume.
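
A checkpoint passed via --resume is typically restored along these lines; the "model" key is an assumption, since the exact layout of chkpt/best.pth depends on how u-sam.py saves it.

# Sketch of restoring a checkpoint for evaluation; `model` is an already-built U-SAM instance.
import torch

checkpoint = torch.load("chkpt/best.pth", map_location="cpu")
# Fall back to treating the file as a raw state_dict if there is no "model" key (assumption).
state_dict = checkpoint.get("model", checkpoint) if isinstance(checkpoint, dict) else checkpoint
model.load_state_dict(state_dict)
model.eval()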

Feedback and Contact

For further questions, please feel free to contact Hantao Zhang.

Acknowledgement

Our code is based on Segment Anything and SAMed. We thank the authors for releasing their code.

Citation

If this code is helpful to your research, please cite:

@article{zhang2023care,
  title={CARE: A Large Scale CT Image Dataset and Clinical Applicable Benchmark Model for Rectal Cancer Segmentation},
  author={Zhang, Hantao and Guo, Weidong and Qiu, Chenyang and Wan, Shouhong and Zou, Bingbing and Wang, Wanqin and Jin, Peiquan},
  journal={arXiv preprint arXiv:2308.08283},
  year={2023}
}
