[MICCAI 2024 Spotlight, Early Acceptance] PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts

PRISM

PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts

Placenta application:

PRISM Lite: A lightweight model for interactive 3D placenta segmentation in ultrasound

Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound Images (arXiv version)

News

[07/07/24] Check out the performance of the lightweight version of PRISM on placenta segmentation in ultrasound images.

[05/13/24] Our work was early accepted by MICCAI 2024.

[03/07/24] The pretrained PRISM models and preprocessed datasets have been uploaded.

TODO

Demo (Gradio)

What is PRISM?

PRISM is a robust model for interactive segmentation in medical imaging. We strive for human-level performance: a human-in-the-loop interactive segmentation model with prompts should gradually refine its outcomes until they closely match inter-rater variability.
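The human-in-the-loop idea above can be sketched as a correction loop: at each iteration, a new click is sampled from the current error region and fed back to the model, which returns a refined mask. This is a minimal illustrative sketch; `segment` below is a hypothetical stand-in for PRISM's forward pass, not the repository's actual API.

```python
import numpy as np

def sample_click(error_mask, rng):
    """Pick one voxel from the current error region (false positives/negatives)."""
    idx = np.argwhere(error_mask)
    return tuple(idx[rng.integers(len(idx))]) if len(idx) else None

def refine(segment, gt, shape, iters=5, seed=0):
    """Iteratively add corrective clicks until the prediction matches the ground truth."""
    rng = np.random.default_rng(seed)
    clicks, pred = [], np.zeros(shape, dtype=bool)
    for _ in range(iters):
        error = pred != gt
        click = sample_click(error, rng)
        if click is None:        # no remaining error: stop refining
            break
        clicks.append(click)
        pred = segment(clicks)   # hypothetical model call: prompts -> binary mask
    return pred, clicks
```

In practice the clicks are supplied by a human rater (or simulated from the ground truth during training), and the loop ends when the segmentation is judged acceptable.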

PRISM tumor segmentation examples

Briefly, PRISM achieves tumor segmentation with mean Dice scores of 93.79 (colon), 94.48 (pancreas), 94.18 (liver), and 96.58 (kidney).

Iterative correction for colon tumor
Iterative correction for multiple tumors
Qualitative results compared with other methods

The quantitative results can be viewed in our paper.

Datasets

The datasets contain anatomical differences among individuals as well as ambiguous boundaries.

Models

| Colon | Pancreas | Liver | Kidney |
| :---: | :---: | :---: | :---: |
| Download | Download | Download | Download |

Get Started

Installation

conda create -n prism python=3.9
conda activate prism
sudo apt install git # or use your OS package manager
pip install torch==1.12.1+cu113 torchvision==0.13.1+cu113 --extra-index-url https://download.pytorch.org/whl/cu113 # install pytorch
pip install git+https://github.com/facebookresearch/segment-anything.git # install segment anything packages
pip install git+https://github.com/deepmind/surface-distance.git # for normalized surface dice (NSD) evaluation
pip install -r requirements.txt
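After installing, you can quickly verify that the pinned dependencies are importable. This is a minimal sketch; the package names below are the standard import names for the distributions installed above.

```python
import importlib.util

def check_env(packages=("torch", "torchvision", "segment_anything", "surface_distance")):
    """Return a dict mapping each expected package to whether it is importable."""
    return {pkg: importlib.util.find_spec(pkg) is not None for pkg in packages}

if __name__ == "__main__":
    for pkg, ok in check_env().items():
        print(f"{pkg}: {'found' if ok else 'MISSING'}")
```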

Train

python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine

Add `--use_scribble` and `--efficient_scribble` if you want to train with scribbles.

Train (Distributed Data Parallel)

The only difference from the Train command above is the `--ddp` flag.

python train.py --data colon --data_dir your_data_directory --save_name your_save_name --multiple_outputs --dynamic --use_box --refine --ddp

Test

Put the downloaded pretrained model under the implementation directory.

python test.py --data colon --data_dir your_data_directory --split test --checkpoint best --save_name prism_pretrain --num_clicks 1 --iter_nums 11 --multiple_outputs --use_box --use_scribble --efficient_scribble --refine --refine_test
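For reference, the mean Dice reported above is the standard overlap metric between the predicted and ground-truth masks; a minimal NumPy version looks like the sketch below. (This is illustrative only; the repository's evaluation also computes normalized surface Dice via the surface-distance package installed above.)

```python
import numpy as np

def dice(pred, gt, eps=1e-8):
    """Dice coefficient between two binary masks, in [0, 1]."""
    pred, gt = pred.astype(bool), gt.astype(bool)
    inter = np.logical_and(pred, gt).sum()
    return 2.0 * inter / (pred.sum() + gt.sum() + eps)
```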

FAQ

If you get the error `AttributeError: module 'cv2' has no attribute 'ximgproc'`, install the `opencv-contrib-python` package (the `ximgproc` module ships with the contrib build).

DDP mode may yield a lower Dice score; training for more epochs may resolve this.

In our experience, combining `trainer` and `trainer_basic` speeds up training.

Training the model without the refine module (as reported in the paper) yields better accuracy than training with the refine module but disabling it at inference.

License

The model is licensed under the Apache 2.0 license.

Acknowledgements

Thanks for the code from: SAM, SAM-Med3D, ProMISe, ScribblePrompt, nnU-Net

If you find this repository useful, please consider citing this paper:

@article{li2024prism,
  title={PRISM: A Promptable and Robust Interactive Segmentation Model with Visual Prompts},
  author={Li, Hao and Liu, Han and Hu, Dewei and Wang, Jiacheng and Oguz, Ipek},
  journal={arXiv preprint arXiv:2404.15028},
  year={2024}
}
@article{li2024interactive,
  title={Interactive Segmentation Model for Placenta Segmentation from 3D Ultrasound images},
  author={Li, Hao and Oguz, Baris and Arenas, Gabriel and Yao, Xing and Wang, Jiacheng and Pouch, Alison and Byram, Brett and Schwartz, Nadav and Oguz, Ipek},
  journal={arXiv preprint arXiv:2407.08020},
  year={2024}
}

Please send any questions to hao.li.1@vanderbilt.edu; we are always happy to help! :)

Original PRISM repository.
