openmedlab/MIS-FM
Medical Image Segmentation Foundation Model



This repository provides the official implementation of "MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset".

@article{Wang2023MisFm,
  title={MIS-FM: 3D Medical Image Segmentation using Foundation Models Pretrained on a Large-Scale Unannotated Dataset},
  author={Guotai Wang and Jianghao Wu and Xiangde Luo and Xinglong Liu and Kang Li and Shaoting Zhang},
  journal={arXiv preprint arXiv:2306.16925},
  year={2023}
}

Key Features

  • A new self-supervised learning method based on Volume Fusion, a segmentation-based pretext task.
  • A new network architecture PCT-Net that combines the advantages of CNNs and Transformers.
  • A foundation model pretrained on 110k unannotated 3D CT scans.

Details

The following figure shows an overview of our proposed method for pretraining with unannotated 3D medical images. We introduce a pretext task based on pseudo-segmentation: Volume Fusion generates paired images and segmentation labels to pretrain the 3D segmentation model, which matches the downstream segmentation task better than existing Self-Supervised Learning (SSL) methods.

The pretraining strategy is combined with our proposed PCT-Net to obtain a pretrained model that can be applied to the segmentation of different objects from 3D medical images after fine-tuning with a small set of labeled data.
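To illustrate the Volume Fusion idea, the sketch below fuses two volumes patch by patch and uses the discretized fusion coefficient of each patch as a pseudo-segmentation label. This is a minimal sketch, not the paper's implementation: the function name, patch size, and number of fusion classes are all illustrative assumptions.

```python
import numpy as np

def volume_fusion(vol_a, vol_b, num_classes=5, patch=8, rng=None):
    """Fuse two equally shaped 3D volumes patch-wise.

    Each patch gets a fusion coefficient drawn from the discrete set
    {0, 1/(K-1), ..., 1}; the class index of that coefficient doubles
    as the pseudo-segmentation label for the patch.
    """
    assert vol_a.shape == vol_b.shape
    rng = np.random.default_rng(rng)
    coeff = np.zeros(vol_a.shape, dtype=np.float32)
    label = np.zeros(vol_a.shape, dtype=np.int64)
    d, h, w = vol_a.shape
    for z in range(0, d, patch):
        for y in range(0, h, patch):
            for x in range(0, w, patch):
                k = rng.integers(0, num_classes)  # class index 0..K-1
                coeff[z:z+patch, y:y+patch, x:x+patch] = k / (num_classes - 1)
                label[z:z+patch, y:y+patch, x:x+patch] = k
    fused = coeff * vol_a + (1.0 - coeff) * vol_b
    return fused, label
```

A segmentation network can then be pretrained to predict `label` from `fused`, so the pretext task has the same input/output structure as the downstream segmentation task.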

Datasets

We used 10k CT volumes from public datasets and 103k private CT volumes for pretraining.

Demo for using the pretrained model

Main Requirements

torch==1.10.2
PyMIC

To use PyMIC, please download the latest code from the master branch, and add the path of the PyMIC source code to the PYTHONPATH environment variable. See bash.sh for an example.

Demo data

In this demo, we show the use of PCT-Net for left atrial segmentation. The dataset can be downloaded from PYMIC_data.

The dataset, network, and training/testing settings are specified in the configuration files demo/pctnet_scratch.cfg and demo/pctnet_pretrain.cfg, for training from scratch and with the pretrained weights, respectively.

After downloading the data, edit the value of root_dir in the configuration files, and make sure the path to the images is correct.
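For orientation, a hypothetical fragment of such a configuration file might look as follows; the section and key names here are an assumption based on typical PyMIC-style configs, so refer to the actual demo files for the exact layout:

```ini
[dataset]
; hypothetical path: point this to the directory where the demo data was extracted
root_dir = /path/to/PYMIC_data
```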

Training

python train.py demo/pctnet_scratch.cfg

or

python train.py demo/pctnet_pretrain.cfg

Inference

python predict.py demo/pctnet_scratch.cfg

or

python predict.py demo/pctnet_pretrain.cfg

Evaluation

python $PyMIC_path/pymic/util/evaluation_seg.py -cfg demo/evaluation.cfg

You may need to edit demo/evaluation.cfg to specify the path of segmentation results before evaluating the performance.

In this simple demo, the segmentation Dice was 90.71% when training from scratch and 92.73% when starting from the pretrained weights.
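For reference, the Dice score reported above measures the volumetric overlap between a predicted mask and the ground truth. A minimal sketch for binary masks is shown below (the actual evaluation is performed by PyMIC's evaluation_seg.py, which handles multi-class cases and other metrics):

```python
import numpy as np

def dice_score(pred, gt):
    """Dice similarity coefficient between two binary masks:
    2 * |A ∩ B| / (|A| + |B|)."""
    pred = pred.astype(bool)
    gt = gt.astype(bool)
    denom = pred.sum() + gt.sum()
    if denom == 0:
        return 1.0  # both masks empty: treat as perfect agreement
    return 2.0 * np.logical_and(pred, gt).sum() / denom
```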

🛡️ License

This project is under the Apache license. See LICENSE for details.
