JinxLv/StyleSegv1-One-shot-image-segmentation
StyleSeg: Robust One-shot Segmentation of Brain Tissues via Image-aligned Style Transformation

This is the implementation of the AAAI 2023 paper "StyleSeg: Robust One-shot Segmentation of Brain Tissues via Image-aligned Style Transformation".

Install

The packages and their corresponding versions used in this repository are listed below.

  • Tensorflow==1.15.4
  • Keras==2.3.1
  • tflearn==0.5.0
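
Since these are legacy versions, a quick programmatic check can save debugging time later. The helper below compares reported versions against the pins; the `found` dict is a placeholder for what your environment actually reports (e.g. via `tensorflow.__version__`), so the sketch stays self-contained.

```python
# Minimal version-pin check. In a real environment, fill `found` from the
# installed packages (tensorflow.__version__, keras.__version__, ...).
expected = {"tensorflow": "1.15.4", "keras": "2.3.1", "tflearn": "0.5.0"}

def version_mismatches(found, expected):
    """Return {package: (installed, pinned)} for every mismatched pin."""
    return {name: (found.get(name), want)
            for name, want in expected.items()
            if found.get(name) != want}

# Placeholder values standing in for the live environment:
found = {"tensorflow": "1.15.4", "keras": "2.3.1", "tflearn": "0.5.0"}
print(version_mismatches(found, expected))  # empty dict -> environment matches
```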

Training

After configuring the environment, use the following commands to train the model. Training proceeds in three stages: unsupervised training of the registration model (reg0), supervised training of the segmentation model on images style-transferred by IST (seg0), and weakly supervised training of the registration model (reg1).

python train.py --lr 1e-4  -d ./dataset/OASIS.json -c weights/xxx --clear_steps -g 0 --round 2000 --scheme reg #reg0
python train.py --lr 1e-3  -d ./dataset/OASIS.json -c weights/xxx --clear_steps -g 0 --round 2000 --scheme seg #seg0
python train.py --lr 1e-4  -d ./dataset/OASIS.json -c weights/xxx --clear_steps -g 0 --round 2000 --scheme reg_supervise #reg1
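
The three stages must run in order (reg0, then seg0, then reg1). A small driver script can chain them; the flag set below simply mirrors the commands above, and `weights/xxx` is the placeholder checkpoint name from this README, not an actual path.

```python
# Hypothetical driver that builds and runs the three training commands in
# sequence. Flags mirror the README commands; adjust paths before use.
import subprocess

def build_cmd(scheme, lr, ckpt="weights/xxx", data="./dataset/OASIS.json",
              gpu="0", rounds=2000):
    return ["python", "train.py", "--lr", str(lr), "-d", data, "-c", ckpt,
            "--clear_steps", "-g", gpu, "--round", str(rounds),
            "--scheme", scheme]

stages = [("reg", 1e-4), ("seg", 1e-3), ("reg_supervise", 1e-4)]  # reg0, seg0, reg1
cmds = [build_cmd(scheme, lr) for scheme, lr in stages]
# for cmd in cmds:
#     subprocess.run(cmd, check=True)  # run each stage to completion
```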

Testing

Use this command to obtain the final segmentation results on the test data.

python predict.py -c weights/xxx -d ./dataset/OASIS.json -g 0 --scheme seg

Results

Visualization of our proposed Image-aligned Style Transformation (IST) module. The style-transferred atlas with IST presents a highly similar appearance to the target unlabeled image, while the one without IST contains many artifacts.
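
A common way to align low-level appearance between two aligned images is Fourier amplitude swapping: keep the content image's phase spectrum and replace its low-frequency amplitude with the style image's. The sketch below illustrates this general idea in 2D with NumPy; it is an illustrative stand-in under that assumption, not the repository's exact IST code, and `beta` (the fraction of the spectrum swapped) is a hypothetical parameter.

```python
# Illustrative Fourier-based style transfer: swap the centered low-frequency
# amplitude of the atlas with that of the target, keeping the atlas phase.
import numpy as np

def fourier_style_transfer(atlas, target, beta=0.1):
    fa, ft = np.fft.fft2(atlas), np.fft.fft2(target)
    amp_a, pha_a = np.abs(fa), np.angle(fa)
    amp_t = np.abs(ft)
    # Center the spectra so low frequencies sit in the middle.
    amp_a = np.fft.fftshift(amp_a)
    amp_t = np.fft.fftshift(amp_t)
    h, w = atlas.shape
    bh, bw = int(h * beta), int(w * beta)
    ch, cw = h // 2, w // 2
    # Replace a low-frequency square of the atlas amplitude spectrum.
    amp_a[ch - bh:ch + bh, cw - bw:cw + bw] = \
        amp_t[ch - bh:ch + bh, cw - bw:cw + bw]
    amp_a = np.fft.ifftshift(amp_a)
    # Recombine swapped amplitude with the original (content) phase.
    styled = np.fft.ifft2(amp_a * np.exp(1j * pha_a))
    return np.real(styled)

rng = np.random.default_rng(0)
atlas = rng.random((64, 64))
target = rng.random((64, 64))
styled = fourier_style_transfer(atlas, target)
```

With `beta=0` no frequencies are swapped and the atlas is returned unchanged; larger `beta` transfers more of the target's global appearance while the phase keeps the atlas anatomy in place.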

Boxplots of Dice scores over 35 brain regions for the compared methods on the OASIS dataset. The brain regions are listed under the plot, ranked by average region-voxel count in decreasing order.

Visualization of segmentation results for different dual-model iterative-learning methods. From left to right: the raw image, a UNet trained with 5 atlases, Brainstorm, DeepAtlas, our method, and the ground-truth segmentation. Brainstorm and DeepAtlas were run using their officially released source code; Brainstorm, DeepAtlas, and our method were all trained with a single atlas (labeled image).

Furthermore, we also evaluated the generalization performance of our method on another modality, i.e., the 3D cardiac CT MM-WHS 2017 dataset. One labeled image (atlas) was randomly selected together with the 40 unlabeled images as the training set, and the remaining 19 labeled images were used as the test set. We randomly selected three cases from the segmentation results; the visualizations are shown below.
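
The split described above (1 random atlas + 40 unlabeled scans for training, the remaining 19 labeled scans for testing) can be sketched as follows; the file names are hypothetical placeholders, not the dataset's actual naming scheme.

```python
# Illustrative train/test split for the MM-WHS-style setup described above:
# 20 labeled and 40 unlabeled scans; one labeled scan becomes the atlas.
import random

labeled = [f"labeled_{i:02d}.nii.gz" for i in range(20)]      # hypothetical names
unlabeled = [f"unlabeled_{i:02d}.nii.gz" for i in range(40)]

rng = random.Random(42)                 # fixed seed for reproducibility
atlas = rng.choice(labeled)             # the single labeled training image
test_set = [f for f in labeled if f != atlas]   # remaining 19 labeled images
train_set = [atlas] + unlabeled                 # 1 atlas + 40 unlabeled
```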

Citation

If you use this code as part of any published research, we'd really appreciate it if you could cite the following paper:

@article{Lv_Zeng_Wang_Duan_Wang_Li_2023,
  title={Robust One-shot Segmentation of Brain Tissues via Image-aligned Style Transformation},
  journal={Proceedings of the AAAI Conference on Artificial Intelligence},
  author={Lv, Jinxin and Zeng, Xiaoyu and Wang, Sheng and Duan, Ran and Wang, Zhiwei and Li, Qiang},
  year={2023},
  month={Jun.},
  number={2},
  pages={1861-1869}
}

Acknowledgment

Parts of the code are modified from RCN and VoxelMorph. Many thanks for their great contributions.
