This is the official source code for CMAN, a deep learning platform designed for the registration of large deformable CT images

CMAN: Cascaded Multi-scale Spatial Channel Attention-guided Network for Large 3D Deformable Registration of Liver CT Images

By Xuan Loc Pham, Manh Ha Luu, Theo van Walsum, Hong Son Mai, Stefan Klein, Ngoc Ha Le, Duc Trinh Chu
https://www.sciencedirect.com/science/article/pii/S1361841524001373

🚀 03/2023: Submitted to Medical Image Analysis journal
🚀 04/2023: Under review
🚀 01/2024: The manuscript of CMAN has been conditionally accepted by Medical Image Analysis. We are currently working on the revisions.
🚀 05/2024: CMAN has been published in the Medical Image Analysis journal.

Introduction

CMAN is a deep learning-based image registration platform designed specifically to handle large and complex deformations between images. We verified the performance of CMAN through extensive experiments on multi-source liver datasets.

Architecture

CMAN solves the large deformable liver CT registration problem by dividing the large deformation field into a chain of densely connected, global-to-local, multi-resolution smaller deformation fields. The liver is thus deformed gradually, little by little, until it reaches the desired shape, which is more effective than one-shot deformation methods. In addition, a combined spatial-channel attention module is integrated into each layer of every base network for better refinement of the deformation field.
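The chaining idea can be sketched numerically: warping an image through several small displacement fields is (up to interpolation error) equivalent to warping once by their composition. Below is a minimal 2D numpy/scipy sketch, not taken from the CMAN code; `warp` and `compose` are illustrative names:

```python
import numpy as np
from scipy.ndimage import gaussian_filter, map_coordinates

def warp(image, flow):
    """Warp `image` by a displacement field `flow` of shape (2, H, W)."""
    grid = np.mgrid[0:image.shape[0], 0:image.shape[1]].astype(float)
    return map_coordinates(image, grid + flow, order=1, mode='nearest')

def compose(flow_a, flow_b):
    """Field equivalent to warping by flow_a first, then by flow_b."""
    h, w = flow_a.shape[1:]
    coords = np.mgrid[0:h, 0:w].astype(float) + flow_b
    # Resample flow_a at the positions flow_b points to, then add flow_b.
    warped_a = np.stack([map_coordinates(flow_a[c], coords, order=1,
                                         mode='nearest') for c in (0, 1)])
    return warped_a + flow_b

rng = np.random.default_rng(0)
image = gaussian_filter(rng.random((64, 64)), 4)            # smooth test image
flows = [gaussian_filter(rng.normal(size=(2, 64, 64)),      # five small,
                         sigma=(0, 8, 8)) * 5 for _ in range(5)]  # smooth fields

gradual = image                       # warp step by step, cascade-style
for f in flows:
    gradual = warp(gradual, f)

total = flows[0]                      # or compose into one large field first
for f in flows[1:]:
    total = compose(total, f)
one_shot = warp(image, total)

print(np.abs(gradual - one_shot).max())   # small composition error
```

The cascade applies each small field to the output of the previous one, which is why the per-step deformations can stay small and smooth even when the end-to-end deformation is large.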

Environment setup

  1. Install Miniconda
  2. Create and set up the conda environment (please match the package versions to each other and to your CUDA version)
conda create --name CMAN python=3.7.13
conda activate CMAN
pip install tensorflow==1.15.0 keras==2.1.6 tflearn==0.5.0 numpy==1.19.5 protobuf==3.20 SimpleITK h5py tqdm scipy scikit-image matplotlib
  3. We will soon provide a Docker image for the project for ease of reproducibility

Data preparation

Example preprocessed datasets can be found in SLiver, LiTS, LSPIG, MSD, and BFH. Note that only the LSPIG dataset supports pairwise registration.
An example JSON file can be found in ./datasets/liver.json

If you want to use your own dataset, please follow these two steps for data preparation (refer to data_preprocess.py for the generation of the h5 and json files):

  1. Compress each of your datasets into a single h5 file
  • The original data folders should look like this:
data_train_folder
├── Patient_0
│   └── ct.nii.gz
├── Patient_1
│   └── ct.nii.gz
...

data_val_folder
├── Patient_0
│   ├── ct.nii.gz
│   └── seg.nii.gz
├── Patient_1
│   ├── ct.nii.gz
│   └── seg.nii.gz
...

  • The generated h5 datasets will have the following structure:
datasets/eval_data_0.h5
├── Patient_0
│   ├── volume
│   └── segmentation
├── Patient_1
│   ├── volume
│   └── segmentation
...

datasets/eval_data_1.h5
├── Patient_0
│   ├── volume
│   └── segmentation
├── Patient_1
│   ├── volume
│   └── segmentation
...

datasets/train_data_0.h5
├── Patient_0
│   └── volume
├── Patient_1
│   └── volume
...

datasets/train_data_1.h5
├── Patient_0
│   └── volume
├── Patient_1
│   └── volume
...
  2. Configure the json file to include all datasets (training, evaluation, and testing)
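As a rough illustration of step 1 (the repository's data_preprocess.py is the authoritative script), the h5 layout shown above can be produced with h5py. Synthetic arrays stand in for the volumes here so the sketch is self-contained; in practice each array would come from SimpleITK (e.g. sitk.GetArrayFromImage(sitk.ReadImage('ct.nii.gz'))) and the files would go under the datasets folder:

```python
import h5py
import numpy as np

def pack_dataset(patients, out_h5):
    """patients: {name: {'volume': array, 'segmentation': array (optional)}}"""
    with h5py.File(out_h5, 'w') as f:
        for name, arrays in patients.items():
            grp = f.create_group(name)          # one group per patient
            for key, arr in arrays.items():     # 'volume' / 'segmentation'
                grp.create_dataset(key, data=arr, compression='gzip')

rng = np.random.default_rng(0)
train = {f'Patient_{i}': {'volume': rng.random((8, 8, 8), dtype=np.float32)}
         for i in range(2)}
val = {f'Patient_{i}': {'volume': rng.random((8, 8, 8), dtype=np.float32),
                        'segmentation': (rng.random((8, 8, 8)) > 0.5).astype(np.uint8)}
       for i in range(2)}
pack_dataset(train, 'train_data_0.h5')
pack_dataset(val, 'eval_data_0.h5')

with h5py.File('eval_data_0.h5', 'r') as f:
    eval_keys = sorted(f['Patient_0'].keys())
print(eval_keys)  # ['segmentation', 'volume']
```

Note that, per the trees above, evaluation files carry both volume and segmentation per patient, while training files carry only the volume.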

Training

Run the following command for training:

python train.py -b [BASE_NETWORK] -n [NUMBER_OF_CASCADES]

Example:

python train.py -b CMAN_CA -n 5 --batch 1
python train.py -b CMAN -n 4 --batch 1 -c weights/checkpoints

Consider calibrating the regularization loss weight and the number of cascades to avoid folding in the deformation field.
For more options, please refer to train.py
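Folding can be detected from the Jacobian determinant of the deformation: voxels where the determinant of the mapping x + u(x) is non-positive are folded. Below is a hedged numpy sketch of this check (eval.py's 'jacobian_det' output is assumed to capture the same quantity; `jacobian_determinant` is an illustrative name):

```python
import numpy as np

def jacobian_determinant(flow):
    """flow: displacement field u of shape (3, D, H, W) in voxel units."""
    # Jacobian of phi(x) = x + u(x): J_ij = delta_ij + du_i/dx_j,
    # with the derivatives estimated by finite differences.
    grads = [np.gradient(flow[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.empty(flow.shape[1:] + (3, 3))
    for i in range(3):
        for j in range(3):
            J[..., i, j] = grads[i][j] + (1.0 if i == j else 0.0)
    return np.linalg.det(J)

rng = np.random.default_rng(0)
flow = rng.normal(scale=0.05, size=(3, 16, 16, 16))  # small, near-identity
det = jacobian_determinant(flow)
print('folded voxels:', int((det <= 0).sum()), 'of', det.size)
```

A well-regularized field should have a determinant that stays positive everywhere; a growing count of non-positive voxels is the signal to increase the regularization weight or reduce per-cascade deformation.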

Inference

Run the following command for unpaired inference:

python eval.py -c [WEIGHTS] -v [DATASET]

or this command for paired inference:

python eval.py -c [WEIGHTS] -v [DATASET] --paired

Example:

python eval.py -c weights/Sep05_0238 -v sliver --batch 1
python eval.py -c weights/Sep15_1323 -v lspig --paired

For more saving options, please refer to eval.py. The full set of saving options is:

keys = ['jaccs', 'dices', 'landmark_dists', 'jacobian_det', 'real_flow', 'image_fixed', 'warped_moving', 'warped_seg_moving']
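The 'dices' and 'jaccs' entries are overlap scores between the warped moving segmentation and the fixed segmentation. A minimal numpy sketch of the two metrics using their standard definitions (not copied from eval.py):

```python
import numpy as np

def dice(a, b):
    """Dice coefficient: 2|A ∩ B| / (|A| + |B|) for boolean masks."""
    inter = np.logical_and(a, b).sum()
    return 2.0 * inter / (a.sum() + b.sum())

def jaccard(a, b):
    """Jaccard index: |A ∩ B| / |A ∪ B| for boolean masks."""
    inter = np.logical_and(a, b).sum()
    return inter / np.logical_or(a, b).sum()

# Two toy 3D masks overlapping on 3 of 4 slabs each.
fixed = np.zeros((8, 8, 8), bool); fixed[2:6] = True
warped = np.zeros((8, 8, 8), bool); warped[3:7] = True
print(round(dice(fixed, warped), 3), round(jaccard(fixed, warped), 3))  # 0.75 0.6
```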

The registration results can be found in the evaluate folder. To customize where results are saved, edit the variable link in eval.py:

link = './evaluate/main_dataset/' + model_name + ...

Quick trial

For a quick trial of CMAN, download one of the preprocessed datasets above (e.g. the SLiver dataset), add it to the datasets folder, and download the 3-cascade pre-trained weights. Then run the inference command:

python eval.py -c weights/3-cascade -v sliver 

Results

Here are some example results of applying CMAN to align image pairs with large and complex deformations.

Reference

The implementation of CMAN is based on the following source:

We recommend MeVisLab for the analysis and visualization of the data in this work

Contact

For more information about CMAN (theory or source code), please email: xuanloc97ars@vnu.edu.vn
