CT Kernel Conversion Using Multi-Domain Image-to-Image Translation with Generator-Guided Contrastive Learning

Official Pytorch Implementation of the MICCAI 2023 paper.

Changyong Choi¹,²,† · Jiheon Jeong¹,²,† · Sangyoon Lee² · Sang Min Lee³ · Namkug Kim²,³

¹Department of Biomedical Engineering, AMIST, Asan Medical Center, University of Ulsan College of Medicine
²Department of Convergence Medicine, Asan Medical Center, University of Ulsan College of Medicine
³Department of Radiology, Asan Medical Center, University of Ulsan College of Medicine
†Equal contribution

News

  • Accepted at MICCAI 2023!
  • Updated the code to support CT data from another vendor, GE.

Introduction

This is an application study of CT kernel conversion using multi-domain image-to-image translation (StarGAN) combined with generator-guided discriminator regularization (GGDR) and contrastive learning from contrastive unpaired translation (CUT).
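
To illustrate the core idea, here is a minimal sketch of a generator-guided, PatchNCE-style contrastive loss in PyTorch. All names, shapes, and the sampling scheme are assumptions made for illustration; refer to this repository's code for the actual loss.

import torch
import torch.nn.functional as F

def ggcl_loss(gen_feat, disc_feat, num_patches=256, tau=0.07):
    # gen_feat, disc_feat: (B, C, H, W) feature maps from the generator's
    # decoder and the discriminator's decoder, respectively (hypothetical names).
    b, c, h, w = gen_feat.shape
    num_patches = min(num_patches, h * w)
    gen = gen_feat.flatten(2).permute(0, 2, 1)      # (B, H*W, C)
    disc = disc_feat.flatten(2).permute(0, 2, 1)    # (B, H*W, C)
    # Sample a common set of spatial locations from both feature maps.
    idx = torch.randperm(h * w, device=gen_feat.device)[:num_patches]
    q = F.normalize(disc[:, idx], dim=-1)           # discriminator queries
    k = F.normalize(gen[:, idx], dim=-1).detach()   # generator targets (no grad to G)
    # Same-location pairs are positives; other sampled locations are negatives.
    logits = torch.bmm(q, k.transpose(1, 2)) / tau  # (B, P, P)
    labels = torch.arange(num_patches, device=gen_feat.device).repeat(b)
    return F.cross_entropy(logits.flatten(0, 1), labels)

Detaching the generator features in this sketch means the term trains only the discriminator's decoder, in the spirit of GGDR, where the generator's features serve as targets.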

In our study, we used only the SIEMENS dataset; however, you can train on CT data from other vendors (e.g., GE, Philips). If you want to measure metrics such as PSNR and SSIM, you need registered ground-truth CT images from the same participants.
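
For instance, such metrics can be computed per slice with scikit-image; this is a minimal sketch assuming registered 2-D slices in Hounsfield units (array names are placeholders, and the repository may compute the metrics differently):

import numpy as np
from skimage.metrics import peak_signal_noise_ratio, structural_similarity

def evaluate_slice(converted_hu: np.ndarray, ground_truth_hu: np.ndarray):
    # Both inputs: registered 2-D slices in Hounsfield units.
    data_range = float(ground_truth_hu.max() - ground_truth_hu.min())
    psnr = peak_signal_noise_ratio(ground_truth_hu, converted_hu, data_range=data_range)
    ssim = structural_similarity(ground_truth_hu, converted_hu, data_range=data_range)
    return psnr, ssim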

Dependencies

  • CUDA 11.6
  • PyTorch 1.10.0

Please install PyTorch matching your own CUDA version.
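
For example, the CUDA 11.3 wheel of PyTorch 1.10.0 (the exact wheel tag is an assumption about your setup; pick the build matching your driver from the official previous-versions page):

pip install torch==1.10.0+cu113 -f https://download.pytorch.org/whl/cu113/torch_stable.html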

Also, install the other packages listed in requirements.txt:

pip install -r requirements.txt

Prepare your own dataset

For example, organize your dataset directory as follows:

root_path
    ├── train
          ├── SIEMENS
                ├── B30f
                      ├── 0001.dcm
                      ├── 0002.dcm
                      └── 0003.dcm
                ├── B50f
                └── B70f
          └── GE
               ├── SOFT
               ├── CHEST
               └── EDGE
    ├── valid
    └── test
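
Since the inputs are DICOM files, reading one slice and converting it to Hounsfield units might look like the following minimal sketch (assuming pydicom; the repository's data_loader.py may handle loading differently):

import pydicom

ds = pydicom.dcmread('root_path/train/SIEMENS/B30f/0001.dcm')
# Rescale stored pixel values to Hounsfield units.
hu = ds.pixel_array.astype('float32') * float(ds.RescaleSlope) + float(ds.RescaleIntercept)
print(hu.shape)  # e.g., (512, 512)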

Training

For multi-GPU training, you can use --multi-gpu-mode DataParallel (see the example after the training commands below).

For generator-guided contrastive learning, specify the two arguments below:

  • --use_feature enables GGCL (or GGDR).
  • --guide_type selects the regularization method, GGDR or GGCL. Default is GGCL.

If --use_feature is not specified, vanilla StarGAN will be trained.

For one dataset (e.g., SIEMENS):

python main.py --mode train --dataset SIEMENS --batch_size 2 --root_path 'your_own_dataset_path' --use_feature --guide_type ggcl

For two datasets (e.g., SIEMENS and GE):

python main.py --mode train --dataset Both --batch_size 2 --root_path 'your_own_dataset_path' --use_feature --guide_type ggcl
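
For multi-GPU training, as noted above, append the DataParallel flag; for example (combining the flags this way is an assumption, and the per-GPU batch size may need adjusting):

python main.py --mode train --dataset Both --batch_size 2 --root_path 'your_own_dataset_path' --use_feature --guide_type ggcl --multi-gpu-mode DataParallel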

Model checkpoints and validation samples will be stored in ./result/models and ./result/samples, respectively.

Resume

To resume training from a saved checkpoint, use --resume_iters.
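
For example (the iteration 200000 is a placeholder; use an iteration for which a checkpoint exists in ./result/models):

python main.py --mode train --dataset SIEMENS --batch_size 2 --root_path 'your_own_dataset_path' --use_feature --guide_type ggcl --resume_iters 200000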

Test

PNG file save

# for one dataset
python main.py --mode test --dataset SIEMENS --root_path 'your_own_dataset_path' --save_path 'result' --use_feature --test_iters 400000

# for two datasets
python main.py --mode test --dataset Both --root_path 'your_own_dataset_path' --save_path 'result' --use_feature --test_iters 400000

Test results will be stored in ./result/results/png as PNG files.

DICOM file save

To also save the results as DICOM files, use --dicom_save.
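
For example:

python main.py --mode test --dataset SIEMENS --root_path 'your_own_dataset_path' --save_path 'result' --use_feature --test_iters 400000 --dicom_save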

Test results will then be stored in ./result/results/png and ./result/results/dcm as PNG and DICOM files, respectively.

Acknowledgement

Our main code is heavily based on StarGAN, and the patch-wise contrastive learning code is adapted from CUT.

data_loader.py is inspired by StyleGAN2-ADA.
