MD-GAN

Environment and Supported Toolkits

Python 3.9
PyTorch (http://pytorch.org/)
TensorFlow 2.10.0
munch 2.5.0
opencv-python 4.4.0.46
ffmpeg-python 0.2.0
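
The toolkit list above can be captured in a requirements file (a sketch; the repository does not ship one, and the PyTorch version is left unpinned because the source does not state it — Python 3.9 itself is managed outside pip):

```
# Sketch of a requirements file matching the toolkit list above.
# torch is left unpinned because the source gives no PyTorch version.
torch
tensorflow==2.10.0
munch==2.5.0
opencv-python==4.4.0.46
ffmpeg-python==0.2.0
```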

Demo

  1. Download the pre-trained models from BaiduNetdisk (password: zfxa).
  2. Create the folder expr containing the subfolders checkpoints, results, and samples.
  3. Copy the pre-trained checkpoint files into expr/checkpoints/BraTS.
  4. To train MD-GAN, run the following command:

```shell
# BraTS2018
python main.py --mode train --num_domains 2 --w_hpf 0 \
               --lambda_reg 1 --lambda_rec 0.01 --lambda_class 0.02 --lambda_l1 100 \
               --train_img_dir data/BraTS/train \
               --val_img_dir data/BraTS/val
```

  5. To test MD-GAN, run the following command:

```shell
# BraTS2018
python main.py --mode sample --num_domains 2 --resume_iter 0 --w_hpf 0 \
               --checkpoint_dir expr/checkpoints/BraTS \
               --result_dir expr/results/BraTS \
               --src_dir assets/BraTS/src \
               --ref_dir assets/BraTS/ref
```
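
The folder layout from steps 2 and 3 can be created in one command (the BraTS subfolder name is taken from step 3):

```shell
# Create the experiment folder layout from steps 2 and 3, including the
# BraTS checkpoint subfolder that the train/sample commands expect.
mkdir -p expr/checkpoints/BraTS expr/results expr/samples
```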

Notes

  1. The implementation of the proposed MD-GAN model is based on StarGAN V2 (https://github.com/clovaai/stargan-v2) and ADGAN (https://github.com/LEI-YRO/ADGAN).
  2. For convenience, some image data derived from the BraTS2018 dataset are included in the repository.
  3. To train on a custom dataset, process the files in the same way as for BraTS.
  4. For smooth training, it is recommended that image filenames do not contain any modality nouns.
  5. To test a specific checkpoint, modify the weight file name in the solver file.
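
Note 4 suggests keeping modality nouns out of image filenames. A minimal rename-helper sketch, assuming a token list of common BraTS modality names (the repository does not define this helper or the token list):

```python
import os
import re

# Hypothetical helper for note 4: strip modality nouns from filenames.
# MODALITY_TOKENS is an assumption based on common BraTS modality names;
# adjust it for your dataset. "t1ce" is listed before "t1" so the longer
# token is tried first.
MODALITY_TOKENS = ("t1ce", "t1", "t2", "flair")

def strip_modality(filename):
    """Return `filename` with modality tokens and leftover separators removed."""
    stem, ext = os.path.splitext(filename)
    for tok in MODALITY_TOKENS:
        # Match the token only as a whole segment delimited by '_' or '-'
        # (or the end of the stem), case-insensitively.
        stem = re.sub(rf"(?i)[_-]?{tok}(?=[_-]|$)", "", stem)
    return stem.strip("_-") + ext
```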
