Source code (PyTorch) for the paper: “Cross-Dimensional Knowledge-Guided Synthesizer Trained With Unpaired Multimodality MRIs”
Contact: 202103150302@zjut.edu.cn (Binjia Zhou) and zhouqianweischolar@gmail.com (Qianwei Zhou)
- NVIDIA GPU driver version: 530.30.02
- CUDA version: 12.1
- GPU memory >= 12GB
- Install Miniconda
$ conda create --name testENV --file requirements.txt -c pytorch
$ pip install pypng
- BraTS2018: https://www.med.upenn.edu/sbia/brats2018/data.html
- BraTS2021: http://braintumorsegmentation.org/
- IXI dataset: https://brain-development.org/ixi-dataset/
- Adjust the original images to the resolutions reported in the paper.
- Place the training data in the folder /datasets/:
- In /datasets/BraTs2018/, place all images of BraTS2018.
- In /datasets/BraTs2021/, place all images of BraTS2021.
- In /datasets/IXI/, place all images of the IXI dataset.
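Before training, it can help to verify that the layout above is in place. A minimal sketch (the `check_datasets` helper and its warning messages are not part of the repository; the folder names follow the list above):

```python
import os

# Expected dataset folders under ./datasets/ (names from the list above)
EXPECTED_DIRS = ["BraTs2018", "BraTs2021", "IXI"]

def check_datasets(root="datasets"):
    """Return a {dataset: file_count} dict, warning about missing folders."""
    counts = {}
    for name in EXPECTED_DIRS:
        path = os.path.join(root, name)
        if not os.path.isdir(path):
            print(f"missing dataset folder: {path}")
            counts[name] = 0
            continue
        # Count every file below the dataset folder, including subfolders
        counts[name] = sum(len(files) for _, _, files in os.walk(path))
    return counts

if __name__ == "__main__":
    print(check_datasets())
```

A count of 0 for any dataset means the corresponding folder is missing or empty.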
Download the pre-trained segmentation model and the student model from https://pan.baidu.com/s/1EFgyt2YjGULHPEkhw37BQw?pwd=e2eb (extraction code: e2eb), then put them in the folder ./model.
$ python brats_4type_train.py
to train the image generator.
- Output:
- The generator and discriminator models will be saved in the folder /outputs/brats_4type_train/checkpoints/.
- They are named like dis_00040000.pt (discriminator model) and gen_00040000.pt (generator model), where the number is the training iteration.
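Since the checkpoint filenames encode the training iteration, the newest one can be selected programmatically. A sketch (the `latest_checkpoint` helper is hypothetical, not part of the repository; it assumes the `<prefix>_XXXXXXXX.pt` naming shown above):

```python
import os
import re

def latest_checkpoint(ckpt_dir, prefix="gen"):
    """Return the path of the newest '<prefix>_XXXXXXXX.pt' file, or None."""
    pattern = re.compile(rf"^{prefix}_(\d+)\.pt$")
    best, best_iter = None, -1
    for name in os.listdir(ckpt_dir):
        m = pattern.match(name)
        if m and int(m.group(1)) > best_iter:
            best_iter = int(m.group(1))
            best = os.path.join(ckpt_dir, name)
    return best
```

For example, `latest_checkpoint("outputs/brats_4type_train/checkpoints")` would return the generator checkpoint with the highest iteration count.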
- In the file brats_4type_test.py, set the pre-trained model you want to test.
- Copy the target models (for example, gen_00040000.pt) to the folder /outputs/brats_4type_train/checkpoints/.
$ python brats_4type_test.py
- Output: the code generates target-type images from the input images.
- The generated fake images will be saved in the folder ./test.
- For example: /Samples/realImages/3917L-CC-neg.png ---> ./test/output_num0001.jpg
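Following the output_numXXXX.jpg pattern shown above, the generated images can be collected in generation order. A small sketch (the `list_outputs` helper is hypothetical; it assumes that filename pattern):

```python
import os

def list_outputs(test_dir="test"):
    """Return generated images (output_numXXXX.jpg) sorted by index."""
    names = [n for n in os.listdir(test_dir)
             if n.startswith("output_num") and n.endswith(".jpg")]
    # Zero-padded indices make lexicographic order equal generation order
    return sorted(names)
```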