# Generative Adversarial Networks for Image-to-Image Translation on Multi-Contrast MR Images - A Comparison of CycleGAN and UNIT

Code repository for the Frontiers article of the same title. [arXiv paper]

## Code usage

1. Prepare your dataset under the `data` directory in the CycleGAN or UNIT folder, and pass the dataset name to the `image_folder` parameter of the model init function (see the sketch after this list).
   The directory structure a new dataset needs for training and testing:
   - `data/Dataset-name/trainA`
   - `data/Dataset-name/trainB`
   - `data/Dataset-name/testA`
   - `data/Dataset-name/testB`
2. Train a model with:

   ```
   python CycleGAN.py
   ```

   or

   ```
   python UNIT.py
   ```

3. Generate synthetic images by following the instructions in:
   - `CycleGAN/generate_images/ReadMe.md`
   - `UNIT/generate_images/ReadMe.md`
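
As a minimal sketch of step 1, here is one way to check the expected dataset layout before training. It assumes only the directory structure documented above; the commented-out model call at the end is hypothetical and only illustrates where `image_folder` is set.

```python
# Minimal layout check for a new dataset, assuming only the directory
# structure documented above.
import os

dataset = 'Dataset-name'  # replace with your dataset folder name

for split in ('trainA', 'trainB', 'testA', 'testB'):
    path = os.path.join('data', dataset, split)
    if not os.path.isdir(path):
        raise FileNotFoundError(f'Missing directory: {path}')

# The dataset name is then passed to the model init function via the
# 'image_folder' parameter (hypothetical call; see CycleGAN.py / UNIT.py):
# model = CycleGAN(image_folder=dataset)
```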

## Result GIFs - 304x256 pixel images

Left: input image. Middle: synthetic image generated during training. Right: ground truth.
Histograms show the pixel-value distributions of the synthetic images (blue) compared to the ground truth (brown).
(An updated image normalization, included in the current version of this repo, fixes the intensity error visible in these results.)
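
For reference, a comparison like the histograms in these GIFs can be produced with a few lines of NumPy/Matplotlib. This is a standalone sketch, not code from this repository, and the `.npy` file paths are hypothetical placeholders.

```python
# Sketch: overlay pixel-value histograms of a synthetic image (blue)
# and its ground truth (brown). File paths are hypothetical.
import numpy as np
import matplotlib.pyplot as plt

synthetic = np.load('synthetic_T2.npy').ravel()
ground_truth = np.load('ground_truth_T2.npy').ravel()

bins = np.linspace(min(synthetic.min(), ground_truth.min()),
                   max(synthetic.max(), ground_truth.max()), 100)
plt.hist(synthetic, bins=bins, alpha=0.5, color='blue', label='synthetic')
plt.hist(ground_truth, bins=bins, alpha=0.5, color='brown', label='ground truth')
plt.xlabel('pixel value')
plt.ylabel('count')
plt.legend()
plt.show()
```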

### CycleGAN - T1 to T2

### CycleGAN - T2 to T1

### UNIT - T1 to T2

### UNIT - T2 to T1
