[NIPS 2017] Toward Multimodal Image-to-Image Translation


Project Page | Paper | Video

PyTorch implementation for multimodal image-to-image translation. For example, given the same night image, our model is able to synthesize possible day images with different types of lighting, sky, and clouds. Training requires paired data.
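The idea in a nutshell: the generator consumes an input image together with a low-dimensional latent code, so sampling different codes yields different plausible outputs for the same input. A minimal sketch, assuming a trained generator G(input, z) and an 8-dimensional latent code (illustrative names only, not this repository's API):

import torch

def sample_translations(G, input_image, n_samples=5, nz=8):
    # Produce several plausible outputs for one input by varying the latent code z.
    outputs = []
    with torch.no_grad():
        for _ in range(n_samples):
            z = torch.randn(input_image.size(0), nz)  # z ~ N(0, I), one code per sample
            outputs.append(G(input_image, z))         # same input, different latent code
    return outputs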

Note: The current software works well with PyTorch 0.4.1+. Check out the older branch that supports PyTorch 0.1-0.3.

Toward Multimodal Image-to-Image Translation.
Jun-Yan Zhu, Richard Zhang, Deepak Pathak, Trevor Darrell, Alexei A. Efros, Oliver Wang, Eli Shechtman.
UC Berkeley and Adobe Research
In NIPS, 2017.

Example results

Other Implementations

Prerequisites

  • Linux or macOS
  • Python 3

Getting Started


  • Clone this repo:
git clone -b master --single-branch
cd BicycleGAN

For pip users:

bash ./scripts/

For conda users:

bash ./scripts/

Use a Pre-trained Model

  • Download some test photos (e.g., edges2shoes):
bash ./datasets/ edges2shoes
  • Download a pre-trained model (e.g., edges2shoes):
bash ./pretrained_models/ edges2shoes
  • Generate results with the model:
bash ./scripts/

The test results will be saved to an HTML file: ./results/edges2shoes/val/index.html.

  • Generate results with synchronized latent vectors:
bash ./scripts/ --sync

Results can be found at ./results/edges2shoes/val_sync/index.html.
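Roughly, the --sync option corresponds to drawing one fixed set of latent codes and reusing it for every test image, so the k-th sample is comparable across inputs. A sketch under the same assumed G(input, z) interface (not the repository's actual test code):

import torch

def sample_synced(G, input_images, n_samples=5, nz=8):
    zs = [torch.randn(1, nz) for _ in range(n_samples)]    # one fixed set of latent codes
    return [[G(x, z) for z in zs] for x in input_images]   # reuse the same codes for every input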

Generate Morphing Videos

  • We can also produce a morphing video similar to this GIF and YouTube video.
bash ./scripts/

Results can be found at ./videos/edges2shoes/.
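Conceptually, each video decodes a single input image with latent codes interpolated between two endpoints. A minimal sketch (the actual script may interpolate differently), again assuming a generator interface G(input, z):

import torch

def morph_frames(G, input_image, n_frames=60, nz=8):
    z0, z1 = torch.randn(1, nz), torch.randn(1, nz)   # two random endpoint codes
    frames = []
    for t in torch.linspace(0.0, 1.0, n_frames):
        z = (1 - t) * z0 + t * z1                      # linear interpolation in latent space
        frames.append(G(input_image, z))
    return frames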

Model Training

  • To train a model, download the training images (e.g., edges2shoes).
bash ./datasets/ edges2shoes
  • Train a model:
bash ./scripts/
  • To view training results and loss plots, run python -m visdom.server and open http://localhost:8097 in a browser (a minimal Visdom sketch follows this list). To see more intermediate results, check out ./checkpoints/edges2shoes_bicycle_gan/web/index.html.
  • See more training details for other datasets in ./scripts/
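For reference, a minimal Visdom example (not the repository's actual logging code) that sends a loss curve to the server started above:

import numpy as np
import visdom

viz = visdom.Visdom(port=8097)                  # connect to the local visdom server
win = None
for step, loss in enumerate([1.0, 0.8, 0.6]):   # stand-in loss values
    win = viz.line(X=np.array([step]), Y=np.array([loss]), win=win,
                   update=None if win is None else 'append',
                   opts={'title': 'training loss'})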

Datasets (from pix2pix)

Download the datasets using the following script. Many of the datasets are collected by other researchers. Please cite their papers if you use the data.

  • Download the test set:
bash ./datasets/ dataset_name
  • Download the training and test sets:
bash ./datasets/ dataset_name
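These datasets follow the pix2pix aligned format, where each file is assumed to store the input A and the target B concatenated side by side. A small sketch for splitting such a pair (helper name is hypothetical):

from PIL import Image

def split_aligned_pair(path):
    ab = Image.open(path).convert('RGB')
    w, h = ab.size
    a = ab.crop((0, 0, w // 2, h))   # left half: input (e.g., edge or label map)
    b = ab.crop((w // 2, 0, w, h))   # right half: target photo
    return a, b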


Models

Download the pre-trained models with the following script.

bash ./pretrained_models/ model_name
  • edges2shoes (edge -> photo) trained on the UT Zappos50K dataset.
  • edges2handbags (edge -> photo) trained on Amazon handbag images.
bash ./pretrained_models/ edges2handbags
bash ./datasets/ edges2handbags
bash ./scripts/
  • night2day (nighttime scene -> daytime scene) trained on around 100 webcams.
bash ./pretrained_models/ night2day
bash ./datasets/ night2day
bash ./scripts/
  • facades (facade label -> facade photo) trained on the CMP Facades dataset.
bash ./pretrained_models/ facades
bash ./datasets/ facades
bash ./scripts/
  • maps (map photo -> aerial photo) trained on 1096 training images scraped from Google Maps.
bash ./pretrained_models/ maps
bash ./datasets/ maps
bash ./scripts/


Citation

If you find this code useful for your research, please cite:

@inproceedings{zhu2017toward,
  title={Toward multimodal image-to-image translation},
  author={Zhu, Jun-Yan and Zhang, Richard and Pathak, Deepak and Darrell, Trevor and Efros, Alexei A and Wang, Oliver and Shechtman, Eli},
  booktitle={Advances in Neural Information Processing Systems},
  year={2017}
}

If you use modules from the CycleGAN or pix2pix paper, please cite:

@inproceedings{zhu2017unpaired,
  title={Unpaired Image-to-Image Translation using Cycle-Consistent Adversarial Networks},
  author={Zhu, Jun-Yan and Park, Taesung and Isola, Phillip and Efros, Alexei A},
  booktitle={Computer Vision (ICCV), 2017 IEEE International Conference on},
  year={2017}
}

@inproceedings{isola2017image,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  booktitle={Computer Vision and Pattern Recognition (CVPR), 2017 IEEE Conference on},
  year={2017}
}


Acknowledgements

This code borrows heavily from the pytorch-CycleGAN-and-pix2pix repository.
