- Linux or macOS
- NVIDIA GPU + CUDA cuDNN (CPU mode and CUDA without cuDNN may work with minimal modification, but are untested)
- Install torch and dependencies from https://github.com/torch/distro
- Install the torch packages nngraph and display:
luarocks install nngraph
luarocks install https://raw.githubusercontent.com/szym/display/master/display-scm-0.rockspec
- Clone this repo:
git clone https://github.com/doubletry/pix2pix.git
cd pix2pix
- Download dataset.zip and unzip it into the datasets folder (https://pan.baidu.com/s/1rCJt8yVkkfunlvU54Qxn_g), or run this:
bash ./scripts/download_dataset.sh
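The combine step farther down expects parallel A and B folders with matching filenames in each split. The exact subfolder names depend on the contents of the zip; the tree below is an assumed layout following the standard pix2pix convention, not one verified against this archive:

datasets/colourblindness/
├── A/
│   ├── train/
│   ├── val/
│   └── test/
└── B/
    ├── train/
    ├── val/
    └── test/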
- Download and unzip the pix2pix model into the checkpoints folder (https://pan.baidu.com/s/1AUl2SpSJp5YTjWWZJF7qSA)
- Combine the images:
python scripts/combine_A_and_B.py --fold_A ./datasets/colourblindness/A/ --fold_B ./datasets/colourblindness/B --fold_AB ./datasets/colourblindness/
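combine_A_and_B.py merges each input image in A with its paired target image in B into a single side-by-side image, which is the format the pix2pix data loader reads. Conceptually the per-pair operation looks like the sketch below (combine_pair is an illustrative helper written for this README, not a function from the repo):

```python
# Minimal sketch of the A/B pairing idea: pix2pix consumes one image per
# sample, with input A on the left half and target B on the right half.
import numpy as np

def combine_pair(img_a: np.ndarray, img_b: np.ndarray) -> np.ndarray:
    """Concatenate two H x W x C images along the width into one H x 2W x C image."""
    if img_a.shape != img_b.shape:
        raise ValueError("A and B images must have identical shapes")
    return np.concatenate([img_a, img_b], axis=1)

# Example with dummy 256x256 RGB images:
a = np.zeros((256, 256, 3), dtype=np.uint8)
b = np.ones((256, 256, 3), dtype=np.uint8)
ab = combine_pair(a, b)
print(ab.shape)  # (256, 512, 3)
```

The actual script walks the train/val/test subfolders and writes the combined images under --fold_AB; this sketch only shows the pixel-level layout of each output image.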
- Test the model:
bash ./test_model.sh
The test results will be saved to an HTML file here: ./results/colourblindness/latest_net_G_val/index.html
If you use this code for your research, please cite our paper Image-to-Image Translation with Conditional Adversarial Networks:
@article{pix2pix2017,
  title={Image-to-Image Translation with Conditional Adversarial Networks},
  author={Isola, Phillip and Zhu, Jun-Yan and Zhou, Tinghui and Efros, Alexei A},
  journal={CVPR},
  year={2017}
}
Code borrows heavily from DCGAN. The data loader is modified from DCGAN and Context-Encoder.