- Install PyTorch and torchvision (pytorch.vision)
- Download the image datasets from the author's original implementation
- Suppose you downloaded the "facades" dataset to /path/to/facades; then train with:
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --mode B2A --exp ./facades --display 5 --evalIter 500
- The resulting models are saved in the ./facades directory, named like net[D|G]_epoch_xx.pth
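Those .pth files are ordinary PyTorch state dicts, so they can be reloaded for inference. A minimal sketch, assuming you have already constructed a generator with the same architecture as in main_pix2pixgan.py (the helper name below is hypothetical, not part of this repository):

```python
import torch

def load_generator(checkpoint_path, netG):
    # Hypothetical helper: `netG` must be an instance of the same
    # generator class that produced the checkpoint.
    state = torch.load(checkpoint_path, map_location="cpu")
    netG.load_state_dict(state)
    netG.eval()  # inference mode: disables dropout, freezes batch-norm stats
    return netG
```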
- To train on the edges2shoes dataset, for example:
CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/edges2shoes/train --valDataroot /path/to/edges2shoes/val --mode A2B --exp ./edges2shoes --batchSize 4 --display 5
- We modified torchvision's folder.py and transforms.py so as to follow the format of the training images in these datasets
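For reference, the pix2pix datasets store each training pair side by side in a single image, so the loader has to split it into the two domains. A minimal sketch of that split, assuming the pair is packed horizontally with one domain in the left half and the other in the right half:

```python
import numpy as np

def split_pair(combined):
    """Split a side-by-side pix2pix training image (H x 2W x C)
    into its left and right halves (the two domains A and B)."""
    h, w, c = combined.shape
    half = w // 2
    img_a = combined[:, :half]  # left half
    img_b = combined[:, half:]  # right half
    return img_a, img_b
```

With --mode B2A, the B half is fed to the generator and the A half is used as the target; --mode A2B swaps the two.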
- Most of the hyperparameters are the same as in the paper.
- You can easily reproduce the results of the paper with the other datasets
- Try B2A or A2B translation as needed
- pix2pix.torch
- pix2pix-pytorch (another PyTorch implementation of pix2pix)
- dcgan.pytorch
- FANTASTIC PyTorch doc
- ganhacks from soumith