A pytorch implementation of "Image-to-Image Translation with Conditional Adversarial Networks"

taey16/pix2pix.pytorch

Install

  • pytorch and torchvision

Datasets

  • facades and edges2shoes, as used in the original pix2pix project

Train with facades dataset (mode: B2A)

  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/facades/train --valDataroot /path/to/facades/val --mode B2A --exp ./facades --display 5 --evalIter 500
  • Resulting models are saved in the ./facades directory, named like net[D|G]_epoch_xx.pth (see the loading sketch below)
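
A minimal sketch of running a saved generator on a single image; this is not code from this repo: the G import below is a placeholder for the repo's actual generator class, and it assumes the checkpoint stores a state_dict (adjust if the whole module was saved).

    import torch
    import torchvision.transforms as transforms
    from PIL import Image

    from models import G  # hypothetical import; use the repo's generator definition

    netG = G()  # constructor arguments depend on the repo's definition
    netG.load_state_dict(torch.load('./facades/netG_epoch_xx.pth'))  # xx = epoch number
    netG.eval()

    preprocess = transforms.Compose([
        transforms.Resize((256, 256)),  # pix2pix operates on 256x256 images
        transforms.ToTensor(),
        transforms.Normalize((0.5, 0.5, 0.5), (0.5, 0.5, 0.5)),  # map to [-1, 1]
    ])

    img = preprocess(Image.open('input.jpg').convert('RGB')).unsqueeze(0)
    with torch.no_grad():
        fake = netG(img)  # generated translation, values in [-1, 1]
    fake = (fake.squeeze(0) * 0.5 + 0.5).clamp(0, 1)  # rescale to [0, 1]
    transforms.ToPILImage()(fake).save('generated.jpg')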

Train with edges2shoes dataset (mode: A2B)

  • CUDA_VISIBLE_DEVICES=x python main_pix2pixgan.py --dataset pix2pix --dataroot /path/to/edges2shoes/train --valDataroot /path/to/edges2shoes/val --mode A2B --exp ./edges2shoes --batchSize 4 --display 5

Results

  • Randomly selected input samples
  • Corresponding real target samples
  • Corresponding generated samples

Note

  • We modified pytorch/vision's folder.py and transform.py so as to follow the format of the training images in these datasets (each sample stores the A and B images side by side in one file).
  • Most of the hyperparameters are the same as in the paper.
  • You can easily reproduce the results of the paper with other datasets.
  • Try B2A or A2B translation as needed (see the splitting sketch below).
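
For reference, the kind of loading logic such a modified loader needs; a minimal sketch, assuming the standard pix2pix pairing format (A on the left half, B on the right half of each image). The function name and signature are hypothetical, not this repo's API.

    from PIL import Image

    def load_pair(path, mode='B2A'):
        # Each dataset image stores A (left half) and B (right half) side by side.
        ab = Image.open(path).convert('RGB')
        w, h = ab.size
        a = ab.crop((0, 0, w // 2, h))   # left half:  domain A
        b = ab.crop((w // 2, 0, w, h))   # right half: domain B
        # mode selects the translation direction: returns (input, target).
        return (a, b) if mode == 'A2B' else (b, a)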

Reference

  • Isola, Zhu, Zhou, Efros, "Image-to-Image Translation with Conditional Adversarial Networks", arXiv:1611.07004
