pix2pix-tensorflow

This repository is a TensorFlow implementation of Isola et al.'s Image-to-Image Translation with Conditional Adversarial Networks, CVPR 2017.

Requirements

  • tensorflow 1.8.0
  • python 3.5.3
  • numpy 1.14.2
  • matplotlib 2.0.2
  • scipy 0.19.0

Generated Results

  • facades dataset
    A to B: generate a label image from an RGB image
    B to A: generate an RGB image from a label image

  • maps dataset
    A to B: generate a map image from a satellite image
    B to A: generate a satellite image from a map image
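The A-to-B / B-to-A choice amounts to picking which half of a paired example is the input and which is the target. A minimal sketch of that idea (the function name `split_pair` is illustrative, not the repo's API):

```python
def split_pair(pair, which_direction):
    """Return (input, target) from an (A, B) image pair.

    which_direction=0 means AtoB (A is the input, B is the target);
    which_direction=1 means BtoA.
    """
    a, b = pair
    return (a, b) if which_direction == 0 else (b, a)


# For facades: A is the RGB photo, B is the label map.
inp, target = split_pair(("rgb_photo", "label_map"), which_direction=1)
```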

Generator & Discriminator Structure

  • Generator structure

  • Discriminator structure
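The pix2pix discriminator is a PatchGAN that classifies overlapping image patches rather than the whole image. As a back-of-the-envelope sketch, assuming the paper's 70x70 PatchGAN layer configuration (4x4 kernels; strides 2, 2, 2, 1, 1 -- this repo's exact layers may differ), the patch size each output unit sees can be computed from kernels and strides:

```python
def receptive_field(layers):
    """Receptive field of a stack of conv layers.

    layers: list of (kernel_size, stride) pairs, first layer first.
    Walk backwards: each layer maps an output extent r to s * r + (k - s)
    input pixels.
    """
    rf = 1
    for k, s in reversed(layers):
        rf = s * rf + (k - s)
    return rf


# 4x4 convs with strides 2, 2, 2, 1, 1 -> the 70x70 patch of the paper.
patchgan_layers = [(4, 2), (4, 2), (4, 2), (4, 1), (4, 1)]
```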

Documentation

Download Dataset

Download datasets (script borrowed from torch code)

bash ./src/download_dataset.sh [dataset_name]
  • dataset_name supports cityscapes, edges2handbags, edges2shoes, facades, and maps.
    Note: our implementation has been tested on the facades and maps datasets only, but you can easily revise the code to run on other datasets.

Directory Hierarchy

├── pix2pix
│   ├── src
│   │   ├── dataset.py
│   │   ├── download_dataset.sh
│   │   ├── main.py
│   │   ├── pix2pix.py
│   │   ├── solver.py
│   │   ├── tensorflow_utils.py
│   │   └── utils.py
├── Data
│   ├── facades
│   └── maps

Note: please place the datasets in the correct locations according to the directory hierarchy above.
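Per the hierarchy above, a dataset named `facades` is expected under `Data/facades` next to the `pix2pix` folder. A small sketch of that convention (the helper `dataset_dir` is hypothetical, not part of the repo):

```python
import os


def dataset_dir(root, dataset_name):
    """Path where the code expects a downloaded dataset to live,
    following the directory hierarchy in this README."""
    return os.path.join(root, "Data", dataset_name)
```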

Training pix2pix Model

Use main.py to train a pix2pix model. Example usage:

python main.py --dataset=facades --which_direction=0 --is_train=true
  • gpu_index: GPU index, default: 0
  • dataset: dataset name, one of [facades|maps], default: facades
  • which_direction: AtoB (0) or BtoA (1), default: 0 (AtoB)
  • batch_size: batch size for one feed forward, default: 1
  • is_train: training or inference mode, default: False
  • learning_rate: initial learning rate, default: 0.0002
  • beta1: momentum term of Adam, default: 0.5
  • iters: number of iterations, default: 200000
  • print_freq: print frequency for loss, default: 100
  • save_freq: save frequency for model, default: 20000
  • sample_freq: sample frequency for saving images, default: 500
  • sample_batch: sample size for checking generated image quality, default: 4
  • load_model: folder of saved model that you wish to test (e.g. 20180704-1736), default: None
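The flag list above can be mirrored with `argparse` as follows. This is an illustrative sketch only; the repository itself may define these flags differently (e.g. via `tf.app.flags`):

```python
import argparse

parser = argparse.ArgumentParser(description="pix2pix training options")
parser.add_argument("--gpu_index", type=int, default=0, help="GPU index")
parser.add_argument("--dataset", default="facades",
                    choices=["facades", "maps"], help="dataset name")
parser.add_argument("--which_direction", type=int, default=0,
                    help="AtoB (0) or BtoA (1)")
parser.add_argument("--batch_size", type=int, default=1)
parser.add_argument("--is_train", default="false",
                    help="training or inference mode")
parser.add_argument("--learning_rate", type=float, default=0.0002)
parser.add_argument("--beta1", type=float, default=0.5,
                    help="momentum term of Adam")
parser.add_argument("--iters", type=int, default=200000)
parser.add_argument("--print_freq", type=int, default=100)
parser.add_argument("--save_freq", type=int, default=20000)
parser.add_argument("--sample_freq", type=int, default=500)
parser.add_argument("--sample_batch", type=int, default=4)
parser.add_argument("--load_model", default=None,
                    help="folder of saved model to test")

# Parse the example command from above.
args = parser.parse_args(
    ["--dataset=facades", "--which_direction=0", "--is_train=true"])
```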

Evaluating pix2pix Model

Use main.py to evaluate a pix2pix model. Example usage:

python main.py --is_train=false --load_model=folder/you/wish/to/test/e.g./20180704-1746

Please refer to the above arguments.
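The checkpoint folder names in the examples (e.g. 20180704-1736) appear to follow a YYYYMMDD-HHMM timestamp pattern; assuming that format, the folder to pass to --load_model can be reconstructed from a training start time:

```python
from datetime import datetime


def model_folder(start_time):
    """Checkpoint folder name assuming the YYYYMMDD-HHMM pattern
    seen in this README's examples (an assumption, not the repo's
    documented behavior)."""
    return start_time.strftime("%Y%m%d-%H%M")
```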

Citation

  @misc{chengbinjin2018pix2pix,
    author = {Cheng-Bin Jin},
    title = {pix2pix tensorflow},
    year = {2018},
    howpublished = {\url{https://github.com/ChengBinJin/pix2pix-tensorflow}},
    note = {commit xxxxxxx}
  }

Attributions/Thanks

License

Copyright (c) 2018 Cheng-Bin Jin. Contact me for commercial use (or rather any use that is not academic research) (email: sbkim0407@gmail.com). Free for research use, as long as proper attribution is given and this copyright notice is retained.

Related Projects
