# Pedestrian-Synthesis-GAN

Code for the paper [Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond](https://arxiv.org/abs/1804.02047) (arXiv:1804.02047).


## Preparing

Prepare your data before training. Your data should follow the format of the sample files in `datasets`.
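As a hedged sketch of what "aligned" data preparation can look like: pix2pix-style aligned loaders typically expect each training sample to be a single image with the input (A) and target (B) concatenated side by side. The function below is a hypothetical helper, not part of this repo; confirm the exact layout (which side is A, image size) against the sample files in `datasets`.

```python
from PIL import Image

def make_aligned_pair(img_a_path, img_b_path, out_path, size=256):
    """Concatenate two images side by side into one aligned A|B pair.

    NOTE: A-on-the-left / B-on-the-right follows the common pix2pix
    "aligned" convention; verify against the repo's `datasets` samples.
    """
    a = Image.open(img_a_path).convert("RGB").resize((size, size))
    b = Image.open(img_b_path).convert("RGB").resize((size, size))
    # The combined pair is twice as wide as a single image.
    pair = Image.new("RGB", (2 * size, size))
    pair.paste(a, (0, 0))
    pair.paste(b, (size, 0))
    pair.save(out_path)
    return pair
```

Run this once per image pair to build the training set before pointing `--dataroot` at the resulting folder.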

## Training stage

```shell
python train.py --dataroot data_path --name model_name --model pix2pix --which_model_netG unet_256 --which_direction BtoA --lambda_A 100 --dataset_mode aligned --use_spp --no_lsgan --norm batch
```
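The directory passed as `--dataroot` needs a split structure the aligned loader can find. As an assumption based on the usual pix2pix convention (`my_data` is a placeholder name; verify against the repo's `datasets` folder), the layout looks like:

```shell
# Hypothetical dataroot layout: train/val/test splits directly under
# the folder passed as --dataroot, each holding aligned A|B pair images.
mkdir -p my_data/train my_data/val my_data/test
# place pair images in each split, e.g. my_data/train/0001.jpg
ls my_data
```

Then train with `--dataroot my_data`.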

## Testing stage

```shell
python test.py --dataroot data_path --name model_name --model pix2pix --which_model_netG unet_256 --which_direction BtoA --dataset_mode aligned --use_spp --norm batch
```

## Visualization

Run `python -m visdom.server` and open the reported URL in a browser to monitor the training process.

## Citation

If you find this work useful for your research, please cite:

```
@article{ouyang2018pedestrian,
  title={Pedestrian-Synthesis-GAN: Generating Pedestrian Data in Real Scene and Beyond},
  author={Ouyang, Xi and Cheng, Yu and Jiang, Yifan and Li, Chun-Liang and Zhou, Pan},
  journal={arXiv preprint arXiv:1804.02047},
  year={2018}
}
```

## Acknowledgments

This code borrows heavily from pix2pix.