Pose-Transfer

Code for the CVPR 2019 paper Progressive Pose Attention Transfer for Person Image Generation. The paper is available here. A video demo is coming soon.

This is the PyTorch implementation of pose transfer on both the Market-1501 and DeepFashion datasets. The code is written by Tengteng Huang and Zhen Zhu.

Requirements

  • pytorch 0.3.1
  • torchvision
  • numpy
  • scipy
  • scikit-image
  • pillow
  • pandas
  • tqdm
  • dominate
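
The Python packages above (other than PyTorch itself) can usually be installed in one step; the PyTorch 0.3.1 wheel and a matching torchvision depend on your CUDA version, so install those following the official PyTorch instructions. A minimal sketch, assuming a working pip environment:

pip install numpy scipy scikit-image pillow pandas tqdm dominate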

Getting Started

Installation

  • Clone this repo:
git clone https://github.com/tengteng95/Pose-Transfer.git
cd Pose-Transfer

Data Preparation

Market-1501

  • Download the Market-1501 dataset from here. Rename bounding_box_train and bounding_box_test to train and test, and put them under the market_data directory.
  • Download the train/test splits and the train/test key-point annotations from here, i.e. market-pairs-train.csv, market-pairs-test.csv, market-annotation-train.csv, and market-annotation-test.csv. Put these four files under the market_data directory.
  • Launch python tool/generate_pose_map_market.py to generate the pose heatmaps.
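
For orientation, each of the 18 key points is encoded as one heatmap channel. Below is a minimal sketch of that idea; it is not the repository's tool script, and the image size, sigma, and missing-joint convention are illustrative assumptions:

import numpy as np

def keypoint_to_heatmap(x, y, height, width, sigma=6.0):
    # Render a single (x, y) key point as a 2-D Gaussian heatmap channel.
    if x < 0 or y < 0:  # assume missing joints are marked with negative coordinates
        return np.zeros((height, width), dtype=np.float32)
    xs, ys = np.meshgrid(np.arange(width), np.arange(height))
    heatmap = np.exp(-((xs - x) ** 2 + (ys - y) ** 2) / (2.0 * sigma ** 2))
    return heatmap.astype(np.float32)

# Stacking one map per joint gives the 18-channel pose input
# (matching --BP_input_nc 18 in the commands below).
pose = np.stack([keypoint_to_heatmap(32, 64, 128, 64) for _ in range(18)])
print(pose.shape)  # (18, 128, 64) for Market-1501's 128x64 images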

DeepFashion

  • Download the DeepFashion dataset from here. Unzip train.zip and test.zip into the fashion_data directory.
  • Download the train/test splits and the train/test key-point annotations from here, i.e. fasion-resize-pairs-train.csv, fasion-resize-pairs-test.csv, fasion-resize-annotation-train.csv, and fasion-resize-annotation-test.csv (note that "fasion" is the spelling used in the released files). Put these four files under the fashion_data directory.
  • Launch python tool/generate_pose_map_fashion.py to generate the pose heatmaps.
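
After these steps, the fashion_data directory is expected to look roughly like the layout below; the exact nesting inside train/ and test/ depends on the released archives and is an assumption here:

fashion_data/
├── train/                             # from train.zip
├── test/                              # from test.zip
├── fasion-resize-pairs-train.csv
├── fasion-resize-pairs-test.csv
├── fasion-resize-annotation-train.csv
└── fasion-resize-annotation-test.csv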

Train a model

Market-1501

python train.py --dataroot ./market_data/ --name market_PATN --model PATN --lambda_GAN 5 --lambda_A 10  --lambda_B 10 --dataset_mode keypoint --no_lsgan --n_layers 3 --norm batch --batchSize 32 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0
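
Checkpoints for this run are written under ./checkpoints/market_PATN/ (determined by --checkpoints_dir and --name). If the training options follow the pytorch-CycleGAN-and-pix2pix conventions this code builds on, an interrupted run can typically be resumed by re-issuing the same command with --continue_train added; that flag is an assumption here, so confirm it in options/train_options.py first.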

DeepFashion

python train.py --dataroot ./fashion_data/ --name fashion_PATN --model PATN --lambda_GAN 5 --lambda_A 1 --lambda_B 1 --dataset_mode keypoint --n_layers 3 --norm instance --batchSize 7 --pool_size 0 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --niter 500 --niter_decay 200 --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-train.csv --L1_type l1_plus_perL1 --n_layers_D 3 --with_D_PP 1 --with_D_PB 1  --display_id 0

Test the model

Market-1501

python test.py --dataroot ./market_data/ --name market_PATN_test --model PATN --phase test --dataset_mode keypoint --norm batch --batchSize 1 --resize_or_crop no --gpu_ids 2 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./market_data/market-pairs-test.csv --which_epoch latest --results_dir ./results

DeepFashion

python test.py --dataroot ./fashion_data/ --name fashion_PATN_test --model PATN --phase test --dataset_mode keypoint --norm instance --batchSize 1 --resize_or_crop no --gpu_ids 0 --BP_input_nc 18 --no_flip --which_model_netG PATN --checkpoints_dir ./checkpoints --pairLst ./fashion_data/fasion-resize-pairs-test.csv --which_epoch latest --results_dir ./results
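
Once test images have been generated, scikit-image (already in the requirements) can be used to score them. A minimal sketch, assuming generated and ground-truth images live in two directories with matching filenames; the actual layout under ./results may differ:

import os
import numpy as np
from skimage.io import imread
from skimage.measure import compare_ssim  # skimage <= 0.17; newer releases moved this to skimage.metrics.structural_similarity

def mean_ssim(generated_dir, target_dir):
    # Average SSIM over image pairs that share a filename in both directories.
    scores = []
    for name in sorted(os.listdir(generated_dir)):
        gen = imread(os.path.join(generated_dir, name))
        tgt = imread(os.path.join(target_dir, name))
        scores.append(compare_ssim(gen, tgt, multichannel=True))
    return float(np.mean(scores))

print(mean_ssim('./results/generated', './results/target'))  # hypothetical paths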

Pre-trained model

Our pre-trained model can be downloaded here.
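
If checkpoint loading follows the pytorch-CycleGAN-and-pix2pix conventions, the test commands above look for weights under ./checkpoints/<name>/ and load the epoch named by --which_epoch (latest in the examples). A plausible layout after downloading, with the exact checkpoint filenames depending on the released archive:

checkpoints/
├── market_PATN_test/    # weights used by the Market-1501 test command
└── fashion_PATN_test/   # weights used by the DeepFashion test command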

Citation

If you use this code for your research, please cite our paper:

@article{zhu2019progressive,
  title={Progressive Pose Attention Transfer for Person Image Generation},
  author={Zhu, Zhen and Huang, Tengteng and Shi, Baoguang and Yu, Miao and Wang, Bofei and Bai, Xiang},
  journal={arXiv preprint arXiv:1904.03349},
  year={2019}
}

Acknowledgments

Our code is based on the popular pytorch-CycleGAN-and-pix2pix repository.
