

BiGraphGAN

Project | Paper

Bipartite Graph Reasoning GANs for Person Image Generation
Hao Tang<sup>1,2</sup>, Song Bai<sup>2</sup>, Philip H.S. Torr<sup>2</sup>, Nicu Sebe<sup>1,3</sup>.
<sup>1</sup>University of Trento, Italy, <sup>2</sup>University of Oxford, UK, <sup>3</sup>Huawei Research Ireland, Ireland.
In BMVC 2020 Oral.
This repository offers the official PyTorch implementation of our paper.

Also, check out our related ECCV 2020 paper, XingGAN for Person Image Generation.

Motivation

Framework

Comparison Results


License

Copyright (C) 2020 University of Trento, Italy.

All rights reserved. Licensed under the CC BY-NC-SA 4.0 (Attribution-NonCommercial-ShareAlike 4.0 International)

The code is released for academic research use only. For commercial use, please contact hao.tang@unitn.it.

Installation

Clone this repo.

git clone https://github.com/Ha0Tang/BiGraphGAN
cd BiGraphGAN/

This code requires PyTorch 1.0.0 and Python 3.6.9+. Please install the following dependencies:

  • pytorch 1.0.0
  • torchvision
  • numpy
  • scipy
  • scikit-image
  • pillow
  • pandas
  • tqdm
  • dominate

To reproduce the results reported in the paper, you need to run the experiments on an NVIDIA DGX-1 with four 32GB V100 GPUs for DeepFashion, and a single 32GB V100 GPU for Market-1501.
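Before running any of the scripts, it can help to confirm that the dependencies listed above are importable in the current environment. A minimal sketch (note that `skimage` and `PIL` are the import names of scikit-image and pillow):

```python
import importlib.util

def missing_packages(required):
    """Return the subset of `required` whose import spec cannot be found."""
    return [pkg for pkg in required if importlib.util.find_spec(pkg) is None]

# Package list mirrors the dependency list above.
REQUIRED = ["torch", "torchvision", "numpy", "scipy",
            "skimage", "PIL", "pandas", "tqdm", "dominate"]

if __name__ == "__main__":
    print("missing:", missing_packages(REQUIRED) or "none")
```

If anything is reported missing, install it with pip or conda before proceeding.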

Dataset Preparation

Please follow SelectionGAN to directly download both Market-1501 and DeepFashion datasets.

This repository uses the same dataset format as SelectionGAN and XingGAN, so you can use the same data for all of these methods.

Generating Images Using Pretrained Model

Market-1501

cd scripts/
sh download_bigraphgan_model.sh market
cd ..
cd market_1501/

Then,

  1. Change several parameters in test_market_pretrained.sh.
  2. Run sh test_market_pretrained.sh for testing.

DeepFashion

cd scripts/
sh download_bigraphgan_model.sh deepfashion
cd ..
cd deepfashion/

Then,

  1. Change several parameters in test_deepfashion_pretrained.sh.
  2. Run sh test_deepfashion_pretrained.sh for testing.

Train and Test New Models

Market-1501

  1. Go to the market_1501 folder.
  2. Change several parameters in train_market.sh.
  3. Run sh train_market.sh for training.
  4. Change several parameters in test_market.sh.
  5. Run sh test_market.sh for testing.

DeepFashion

  1. Go to the deepfashion folder.
  2. Change several parameters in train_deepfashion.sh.
  3. Run sh train_deepfashion.sh for training.
  4. Change several parameters in test_deepfashion.sh.
  5. Run sh test_deepfashion.sh for testing.

Download Images Produced by the Authors

For your convenience, you can directly download the images produced by the authors for qualitative comparisons in your own papers.

Market-1501

cd scripts/
sh download_bigraphgan_result.sh market

DeepFashion

cd scripts/
sh download_bigraphgan_result.sh deepfashion

Evaluation

We adopt SSIM, mask-SSIM, IS, mask-IS, and PCKh to evaluate Market-1501, and SSIM, IS, and PCKh to evaluate DeepFashion. Please refer to Pose-Transfer for more details.

Acknowledgments

This source code is inspired by both Pose-Transfer and SelectionGAN.

Related Projects

XingGAN | GestureGAN | C2GAN | SelectionGAN | Guided-I2I-Translation-Papers

Citation

If you use this code for your research, please cite our papers.

BiGraphGAN

@inproceedings{tang2020bipartite,
  title={Bipartite Graph Reasoning GANs for Person Image Generation},
  author={Tang, Hao and Bai, Song and Torr, Philip HS and Sebe, Nicu},
  booktitle={BMVC},
  year={2020}
}

If you use the original XingGAN, GestureGAN, C2GAN, or SelectionGAN models, please cite the corresponding papers:

XingGAN

@inproceedings{tang2020xinggan,
  title={XingGAN for Person Image Generation},
  author={Tang, Hao and Bai, Song and Zhang, Li and Torr, Philip HS and Sebe, Nicu},
  booktitle={ECCV},
  year={2020}
}

GestureGAN

@article{tang2019unified,
  title={Unified Generative Adversarial Networks for Controllable Image-to-Image Translation},
  author={Tang, Hao and Liu, Hong and Sebe, Nicu},
  journal={IEEE Transactions on Image Processing (TIP)},
  year={2020}
}

@inproceedings{tang2018gesturegan,
  title={GestureGAN for Hand Gesture-to-Gesture Translation in the Wild},
  author={Tang, Hao and Wang, Wei and Xu, Dan and Yan, Yan and Sebe, Nicu},
  booktitle={ACM MM},
  year={2018}
}

C2GAN

@inproceedings{tang2019cycleincycle,
  title={Cycle In Cycle Generative Adversarial Networks for Keypoint-Guided Image Generation},
  author={Tang, Hao and Xu, Dan and Liu, Gaowen and Wang, Wei and Sebe, Nicu and Yan, Yan},
  booktitle={ACM MM},
  year={2019}
}

SelectionGAN

@inproceedings{tang2019multi,
  title={Multi-Channel Attention Selection GAN with Cascaded Semantic Guidance for Cross-View Image Translation},
  author={Tang, Hao and Xu, Dan and Sebe, Nicu and Wang, Yanzhi and Corso, Jason J and Yan, Yan},
  booktitle={CVPR},
  year={2019}
}

@article{tang2020multi,
  title={Multi-Channel Attention Selection GANs for Guided Image-to-Image Translation},
  author={Tang, Hao and Xu, Dan and Yan, Yan and Corso, Jason J and Torr, Philip HS and Sebe, Nicu},
  journal={arXiv preprint arXiv:2002.01048},
  year={2020}
}

Contributions

If you have any questions, comments, or bug reports, feel free to open a GitHub issue, submit a pull request, or e-mail the author Hao Tang (hao.tang@unitn.it).
