STGAN (CVPR 2019)

Tensorflow implementation of STGAN: A Unified Selective Transfer Network for Arbitrary Image Attribute Editing


Overall architecture of our STGAN. Taking the image above as an example, in the difference attribute vector $\mathbf{att}_\mathit{diff}$, $Young$ is set to 1, $Mouth\ Open$ is set to -1, and all others are set to zero. The outputs of $\mathit{D_{adv}}$ and $\mathit{D_{att}}$ are the scalar $\mathit{D_{adv}}(\mathit{G}(\mathbf{x}, \mathbf{att}_\mathit{diff}))$ and the vector $\mathit{D_{att}}(\mathit{G}(\mathbf{x}, \mathbf{att}_\mathit{diff}))$, respectively.
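The difference attribute vector described above is simply the element-wise difference between the target and source attribute vectors. A minimal sketch (the attribute subset and helper name below are illustrative; the repo computes this inside its training and testing scripts):

```python
import numpy as np

# Illustrative attribute subset; the real ordering follows list_attr_celeba.txt.
ATTS = ["Bald", "Bangs", "Mouth_Slightly_Open", "Young"]

def diff_attribute_vector(src_att, tgt_att):
    """att_diff = target - source: 1 adds an attribute, -1 removes it, 0 keeps it."""
    return np.asarray(tgt_att, np.float32) - np.asarray(src_att, np.float32)

# As in the figure caption: set Young to 1 and Mouth Open to -1.
src = [0, 0, 1, 0]  # mouth open, not young
tgt = [0, 0, 0, 1]  # mouth closed, young
att_diff = diff_attribute_vector(src, tgt)  # [0, 0, -1, 1]
```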

Exemplar Results

  • See results.md for more results

  • Facial attribute editing results


    Facial attribute editing results on the CelebA dataset. The rows from top to bottom show the results of IcGAN, FaderNet, AttGAN, StarGAN and STGAN.


    High resolution ($384\times384$) results of STGAN for facial attribute editing.

  • Image translation results


    Results of season translation: the top two rows are $summer \rightarrow winter$, and the bottom two rows are $winter \rightarrow summer$.

Preparation

  • Prerequisites

    • Tensorflow (r1.4 - r1.12 should work fine)
    • Python 3.x with matplotlib, numpy and scipy
  • Dataset

    • CelebA dataset (Find more details from the project page)
      • Images should be placed in DATAROOT/img_align_celeba/*.jpg
      • Attribute labels should be placed in DATAROOT/list_attr_celeba.txt
      • If Google Drive is unreachable, you can get the data from Baidu Cloud
    • We follow the settings of AttGAN; kindly refer to AttGAN for more details on dataset preparation
  • Pre-trained model
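The CelebA attribute file referenced above lists one image per line with 40 values in $\{-1, 1\}$. As a rough sketch of its layout (the repo's data.py handles the real loading; the parser below is illustrative):

```python
# list_attr_celeba.txt layout, per the CelebA release:
#   line 1: number of images
#   line 2: attribute names
#   remaining lines: filename followed by 40 values in {-1, 1}
def parse_attr_file(lines):
    n = int(lines[0])
    att_names = lines[1].split()
    labels = {}
    for line in lines[2:2 + n]:
        parts = line.split()
        labels[parts[0]] = [int(v) for v in parts[1:]]
    return att_names, labels

# Tiny two-image, two-attribute example for illustration.
sample = [
    "2",
    "Bald Young",
    "000001.jpg -1 1",
    "000002.jpg 1 -1",
]
names, labels = parse_attr_file(sample)  # names == ['Bald', 'Young']
```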

Quick Start

Exemplar commands are listed here for a quick start.

Training

  • for 128x128 images

    python train.py --experiment_name 128
  • for 384x384 images (please prepare data according to HD-CelebA)

    python train.py --experiment_name 384 --img_size 384 --enc_dim 48 --dec_dim 48 --dis_dim 48 --dis_fc_dim 512 --n_sample 24 --use_cropped_img

Testing

  • Example of testing single attribute

    python test.py --experiment_name 128 [--test_int 1.0]
  • Example of testing multiple attributes

    python test.py --experiment_name 128 --test_atts Pale_Skin Male [--test_ints 1.0 1.0]
  • Example of attribute intensity control

    python test.py --experiment_name 128 --test_slide --test_att Male [--test_int_min -1.0 --test_int_max 1.0 --n_slide 10]

The arguments in [] are optional with a default value.
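For the intensity-control mode, one plausible way to generate the slide values is an even spacing between --test_int_min and --test_int_max over --n_slide steps (the actual logic lives in test.py; this is only a sketch):

```python
def slide_intensities(int_min=-1.0, int_max=1.0, n_slide=10):
    """Evenly spaced attribute intensities from int_min to int_max, inclusive.

    Defaults mirror --test_int_min, --test_int_max and --n_slide above.
    """
    step = (int_max - int_min) / (n_slide - 1)
    return [int_min + i * step for i in range(n_slide)]

intensities = slide_intensities()  # 10 values from -1.0 to 1.0
```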

View Images

You can use show_images.py to view the generated images. The code has been tested on Windows 10 and Ubuntu 16.04 (Python 3.6). To change the width of the buttons at the bottom, edit the width parameter on line 160. The '+++' and '---' on a button indicate that the image above it was modified to add or remove the attribute, respectively. Note that you should specify the path of the CelebA attribute file (list_attr_celeba.txt) on line 82.

NOTE:

  • You should give the path of the data by adding --dataroot DATAROOT;
  • You can specify which GPU to use by adding --gpu GPU, e.g., --gpu 0;
  • You can specify which image(s) to test by adding --img num (e.g., --img 182638, --img 200000 200001 200002), where the number should be no larger than 202599 and is suggested to be no smaller than 182638 as our test set starts at 182638.png.
  • You can modify the model by using the following arguments
    • --label: 'diff'(default) for the difference attribute vector, 'target' for the target attribute vector
    • --stu_norm: 'none'(default), 'bn' or 'in' for adding no/batch/instance normalization in STUs
    • --mode: 'wgan'(default), 'lsgan' or 'dcgan' for different GAN losses
    • For more arguments, please refer to train.py

AttGAN

  • Train with the AttGAN model by

    python train.py --experiment_name attgan_128 --use_stu false --shortcut_layers 1 --inject_layers 1

Citation

If you find STGAN useful in your research work, please consider citing:

@InProceedings{liu2019stgan,
  title={STGAN: A Unified Selective Transfer Network for Arbitrary Image Attribute Editing},
  author={Liu, Ming and Ding, Yukang and Xia, Min and Liu, Xiao and Ding, Errui and Zuo, Wangmeng and Wen, Shilei},
  booktitle={IEEE Conference on Computer Vision and Pattern Recognition (CVPR)},
  year={2019}
}

Acknowledgement

The code is built upon AttGAN, thanks for their excellent work!
