This repository provides a PyTorch implementation of SAGAN. Both the WGAN-GP and the WGAN-hinge loss are implemented, but note that WGAN-GP does not seem to be compatible with spectral normalization; to adopt WGAN-GP, remove all spectral normalization from the model.
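For illustration, here is how spectral normalization is typically attached to, and stripped from, a convolutional layer. The repository ships its own spectral-norm code; the `torch.nn.utils.spectral_norm` wrapper below is a stand-in used only to keep the example self-contained.

```python
import torch.nn as nn
from torch.nn.utils import spectral_norm

# With spectral normalization -- fine for the hinge loss setup:
conv_sn = spectral_norm(nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1))

# Without it -- what each layer should look like when adopting WGAN-GP:
conv = nn.Conv2d(64, 128, kernel_size=4, stride=2, padding=1)
```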
Self-attention is applied to the last two layers of both the discriminator and the generator.
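For orientation, below is a minimal sketch of a SAGAN-style self-attention block. The reference implementation lives in 'sagan_models.py'; the class and variable names here are illustrative and may differ from the repository's.

```python
import torch
import torch.nn as nn

class SelfAttn(nn.Module):
    """SAGAN-style self-attention: 1x1 convs, softmax attention, residual blend."""
    def __init__(self, in_dim):
        super().__init__()
        self.query = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.key = nn.Conv2d(in_dim, in_dim // 8, kernel_size=1)
        self.value = nn.Conv2d(in_dim, in_dim, kernel_size=1)
        self.gamma = nn.Parameter(torch.zeros(1))  # learned blend weight, starts at 0

    def forward(self, x):
        b, c, h, w = x.size()
        n = h * w
        q = self.query(x).view(b, -1, n).permute(0, 2, 1)   # B x N x C'
        k = self.key(x).view(b, -1, n)                      # B x C' x N
        attn = torch.softmax(torch.bmm(q, k), dim=-1)       # B x N x N
        v = self.value(x).view(b, -1, n)                    # B x C x N
        out = torch.bmm(v, attn.permute(0, 2, 1))           # B x C x N
        return self.gamma * out.view(b, c, h, w) + x        # residual connection
```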
Current update status
- Supervised setting
- TensorBoard logging
- Updated self-attention module (thanks to my colleague Cheonbok Park!); see 'sagan_models.py' for the update. It should be more efficient and run on larger images.
- Attention visualization (LSUN Church-outdoor)
- Unsupervised setting (no labels used yet)
- Applied: Spectral Normalization, code from here
- Implemented: self-attention module, two-timescale update rule (TTUR), wgan-hinge loss, wgan-gp loss
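For reference, the following is a minimal sketch of the hinge adversarial loss and a TTUR optimizer setup. The two learning rates (0.0001 for G, 0.0004 for D) follow the SAGAN paper; the `generator` and `discriminator` modules below are placeholders, not the repository's models.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

def d_hinge_loss(d_real, d_fake):
    # Discriminator hinge loss: push real scores above +1, fake scores below -1.
    return F.relu(1.0 - d_real).mean() + F.relu(1.0 + d_fake).mean()

def g_hinge_loss(d_fake):
    # Generator hinge loss: maximize the discriminator's score on fakes.
    return -d_fake.mean()

# TTUR: update D with a larger learning rate than G.
generator = nn.Linear(128, 1)     # placeholder model
discriminator = nn.Linear(1, 1)   # placeholder model
g_opt = torch.optim.Adam(generator.parameters(), lr=1e-4, betas=(0.0, 0.9))
d_opt = torch.optim.Adam(discriminator.parameters(), lr=4e-4, betas=(0.0, 0.9))
```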
Attention result on LSUN (epoch #8)
CelebA dataset (epoch on the left, still under training)
LSUN church-outdoor dataset (epoch on the left, still under training)
1. Clone the repository
$ git clone https://github.com/heykeetae/Self-Attention-GAN.git
$ cd Self-Attention-GAN
2. Download the dataset (CelebA or LSUN)
$ bash download.sh CelebA
or
$ bash download.sh LSUN
3. Train (CelebA or LSUN)
$ python main.py --batch_size 64 --imsize 64 --dataset celeb --adv_loss hinge --version sagan_celeb
or
$ python main.py --batch_size 64 --imsize 64 --dataset lsun --adv_loss hinge --version sagan_lsun
4. Enjoy the results
$ cd samples/sagan_celeb
or
$ cd samples/sagan_lsun
Generated samples are saved there every 100 iterations by default. The sampling rate can be controlled via --sample_step (e.g., --sample_step 100).