Self-Attention-GAN-Tensorflow

Simple Tensorflow implementation of "Self-Attention Generative Adversarial Networks" (SAGAN)

Requirements

  • Tensorflow 1.8
  • Python 3.6
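
Assuming a standard pip setup, the pinned Tensorflow release can be installed with (use tensorflow-gpu==1.8.0 for GPU support):

> pip install tensorflow==1.8.0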

Summary

Framework

[Figure: SAGAN self-attention framework]

Code

    def attention(self, x, ch):
        f = conv(x, ch // 8, kernel=1, stride=1, sn=self.sn, scope='f_conv')  # key   [bs, h, w, c']
        g = conv(x, ch // 8, kernel=1, stride=1, sn=self.sn, scope='g_conv')  # query [bs, h, w, c']
        h = conv(x, ch, kernel=1, stride=1, sn=self.sn, scope='h_conv')       # value [bs, h, w, c]

        # N = h * w: flatten spatial dimensions and compute pairwise similarities
        s = tf.matmul(hw_flatten(g), hw_flatten(f), transpose_b=True)  # [bs, N, N]

        beta = tf.nn.softmax(s, axis=-1)  # attention map

        o = tf.matmul(beta, hw_flatten(h))  # [bs, N, C]

        # gamma is learned and initialized to 0, so the block starts as an identity mapping
        gamma = tf.get_variable("gamma", [1], initializer=tf.constant_initializer(0.0))

        o = tf.reshape(o, shape=x.shape)  # [bs, h, w, C]
        x = gamma * o + x  # residual connection

        return x
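
For reference, here is a self-contained sketch of the same computation using only stock Tensorflow 1.x ops, without the conv and hw_flatten helpers from ops.py. The function name and layer names are illustrative, not this repository's API, and statically known spatial dimensions are assumed:

    import tensorflow as tf

    def self_attention_sketch(x, ch):
        # x: [bs, h, w, ch] feature map with statically known h, w
        h, w = int(x.shape[1]), int(x.shape[2])

        f = tf.layers.conv2d(x, ch // 8, 1, name='f')  # key   [bs, h, w, ch//8]
        g = tf.layers.conv2d(x, ch // 8, 1, name='g')  # query [bs, h, w, ch//8]
        v = tf.layers.conv2d(x, ch, 1, name='v')       # value [bs, h, w, ch]

        def flatten(t):  # [bs, h, w, c] -> [bs, N, c] with N = h * w
            return tf.reshape(t, [-1, h * w, int(t.shape[-1])])

        s = tf.matmul(flatten(g), flatten(f), transpose_b=True)  # [bs, N, N]
        beta = tf.nn.softmax(s, axis=-1)                         # attention map
        o = tf.matmul(beta, flatten(v))                          # [bs, N, ch]

        gamma = tf.get_variable('gamma', [1], initializer=tf.zeros_initializer())
        return gamma * tf.reshape(o, [-1, h, w, ch]) + x  # starts as identity (gamma == 0)

In the paper, this module is inserted on mid-to-high resolution feature maps of both the generator and the discriminator, and spectral normalization (the sn flag above) is applied to both networks.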

Usage

dataset

> python download.py celebA
  • mnist and cifar10 are loaded automatically through keras, so no download is needed
  • For your dataset, put images like this:
├── dataset
   └── YOUR_DATASET_NAME
       ├── xxx.jpg (name and format don't matter)
       ├── yyy.png
       └── ...
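
The loader only needs to find the image files under that folder; a rough sketch of the equivalent lookup (the actual loading code lives in utils.py, and YOUR_DATASET_NAME is presumably the value passed to --dataset):

    import os
    from glob import glob

    # Collect every file under dataset/<name>; as noted above,
    # file names and formats don't matter.
    def list_dataset_images(name):
        return glob(os.path.join('dataset', name, '*.*'))

    files = list_dataset_images('YOUR_DATASET_NAME')  # illustrative name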

train

  • python main.py --phase train --dataset celebA --gan_type hinge
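
With --gan_type hinge, the adversarial objective is the hinge loss used in the SAGAN paper. A minimal sketch of those losses (tensor names are illustrative; the repository's own implementation may differ in detail):

    import tensorflow as tf

    # real_logits / fake_logits: raw discriminator outputs on real and generated images
    def d_hinge_loss(real_logits, fake_logits):
        real_loss = tf.reduce_mean(tf.nn.relu(1.0 - real_logits))
        fake_loss = tf.reduce_mean(tf.nn.relu(1.0 + fake_logits))
        return real_loss + fake_loss

    def g_hinge_loss(fake_logits):
        return -tf.reduce_mean(fake_logits)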

test

  • python main.py --phase test --dataset celebA --gan_type hinge

Results

ImageNet

[ImageNet result images]

CelebA (100k iterations, hinge loss)

[CelebA result images]

Author

Junho Kim