Implement Coupled Generative Adversarial Networks in Tensorflow


Implement Coupled Generative Adversarial Networks, [NIPS 2016]
This implementation differs slightly from the original Caffe code; it mostly follows the model architecture design of DCGAN.

What's CoGAN?

CoGAN can learn a joint distribution from samples drawn only from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint-distribution solution over a product of the marginal distributions.
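The weight-sharing idea can be sketched in a few lines of NumPy (shapes and names here are illustrative assumptions, not the repo's actual code): both generators share the early layers, so the same noise vector maps to the same high-level concept, while domain-specific last layers decode it differently.

```python
import numpy as np

rng = np.random.default_rng(0)

# Shared weights for the early generator layers: these capture the
# high-level, domain-invariant structure of the joint distribution.
# (Hypothetical layer sizes, chosen only for illustration.)
W_shared = rng.standard_normal((100, 128))

# Domain-specific weights for the last layer of each generator:
# these decode the shared representation into each marginal domain.
W_top = rng.standard_normal((128, 784))   # e.g. regular MNIST digits
W_bot = rng.standard_normal((128, 784))   # e.g. inverted MNIST digits

def generator(z, W_last):
    h = np.tanh(z @ W_shared)   # shared layers: tied high-level concept
    return np.tanh(h @ W_last)  # domain-specific decoding

z = rng.standard_normal((1, 100))  # one noise vector...
x_top = generator(z, W_top)        # ...yields a corresponding *pair*
x_bot = generator(z, W_bot)        # of images, one per domain
```

Because `W_shared` is common to both generators, the two outputs are different renderings of the same underlying sample.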
The following figure shows the result reported in the paper:

  • Note that all the natural images here are unpaired. In a nutshell, at each training step the inputs to the discriminators are not aligned.
  • The experimental results on the UDA (unsupervised domain adaptation) problem are very impressive, which inspired me to implement this in Tensorflow.

The following image is the model architecture described in the paper:
Again: this repo does not currently follow the model architecture in the paper.


Prerequisites

  • Python 2.7
  • TensorFlow 0.12

Kick off

First you have to clone this repo:

$ git clone

Download the data:
This step will automatically download the data under the current folder.

$ python mnist

Preprocess (invert) the data:

$ python 
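The inversion itself amounts to flipping each grayscale pixel, `p -> 255 - p`, which turns white-on-black MNIST digits into black-on-white ones and gives CoGAN a second domain paired with the original. A toy sketch (not the repo's actual script):

```python
import numpy as np

# A tiny stand-in for a uint8 MNIST image (values in [0, 255]).
img = np.array([[0, 128, 255]], dtype=np.uint8)

# Invert: black pixels become white and vice versa.
inverted = 255 - img
```

Applying the inversion twice recovers the original image, so the two domains are related by a simple, deterministic transformation.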

Train your CoGAN:

$ python --is_train True

During the training process, you can see the average loss of the generators and the discriminators, which can help with debugging. After training, it will save some samples to ./samples/top and ./samples/bot, respectively.
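For reference, the printed averages are standard GAN losses, one discriminator/generator pair per domain (top and bottom). A minimal NumPy sketch of how such losses are computed from raw discriminator logits (this is the usual DCGAN-style formulation, not code copied from the repo):

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gan_losses(d_logits_real, d_logits_fake):
    """Standard GAN losses from raw discriminator logits."""
    d_real = sigmoid(d_logits_real)
    d_fake = sigmoid(d_logits_fake)
    # D wants real -> 1 and fake -> 0.
    d_loss = -np.mean(np.log(d_real) + np.log(1.0 - d_fake))
    # Non-saturating generator loss: G wants fake -> 1.
    g_loss = -np.mean(np.log(d_fake))
    return d_loss, g_loss

# Toy logits: D is fairly confident real is real and fake is fake.
d_loss, g_loss = gan_losses(np.array([2.0, 3.0]), np.array([-2.0, -1.0]))
```

In CoGAN the total objective is the sum of these per-domain losses, with the shared generator weights receiving gradients from both.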

To visualize the whole training process, you can use TensorBoard:

$ tensorboard --logdir=logs


  • model in 1st epoch

  • model in 5th epoch

  • model in 24th epoch

  • We can see that without paired information, the network can generate two different images with the same high-level concept.

  • Note: To avoid fast convergence of the D (discriminator) network, the G (generator) network is updated twice for each D network update, which differs from the original paper.
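The update schedule in that note can be sketched as a training-loop skeleton (`update_d` and `update_g` are hypothetical stand-ins for the real optimizer steps, not the repo's API):

```python
def train(num_iters, update_d, update_g):
    """Step G twice per D step, so D does not converge too fast."""
    for _ in range(num_iters):
        update_d()   # one discriminator update...
        update_g()   # ...followed by two generator updates
        update_g()

# Count calls to verify the 2:1 schedule.
calls = {"d": 0, "g": 0}
train(5,
      lambda: calls.__setitem__("d", calls["d"] + 1),
      lambda: calls.__setitem__("g", calls["g"] + 1))
```

Tuning this ratio (or using separate learning rates) is a common way to balance the two players in GAN training.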


TODO

  • Modify the network structure to get better results
  • Try it on different datasets (WIP)


This code is heavily built on these repos:
