CoGAN can learn a joint distribution from samples drawn only from the marginal distributions. This is achieved by enforcing a weight-sharing constraint that limits the network capacity and favors a joint-distribution solution over a product of the marginals.
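The weight-sharing idea can be sketched in a few lines. This is a NumPy toy, not the repo's TensorFlow graph, and the layer sizes are made up: both generators decode the same latent code through a shared early layer (which ends up encoding the high-level semantics), and only their last, domain-specific layers differ.

```python
import numpy as np

# Toy illustration of CoGAN-style weight sharing (sizes are arbitrary).
rng = np.random.default_rng(0)
W_shared = rng.standard_normal((100, 64))  # shared by both generators
W_top = rng.standard_normal((64, 784))     # private to generator 1 (domain 1)
W_bot = rng.standard_normal((64, 784))     # private to generator 2 (domain 2)

def generate(z, W_private):
    h = np.tanh(z @ W_shared)    # shared layer: same high-level concept
    return np.tanh(h @ W_private)  # private layer: domain-specific rendering

z = rng.standard_normal((1, 100))      # one latent code ...
img_top = generate(z, W_top)           # ... produces two related images,
img_bot = generate(z, W_bot)           # one per domain
```

Because both outputs come from the same latent code through the same shared layer, they depict the same concept in two different styles, without ever seeing paired training data.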
The following figure shows the result reported in the paper:
- Note that all the natural images here are unpaired. In a nutshell, during training the inputs to the discriminators are not aligned.
- The experimental results on the UDA (unsupervised domain adaptation) problem are very impressive, which inspired me to implement this in TensorFlow.
- Python 2.7
First you have to clone this repo:
$ git clone https://github.com/andrewliao11/CoGAN-tensorflow.git
Download the data:
This step will automatically download the data under the current folder.
$ python download.py mnist
Preprocess (invert) the data:
$ python invert.py
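Conceptually, inverting an 8-bit grayscale image just flips each pixel value around 255. A minimal NumPy sketch of that operation (illustrative only, using a dummy gradient image; see invert.py for the actual preprocessing of the downloaded MNIST files):

```python
import numpy as np

# Illustrative only: invert an 8-bit grayscale image pixel-wise.
# A 16x16 gradient stands in for an MNIST digit here.
img = np.arange(256, dtype=np.uint8).reshape(16, 16)

inverted = 255 - img  # black <-> white; dtype stays uint8

# Inverting twice recovers the original image.
assert (255 - inverted == img).all()
```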
Train your CoGAN:
$ python main.py --is_train True
During the training process, you can see the average losses of the generators and the discriminators, which can help with debugging. After training, sample images are saved to
./samples/top and ./samples/bot, respectively.
To visualize the whole training process, you can use TensorBoard:
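For example (the log directory here is an assumption; point `--logdir` at whatever path the summary writer in main.py uses):

```shell
$ tensorboard --logdir=./logs
```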
We can see that without paired information, the network can generate two different images sharing the same high-level concepts.
Note: To avoid the D (discriminator) network converging too fast, the G (generator) network is updated twice for each D network update, which differs from the original paper.
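That schedule can be sketched as a plain Python loop; this is a framework-free stand-in for the actual TensorFlow optimizer steps in main.py, with counters in place of real updates:

```python
# Schematic sketch of the alternating update schedule (not the repo's code):
# two generator steps per discriminator step, so D can't race ahead of G.
counts = {"d": 0, "g": 0}

def d_step():
    counts["d"] += 1  # stands in for one discriminator optimizer step

def g_step():
    counts["g"] += 1  # stands in for one generator optimizer step

for step in range(1000):
    d_step()
    g_step()
    g_step()  # G is updated twice for each D update
```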
- Modify the network structure to get better results
- Try it on different datasets (WIP)
This code is heavily built on these repos: