Conditional generative adversarial networks for convolutional face generation

Conditional Generative Adversarial Networks

Demonstration of deterministic control of image samples. We tweak conditional information to first make the sampled faces age, then again to make them smile.

The code in this repository implements the conditional generative adversarial network (cGAN) described in my March 2015 paper:

Conditional generative adversarial networks for convolutional face generation. Jon Gauthier. March 2015.

This code is a fork of the original GAN repository. The original GAN model is described in the paper:

Generative Adversarial Networks. Ian J. Goodfellow, Jean Pouget-Abadie, Mehdi Mirza, Bing Xu, David Warde-Farley, Sherjil Ozair, Aaron Courville, Yoshua Bengio. arXiv, 2014.
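For reference, the two-player objective from the GAN paper extends to the conditional setting by giving both the generator and the discriminator access to the conditioning information y (a sketch of the standard formulation, not a quote from either paper):

\min_G \max_D \; \mathbb{E}_{(x,y) \sim p_{\text{data}}}\left[\log D(x, y)\right] \;+\; \mathbb{E}_{z \sim p_z,\, y \sim p_y}\left[\log\left(1 - D(G(z, y), y)\right)\right]

Here z is the noise input, y is the conditional embedding (e.g. a face-attribute vector), G(z, y) is the conditional generator, and D(x, y) is the conditional discriminator.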

Guide to the code / usage instructions

This code is built on the Pylearn2 framework for machine learning. The abstract model structures are implemented as Python classes (see e.g. the ConditionalAdversaryPair class), and concrete model instantiations / training configurations are described in YAML files (see e.g. a file for training with LFW data).

You can perform your own training runs using these YAML files. The paths in the YAML files reference my own local data; you'll need to download the LFW dataset and change these paths yourself. The "file-list" and embedding files referenced in the YAML files are available for LFW in the data/lfwcrop_color folder. Once you have updated the paths in the YAML file, you can start training a model by simply invoking Pylearn2's train.py script on it, e.g. on models/lfwcrop_convolutional_conditional.yaml
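As a minimal sketch, the training invocation can also be assembled programmatically (assuming Pylearn2 is installed and its train.py script, which ships in pylearn2/scripts/, is on your PATH):

```python
def train_command(yaml_path):
    """Build the Pylearn2 train.py invocation for a given YAML config.

    train.py ships with Pylearn2 (pylearn2/scripts/train.py); this sketch
    assumes it is on your PATH. The YAML path used below is the LFW config
    from this repository.
    """
    return ["train.py", yaml_path]

cmd = train_command("models/lfwcrop_convolutional_conditional.yaml")
print(" ".join(cmd))
# To actually launch training: subprocess.run(cmd, check=True)
```

Running the printed command starts a full training run, so make sure the dataset paths in the YAML file are fixed first.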


The sampler folder contains various GAN sampling scripts that help visualize trained models. Some highlights are listed below (see the head of the linked source files for descriptions).
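The conditional-tweaking demo shown above (aging or smiling a fixed face) boils down to holding the noise vector fixed and sweeping the conditional embedding. Here is a runnable NumPy sketch of that idea; the generator is a stand-in linear map and the attribute embeddings are random placeholders, whereas the real sampler scripts load the trained Pylearn2 model:

```python
import numpy as np

# Stand-in generator: any deterministic function of (noise z, condition y).
# In the real samplers this is the trained cGAN generator network.
rng = np.random.RandomState(0)
noise_dim, embed_dim, image_dim = 100, 32, 32 * 32 * 3
W_z = rng.randn(noise_dim, image_dim) * 0.01
W_y = rng.randn(embed_dim, image_dim) * 0.01

def generator(z, y):
    """Toy conditional generator mapping (z, y) to a flat 'image'."""
    return np.tanh(z.dot(W_z) + y.dot(W_y))

z = rng.randn(1, noise_dim)        # fixed noise: keeps the same "identity"
y_young = rng.randn(1, embed_dim)  # placeholder attribute embeddings
y_old = rng.randn(1, embed_dim)

# Sweep the condition from one attribute to the other while z stays fixed;
# each frame is one sample along the "aging" trajectory.
frames = [generator(z, (1 - t) * y_young + t * y_old)
          for t in np.linspace(0.0, 1.0, 5)]
print(len(frames), frames[0].shape)
```

The smiling demo is the same trick with a different pair of attribute embeddings.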


Dependencies

  • NumPy
  • Theano
  • Pylearn2