Generating Faces with Deconvolution Networks

Example generations

This repo contains code to train and interface with a deconvolution network, adapted from this paper, to generate faces using data from the Radboud Faces Database. It requires Python 3 with Keras, NumPy, SciPy, and tqdm.

A blog post describing this project can be found here.

Training New Models

To train a new model, simply run:

python3 train path/to/data

You can specify the number of deconvolution layers with -d to generate larger images, assuming your GPU has the memory for it. If the model doesn't fit in memory, you can adjust the batch size and the number of kernels per layer (with -b and -k, respectively) until it does, although this may degrade output quality or lengthen training.

Using 6 deconvolution layers with a batch size of 8 and the default number of kernels per layer, a model was trained on an Nvidia Titan X card (12 GB) to generate 512x640 images in a little over a day.
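The relationship between the number of deconvolution layers and output resolution can be sketched as follows. This is a minimal illustration assuming each layer upsamples by a factor of 2; the 8x10 base feature map is a hypothetical starting size chosen to match the figures above, not a value taken from this repo:

```python
def output_size(base_h, base_w, num_deconv_layers, stride=2):
    """Each stride-2 deconvolution layer doubles the spatial resolution,
    so the output grows by stride ** num_deconv_layers in each dimension."""
    factor = stride ** num_deconv_layers
    return base_h * factor, base_w * factor

# A hypothetical 8x10 base feature map with 6 deconvolution layers
# yields the 512x640 output mentioned above:
print(output_size(8, 10, 6))  # -> (512, 640)
```

This is also why adding a layer with -d roughly quadruples the memory needed for the output feature maps: each extra layer doubles both dimensions.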

Generating Images

To generate images using a trained model, specify parameters in a YAML file and run:

python3 generate -m path/to/model -o output/directory -f path/to/params.yaml

There are four different modes you can use to generate images:

  • single: produce a single image.
  • random: produce a set of random images.
  • drunk: similar to random, but produces a more contiguous sequence of images.
  • interpolate: animate between a set of specified keyframes.
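As an illustration of how the interpolate and drunk modes might traverse the latent space, here is a minimal pure-Python sketch. Note that interpolate_keyframes and drunk_walk are hypothetical helpers written for this example, not functions from this repo, and the real generator decodes each latent vector into an image:

```python
import random

def lerp(a, b, t):
    """Linearly interpolate between latent vectors a and b at t in [0, 1]."""
    return [(1 - t) * x + t * y for x, y in zip(a, b)]

def interpolate_keyframes(keyframes, steps_between):
    """Emit evenly spaced in-between vectors for each pair of consecutive
    keyframes -- the idea behind animating between specified keyframes."""
    frames = []
    for a, b in zip(keyframes, keyframes[1:]):
        for i in range(steps_between):
            frames.append(lerp(a, b, i / steps_between))
    frames.append(list(keyframes[-1]))
    return frames

def drunk_walk(start, num_frames, step_size=0.1, seed=0):
    """A small random walk through latent space -- one plausible reading of
    'drunk' mode, where each frame slightly perturbs the previous one."""
    rng = random.Random(seed)
    frames = [list(start)]
    for _ in range(num_frames - 1):
        prev = frames[-1]
        frames.append([x + rng.uniform(-step_size, step_size) for x in prev])
    return frames
```

The contrast is the point: interpolation moves in straight lines between chosen endpoints, while the drunk walk wanders, which is why its output looks random but contiguous.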

You can find examples of these files in the params directory, which should give you a good idea of how to format them and what options are available.


Interpolating between identities and emotions:


Interpolating between orientations (which the model is unable to learn):


Random generations (using "drunk" mode):
