
Keras implementation of a Variational Autoencoder for face generation. Analysis of the distribution of the latent space of the VAE. Vector arithmetic in the latent space. Morphing between faces. The model was trained on the CelebA dataset.




Variational-Autoencoder-for-Face-Generation

This repository contains a Keras implementation of a Variational Autoencoder for face generation.

Note for Serbian speakers: detailed theoretical explanations and the mathematical derivations needed to implement the Variational Autoencoder can be found at the following link

DATASET

The model was trained on the CelebA dataset. It contains 202,599 images of human faces, each labeled with 40 binary attributes (such as Smiling, Young, Male, Eyeglasses, ...).

RESULTS

The following results are for a model trained on images of size 128x128.

Reconstructing faces

[figure: reconstructed faces]
The reconstructed images are blurry, but that is expected for a VAE, because the reconstruction loss averages the squared error across pixels.
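The loss described above can be sketched as follows. This is a minimal NumPy illustration of the per-image VAE objective (mean squared error plus the closed-form KL divergence for a diagonal Gaussian posterior), not the repository's actual Keras code; the shapes (128x128x3 images, 200 latent dimensions) follow the README.

```python
import numpy as np

def vae_loss(x, x_recon, mu, log_var):
    """Per-image VAE loss: mean squared error across pixels plus the
    KL divergence between the diagonal-Gaussian posterior and N(0, I)."""
    # Averaging the squared error across pixels is what makes the
    # reconstructions blurry: the optimum is a pixel-wise average.
    recon = np.mean((x - x_recon) ** 2)
    # Closed-form KL for a diagonal Gaussian vs. the standard normal prior.
    kl = -0.5 * np.sum(1 + log_var - mu ** 2 - np.exp(log_var))
    return recon + kl

x = np.random.rand(128, 128, 3)
mu = np.zeros(200)       # posterior mean equal to the prior mean
log_var = np.zeros(200)  # posterior variance equal to the prior variance
# When the posterior equals the prior, the KL term vanishes, and a perfect
# reconstruction makes the squared-error term vanish as well.
loss = vae_loss(x, x, mu, log_var)
```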

Generating new faces

New faces are generated by sampling a latent vector from the prior distribution and passing it through the decoder.
[figure: generated faces]
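A sketch of this sampling step, assuming a 200-dimensional latent space. The `decode` function below is a hypothetical stand-in; in the real pipeline it would be the trained Keras decoder (e.g. `decoder.predict(z)`) returning images of shape (n, 128, 128, 3).

```python
import numpy as np

LATENT_DIM = 200  # latent dimensionality used in this repository

def decode(z):
    """Stand-in for the trained Keras decoder: a fixed random linear map
    followed by a sigmoid, producing pixel values in [0, 1]."""
    rng = np.random.default_rng(0)
    w = rng.standard_normal((LATENT_DIM, 128 * 128 * 3)) * 0.01
    imgs = 1.0 / (1.0 + np.exp(-(z @ w)))  # sigmoid -> [0, 1] pixel range
    return imgs.reshape(-1, 128, 128, 3)

# Sample latent vectors from the prior N(0, I) and decode them into faces.
z = np.random.standard_normal((16, LATENT_DIM))
faces = decode(z)
```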

Analysis of the distribution of the latent space

The posterior distribution should be close to a standard normal distribution, since the KL divergence term in the loss function pushes the posterior towards the prior (which is a standard normal distribution). We cannot visualize a 200-dimensional distribution directly, so the t-SNE algorithm is used to reduce it to 2 dimensions. The 2D distribution, evaluated on 20,000 images, is plotted on the right, and its histogram is shown on the left:
[figure: 2D t-SNE embedding of the latent space (histogram and scatter plot)]
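The dimensionality reduction step can be sketched with scikit-learn's t-SNE. The random latents below stand in for the encoder outputs; in the real analysis they would come from `encoder.predict(images)` for 20,000 CelebA images (a smaller sample is used here to keep the demo quick).

```python
import numpy as np
from sklearn.manifold import TSNE

# Stand-in for encoded posterior means; the real pipeline would use the
# trained encoder's output of shape (n_images, 200).
rng = np.random.default_rng(0)
latents = rng.standard_normal((250, 200))

# Reduce the 200-D latent vectors to 2-D for plotting.
emb = TSNE(n_components=2, perplexity=30, random_state=0).fit_transform(latents)
# `emb` has shape (250, 2); it can be scatter-plotted and histogrammed.
```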
Since the covariance matrix of the posterior distribution is assumed to be diagonal, the distribution of each dimension should be close to standard normal.
The distributions of the first 30 dimensions of the posterior (evaluated on 20,000 images) are shown in blue, and standard normal distributions are shown in red:
[figure: distributions of the first 30 latent dimensions vs. standard normal]
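A quick numerical version of this check: with a diagonal posterior covariance, each latent dimension should have mean close to 0 and standard deviation close to 1. The latents below are synthetic stand-ins for the 20,000 encoded images.

```python
import numpy as np

# Stand-in for 20,000 encoded latent vectors, here drawn from N(0, I);
# in the real analysis these come from the trained encoder.
rng = np.random.default_rng(1)
latents = rng.standard_normal((20000, 200))

# Per-dimension first and second moments; values near 0 and 1 indicate
# that each dimension is close to standard normal.
means = latents.mean(axis=0)
stds = latents.std(axis=0)
```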

Vector arithmetic in the latent space

Using the labeled attributes we can extract specific feature vectors from the latent space. For example, to extract a "smiling" vector, we compute the average latent vector over all images with a smiling face and subtract from it the average latent vector over all images without a smiling face. Normalizing the result gives a unit vector pointing in the direction not_smiling --> smiling. We can add this vector (scaled by some intensity) to the latent vector of a face without a smile and pass the sum through the decoder to obtain an image of the same face with a smile. The same principle can be applied to other feature vectors.
[figure: latent-space vector arithmetic]
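The steps above can be sketched as follows. The two latent clusters are toy stand-ins; in the real pipeline they are the encoder outputs for images labeled "Smiling" and "not Smiling" in CelebA.

```python
import numpy as np

# Hypothetical latent vectors for the two attribute groups (toy clusters
# standing in for encoder outputs selected by the CelebA labels).
rng = np.random.default_rng(0)
smiling = rng.standard_normal((1000, 200)) + 0.5
not_smiling = rng.standard_normal((1000, 200)) - 0.5

# Attribute direction: difference of the class means, normalized to a
# unit vector pointing from not_smiling towards smiling.
direction = smiling.mean(axis=0) - not_smiling.mean(axis=0)
direction /= np.linalg.norm(direction)

# Add the direction, scaled by an intensity, to the latent vector of a
# non-smiling face; decoding the result should show the same face smiling.
z = not_smiling[0]
intensity = 3.0
z_smiling = z + intensity * direction
```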

Morphing between faces

If we take the latent vectors of two images and 'walk' from one vector to the other, passing each intermediate vector through the decoder, we get a gradual transition from one face to the other:
[figure: morphing between two faces]
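The 'walk' is a linear interpolation in the latent space; a minimal sketch, with each row of the result meant to be passed through the decoder:

```python
import numpy as np

def morph(z1, z2, n_steps=10):
    """Linearly interpolate between two latent vectors. Decoding each
    intermediate vector yields a gradual transition between the faces."""
    alphas = np.linspace(0.0, 1.0, n_steps)
    return np.array([(1 - a) * z1 + a * z2 for a in alphas])

z1 = np.random.standard_normal(200)
z2 = np.random.standard_normal(200)
path = morph(z1, z2)  # shape (10, 200); decode each row for one frame
```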
