
Variational Autoencoder

Implementation of a Variational Autoencoder model applied to several datasets.

The model is implemented in TensorFlow 2 using its Keras API tf.keras. The primary goal of this project was not to find the most performant network architecture for a general VAE, or even to find very well-tuned hyperparameters. The primary goal was to have small, maintainable, reliable and well-tested code that is easy to extend, and to present some simple results demonstrating that the model is able to create new images rather than just reconstructing training instances.
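To sketch the general setup (an illustrative minimal example; the layer sizes and exact architecture here are assumptions, not this repository's actual code), a VAE in tf.keras pairs an encoder producing the parameters of q(z|x) with a decoder, connected by the reparameterization trick:

```python
import tensorflow as tf

LATENT_DIM = 25  # latent dimension used for MNIST below

# Encoder: maps a flattened 28x28 image to the mean and log-variance of q(z|x).
encoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(2 * LATENT_DIM),  # [mean, log_var] concatenated
])

# Decoder: maps a latent point back to pixel space.
decoder = tf.keras.Sequential([
    tf.keras.layers.Dense(256, activation="relu"),
    tf.keras.layers.Dense(784, activation="sigmoid"),
])

x = tf.random.uniform((8, 784))  # a dummy batch of 8 "images"
z_mean, z_log_var = tf.split(encoder(x), 2, axis=-1)

# Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
# so the sampling step stays differentiable w.r.t. the encoder outputs.
eps = tf.random.normal(tf.shape(z_mean))
z = z_mean + tf.exp(0.5 * z_log_var) * eps
x_hat = decoder(z)

# Negative ELBO = reconstruction loss + KL divergence to the N(0, I) prior.
reconstruction = tf.reduce_mean(tf.keras.losses.binary_crossentropy(x, x_hat))
kl = -0.5 * tf.reduce_mean(tf.reduce_sum(
    1.0 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=-1))
loss = reconstruction + kl
```

During training this combined loss is minimized with a standard optimizer such as Adam; new images are then generated by passing samples from the N(0, I) prior through the decoder alone.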

Below you can find results for some of the most well-known datasets.

Results

MNIST dataset

Latent dimension 25

The following animation shows how the decoder of the VAE model creates images for a set of 16 points randomly sampled from the latent space; the points are held fixed across all epochs.

[Animation: images decoded from 16 fixed latent points over the training epochs]
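One way to produce such an animation (a hypothetical sketch, not necessarily how this repository implements it) is a Keras callback that decodes the same fixed latent points at the end of every epoch:

```python
import numpy as np
import tensorflow as tf

class FixedLatentSamples(tf.keras.callbacks.Callback):
    """Decode the same 16 latent points after every epoch to animate progress."""

    def __init__(self, decoder, latent_dim, n_samples=16, seed=42):
        super().__init__()
        rng = np.random.default_rng(seed)
        # Sample once from the N(0, I) prior and keep the points fixed.
        self.z = rng.standard_normal((n_samples, latent_dim)).astype("float32")
        self.decoder = decoder
        self.frames = []

    def on_epoch_end(self, epoch, logs=None):
        images = self.decoder(self.z, training=False).numpy()
        self.frames.append(images)  # one animation frame per epoch
```

Because the latent points never change, any evolution across the collected frames reflects only the decoder's training progress.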

Generated vs. training set samples
[Image grid: generated images (left) alongside training set images (right)]

Fashion-MNIST dataset

Latent dimension 100

The following animation shows how the decoder of the VAE model creates images for a set of 16 points randomly sampled from the latent space; the points are held fixed across all epochs.

[Animation: images decoded from 16 fixed latent points over the training epochs]

Generated vs. training set samples
[Image grid: generated images (left) alongside training set images (right)]

Visualization of 10k training set points in the latent space

The points from the 100-dimensional latent space have been projected onto the plane using PCA.

[Scatter plot: 2D PCA projection of the training set latent points]
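Such a projection can be computed, for instance, with a plain SVD-based PCA (a minimal NumPy sketch; the repository may well use a library implementation instead):

```python
import numpy as np

def pca_project(points, n_components=2):
    """Project points of shape (n, d) onto their first n_components principal axes."""
    centered = points - points.mean(axis=0)
    # SVD of the centered data; the rows of vt are the principal directions,
    # ordered by decreasing explained variance.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return centered @ vt[:n_components].T

# Example: 10k latent points from a 100-dimensional latent space.
latents = np.random.default_rng(0).standard_normal((10_000, 100))
plane = pca_project(latents)  # shape (10000, 2), ready for a scatter plot
```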

Visualization of 10k test set points in the latent space

The points from the 100-dimensional latent space have been projected onto the plane using PCA.

[Scatter plot: 2D PCA projection of the test set latent points]

CIFAR-10 dataset

Latent dimension 100

The following animation shows how the decoder of the VAE model creates images for a set of 16 points randomly sampled from the latent space; the points are held fixed across all epochs.

[Animation: images decoded from 16 fixed latent points over the training epochs]

Generated vs. training set samples
[Image grid: generated images (left) alongside training set images (right)]

Visualization of 10k training set points in the latent space

The points from the 100-dimensional latent space have been projected onto the plane using PCA.

[Scatter plot: 2D PCA projection of the training set latent points]

Visualization of 10k test set points in the latent space

The points from the 100-dimensional latent space have been projected onto the plane using PCA.

[Scatter plot: 2D PCA projection of the test set latent points]
