Implementation of a variational Auto-encoder

## Variational Auto-encoder

This is an improved implementation of the paper Stochastic Gradient VB and the Variational Auto-Encoder by D. Kingma and Prof. Dr. M. Welling. This code uses ReLUs and the Adam optimizer instead of sigmoids and AdaGrad; these changes make the network converge much faster.
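The core of the SGVB estimator is the reparameterization trick: instead of sampling the latent code directly, the encoder outputs a mean and log-variance, and the sample is expressed as a deterministic function of those plus external noise, so gradients can flow through the stochastic layer. A minimal NumPy sketch (function and variable names here are illustrative, not taken from this repository's code):

```python
import numpy as np

def sample_latent(mu, log_sigma_sq, rng):
    """Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I).

    Writing the sample this way keeps z differentiable with respect to
    mu and log_sigma_sq, which is what lets SGVB backpropagate through
    the stochastic layer.
    """
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_sigma_sq) * eps

def kl_divergence(mu, log_sigma_sq):
    """KL(q(z|x) || N(0, I)) for a diagonal Gaussian, one value per example."""
    return -0.5 * np.sum(1.0 + log_sigma_sq - mu**2 - np.exp(log_sigma_sq), axis=1)

rng = np.random.default_rng(0)
mu = np.zeros((4, 2))            # batch of 4, latent dimension 2
log_sigma_sq = np.zeros((4, 2))  # log-variance 0, i.e. unit variance
z = sample_latent(mu, log_sigma_sq, rng)
print(z.shape)                                # (4, 2)
print(kl_divergence(mu, log_sigma_sq))        # all zeros: q already equals the prior
```

The negative KL term plus the expected reconstruction log-likelihood gives the variational lower bound that the script maximizes.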

My other repository contains a Torch7 (Lua) implementation; this version is based on Theano (Python). To run the MNIST experiment:


Setting the `continuous` boolean to true makes the script run the Frey Faces experiment instead. For that dataset it is necessary to tweak the `batch_size` and learning-rate parameters for training to run smoothly.
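The `continuous` flag matters because the two datasets use different decoder likelihoods: binarized MNIST pixels fit a Bernoulli reconstruction term, while the real-valued Frey Faces pixels need a Gaussian one. A sketch of the two log-likelihoods in NumPy (names are illustrative, not this repository's API):

```python
import numpy as np

def bernoulli_log_lik(x, y):
    """Binary reconstruction term (MNIST case): x in {0,1}, y = decoder output in (0,1)."""
    eps = 1e-7
    y = np.clip(y, eps, 1.0 - eps)  # guard against log(0)
    return np.sum(x * np.log(y) + (1.0 - x) * np.log(1.0 - y), axis=1)

def gaussian_log_lik(x, mu, log_sigma_sq):
    """Continuous reconstruction term (Frey Faces case): diagonal Gaussian decoder."""
    return -0.5 * np.sum(
        np.log(2.0 * np.pi) + log_sigma_sq + (x - mu) ** 2 / np.exp(log_sigma_sq),
        axis=1,
    )

# A maximally uncertain Bernoulli decoder scores 2 * log(0.5) on two pixels:
x = np.full((1, 2), 0.5)
print(bernoulli_log_lik(x, x))   # about -1.386

# A Gaussian decoder that predicts the data exactly with unit variance:
mu = np.zeros((1, 3))
print(gaussian_log_lik(mu, mu, np.zeros((1, 3))))  # -1.5 * log(2*pi), about -2.757
```

With a continuous likelihood the gradients are typically larger and noisier, which is why the batch size and learning rate need retuning.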

There used to be a scikit-learn implementation too, but it was very slow and outdated. It is still available in the repository's commit history.

The code is MIT licensed.