Implementation of a Variational Auto-Encoder in TensorFlow

## Variational Auto-encoder

This is an improved implementation of the paper *Stochastic Gradient VB and the Variational Auto-Encoder* by D. Kingma and Prof. Dr. M. Welling. This code uses ReLUs and the Adam optimizer instead of sigmoids and AdaGrad; these changes make the network converge much faster.
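
For orientation, here is a minimal sketch of this type of model in TensorFlow 1.x. The layer sizes, learning rate, and variable names below are illustrative assumptions, not necessarily what `main.py` uses:

```python
import tensorflow as tf

# Illustrative dimensions: 784 MNIST pixels, one hidden layer, 20 latent units.
input_dim, hidden_dim, latent_dim = 784, 500, 20

x = tf.placeholder(tf.float32, [None, input_dim])

# Encoder: ReLU hidden layer, then linear layers for the mean and
# log-variance of the approximate posterior q(z|x).
h_enc = tf.layers.dense(x, hidden_dim, activation=tf.nn.relu)
z_mean = tf.layers.dense(h_enc, latent_dim)
z_log_var = tf.layers.dense(h_enc, latent_dim)

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
# so gradients can flow through the sampling step.
eps = tf.random_normal(tf.shape(z_mean))
z = z_mean + tf.exp(0.5 * z_log_var) * eps

# Decoder: ReLU hidden layer, then Bernoulli logits over the pixels.
h_dec = tf.layers.dense(z, hidden_dim, activation=tf.nn.relu)
x_logits = tf.layers.dense(h_dec, input_dim)

# Negative ELBO: reconstruction term plus KL divergence from q(z|x)
# to the unit Gaussian prior, both summed per example.
recon = tf.reduce_sum(
    tf.nn.sigmoid_cross_entropy_with_logits(labels=x, logits=x_logits), axis=1)
kl = -0.5 * tf.reduce_sum(
    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1)
loss = tf.reduce_mean(recon + kl)

# Adam instead of AdaGrad, as noted above.
train_op = tf.train.AdamOptimizer(1e-3).minimize(loss)
```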

I also created a Theano and a Torch version.

To run the MNIST experiment:

    python main.py
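
Continuing the sketch above, the training loop might look roughly like this (a hypothetical loop for illustration, not the actual contents of `main.py`):

```python
import tensorflow as tf
from tensorflow.examples.tutorials.mnist import input_data

# x, loss, and train_op are the placeholder, negative ELBO, and Adam step
# defined in the model sketch above.
mnist = input_data.read_data_sets("MNIST_data")

with tf.Session() as sess:
    sess.run(tf.global_variables_initializer())
    for step in range(10000):
        batch, _ = mnist.train.next_batch(100)
        _, batch_loss = sess.run([train_op, loss], feed_dict={x: batch})
        if step % 1000 == 0:
            print("step %d: negative ELBO %.2f" % (step, batch_loss))
```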

### NB: This code is not as polished as the Torch7 and Theano versions. It is mainly for playing around with TensorFlow, which is why I tried to use as many of its bells and whistles as possible. PRs that make it more "TensorFlowy" are welcome, especially if I made a mistake that causes a slowdown.

There is no continuous version (i.e., with a Gaussian decoder for real-valued data) for now, but there will probably be one in the near future.
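
Such a continuous variant would typically swap the Bernoulli reconstruction term in the sketch above for a Gaussian log-likelihood, roughly like this (again an illustrative assumption, not code from this repo):

```python
import numpy as np
import tensorflow as tf

# h_dec, x, and input_dim come from the model sketch above; here the decoder
# outputs a per-dimension mean and log-variance instead of Bernoulli logits.
x_mean = tf.layers.dense(h_dec, input_dim)
x_log_var = tf.layers.dense(h_dec, input_dim)

# Gaussian negative log-likelihood per example, replacing `recon` above.
recon = 0.5 * tf.reduce_sum(
    np.log(2 * np.pi) + x_log_var
    + tf.square(x - x_mean) / tf.exp(x_log_var), axis=1)
```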

The code is MIT licensed.