# Variational Auto-Encoder (vanilla)

Replication of *Auto-Encoding Variational Bayes* (Kingma & Welling, 2013)

## Quick Start

```shell
# Create and activate a virtual environment
virtualenv -p python3.5 venv
source venv/bin/activate

# Install dependencies with pip
pip install -r requirements.txt

# Run main.py, which trains the VAE and saves results to /img
python main.py
```

## More Details

The variational autoencoder is implemented in plain TensorFlow and lives in /vae. Since the same graph is used in multiple ways, a small VAE class builds the tf graph, keeps handy references to the important tensors, and provides methods that simplify interacting with those tensors.
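The interface of such a class might look roughly like the following. This is a NumPy-only sketch with illustrative names (the real implementation builds a TensorFlow graph); the linear "encoder" stands in for a proper network, and the reparameterization trick is the part the paper actually introduces.

```python
import numpy as np

class VAE:
    """Hypothetical sketch of the class described above; the real one
    constructs a tf graph rather than doing NumPy math directly."""

    def __init__(self, input_dim, latent_dim, rng=None):
        self.rng = rng or np.random.default_rng(0)
        # Toy linear "encoder" weights producing mean and log-variance of q(z|x).
        self.W_mu = self.rng.normal(scale=0.01, size=(input_dim, latent_dim))
        self.W_logvar = self.rng.normal(scale=0.01, size=(input_dim, latent_dim))

    def encode(self, x):
        # Map a batch of inputs to the parameters of the approximate posterior.
        return x @ self.W_mu, x @ self.W_logvar

    def sample_z(self, mu, logvar):
        # Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
        # so gradients can flow through the sampling step.
        eps = self.rng.standard_normal(mu.shape)
        return mu + np.exp(0.5 * logvar) * eps

vae = VAE(input_dim=784, latent_dim=10)
mu, logvar = vae.encode(np.ones((4, 784)))
z = vae.sample_z(mu, logvar)
print(z.shape)  # (4, 10)
```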

(Being single-use code, there are no unit tests here.)

## Thoughts

- VAE for MNIST
- VAE for Frey Face
- Functions to make encoders, decoders
- Simple object for the full graph
  - inference method for image sim
  - accessible input and loss
- Make Figure 2 for z dim = 10

Chuck all of that into the `vae` package.

Then `main.py` in the root loads those pieces along with the relevant graphing tools, trains the model, makes the graphs and images, and saves everything to a gitignored subfolder. The tf code stays nicely separated, but we can still simply run

```shell
python main.py
```

to generate some images.
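The objective being trained here is the ELBO from the paper, whose KL term has a closed form when the posterior is a diagonal Gaussian and the prior is a standard normal (Appendix B of Kingma & Welling, 2013). A quick NumPy check of that formula:

```python
import numpy as np

def kl_diag_gaussian(mu, logvar):
    # Analytic KL( N(mu, diag(sigma^2)) || N(0, I) ), summed over latent dims:
    # -1/2 * sum(1 + log sigma^2 - mu^2 - sigma^2)
    return -0.5 * np.sum(1 + logvar - mu**2 - np.exp(logvar), axis=-1)

# When q(z|x) equals the prior (mu = 0, sigma = 1), the KL term vanishes.
print(kl_diag_gaussian(np.zeros(10), np.zeros(10)))
# Shifting the mean away from zero makes the KL strictly positive.
print(kl_diag_gaussian(np.ones(10), np.zeros(10)))  # 5.0
```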
