Variational Auto-Encoder (vanilla)
Replication of Auto-Encoding Variational Bayes (Kingma & Welling, 2013)
```
# Create and activate virtual environment
virtualenv -p python3.5 venv
source venv/bin/activate

# Install dependencies with pip
pip install -r requirements.txt

# Run main.py, which trains the VAE and saves results to /img
python main.py
```
The variational autoencoder is implemented in vanilla tensorflow. Since the same graph can be used in multiple ways, there is a simple VAE class that wraps the tf graph, keeps pointers to the important tensors, and provides methods to simplify interaction with those tensors.
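A minimal sketch of what that wrapper looks like; the method and attribute names here (`encode`, `decode`, the weight arrays) are assumptions, and numpy stands in for the actual tf graph so the sketch runs standalone:

```python
import numpy as np

class VAE:
    """Sketch of a wrapper that holds pointers to the pieces of the model
    (here plain numpy arrays, in the real code tensors in a tf graph)."""

    def __init__(self, x_dim=784, z_dim=10, seed=0):
        rng = np.random.default_rng(seed)
        # Encoder weights produce mean and log-variance of q(z|x);
        # decoder weights produce Bernoulli parameters for p(x|z).
        self.w_enc = rng.normal(0.0, 0.01, (x_dim, 2 * z_dim))
        self.w_dec = rng.normal(0.0, 0.01, (z_dim, x_dim))
        self.z_dim = z_dim

    def encode(self, x):
        # Return (mu, log_var) for the approximate posterior q(z|x).
        h = x @ self.w_enc
        return h[:, :self.z_dim], h[:, self.z_dim:]

    def decode(self, z):
        # Sigmoid output: per-pixel Bernoulli parameters.
        return 1.0 / (1.0 + np.exp(-(z @ self.w_dec)))

vae = VAE()
mu, log_var = vae.encode(np.zeros((1, 784)))
x_hat = vae.decode(np.zeros((1, vae.z_dim)))
```

The point of the class is only convenience: one object carries everything needed to run the encoder, decoder, or full pipeline without rebuilding the graph.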
(Being single-use code, there are no unit tests here.)
- VAE for MNIST
- VAE for Frey Face
- Functions to build encoders and decoders
- A simple object wrapping the full graph
- An inference method for image simulation
- Accessible input and loss tensors
- Reproduce Figure 2 for z dim = 10
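The loss on that list is the ELBO from the paper. A framework-agnostic numpy sketch of the reparameterization trick and the negative ELBO for a Bernoulli decoder (function names are my own, not from this repo):

```python
import numpy as np

def sample_z(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps, eps ~ N(0, I),
    # so gradients can flow through the sampling step.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def neg_elbo(x, x_hat, mu, log_var, eps=1e-8):
    # Bernoulli reconstruction term (cross-entropy), summed over pixels.
    recon = -np.sum(
        x * np.log(x_hat + eps) + (1 - x) * np.log(1 - x_hat + eps), axis=1
    )
    # Closed-form KL(q(z|x) || N(0, I)), per Kingma & Welling, Appendix B.
    kl = -0.5 * np.sum(1 + log_var - mu**2 - np.exp(log_var), axis=1)
    # Average over the batch; minimizing this maximizes the ELBO.
    return np.mean(recon + kl)
```

Minimizing this quantity with a stochastic optimizer is the whole training objective of the paper.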
All of that lives in the package `vae`.

Then `main.py` in the root loads the package and the relevant graphing tools, trains the model, and produces the figures and images, saving them to a gitignored subfolder. That keeps the tf code cleanly separated while still making it easy to generate images.
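A hypothetical outline of that `main.py` flow; `train_model` and `save_results` are stand-in names, with the actual training stubbed out:

```python
import os
import tempfile

def train_model():
    # Placeholder: the real version builds the VAE graph, trains it,
    # and returns the trained model plus summary statistics.
    return {"final_loss": 0.0}

def save_results(results, out_dir):
    # Write output into the (gitignored) results folder,
    # creating it if needed.
    os.makedirs(out_dir, exist_ok=True)
    path = os.path.join(out_dir, "summary.txt")
    with open(path, "w") as f:
        f.write(str(results))
    return path

if __name__ == "__main__":
    # Use a temp dir here; the real script writes to ./img instead.
    print(save_results(train_model(), tempfile.mkdtemp()))
```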