
Variational Autoencoder & Conditional Variational Autoencoder on MNIST in PyTorch

VAE paper: Auto-Encoding Variational Bayes

CVAE paper: Learning Structured Output Representation using Deep Conditional Generative Models


To run the conditional variational autoencoder, add --conditional to the command. Check out the other command-line options in the code for hyperparameter settings (such as learning rate, batch size, and encoder/decoder layer depth and size).
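For example; --conditional is confirmed by this README, but the hyperparameter flag names below are assumptions to be checked against train.py:

```shell
# Train the plain VAE with default hyperparameters
python train.py

# Train the conditional VAE (flag confirmed above)
python train.py --conditional

# Hypothetical hyperparameter flags; verify the exact names in train.py
python train.py --conditional --epochs 10 --batch_size 64
```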


Results

All plots were obtained after 10 epochs of training. Hyperparameters follow the default settings in the code and were not tuned.

z ~ q(z|x) and q(z|x,c)

The modeled latent distribution after 10 epochs, with 100 samples per digit.

(Figures: VAE latent space | CVAE latent space)
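The samples z ~ q(z|x) behind these plots come from the reparameterization trick. A self-contained sketch (layer sizes and names are assumptions, not the repo's models.py):

```python
import torch
import torch.nn as nn

# Illustrative encoder head: maps a flattened MNIST image to the mean and
# log-variance of q(z|x) over a 2-D latent space.
encoder = nn.Linear(784, 256)
fc_mu, fc_logvar = nn.Linear(256, 2), nn.Linear(256, 2)

def sample_z(x):
    h = torch.relu(encoder(x))
    mu, logvar = fc_mu(h), fc_logvar(h)
    # Reparameterization trick: z = mu + sigma * eps keeps sampling
    # differentiable with respect to the encoder parameters.
    eps = torch.randn_like(mu)
    return mu + torch.exp(0.5 * logvar) * eps

z = sample_z(torch.rand(100, 784))  # e.g. 100 samples of one digit
print(z.shape)                      # torch.Size([100, 2])
```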

p(x|z) and p(x|z,c)

Randomly sampled z and their decoded outputs. For the CVAE, each condition c has been given as input once.

(Figures: VAE samples | CVAE samples)
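Generating these samples amounts to drawing z from the standard-normal prior and decoding it, with a one-hot label c appended for the CVAE. A sketch with an illustrative decoder (architecture and names are assumptions, not the repo's):

```python
import torch
import torch.nn as nn

# Illustrative decoder: maps latent z (plus, for the CVAE, a one-hot digit
# label c) to a 784-dim Bernoulli mean over MNIST pixels.
latent_size, num_labels = 2, 10
decoder = nn.Sequential(
    nn.Linear(latent_size + num_labels, 256),
    nn.ReLU(),
    nn.Linear(256, 784),
    nn.Sigmoid(),
)

# Sample z from the prior and decode once per condition c (digits 0-9).
z = torch.randn(num_labels, latent_size)
c = torch.eye(num_labels)               # one-hot label for each digit
x = decoder(torch.cat([z, c], dim=1))   # shape: (10, 784)
print(x.shape)
```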