Variational Autoencoder implementation in TensorFlow, based on another example. It also includes a Sampler class for interactively working with the results inside IPython.
The default dataset is MNIST, and the default parameters should work well for it. Both L2 loss and logistic regression (Bernoulli) loss are supported.
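As a rough illustration of the two reconstruction terms (plain numpy, not the repository's actual code; here x is a flattened image in [0, 1] and x_hat is the decoder output after a sigmoid):

import numpy as np

def l2_reconstruction_loss(x, x_hat):
    # squared-error reconstruction term (Gaussian decoder assumption)
    return np.sum((x - x_hat) ** 2)

def bernoulli_reconstruction_loss(x, x_hat, eps=1e-8):
    # logistic regression (Bernoulli) cross-entropy reconstruction term
    return -np.sum(x * np.log(x_hat + eps) + (1.0 - x) * np.log(1.0 - x_hat + eps))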
To train a model (the --help flag lists the available options):
python train.py --help
To use the model, open IPython and run:
%run -i sample.py
sampler = Sampler() # loads the trained model from /save
To generate a random MNIST-like image:
z = sampler.generate_z() # generates an i.i.d. normal latent vector of 8 dimensions
m = sampler.generate(z) # generates a sample image from the latent vector
sampler.show_image(m) # displays the image from the prompt
Alternatively, we can generate and display the image in one line:
sampler.show_image_from_z(sampler.generate_z()) # displays the image from the prompt
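For example, to browse a handful of random samples in a row (using only the calls above):

for _ in range(5):
    sampler.show_image_from_z(sampler.generate_z())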
We can draw a random image from the MNIST database, display it, and also display the autoencoded reconstruction:
m = sampler.get_random_mnist() # get a random real MNIST image
sampler.show_image(m) # display the image
z = sampler.encode(m) # encode m into latent variables z
sampler.show_image_from_z(z) # show the autoencoded image
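To compare a few real images with their reconstructions side by side, again using only the calls described above:

for _ in range(3):
    m = sampler.get_random_mnist()
    sampler.show_image(m) # original image
    sampler.show_image_from_z(sampler.encode(m)) # autoencoded reconstruction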
There are also some operations for basic image processing. For example, to differentiate the image, i.e. compute d(m)/dxdy:
m = sampler.get_random_mnist() # get a random real MNIST image
diff_m = sampler.diff_image(m) # differentiate the image
integrate_m = sampler.integrate_image(diff_m) # integrate the differentiated image back
sampler.show_image(m) # original image
sampler.show_image(diff_m) # differentiated image
sampler.show_image(integrate_m) # integrated image, should match the original
recover_m = sampler.diff_image(integrate_m) # differentiating the integrated image recovers diff_m
sampler.show_image(recover_m) # same as diff_m
recover_m = sampler.integrate_image(diff_m)
sampler.show_image(recover_m) # same as integrate_m shown above
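The actual diff_image / integrate_image methods live in the Sampler class; as a rough sketch of the idea, assuming simple first differences along each axis (illustrative numpy, not the repository's code):

import numpy as np

def diff_image(m):
    # mixed finite difference d(m)/dxdy: difference along rows, then columns,
    # keeping the first row/column so the operation stays invertible
    d = np.diff(m, axis=0, prepend=0)
    return np.diff(d, axis=1, prepend=0)

def integrate_image(d):
    # inverse operation: cumulative sums along the same axes recover the image
    return np.cumsum(np.cumsum(d, axis=1), axis=0)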
Everything else is MIT licensed.