Variational Autoencoders

Figure: reconstruction of MNIST digits during VAE training using a convolution/deconvolution neural network, 50 latent dimensions, 2 epochs shown

About

A library using Julia's Flux library to implement variational autoencoders.

  • main.jl - runs the model on the MNIST dataset; this will be dropped later
  • Model.jl - the basic model; for now it is a plain VAE
  • Dataset.jl - the dataset interface
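For orientation, the core VAE objective the model optimizes can be sketched in plain Julia (this is a minimal illustration, not the repo's actual Model.jl; all function names here are hypothetical):

```julia
# Minimal sketch of the VAE objective: the reparameterization trick plus
# the closed-form Gaussian KL term. Names are illustrative, not from Model.jl.

# Sample z ~ q(z|x) = N(mu, exp(logvar)) in a differentiable way.
reparameterize(mu, logvar) = mu .+ exp.(0.5 .* logvar) .* randn(size(mu)...)

# KL(q(z|x) || N(0, I)) for a diagonal Gaussian, summed over latent dims.
kl_divergence(mu, logvar) = 0.5 * sum(exp.(logvar) .+ mu.^2 .- 1 .- logvar)

# Bernoulli reconstruction loss (binary cross-entropy), suited to MNIST pixels.
function bce(x̂, x; eps=1e-7)
    -sum(x .* log.(x̂ .+ eps) .+ (1 .- x) .* log.(1 .- x̂ .+ eps))
end

# Negative ELBO: reconstruction error plus KL regularizer.
elbo_loss(x̂, x, mu, logvar) = bce(x̂, x) + kl_divergence(mu, logvar)
```

In Flux, the encoder would produce `mu` and `logvar`, the decoder would map the sampled `z` back to `x̂`, and `elbo_loss` would be minimized with `Flux.train!`.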

Open Questions

  • Can KL-Divergence and reconstruction error be better balanced?
  • Can VAE be used as a pure clustering method? How else would the latent space representation be useful?
  • Is it possible (in Julia) to reconstruct the reverse transformation (decoder) for a given encoder?
  • Can VAE be used for columnar data with missing inputs?
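On the first question, one common approach (not implemented in this repo) is to weight the KL term, as in β-VAE, optionally annealing the weight during training. A hedged sketch, with hypothetical names:

```julia
# Hypothetical β-weighted objective (β-VAE style), not part of this repo.
# beta > 1 emphasizes the KL regularizer; beta < 1 emphasizes reconstruction.
beta_loss(recon, kl; beta=1.0) = recon + beta * kl

# Linear KL annealing: ramp the weight from 0 to 1 over `warmup` steps so
# early training focuses on reconstruction before the KL penalty kicks in.
anneal(step, warmup) = min(1.0, step / warmup)
```

The annealed weight would be passed as `beta=anneal(step, warmup)` inside the training loop.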

References

Tutorial on VAE
Tensorflow VAE
Flux.jl
Flux VAE
Auto-Encoding Variational Bayes
Stochastic Backpropagation and Approximate Inference in Deep Generative Models

Figures

Conv/deconv VAE during 4 epochs of MNIST training, 10 latent dimensions. MNIST digits chosen at random from the test set.
