Variational Autoencoders

Reconstruction of MNIST digits during VAE training using a convolution/deconvolution neural network; 50 latent dimensions, 2 epochs shown

About

A library that uses Julia's Flux library to implement variational autoencoders

  • main.jl - runs the model on the MNIST dataset; this entry point will be dropped later
  • Model.jl - the model itself; for now it is just a basic VAE (see the sketch after this list)
  • Dataset.jl - the dataset interface
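
Since Model.jl is currently a plain VAE, here is a rough sketch of what such a model can look like in Flux. The layer sizes and names below are illustrative assumptions for MNIST, not this library's actual API:

```julia
using Flux

# Illustrative sizes: flattened 28×28 MNIST images, 50 latent dimensions.
const input_dim, hidden_dim, latent_dim = 28^2, 400, 50

# Encoder: shared trunk plus two heads for the mean and log-variance of q(z|x).
enc_trunk  = Dense(input_dim, hidden_dim, tanh)
enc_mu     = Dense(hidden_dim, latent_dim)
enc_logvar = Dense(hidden_dim, latent_dim)

# Decoder: maps a latent sample back to pixel logits.
decoder = Chain(Dense(latent_dim, hidden_dim, tanh),
                Dense(hidden_dim, input_dim))

function vae_loss(x)
    h     = enc_trunk(x)
    μ     = enc_mu(h)
    logσ² = enc_logvar(h)
    # Reparameterization trick: z = μ + σ ⊙ ε with ε ~ N(0, I).
    z = μ .+ exp.(logσ² ./ 2) .* randn(Float32, size(μ))
    x̂ = decoder(z)
    # Bernoulli reconstruction term, computed stably from logits.
    rec = -sum(@. x * Flux.logσ(x̂) + (1 - x) * Flux.logσ(-x̂))
    # Closed-form KL(q(z|x) ‖ N(0, I)).
    kl = 0.5f0 * sum(@. exp(logσ²) + μ^2 - 1 - logσ²)
    return rec + kl
end
```

Training would then iterate this loss over MNIST minibatches with one of Flux's optimizers.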

Open Questions

  • Can the KL divergence and reconstruction error be better balanced? (One standard knob is a β weight on the KL term; see the sketch after this list.)
  • Can a VAE be used as a pure clustering method? How else would the latent-space representation be useful?
  • Is it possible (in Julia) to reconstruct the reverse transformation (the decoder) for a given encoder?
  • Can a VAE be used for columnar data with missing inputs?
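
On the first question: a β-VAE-style loss simply scales the KL term, trading posterior regularization against reconstruction fidelity. A minimal sketch, reusing the illustrative encoder/decoder from the sketch above (β = 4 is an arbitrary example value, not something this library uses):

```julia
# Hypothetical β-weighted ELBO: β > 1 pushes q(z|x) toward the prior,
# β < 1 favors reconstruction fidelity.
function beta_vae_loss(x; β = 4.0f0)
    h = enc_trunk(x)
    μ, logσ² = enc_mu(h), enc_logvar(h)
    z = μ .+ exp.(logσ² ./ 2) .* randn(Float32, size(μ))
    x̂ = decoder(z)
    rec = -sum(@. x * Flux.logσ(x̂) + (1 - x) * Flux.logσ(-x̂))
    kl  = 0.5f0 * sum(@. exp(logσ²) + μ^2 - 1 - logσ²)
    return rec + β * kl
end
```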

References

  • Tutorial on VAE
  • TensorFlow VAE
  • Flux.jl
  • Flux VAE
  • Auto-Encoding Variational Bayes
  • Stochastic Backpropagation and Approximate Inference in Deep Generative Models

Figures

Conv/deconv VAE during 4 epochs of MNIST training; 10 latent dimensions. MNIST digits chosen at random from the test set.
