added explanation of VAE
bfortuner committed Mar 4, 2018
1 parent a1fa124 commit 48580fa
Showing 1 changed file with 5 additions and 1 deletion.
6 changes: 5 additions & 1 deletion docs/architectures.rst
@@ -117,7 +117,11 @@ indices representing the class (language) of the name.
VAE
===

Variational Autoencoder. Use case and basic architecture. Figure from [4].
Autoencoders can encode an input image to a latent vector and decode it, but they can't generate novel images.
Variational Autoencoders (VAEs) solve this problem by adding a constraint: the latent vector representation should model a unit Gaussian distribution.
The Encoder returns the mean and variance of the learned Gaussian. To generate a new image, we sample from this Gaussian and pass the result to the Decoder.
In other words, we "sample a latent vector" from the Gaussian and pass it to the Decoder.
This constraint also improves network generalization and avoids memorization. Figure from [4].

.. image:: images/vae.png
:align: center
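
Below is a minimal sketch of this idea (not part of this commit or the referenced figure), assuming PyTorch and illustrative layer sizes (``input_dim``, ``hidden_dim``, ``latent_dim`` are placeholders). The encoder predicts a mean and log-variance, a latent vector is sampled from that Gaussian via the reparameterization trick, and the KL term in the loss keeps the latent distribution close to the unit Gaussian.

.. code-block:: python

    import torch
    import torch.nn as nn
    import torch.nn.functional as F

    class VAE(nn.Module):
        def __init__(self, input_dim=784, hidden_dim=400, latent_dim=20):
            super().__init__()
            self.encoder = nn.Sequential(nn.Linear(input_dim, hidden_dim), nn.ReLU())
            self.fc_mu = nn.Linear(hidden_dim, latent_dim)      # mean of the learned Gaussian
            self.fc_logvar = nn.Linear(hidden_dim, latent_dim)  # log-variance of the learned Gaussian
            self.decoder = nn.Sequential(
                nn.Linear(latent_dim, hidden_dim), nn.ReLU(),
                nn.Linear(hidden_dim, input_dim), nn.Sigmoid())

        def reparameterize(self, mu, logvar):
            # Sample a latent vector z ~ N(mu, sigma^2) in a differentiable way
            std = torch.exp(0.5 * logvar)
            return mu + torch.randn_like(std) * std

        def forward(self, x):
            h = self.encoder(x)
            mu, logvar = self.fc_mu(h), self.fc_logvar(h)
            z = self.reparameterize(mu, logvar)
            return self.decoder(z), mu, logvar

        def generate(self, n_samples):
            # Novel images: sample latent vectors from the unit Gaussian N(0, I)
            z = torch.randn(n_samples, self.fc_mu.out_features)
            return self.decoder(z)

    def vae_loss(recon_x, x, mu, logvar):
        # Reconstruction error plus the KL divergence that constrains the
        # learned Gaussian to stay close to the unit Gaussian
        recon = F.binary_cross_entropy(recon_x, x, reduction='sum')
        kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
        return recon + kl

After training, calling ``model.generate(16)`` would draw 16 latent vectors from the unit Gaussian and decode them into novel images, which is exactly what a plain autoencoder cannot do.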
