
Variational Autoencoder (VAE)

Variational autoencoder models make strong assumptions about the distribution of the latent variables. They use a variational approach for latent representation learning, which introduces an additional loss component (a regularisation term on the latent distribution) and a specific training estimator called the Stochastic Gradient Variational Bayes (SGVB) estimator. Because the latent space of a VAE is regularised towards a chosen prior, samples drawn from it decode to outputs that resemble the training data much more closely than samples from a standard autoencoder's latent space. As VAEs are more flexible and customisable in their generation behaviour than GANs, they are well suited to art generation of many kinds.
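The two ingredients described above, the reparameterized latent sample and the extra loss component, can be sketched in a few lines of NumPy. This is a minimal illustration, not the code from `sample_keras.py` or `sample_pytorch.py`; the function names and shapes are hypothetical, and the KL term is the standard closed form for a diagonal Gaussian posterior against a standard normal prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def reparameterize(mu, log_var, rng):
    # Reparameterization trick: z = mu + sigma * eps with eps ~ N(0, I),
    # which keeps the sample differentiable w.r.t. mu and log_var.
    eps = rng.standard_normal(mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_divergence(mu, log_var):
    # Closed-form KL(N(mu, sigma^2) || N(0, I)) summed over the batch:
    # the additional loss component that distinguishes a VAE
    # from a plain autoencoder.
    return -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))

# Hypothetical encoder outputs: batch of 4 samples, latent dimension 2.
mu = np.zeros((4, 2))
log_var = np.zeros((4, 2))

z = reparameterize(mu, log_var, rng)   # latent samples, shape (4, 2)
kl = kl_divergence(mu, log_var)        # 0.0 when posterior equals the prior
```

In a full model, `kl` is added to the reconstruction loss (e.g. binary cross-entropy between input and decoder output) to form the negative evidence lower bound that the SGVB estimator optimises.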

Code

python3 sample_keras.py
python3 sample_pytorch.py

Useful Resources: