Variational_Auto_Encoder

VAE

Basic knowledge to understand VAEs

The key is to notice that any distribution in d dimensions can be generated by taking a set of d normally distributed variables and mapping them through a sufficiently complicated function. For example, say we want to construct a 2D random variable whose values lie on a ring. If z is 2D and normally distributed, then g(z) = z/10 + z/||z|| is distributed roughly on a ring.
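This claim is easy to check numerically. A minimal NumPy sketch (the sample count is an arbitrary choice) that draws 2D standard-normal samples and maps them through g:

```python
import numpy as np
import matplotlib.pyplot as plt

# Draw 2D samples from a standard normal distribution
z = np.random.randn(5000, 2)

# g(z) = z/10 + z/||z|| pushes each sample near the unit circle
norms = np.linalg.norm(z, axis=1, keepdims=True)
g = z / 10 + z / norms

plt.scatter(g[:, 0], g[:, 1], s=1)
plt.axis("equal")
plt.title("Samples of g(z) form a ring")
plt.show()
```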

(Figure: Density_approximation)

For our model to be representative of our dataset, we need to make sure that for every datapoint X in the dataset, there is at least one setting of the latent variables that causes the model to generate something very similar to X. Formally, say we have a vector of latent variables z in a high-dimensional space Z, which we can easily sample according to some probability density function (PDF) P(z) defined over Z.
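In the VAE setting, P(z) is typically chosen to be a standard normal distribution N(0, I), so sampling latent vectors is trivial. A minimal sketch (the batch size and latent dimensionality are arbitrary choices):

```python
import torch

latent_dim = 20                  # arbitrary latent dimensionality
z = torch.randn(64, latent_dim)  # batch of 64 samples from P(z) = N(0, I)
```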

(Figure: Euclids_Perception)

Then say we have a family of deterministic functions f(z; θ), parameterized by a vector θ in some space Θ, where f : Z × Θ → X. f is deterministic, but if z is random and θ is fixed, then f(z; θ) is a random variable in the space X. We wish to optimize θ such that we can sample z from P(z) and, with high probability, f(z; θ) will be like the X’s in our dataset.
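In practice f(z; θ) is usually a neural network whose weights play the role of θ. A minimal PyTorch sketch, assuming a 20-dimensional latent space and 784-dimensional outputs (e.g. flattened 28×28 images); both sizes are illustrative:

```python
import torch
import torch.nn as nn

class Decoder(nn.Module):
    """A deterministic function f(z; theta); theta are the network weights."""
    def __init__(self, latent_dim=20, out_dim=784):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(latent_dim, 400),
            nn.ReLU(),
            nn.Linear(400, out_dim),
            nn.Sigmoid(),  # constrain outputs to [0, 1], e.g. pixel intensities
        )

    def forward(self, z):
        return self.net(z)

f = Decoder()
z = torch.randn(64, 20)  # z random, theta fixed ...
x = f(z)                 # ... so f(z; theta) is a random variable in X
```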

Objective function

The intuition behind this framework, called “maximum likelihood,” is that if the model is likely to produce training-set samples, then it is also likely to produce similar samples, and unlikely to produce dissimilar ones.
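Concretely, maximum likelihood means choosing θ to maximize the marginal likelihood of each training point,

$$P(X) = \int P(X \mid z;\, \theta)\, P(z)\, dz,$$

and, since this integral is intractable when f is a neural network, a VAE instead maximizes the standard variational lower bound (ELBO) with respect to θ and an approximate posterior Q(z | X):

$$\log P(X) \;\geq\; \mathbb{E}_{z \sim Q(z \mid X)}\!\left[\log P(X \mid z;\, \theta)\right] \;-\; D_{\mathrm{KL}}\!\left(Q(z \mid X)\,\|\,P(z)\right)$$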

Variational Auto Encoder
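Tying the pieces together, below is a minimal end-to-end VAE sketch in PyTorch. This is an illustrative outline under the assumptions above (784-dimensional binarized inputs, a 20-dimensional Gaussian latent space), not a reproduction of this repository's code:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    def __init__(self, in_dim=784, hidden=400, latent_dim=20):
        super().__init__()
        # Encoder Q(z|X): outputs the mean and log-variance of a Gaussian
        self.enc = nn.Linear(in_dim, hidden)
        self.mu = nn.Linear(hidden, latent_dim)
        self.logvar = nn.Linear(hidden, latent_dim)
        # Decoder P(X|z): the deterministic f(z; theta) discussed above
        self.dec1 = nn.Linear(latent_dim, hidden)
        self.dec2 = nn.Linear(hidden, in_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps sampling differentiable in mu and sigma
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def loss_fn(recon, x, mu, logvar):
    # Negative ELBO: reconstruction term plus KL(Q(z|X) || N(0, I))
    bce = F.binary_cross_entropy(recon, x, reduction="sum")
    kld = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return bce + kld
```

Training then amounts to minimizing loss_fn over minibatches with a standard optimizer such as Adam.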
