
# Variational-Autoencoders (VAEs)

## Model Visualization

### Encoder

*(Encoder architecture diagram)*

### Decoder

*(Decoder architecture diagram)*
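
The diagrams above are not reproduced here, so the following is a minimal sketch of a typical convolutional VAE encoder/decoder pair for MNIST with a 2-dimensional latent space (the latent size used in this repo). The layer widths and kernel sizes are illustrative assumptions, not necessarily the ones shown in the diagrams.

```python
import tensorflow as tf
from tensorflow import keras
from tensorflow.keras import layers

latent_dim = 2  # latent dimension size used in this repo


# Reparameterization trick: z = mean + exp(0.5 * log_var) * epsilon
class Sampling(layers.Layer):
    def call(self, inputs):
        z_mean, z_log_var = inputs
        epsilon = tf.random.normal(shape=tf.shape(z_mean))
        return z_mean + tf.exp(0.5 * z_log_var) * epsilon


# Encoder: 28x28x1 MNIST image -> (z_mean, z_log_var, z)
encoder_inputs = keras.Input(shape=(28, 28, 1))
x = layers.Conv2D(32, 3, strides=2, padding="same", activation="relu")(encoder_inputs)
x = layers.Conv2D(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Flatten()(x)
x = layers.Dense(16, activation="relu")(x)
z_mean = layers.Dense(latent_dim, name="z_mean")(x)
z_log_var = layers.Dense(latent_dim, name="z_log_var")(x)
z = Sampling()([z_mean, z_log_var])
encoder = keras.Model(encoder_inputs, [z_mean, z_log_var, z], name="encoder")

# Decoder: latent vector -> reconstructed 28x28x1 image
decoder_inputs = keras.Input(shape=(latent_dim,))
x = layers.Dense(7 * 7 * 64, activation="relu")(decoder_inputs)
x = layers.Reshape((7, 7, 64))(x)
x = layers.Conv2DTranspose(64, 3, strides=2, padding="same", activation="relu")(x)
x = layers.Conv2DTranspose(32, 3, strides=2, padding="same", activation="relu")(x)
decoder_outputs = layers.Conv2DTranspose(1, 3, padding="same", activation="sigmoid")(x)
decoder = keras.Model(decoder_inputs, decoder_outputs, name="decoder")
```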

## Dataset and Training

### Dataset

The MNIST dataset is used for training the model.
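
For reference, MNIST can be loaded directly through Keras; a minimal preprocessing sketch (scaling to [0, 1] and adding a channel axis) might look like this. The exact preprocessing in this repo may differ.

```python
import numpy as np
from tensorflow import keras

(x_train, _), (x_test, y_test) = keras.datasets.mnist.load_data()
# Scale pixel values to [0, 1] and add a channel axis for the conv layers
x_train = np.expand_dims(x_train, -1).astype("float32") / 255.0
x_test = np.expand_dims(x_test, -1).astype("float32") / 255.0
```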

### Training the model

1. Optimizer: Adam
2. Three loss trackers (see the sketch after this list):
   1. Reconstruction loss
   2. KL divergence loss
   3. Total loss
3. Training epochs: 30
4. Latent dimension size: 2
5. Callbacks: EarlyStopping, ReduceLROnPlateau
6. Model weights are saved to `model_weights.h5` after training.
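
A minimal training sketch in the style of the Keras VAE example (referenced below), wiring up the three loss trackers, the Adam optimizer, and the two callbacks. It assumes the `encoder`, `decoder`, and `x_train` from the sketches above; the batch size and callback patience values are illustrative assumptions.

```python
import tensorflow as tf
from tensorflow import keras


class VAE(keras.Model):
    def __init__(self, encoder, decoder, **kwargs):
        super().__init__(**kwargs)
        self.encoder = encoder
        self.decoder = decoder
        # The three loss trackers listed above
        self.total_loss_tracker = keras.metrics.Mean(name="total_loss")
        self.reconstruction_loss_tracker = keras.metrics.Mean(name="reconstruction_loss")
        self.kl_loss_tracker = keras.metrics.Mean(name="kl_loss")

    @property
    def metrics(self):
        return [
            self.total_loss_tracker,
            self.reconstruction_loss_tracker,
            self.kl_loss_tracker,
        ]

    def train_step(self, data):
        with tf.GradientTape() as tape:
            z_mean, z_log_var, z = self.encoder(data)
            reconstruction = self.decoder(z)
            # Reconstruction loss: per-image binary cross-entropy, summed over pixels
            reconstruction_loss = tf.reduce_mean(
                tf.reduce_sum(
                    keras.losses.binary_crossentropy(data, reconstruction),
                    axis=(1, 2),
                )
            )
            # KL divergence between q(z|x) and the standard normal prior
            kl_loss = -0.5 * tf.reduce_mean(
                tf.reduce_sum(
                    1 + z_log_var - tf.square(z_mean) - tf.exp(z_log_var), axis=1
                )
            )
            total_loss = reconstruction_loss + kl_loss
        grads = tape.gradient(total_loss, self.trainable_weights)
        self.optimizer.apply_gradients(zip(grads, self.trainable_weights))
        self.total_loss_tracker.update_state(total_loss)
        self.reconstruction_loss_tracker.update_state(reconstruction_loss)
        self.kl_loss_tracker.update_state(kl_loss)
        return {m.name: m.result() for m in self.metrics}


vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
callbacks = [
    keras.callbacks.EarlyStopping(monitor="total_loss", patience=5),  # patience values are assumptions
    keras.callbacks.ReduceLROnPlateau(monitor="total_loss", factor=0.5, patience=3),
]
vae.fit(x_train, epochs=30, batch_size=128, callbacks=callbacks)
vae.save_weights("model_weights.h5")
```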

For retraining, load the previously saved weights into the model and continue training on top of them, as sketched below.
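
A minimal retraining sketch, assuming the `VAE` class and the `encoder`/`decoder` from the sketches above; the number of additional epochs is an illustrative assumption.

```python
from tensorflow import keras

# Rebuild the same architecture, then warm-start from the saved weights
vae = VAE(encoder, decoder)
vae.compile(optimizer=keras.optimizers.Adam())
vae.load_weights("model_weights.h5")          # weights saved by the previous run
vae.fit(x_train, epochs=10, batch_size=128)   # continue training on top of them
```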

## Visualizing the Outputs

Run the `main.py` file to visualize the following (a sketch of these steps follows this list):

1. Latent space representations of the dataset as encoded by the encoder network.
2. Reconstructed images of the images present in the test set.

   *(Real images vs. reconstructed images)*

3. Newly generated images formed by sampling random noise from the latent space and feeding it to the decoder network.

   *(Generated images)*
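
A minimal sketch of these three steps: encoding the test set to plot the latent space, reconstructing test images, and decoding random noise to generate new digits. It assumes the `encoder`/`decoder` and the `x_test`/`y_test` arrays from the sketches above; `main.py` may implement this differently.

```python
import numpy as np
import matplotlib.pyplot as plt

latent_dim = 2

# 1. Latent space: encode the test set and scatter-plot the 2-D means
z_mean, z_log_var, z = encoder.predict(x_test)
plt.scatter(z_mean[:, 0], z_mean[:, 1], c=y_test, cmap="viridis", s=2)
plt.colorbar()
plt.xlabel("z[0]")
plt.ylabel("z[1]")
plt.show()

# 2. Reconstruction: decode the latent codes of the test images
reconstructed = decoder.predict(z)

# 3. Generation: decode random noise sampled from the latent prior
z_samples = np.random.normal(size=(16, latent_dim))
generated = decoder.predict(z_samples)  # shape: (16, 28, 28, 1)
```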

## References

1. An awesome article giving a basic intuition of the mathematics behind VAEs.
2. Keras code reference for writing the loss functions.
3. A lecture giving a deep understanding of the probability behind VAEs.
4. Paper on VAEs.