This repository contains (unofficial) implementations of generative models. Results from these models can be found in ./output.
train.py:
--[no]animation: If true, an animation of latent space interpolation will be created.
(default: 'false')
--batch_size: Batch size.
(default: '64')
(an integer)
--dataset: Dataset to train on.
(default: 'mnist')
--epoch: Number of epochs.
(default: '20')
(an integer)
--latent_dims: Number of latent dimensions.
(default: '2')
(an integer)
--learning_rate: Initial learning rate.
(default: '0.001')
(a number)
--model: Model to train.
(default: 'VAE')
--model_params: Parameters for some models, given as 'key:value' pairs separated by '|'.
(default: 'key:value|key:value')
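The --model_params flag packs model-specific options into one delimited string. A minimal sketch of how such a string could be parsed (the helper name is hypothetical, not the repository's actual code):

```python
def parse_model_params(spec):
    """Parse a 'key:value|key:value' string into a dict (hypothetical helper)."""
    if not spec:
        return {}
    # Split on '|' into pairs, then on the first ':' into key and value.
    return dict(pair.split(":", 1) for pair in spec.split("|"))

params = parse_model_params("beta:4|gamma:1000")
# params == {'beta': '4', 'gamma': '1000'}
```

Values stay strings here; a real implementation would convert them to the types each model expects.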
Models:
- Vanilla Autoencoder (VanillaAE)
- Variational Autoencoder (VAE):
- Its convolutional version (ConvVAE) is also implemented.
- BetaVAE
- [1406.5298] Semi-Supervised Learning with Deep Generative Models
- [1506.02216] A Recurrent Latent Variable Model for Sequential Data
- [1502.04623] DRAW: A Recurrent Neural Network For Image Generation
- [1410.6460] Markov Chain Monte Carlo and Variational Inference: Bridging the Gap
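As a rough illustration of the reparameterization trick shared by the VAE variants above, here is a minimal NumPy sketch (shapes and values are illustrative, not the repository's code):

```python
import numpy as np

rng = np.random.default_rng(0)

# Encoder outputs (illustrative values): mean and log-variance of q(z|x)
mu = np.zeros(2)
log_var = np.zeros(2)

# Reparameterization trick: z = mu + sigma * eps, with eps ~ N(0, I),
# so the sampling step stays differentiable w.r.t. mu and log_var.
eps = rng.standard_normal(2)
z = mu + np.exp(0.5 * log_var) * eps

# Closed-form KL divergence between q(z|x) = N(mu, sigma^2) and the prior N(0, I)
kl = -0.5 * np.sum(1.0 + log_var - mu**2 - np.exp(log_var))
```

With mu = 0 and log_var = 0, q(z|x) equals the prior, so the KL term is zero.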
References:
- This repository is inspired by @wiseodd's generative-models repository.
- Tutorial on Variational Autoencoders
- Variational Autoencoder and Extensions - VideoLectures.NET