Code used for teaching Deep Generative Models (DGM). All algorithms are implemented in PyTorch. Note that these algorithms are written for pedagogical purposes, so they are not necessarily optimal in terms of performance and/or efficiency.
These examples are intended to illustrate some basic principles underlying a generative model, together with classical methods used to sample from distributions.
- Unidimensional Gaussian presents a simple case using a unidimensional Gaussian distribution.
- Gaussian mixture presents the simple case of having a Gaussian Mixture Model.
- Gibbs sampling of a bivariate Gaussian shows a step-by-step implementation of a Gibbs sampler on a bivariate Gaussian distribution.
- Gibbs sampling of a hierarchical Gaussian shows another step-by-step implementation of a Gibbs sampler, in this case applied to inferring the parameters of a hierarchical Gaussian distribution. This exercise highlights that Gibbs sampling can be used to sample from an unknown posterior, provided that we can sample from the conditional distributions.
- E-M example on a Gaussian mixture specializes the E-M algorithm to a Gaussian Mixture Model and implements the method step by step.
- VI with MF on a hierarchical Gaussian shows an example of Variational Inference using a Mean-Field approximation on a hierarchical Gaussian distribution.
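The Gibbs sampling idea underlying two of the notebooks above can be sketched compactly in the bivariate Gaussian case, where both conditionals are themselves Gaussian with known closed form. The parameter values below are illustrative, not taken from the notebooks:

```python
import torch

torch.manual_seed(0)

# Target: bivariate Gaussian with illustrative parameters
mu = torch.tensor([0.0, 0.0])
sigma = torch.tensor([1.0, 1.0])
rho = 0.8

def gibbs_bivariate_gaussian(n_samples, burn_in=500):
    """Alternately draw each coordinate from its Gaussian conditional."""
    # Conditional std dev of x_i given x_j: sigma_i * sqrt(1 - rho^2)
    cond_std = [sigma[i] * (1 - rho**2) ** 0.5 for i in (0, 1)]
    x = torch.zeros(2)  # arbitrary initial state
    samples = torch.empty(n_samples, 2)
    for t in range(n_samples + burn_in):
        # x0 | x1 ~ N(mu0 + rho * sig0/sig1 * (x1 - mu1), (1 - rho^2) * sig0^2)
        m0 = mu[0] + rho * sigma[0] / sigma[1] * (x[1] - mu[1])
        x[0] = m0 + cond_std[0] * torch.randn(())
        # x1 | x0, symmetrically
        m1 = mu[1] + rho * sigma[1] / sigma[0] * (x[0] - mu[0])
        x[1] = m1 + cond_std[1] * torch.randn(())
        if t >= burn_in:
            samples[t - burn_in] = x
    return samples

samples = gibbs_bivariate_gaussian(20000)
print(samples.mean(0))            # close to mu
print(torch.corrcoef(samples.T))  # off-diagonal close to rho
```

The key point, made explicit in the hierarchical notebook as well, is that the sampler never needs the joint density in closed form: it only ever draws from the two conditionals.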
These examples are intended to illustrate some key ideas behind Deep Generative Models. The methods are deliberately simplified to make the underlying principles as clear as possible, so their performance is clearly limited compared with state-of-the-art implementations.
- Linear flow with Gaussian data shows how to implement a simple linear flow to transform a Gaussian distribution into another one, placing special emphasis on the random variable transformation involved.
- Linear Autoencoder with MNIST implements a Linear Autoencoder and trains it on the MNIST data. It also explores how the latent representation affects the quality of the reconstruction.
- Non-linear Autoencoder with MNIST implements a Non-linear Autoencoder and trains it on the MNIST data.
- Variational Autoencoder with MNIST implements a Variational Autoencoder and trains it on the MNIST data. This example explores the latent space of the VAE, showing some of the insights that it provides.
- Generative Adversarial Network with MNIST implements a Generative Adversarial Network and trains it on the MNIST data.
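The central mechanics shared by the autoencoder-style notebooks above, the reparameterization trick and the ELBO objective, can be sketched with a minimal fully-connected VAE. The architecture and hyperparameters here are illustrative, and the smoke run uses random tensors as stand-ins for MNIST batches so the sketch stays self-contained:

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class VAE(nn.Module):
    """Minimal fully-connected VAE; layer sizes are illustrative choices."""
    def __init__(self, x_dim=784, h_dim=200, z_dim=2):
        super().__init__()
        self.enc = nn.Linear(x_dim, h_dim)
        self.mu = nn.Linear(h_dim, z_dim)
        self.logvar = nn.Linear(h_dim, z_dim)
        self.dec1 = nn.Linear(z_dim, h_dim)
        self.dec2 = nn.Linear(h_dim, x_dim)

    def encode(self, x):
        h = F.relu(self.enc(x))
        return self.mu(h), self.logvar(h)

    def reparameterize(self, mu, logvar):
        # z = mu + sigma * eps keeps the sample differentiable w.r.t. mu, sigma
        eps = torch.randn_like(mu)
        return mu + torch.exp(0.5 * logvar) * eps

    def decode(self, z):
        return torch.sigmoid(self.dec2(F.relu(self.dec1(z))))

    def forward(self, x):
        mu, logvar = self.encode(x)
        z = self.reparameterize(mu, logvar)
        return self.decode(z), mu, logvar

def neg_elbo(x_hat, x, mu, logvar):
    # Negative ELBO = reconstruction term + KL( q(z|x) || N(0, I) )
    recon = F.binary_cross_entropy(x_hat, x, reduction="sum")
    kl = -0.5 * torch.sum(1 + logvar - mu.pow(2) - logvar.exp())
    return recon + kl

# Smoke run on random "images" (stand-ins for flattened MNIST batches)
torch.manual_seed(0)
model = VAE()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)
x = torch.rand(32, 784)
for _ in range(5):
    x_hat, mu, logvar = model(x)
    loss = neg_elbo(x_hat, x, mu, logvar)
    opt.zero_grad()
    loss.backward()
    opt.step()
print(loss.item())
```

The notebook trains the same kind of model on real MNIST digits and then inspects the learned latent space, which is where the 2-dimensional latent becomes interesting to visualize.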
Name | Link | Observations |
---|---|---|
Example 1.3 | Gaussian mixture | Example of Chapter 1 |
Example 1.6 | Gibbs sampling of a bivariate Gaussian | Example of Chapter 1 |
Case Study 1 | Unidimensional Gaussian | Case Study of Chapter 2 |
Case Study 2 | E-M example on a Gaussian mixture | Case Study of Chapter 2 |
Case Study 3 | Gibbs sampling of a hierarchical Gaussian | Case Study of Chapter 2 |
Case Study 4 | VI with MF on a hierarchical Gaussian | Case Study of Chapter 2 |
Case Study 5 | Linear flow with Gaussian data | Case Study of Chapter 3 |
Case Study 6 | Linear Autoencoder with MNIST | Case Study of Chapter 3 |
Case Study 7 | Variational Autoencoder with MNIST | Case Study of Chapter 3 |
Case Study 8 | Generative Adversarial Network with MNIST | Case Study of Chapter 3 |
The recommended way of executing these codes is Google Colab. The simplest way of doing that is to navigate to the code you want to execute, and then replace `github.com` with `githubtocolab.com` in the URL.
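For instance, the URL trick is just a domain substitution (the notebook path below is a hypothetical placeholder):

```python
# Hypothetical notebook URL; only the domain changes
url = "https://github.com/USER/REPO/blob/main/notebook.ipynb"
colab_url = url.replace("github.com", "githubtocolab.com", 1)
print(colab_url)  # https://githubtocolab.com/USER/REPO/blob/main/notebook.ipynb
```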
A second option is to go to Colab, select GitHub in the Open dialog, and add this repository.
Finally, you can also download the code and execute it on your own machine, after installing all required dependencies.