Reproducing Neural Discrete Representation Learning

Course Project for IFT 6135 - Representation Learning

Project Report link: final_project.pdf

Instructions

  1. To train the VQVAE with default arguments as discussed in the report, execute:
python vqvae.py --data-folder /tmp/miniimagenet --output-folder models/vqvae
  2. To train the PixelCNN prior on the latents, execute (a rough sketch of what this step does is shown after these instructions):
python pixelcnn_prior.py --data-folder /tmp/miniimagenet --model models/vqvae --output-folder models/pixelcnn_prior
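
For context, the second command fits an autoregressive PixelCNN over the grid of discrete codebook indices produced by the trained VQ-VAE encoder. Below is a minimal sketch of one such training step; `encoder`, `quantize`, and `prior` are hypothetical placeholder objects, not the actual classes defined in modules.py or trained by the scripts above.

```python
import torch
import torch.nn.functional as F

def prior_training_step(images, labels, encoder, quantize, prior, optimizer):
    """One illustrative training step for a class-conditional PixelCNN prior."""
    with torch.no_grad():
        # Encode images and map each latent vector to the index of its
        # nearest codebook entry, giving a grid of discrete codes.
        latents = encoder(images)      # (B, D, H, W)
        codes = quantize(latents)      # (B, H, W), dtype torch.long

    # The PixelCNN predicts a distribution over the K codebook entries at
    # every spatial position, conditioned on the class label.
    logits = prior(codes, labels)      # (B, K, H, W)
    loss = F.cross_entropy(logits, codes)

    optimizer.zero_grad()
    loss.backward()
    optimizer.step()
    return loss.item()
```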

Datasets Tested

Image

  1. MNIST
  2. FashionMNIST
  3. CIFAR10
  4. Mini-ImageNet

Video

  1. Atari 2600 - Boxing (OpenAI Gym) code

Reconstructions from VQ-VAE

Top 4 rows are Original Images. Bottom 4 rows are Reconstructions.

MNIST

(image: MNIST reconstructions)

Fashion MNIST

(image: Fashion MNIST reconstructions)

Class-conditional samples from VQVAE with PixelCNN prior on the latents

MNIST

(image: class-conditional MNIST samples)

Fashion MNIST

(image: class-conditional Fashion MNIST samples)

Comments

  1. We noticed that implementing our own VectorQuantization PyTorch function sped up training of the VQ-VAE by nearly 3x. The slower but simpler code is in this commit. A simplified sketch of such a function is shown after this list.
  2. We added some basic tests for the vector quantization functions (based on pytest). To run these tests, execute:
py.test . -vv
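
For illustration only, here is a minimal sketch of nearest-neighbour vector quantization with a straight-through gradient. It is a simplified stand-in under assumed names, not the optimized implementation in functions.py.

```python
import torch
from torch.autograd import Function

class VectorQuantizationSketch(Function):
    """Nearest-neighbour codebook lookup (simplified, illustrative only)."""

    @staticmethod
    def forward(ctx, inputs, codebook):
        # inputs: (..., D), codebook: (K, D)
        flat = inputs.reshape(-1, codebook.size(1))
        distances = torch.cdist(flat, codebook)   # pairwise L2 distances
        indices = distances.argmin(dim=1)
        return indices.view(inputs.shape[:-1])

    @staticmethod
    def backward(ctx, grad_indices):
        # The index lookup itself is non-differentiable.
        raise RuntimeError('Use the straight-through estimator instead of '
                           'backpropagating through the indices.')

def vq_straight_through(inputs, codebook):
    # Straight-through estimator: the quantized vectors are used in the
    # forward pass, while gradients flow to `inputs` as if quantization
    # were the identity (as in the VQ-VAE paper).
    indices = VectorQuantizationSketch.apply(inputs, codebook)
    quantized = codebook[indices]
    return inputs + (quantized - inputs).detach(), indices
```

Wrapping the lookup in a custom autograd Function keeps the non-differentiable index computation out of the graph, while the straight-through wrapper copies gradients from the quantized output back to the encoder output.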

Authors

  1. Rithesh Kumar
  2. Tristan Deleu
  3. Evan Racah