Minimalist implementation of VQ-VAE in PyTorch
CVAE and VQ-VAE

This is an implementation of the VQ-VAE (Vector Quantized Variational Autoencoder) and of a Convolutional Variational Autoencoder (CVAE), from Neural Discrete Representation Learning, for compressing MNIST and CIFAR-10. The code is based upon pytorch/examples/vae.
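For reference, the discrete bottleneck at the heart of VQ-VAE can be sketched as below. This is a generic PyTorch sketch of nearest-neighbour quantization with the straight-through gradient estimator and the paper's codebook/commitment losses (β = 0.25 here); it is an illustration of the technique, not this repo's exact code:

```python
import torch
import torch.nn.functional as F

def vector_quantize(z_e, codebook, beta=0.25):
    """Quantize encoder outputs against a learned dictionary.

    z_e:      (B, d, H, W) continuous encoder output
    codebook: (k, d) dictionary of discrete codes
    """
    B, d, H, W = z_e.shape
    flat = z_e.permute(0, 2, 3, 1).reshape(-1, d)          # (B*H*W, d)

    # Squared distances to every code: ||z||^2 - 2 z.e + ||e||^2
    dists = (flat.pow(2).sum(1, keepdim=True)
             - 2 * flat @ codebook.t()
             + codebook.pow(2).sum(1))                     # (B*H*W, k)
    idx = dists.argmin(dim=1)                              # nearest code per position
    z_q = codebook[idx].view(B, H, W, d).permute(0, 3, 1, 2)

    # Straight-through estimator: the decoder sees z_q, but gradients
    # flow back to z_e as if quantization were the identity.
    z_q_st = z_e + (z_q - z_e).detach()

    # Codebook loss moves codes toward encoder outputs; the commitment
    # loss keeps encoder outputs close to their assigned codes.
    loss = F.mse_loss(z_q, z_e.detach()) + beta * F.mse_loss(z_e, z_q.detach())
    return z_q_st, idx.view(B, H, W), loss
```

The returned index map is the discrete representation: each spatial position is described by one of k symbols.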

pip install -r requirements.txt
python main.py

Requirements

  • Python 3.6 (3.5 may also work)
  • PyTorch 0.4
  • Additional requirements in requirements.txt

Results

All images are taken from the test set. The top row shows the original images; the bottom row shows the reconstructions.

k - the number of elements in the dictionary (codebook size). d - the dimension of each dictionary element (the number of channels in the bottleneck).

  • MNIST (k=10, d=64)

(image: MNIST reconstructions)

  • CIFAR10 (k=128, d=256)

(image: CIFAR10 reconstructions)

  • Imagenet (k=512, d=128)

(image: Imagenet reconstructions)
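To make the k and d settings above concrete, here is a small NumPy sketch using the MNIST setting (k=10, d=64). The 7×7 bottleneck resolution is an assumption for illustration only; the real resolution depends on the encoder architecture:

```python
import numpy as np

# MNIST setting from above: a dictionary of k codes, each with d channels.
k, d = 10, 64
codebook = np.random.randn(k, d)        # shape (k, d)

# Hypothetical encoder output: a 7x7 grid of d-dimensional vectors.
z_e = np.random.randn(7, 7, d)

# Nearest-neighbour lookup: each spatial position picks one of k codes.
dists = ((z_e[..., None, :] - codebook) ** 2).sum(-1)   # (7, 7, k)
indices = dists.argmin(-1)                              # (7, 7) ints in [0, k)

# The image is thus compressed to 7*7 symbols of log2(k) bits each.
bits = indices.size * np.log2(k)
```

Larger k gives a richer discrete vocabulary at the cost of more bits per position, which is why CIFAR10 and Imagenet use larger codebooks than MNIST.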

TODO:

Acknowledgement

Thanks to tf-vaevae for a good reference.
