Note: this repository was archived by the owner on Sep 10, 2022, and is now read-only.



Joint VAE Model API

Variational autoencoder for images and text

This repository contains the code used to produce the results in the following paper:

Vedantam, Ramakrishna, Ian Fischer, Jonathan Huang, and Kevin Murphy. 2017. "Generative Models of Visually Grounded Imagination." arXiv [cs.LG].


NOTE: All scripts should be run from the root directory of the project.

Basic Setup

Install basic dependencies.

cd to the root directory and run the following command.

source install/

This sets up the Python virtual environment for the project and downloads the files needed for the MNIST-A experiments.

Additional Data Setup

To create your own MNIST-A dataset, see scripts/ and set the appropriate paths in datasets/mnist_attributes/ and datasets/mnist_attributes/ respectively.

To download and process the CelebA dataset, run scripts/ and set the appropriate paths in datasets/celeba/


See scripts/ for example uses of the different models and experiments reported in the paper:

  1. Experiments on the compositional split of MNIST-A.
  2. Experiments on the iid split of MNIST-A.
  3. Experiments on the CelebA dataset.

The above scripts use slurm to launch jobs on a cluster, so only run them if you have access to a cluster with slurm installed. If you don't have access to slurm, simply run the corresponding python commands from those scripts with the appropriate parameters.

Each experiment is associated with three jobs: train, eval and imeval (imagination evaluation).

Example commands to run experiments with slurm:

  • source scripts/ '' (to run all three jobs for every experiment)
  • source scripts/ train (to launch only training jobs)
  • source scripts/ eval (to launch only eval jobs)
  • source scripts/ imeval (to launch only imagination eval jobs)

If you don't have access to slurm, you can look up the command-line arguments the above scripts use for each experiment and run those commands directly in bash.
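As an illustration, a tiny Python driver that runs the three job types sequentially in place of slurm might look like the following. The script names and flags below are placeholders, not the project's real entry points; the actual commands and arguments are in scripts/:

```python
import subprocess

# Placeholder entry points and flags -- read the real ones out of scripts/.
JOB_COMMANDS = {
    "train": ["python", "train.py", "--dataset", "mnista"],
    "eval": ["python", "eval.py", "--dataset", "mnista"],
    "imeval": ["python", "imeval.py", "--dataset", "mnista"],
}

def build_command(job):
    """Return the argv list for one of the three job types."""
    try:
        return JOB_COMMANDS[job]
    except KeyError:
        raise ValueError(f"unknown job: {job}")

def run_experiment():
    # train must finish before eval/imeval can read its checkpoints,
    # so run the jobs sequentially instead of submitting them to slurm.
    for job in ("train", "eval", "imeval"):
        subprocess.run(build_command(job), check=True)
```

The sequential loop mirrors the dependency that slurm would otherwise enforce between the train job and the two evaluation jobs.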

Quantitative Results

See the IPython notebook experiments/iclr_results_aggregate.ipynb for how to view the imagination results after running the imeval (imagination evaluation) jobs post-training.
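A minimal sketch of that kind of aggregation, assuming each imeval job wrote a flat JSON file of metrics named after its experiment (the actual notebook, file layout, and metric names differ):

```python
import glob
import json
import os
import tempfile

def aggregate_imeval(results_dir):
    """Collect per-experiment metric dicts from <name>.json files."""
    table = {}
    for path in sorted(glob.glob(os.path.join(results_dir, "*.json"))):
        name = os.path.splitext(os.path.basename(path))[0]
        with open(path) as f:
            table[name] = json.load(f)
    return table

# Demo on made-up metrics (experiment and metric names are placeholders):
demo_dir = tempfile.mkdtemp()
for exp, metrics in [("mnista_comp", {"jsd": 0.10}),
                     ("mnista_iid", {"jsd": 0.08})]:
    with open(os.path.join(demo_dir, exp + ".json"), "w") as f:
        json.dump(metrics, f)
results = aggregate_imeval(demo_dir)
```

Once collected into one dict keyed by experiment name, the metrics are easy to turn into a comparison table across splits.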


Contributors

  • Ramakrishna Vedantam
  • Hernan Moraldo
  • Ian Fischer


This project is not an official Google project. It is not supported by Google and Google specifically disclaims all warranties as to its quality, merchantability, or fitness for a particular purpose.


See how to contribute.


License

Apache 2.0.


Code to build VAE models that are jointly conditioned.
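The paper fuses per-modality Gaussian posteriors (image and text) with a product of experts. A toy sketch of that fusion step, using plain Python, diagonal Gaussians, and the N(0, 1) prior included as an extra expert (a simplification of the actual model):

```python
import math

def product_of_experts(mus, logvars):
    """Fuse diagonal-Gaussian posteriors q_i(z) = N(mu_i, var_i), one per
    observed modality, by multiplying their densities.

    A product of Gaussians is Gaussian, with precision equal to the sum of
    expert precisions and mean equal to the precision-weighted average of
    expert means. A N(0, 1) prior expert is always included, so the result
    stays defined even when a modality (image or text) is missing.
    """
    dim = len(mus[0]) if mus else 1
    out_mu, out_logvar = [], []
    for d in range(dim):
        precision, weighted_mu = 1.0, 0.0   # start from the N(0, 1) prior
        for mu, logvar in zip(mus, logvars):
            p = math.exp(-logvar[d])        # precision = 1 / variance
            precision += p
            weighted_mu += p * mu[d]
        out_mu.append(weighted_mu / precision)
        out_logvar.append(-math.log(precision))
    return out_mu, out_logvar

# One image expert N(1, 1) fused with the prior N(0, 1):
mu, logvar = product_of_experts([[1.0]], [[0.0]])
# -> mean 0.5, variance 0.5 (precisions add: 1 + 1 = 2)
```

Because the prior expert is always present, dropping either modality at test time just removes one factor from the product, which is what lets the model "imagine" from partial observations.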



