
Filtering Variational Objectives

This folder contains a TensorFlow implementation of the algorithms from

Chris J. Maddison*, Dieterich Lawson*, George Tucker*, Nicolas Heess, Mohammad Norouzi, Andriy Mnih, Arnaud Doucet, and Yee Whye Teh. "Filtering Variational Objectives." NIPS 2017.

This code implements three bounds for training sequential latent variable models: the evidence lower bound (ELBO), the importance weighted auto-encoder bound (IWAE), and our bound, the filtering variational objective (FIVO).

Additionally, it contains implementations of several sequential latent variable models:

  • Variational recurrent neural network (VRNN)
  • Stochastic recurrent neural network (SRNN)
  • Gaussian hidden Markov model with linear conditionals (GHMM)

The VRNN and SRNN can be trained for sequence modeling of pianoroll and speech data. The GHMM is trainable on a synthetic dataset, useful as a simple example of an analytically tractable model.
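As a rough illustration of how the three objectives differ, here is a toy sketch (not the repo's smc.py or bounds.py implementation) that computes ELBO, IWAE, and FIVO estimates from a matrix of per-timestep, per-particle incremental log-weights, assuming resampling at every timestep so that the FIVO bound decomposes across timesteps:

```python
import math

def logmeanexp(xs):
    """Numerically stable log of the mean of exp(x) over a list."""
    m = max(xs)
    return m + math.log(sum(math.exp(x - m) for x in xs) / len(xs))

def elbo(log_alpha):
    """Average total log-weight over particles; each particle is a
    single-sample ELBO estimate. log_alpha: T x N nested lists."""
    n = len(log_alpha[0])
    return sum(sum(row) for row in log_alpha) / n

def iwae(log_alpha):
    """Log of the average total importance weight over the N particles."""
    n = len(log_alpha[0])
    totals = [sum(row[i] for row in log_alpha) for i in range(n)]
    return logmeanexp(totals)

def fivo(log_alpha):
    """With resampling at every timestep, the FIVO bound is the sum over
    timesteps of the log average incremental weight."""
    return sum(logmeanexp(row) for row in log_alpha)
```

By Jensen's inequality the IWAE estimate is never below the ELBO estimate for the same weights; FIVO replaces the single end-of-sequence average with per-step averages. Note that in a real SMC implementation the incremental weights depend on the resampled particle states, which this stateless toy omits.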

Directory Structure

The important parts of the code are organized as follows.

run_fivo.py          # main script, contains flag definitions
fivo
├─smc.py             # a sequential Monte Carlo implementation
├─bounds.py          # code for computing each bound, uses smc.py
├─runners.py         # code for VRNN and SRNN training and evaluation
├─ghmm_runners.py    # code for GHMM training and evaluation
├─data
│ ├─datasets.py      # readers for pianoroll and speech datasets
│ ├─calculate_pianoroll_mean.py  # preprocesses the pianoroll datasets
│ └─create_timit.py  # preprocesses the TIMIT dataset
└─models
  ├─base.py          # base classes used in other models
  ├─vrnn.py          # VRNN implementation
  ├─srnn.py          # SRNN implementation
  └─ghmm.py          # Gaussian hidden Markov model (GHMM) implementation
bin
├─run_train.sh       # an example script that runs training
├─run_eval.sh        # an example script that runs evaluation
├─run_sample.sh      # an example script that runs sampling
├─run_tests.sh       # a script that runs all tests
└─download_pianorolls.sh  # a script that downloads the pianoroll files


Before we start, a few setup steps are required.

Download the Data

The pianoroll datasets are encoded as pickled sparse arrays. You can use the script bin/download_pianorolls.sh to download the files into a directory of your choosing, for example:

export PIANOROLL_DIR=~/pianorolls

Preprocess the Data

The script data/calculate_pianoroll_mean.py loads a pianoroll pickle file, calculates the mean, updates the pickle file to include the mean under the key train_mean, and writes the file back to disk in place. You should do this for all pianoroll datasets you wish to train on.

python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/piano-midi.de.pkl
python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/nottingham.pkl
python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/musedata.pkl
python data/calculate_pianoroll_mean.py --in_file=$PIANOROLL_DIR/jsb.pkl
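For concreteness, the in-place update can be sketched like this. This is an illustrative re-implementation, not the repo's preprocessing script, and it assumes each training sequence is stored as a list of 88-dimensional 0/1 rows:

```python
import pickle

NOTE_DIM = 88  # pianoroll pitch range (assumption for this sketch)

def compute_train_mean(data):
    # data["train"]: list of sequences, each a list of NOTE_DIM-wide 0/1 rows
    totals = [0.0] * NOTE_DIM
    count = 0
    for seq in data["train"]:
        for row in seq:
            for i, v in enumerate(row):
                totals[i] += v
            count += 1
    return [t / count for t in totals]

def add_train_mean(path):
    # load the pickle, annotate it with the training mean, write it back in place
    with open(path, "rb") as f:
        data = pickle.load(f)
    data["train_mean"] = compute_train_mean(data)
    with open(path, "wb") as f:
        pickle.dump(data, f)
```

The mean is stored alongside the data so that models can center their inputs at training and evaluation time without recomputing dataset statistics.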


Training

Now we can train a model. Here is the command for a standard training run, taken from bin/run_train.sh:

python run_fivo.py \
  --mode=train \
  --logdir=/tmp/fivo \
  --model=vrnn \
  --bound=fivo \
  --summarize_every=100 \
  --batch_size=4 \
  --num_samples=4 \
  --learning_rate=0.0001 \
  --dataset_path="$PIANOROLL_DIR/jsb.pkl"

You should see output that looks something like this (with extra logging cruft):

Saving checkpoints for 0 into /tmp/fivo/model.ckpt.
Step 1, fivo bound per timestep: -11.322491
global_step/sec: 7.49971
Step 101, fivo bound per timestep: -11.399275
global_step/sec: 8.04498
Step 201, fivo bound per timestep: -11.174991
global_step/sec: 8.03989
Step 301, fivo bound per timestep: -11.073008


Evaluation

You can also evaluate saved checkpoints. The eval mode loads a model checkpoint, tests its performance on all items in a dataset, and reports the log-likelihood averaged over the dataset. For example, here is a command, taken from bin/run_eval.sh, that will evaluate a JSB model on the test set:

python run_fivo.py \
  --mode=eval \
  --split=test \
  --alsologtostderr \
  --logdir=/tmp/fivo \
  --model=vrnn \
  --batch_size=4 \
  --num_samples=4 \
  --dataset_path="$PIANOROLL_DIR/jsb.pkl"

You should see output like this:

Restoring parameters from /tmp/fivo/model.ckpt-0
Model restored from step 0, evaluating.
test elbo ll/t: -12.198834, iwae ll/t: -11.981187 fivo ll/t: -11.579776
test elbo ll/seq: -748.564789, iwae ll/seq: -735.209206 fivo ll/seq: -710.577141

The evaluation script prints log-likelihood in both nats per timestep (ll/t) and nats per sequence (ll/seq) for all three bounds.
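As a quick sanity check on the two scales, dividing a per-sequence figure by the corresponding per-timestep figure recovers the average test-sequence length. The numbers below are copied from the example output above:

```python
# figures copied from the example evaluation output
elbo_per_seq = -748.564789  # test elbo ll/seq
elbo_per_t = -12.198834     # test elbo ll/t

# ll/seq = ll/t * (average sequence length), so the ratio gives the length
avg_len = elbo_per_seq / elbo_per_t
```

The ratio comes out to roughly 61 timesteps per sequence for the JSB test split in this example.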


Sampling

You can also sample from trained models. The sample mode loads a model checkpoint, conditions the model on a prefix of a randomly chosen datapoint, samples a sequence of outputs from the conditioned model, and writes out the samples and prefix to a .npz file in logdir. For example, here is a command that samples from a model trained on JSB, taken from bin/run_sample.sh:

python run_fivo.py \
  --mode=sample \
  --alsologtostderr \
  --logdir="/tmp/fivo" \
  --model=vrnn \
  --bound=fivo \
  --batch_size=4 \
  --num_samples=4 \
  --split=test \
  --dataset_path="$PIANOROLL_DIR/jsb.pkl" \
  --dataset_type="pianoroll" \
  --prefix_length=25

Here num_samples denotes the number of samples used when conditioning the model as well as the number of trajectories to sample for each prefix.

You should see very little output.

Restoring parameters from /tmp/fivo/model.ckpt-0
Running local_init_op.
Done running local_init_op.

Loading the samples with np.load confirms that we conditioned the model on 4 prefixes of length 25 and sampled 4 sequences of length 50 for each prefix.

>>> import numpy as np
>>> x = np.load("/tmp/fivo/samples.npz")
>>> x[()]['prefixes'].shape
(25, 4, 88)
>>> x[()]['samples'].shape
(50, 4, 4, 88)

Training on TIMIT

The TIMIT speech dataset is available at the Linguistic Data Consortium website, but is unfortunately not free. These instructions will proceed assuming you have downloaded the TIMIT archive and extracted it into the directory $RAW_TIMIT_DIR.

Preprocess TIMIT

We preprocess TIMIT (as described in our paper) and write it out to a series of TFRecord files. To prepare the TIMIT dataset, use the script data/create_timit.py:

export TIMIT_DIR=~/timit_dataset
mkdir $TIMIT_DIR
python data/create_timit.py \
  --raw_timit_dir=$RAW_TIMIT_DIR \
  --out_dir=$TIMIT_DIR

You should see this exact output:

4389 train / 231 valid / 1680 test
train mean: 0.006060  train std: 548.136169
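The preprocessing follows the usual VRNN-style recipe of cutting each raw waveform into fixed-length frames and standardizing with the training statistics printed above. Here is a minimal sketch of those two steps; the 200-sample frame size is an assumption taken from the paper's experimental setup, not read from the preprocessing script:

```python
FRAME_SIZE = 200  # samples per frame (assumed, per the VRNN-style preprocessing)

def to_frames(samples, frame_size=FRAME_SIZE):
    """Split a raw audio sample list into consecutive fixed-size frames,
    dropping any trailing partial frame."""
    n = len(samples) // frame_size
    return [samples[i * frame_size:(i + 1) * frame_size] for i in range(n)]

def standardize(frame, mean, std):
    """Center and scale one frame using training-set statistics."""
    return [(x - mean) / std for x in frame]
```

Each TIMIT utterance then becomes a sequence of standardized frames, which is what the sequence models consume at training time.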

Training on TIMIT

This is very similar to training on pianoroll datasets, with just a few flags switched.

python run_fivo.py \
  --mode=train \
  --logdir=/tmp/fivo \
  --model=vrnn \
  --bound=fivo \
  --summarize_every=100 \
  --batch_size=4 \
  --num_samples=4 \
  --learning_rate=0.0001 \
  --dataset_path="$TIMIT_DIR/train" \
  --dataset_type="speech"

Evaluation and sampling are similar.


Tests

This codebase comes with a number of tests to verify correctness, runnable via bin/run_tests.sh. The tests are also useful as examples of how to use the code.


Contact

This codebase is maintained by Dieterich Lawson. For questions and issues, please open an issue on the tensorflow/models issues tracker and assign it to @dieterichlawson.