# ReducedVarianceReparamGradients

Code for the paper *Reducing Reparameterization Gradient Variance* (NIPS 2017).

## Abstract

Optimization with noisy gradients has become ubiquitous in statistics and machine learning. Reparameterization gradients, or gradient estimates computed via the "reparameterization trick," represent a class of noisy gradients often used in Monte Carlo variational inference (MCVI). However, when these gradient estimators are too noisy, the optimization procedure can be slow or fail to converge. One way to reduce noise is to use more samples for the gradient estimate, but this can be computationally expensive. Instead, we view the noisy gradient as a random variable, and form an inexpensive approximation of the generating procedure for the gradient sample. This approximation has high correlation with the noisy gradient by construction, making it a useful control variate for variance reduction. We demonstrate our approach on non-conjugate multi-level hierarchical models and a Bayesian neural net where we observed gradient variance reductions of multiple orders of magnitude (20-2,000x).
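
The control-variate construction is straightforward to sketch. Below is a minimal, self-contained illustration of the general idea (not the code in this repository): pair each noisy gradient sample with a cheap approximation whose expectation is known, and subtract the approximation while adding back its mean. The names `grad_sample`, `approx_grad_sample`, and `approx_grad_mean` are hypothetical placeholders for the exact reparameterization gradient, an inexpensive approximation of it (e.g., a linearization), and that approximation's known expectation.

```python
# Minimal sketch of variance reduction with a control variate:
#   g_cv(eps) = g(eps) - g_tilde(eps) + E[g_tilde]
# g_tilde is cheap and highly correlated with g, so g_cv is unbiased
# for E[g] but has much lower variance.
import numpy as np


def control_variate_gradient(eps, grad_sample, approx_grad_sample, approx_grad_mean):
    """Single-sample control-variate estimator of E_eps[grad_sample(eps)]."""
    return grad_sample(eps) - approx_grad_sample(eps) + approx_grad_mean


# Toy demonstration: the "true" gradient sample is a nonlinear function of eps;
# its linearization around eps = 0 is the cheap, high-correlation approximation,
# and its mean is known exactly (zero, since E[eps] = 0 under N(0, 1)).
rng = np.random.default_rng(0)
grad_sample = lambda eps: np.sin(eps) + eps    # stand-in for the noisy gradient
approx_grad_sample = lambda eps: 2.0 * eps     # first-order (linear) approximation
approx_grad_mean = 0.0                         # E[2 * eps] = 0 for eps ~ N(0, 1)

naive, reduced = [], []
for _ in range(10000):
    eps = rng.standard_normal()
    naive.append(grad_sample(eps))
    reduced.append(control_variate_gradient(
        eps, grad_sample, approx_grad_sample, approx_grad_mean))

print("naive estimator variance:          ", np.var(naive))
print("control-variate estimator variance:", np.var(reduced))
```

Both estimators target the same expectation, but the control-variate version only has to absorb the error of the cheap approximation, which is where the variance reduction comes from.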

Authors: Andrew Miller, Nick Foti, Alex D'Amour, and Ryan Adams.

## Requires