Repo for a paper about constructing priors on very deep models.
deep-limits

Source code for the paper

Avoiding Pathologies in Very Deep Networks, by David Duvenaud, Oren Rippel, Ryan P. Adams, and Zoubin Ghahramani

To appear in AISTATS 2014.

Abstract: Choosing appropriate architectures and regularization strategies for deep networks is crucial to good predictive performance. To shed light on this problem, we analyze the analogous problem of constructing useful priors on compositions of functions. Specifically, we study the deep Gaussian process, a type of infinitely-wide, deep neural network. We show that in standard architectures, the representational capacity of the network tends to capture fewer degrees of freedom as the number of layers increases, retaining only a single degree of freedom in the limit. We propose an alternate network architecture which does not suffer from this pathology. We also examine deep covariance functions, obtained by composing infinitely many feature transforms. Lastly, we characterize the class of models obtained by performing dropout on Gaussian processes.
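
To make the central pathology concrete, here is a minimal sketch (not the paper's code; function names and parameters are my own) of sampling from a one-dimensional deep Gaussian process by composing independent GP draws. Plotting the samples against the inputs shows them becoming flat and step-like as depth grows, consistent with the loss of degrees of freedom described in the abstract.

```python
import numpy as np

def se_kernel(a, b, lengthscale=1.0):
    """Squared-exponential covariance matrix between 1-D input arrays a and b."""
    sq_dists = (a[:, None] - b[None, :]) ** 2
    return np.exp(-0.5 * sq_dists / lengthscale**2)

def sample_deep_gp(x, depth, lengthscale=1.0, jitter=1e-6, rng=None):
    """Draw f_1, ..., f_depth ~ GP(0, SE kernel) independently and
    return the composition f_depth(...f_1(x)...) evaluated on the grid x."""
    rng = np.random.default_rng(rng)
    h = x
    for _ in range(depth):
        # Covariance of the current layer's outputs, with jitter for stability.
        K = se_kernel(h, h, lengthscale) + jitter * np.eye(len(h))
        L = np.linalg.cholesky(K)
        h = L @ rng.standard_normal(len(h))  # one sample from GP(0, K)
    return h

x = np.linspace(-5, 5, 200)
for depth in (1, 2, 5, 10):
    y = sample_deep_gp(x, depth, rng=0)
    print(f"depth {depth:2d}: sample std = {y.std():.3f}")
```

The key point of the sketch is that each layer's covariance depends only on the previous layer's outputs; once those outputs become nearly constant over a region, the composition can no longer distinguish inputs within it.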

The source directory contains the code to generate all of the figures in the paper.

Feel free to email me with any questions at dkd23@cam.ac.uk.