conal/essence-of-ad

The simple essence of automatic differentiation

See also the paper's web page.

This paper is based on an invited talk of the same name for PEPM 2018 (with slides and video). An extended version with full proofs is available on arXiv; a shorter version appeared at ICFP 2018.

Abstract

Automatic differentiation (AD) in reverse mode (RAD) is a central component of deep learning and other uses of large-scale optimization. Commonly used RAD algorithms such as backpropagation, however, are complex and stateful, hindering deep understanding, improvement, and parallel execution. This paper develops a simple, generalized AD algorithm calculated from a simple, natural specification. The general algorithm is then specialized by varying the representation of derivatives. In particular, applying well-known constructions to a naive representation yields two RAD algorithms that are far simpler than previously known. In contrast to commonly used RAD implementations, the algorithms defined here involve no graphs, tapes, variables, partial derivatives, or mutation. They are inherently parallel-friendly, correct by construction, and usable directly from an existing programming language with no need for new data types or programming style, thanks to use of an AD-agnostic compiler plugin.
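The core idea the abstract alludes to is representing a differentiable function not as a formula to be symbolically manipulated, but as a function that returns both its result and its derivative (a linear map), with composition given directly by the chain rule. The paper develops this in Haskell via categorical vocabulary and a compiler plugin; the following is only a rough, scalar-only Python sketch of that representation (all names here are illustrative, not from the paper), where the "linear map" degenerates to a single number.

```python
import math

# A differentiable function a -> b is represented as a plain function
# x -> (f(x), f'(x)): the value paired with the derivative at x.
# For scalars, the derivative (a linear map) is just a number.

def d_sqr():
    # x ↦ x², with derivative 2x
    return lambda x: (x * x, 2 * x)

def d_sin():
    # x ↦ sin x, with derivative cos x
    return lambda x: (math.sin(x), math.cos(x))

def compose(g, f):
    # Chain rule: (g ∘ f)'(x) = g'(f(x)) · f'(x).
    # Note that no graph, tape, or mutation is involved:
    # composition of functions composes derivatives as well.
    def h(x):
        y, df = f(x)
        z, dg = g(y)
        return (z, dg * df)
    return h

# Example: differentiate x ↦ sin(x²) at x = 2.
value, deriv = compose(d_sin(), d_sqr())(2.0)
```

Here `value` is `sin(4)` and `deriv` is `cos(4) * 4`, obtained purely by composing value/derivative pairs; generalizing the derivative representation (e.g. to linear maps run "backwards") is what yields the reverse-mode variants in the paper.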
