
# Autodiff engine

KeOps provides a simple Automatic Differentiation (AD) engine for generic formulas. This feature can be used seamlessly through the Grad instruction or the PyTorch backend: users don't have to understand backpropagation to enjoy our "free" gradients. Nevertheless, for the sake of completeness, here is a short introduction to the inner workings of KeOps.

## Backprop 101

### Gradient of a vector valued function

Let F:\mathbb R^n \to \mathbb R^d be a smooth function. Around any given point x \in \mathbb R^n, its variations are encoded in a linear map \text{d}F(x):\mathbb R^n \to \mathbb R^d called the differential of F: for every variation \delta x \in \mathbb R^n of x,

F(x+\delta x)
= F(x)+ [\text{d}F(x)](\delta x) + o(\delta x).


Going further, we can define the gradient of F at x as the adjoint of \text{d}F(x): it is the unique linear map \partial F(x)=\text{d}F(x)^*:\mathbb R^d \to \mathbb R^n such that for every vector e \in \mathbb R^d and variation \delta x \in \mathbb R^n of x, we have:

\langle [\partial F(x)](e) , \delta x \rangle_{\mathbb R^n}
= \langle e , [\text{d}F(x)](\delta x) \rangle_{\mathbb R^d},


where \langle\,\cdot\,,\,\cdot\,\rangle denotes the standard scalar product.

When F is scalar-valued (i.e. when d=1), the gradient is a linear map from \mathbb{R} to \mathbb{R}^n: it is best understood as the vector of partial derivatives of F. In the general case, the matrix of the gradient in the canonical basis is given by the transpose of the Jacobian of F.
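In coordinates, the adjoint relation above simply says that the matrix of \partial F(x) is the transposed Jacobian. Here is a quick NumPy sanity check of the identity \langle [\partial F(x)](e), \delta x \rangle = \langle e, [\text{d}F(x)](\delta x) \rangle on a toy map F:\mathbb R^2 \to \mathbb R^3 (the map and its Jacobian below are illustrative, not taken from KeOps):

```python
import numpy as np

# A toy smooth map F : R^2 -> R^3 and its differential dF(x),
# represented by the 3x2 Jacobian matrix J(x).
def F(x):
    return np.array([x[0] * x[1], x[0] ** 2, np.sin(x[1])])

def J(x):  # Jacobian of F at x, shape (3, 2)
    return np.array([[x[1],      x[0]],
                     [2 * x[0],  0.0],
                     [0.0,       np.cos(x[1])]])

rng = np.random.default_rng(0)
x, dx = rng.normal(size=2), rng.normal(size=2)  # point and variation in R^2
e = rng.normal(size=3)                          # arbitrary vector in R^3

# The gradient (adjoint) is the transposed Jacobian: <J^T e, dx> = <e, J dx>.
lhs = np.dot(J(x).T @ e, dx)
rhs = np.dot(e, J(x) @ dx)
assert np.allclose(lhs, rhs)
```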

### Reverse mode AD = backpropagation = chain rule

Now, let's assume that the function F:\mathbb R^n \to \mathbb R^d can be written as a composition F =F_p \circ \cdots \circ F_2 \circ F_1 of p functions F_i:E_{i-1} \to E_{i}, where E_i=\mathbb{R}^{d_i}. With these notations, d_0 = n and d_p = d.

Evaluating the gradient of F with the backpropagation algorithm requires:

1. A Forward pass to evaluate the functions

\begin{array}{ccccl}
F_i & : & E_{i-1}    & \to & E_{i} \\
&      & x & \mapsto & F_i(x)
\end{array}


and thus compute the value F(x);

2. A Backward pass to evaluate the (adjoints of the) differentials

\begin{array}{ccccl}
\partial F_i & : & E_{i-1}\times E_{i} & \to & E_{i-1} \\
& & (x_{i-1},x_i^*) & \mapsto & [\text{d} F_i^*(x_{i-1})](x_i^*) = x_{i-1}^*
\end{array}


and compute the gradient of F at location x_0, applied to an arbitrary vector e in the space of outputs.

This method relies on the chain-rule, as

\begin{align*}
& &\text{d}(F_p\circ\cdots\circ F_1)(x_0) &= \text{d}F_p(x_{p-1}) \circ\cdots \circ \text{d} F_1(x_0),\\
&\text{i.e.}& \text{d}(F_p\circ\cdots\circ F_1)^*(x_0) &=  \text{d} F_1^*(x_0) \circ\cdots \circ \text{d}F_p^*(x_{p-1}),\\
&\text{i.e.}& \big[\partial F(x_0)\big](e) &= \big[\partial F_1(x_0)\big]\big( \cdots \big[\partial F_p(x_{p-1})\big](e) \big).
\end{align*}


When F is scalar-valued (i.e. d=1), this algorithm allows us to compute the vector of partial derivatives

\nabla F(x_0)= \big[\partial F(x_0)\big](1)


with a mere forward-backward pass through the computational graph of F... which is much cheaper than the naive evaluation of n finite differences of F.
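The forward-backward scheme above can be sketched in a few lines of NumPy: each function F_i is stored as a (forward, backward) pair, the forward pass caches the intermediate results, and the backward pass chains the adjoints \partial F_i in reverse order, seeded with e = 1. This is an illustrative toy, not the KeOps implementation:

```python
import numpy as np

# Each layer is a pair (F_i, backward_i), where backward_i takes
# (x_{i-1}, x_i^*) and returns x_{i-1}^* = [dF_i^*(x_{i-1})](x_i^*).
layers = [
    (lambda x: np.sin(x), lambda x, g: np.cos(x) * g),                    # F1: elementwise sin
    (lambda x: x ** 2,    lambda x, g: 2 * x * g),                        # F2: elementwise square
    (lambda x: np.array([x.sum()]), lambda x, g: g[0] * np.ones_like(x)), # F3: sum -> R
]

def grad_F(x0):
    # Forward pass: evaluate each F_i, storing the intermediates x_0, ..., x_{p-1}.
    xs, x = [], x0
    for fwd, _ in layers:
        xs.append(x)
        x = fwd(x)
    # Backward pass: apply the adjoints in reverse order, seeded with e = 1.
    g = np.ones(1)
    for (_, bwd), xi in zip(reversed(layers), reversed(xs)):
        g = bwd(xi, g)
    return g  # = nabla F(x0)

# Here F(x) = sum_i sin(x_i)^2, so nabla F(x) = 2 sin(x) cos(x).
x0 = np.array([0.3, -1.2, 0.7])
assert np.allclose(grad_F(x0), 2 * np.sin(x0) * np.cos(x0))
```

Note that a single forward-backward sweep produces all n partial derivatives at once, whereas finite differences would require n extra evaluations of F.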

## The KeOps generic engine

Backpropagation has become the standard way of computing the gradients of arbitrary "Loss" functions in imaging and machine learning. Crucially, any backprop engine should be able to:

- Link together the forward operations F_i with their backward counterparts \partial F_i.
- Store in memory the intermediate results x_0,\dots,x_p before using them in the backward pass.

### The Grad operator

At a low level, KeOps allows you to perform these tasks with the Grad instruction: given a formula F, the symbolic expression Grad(F, V, E) denotes the gradient [\partial_V F(x)](E) of F with respect to the variable V, applied to the input variable E.

If V is a variable placeholder that appears in the expression of F and if E has the same dimension and category as F, Grad(F,V,E) can be fed to KeOps just like any other symbolic expression. The resulting output will have the same dimension and category as the variable V, and can be used directly for gradient descent or higher-order differentiation: operations such as Grad(Grad(..,..,..),..,..) are fully supported.

### User interfaces

As evidenced by this :doc:`example <../_auto_examples/numpy/plot_generic_syntax_numpy>`, the simple Grad syntax can relieve you of the burden of differentiating symbolic formulas by hand.

Going further, our Python interface is fully compatible with the PyTorch library: feel free to use the output of a :mod:`pykeops.torch` routine just like any other differentiable tensor! Thanks to the flexibility of the :mod:`torch.autograd` engine, end-to-end automatic differentiation is at hand: see this :doc:`example <../_auto_examples/pytorch/plot_generic_syntax_pytorch>` or this :doc:`example <../_auto_examples/pytorch/plot_generic_syntax_pytorch_LSE>`.
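For illustration, here is what end-to-end differentiation looks like on a dense PyTorch analog of a KeOps-style kernel reduction. This is a hedged sketch in plain PyTorch, not KeOps itself: a :mod:`pykeops.torch` routine would slot into the same code, with autograd driving its backward pass in exactly the same way.

```python
import torch

# Dense PyTorch analog of a reduction KeOps handles symbolically
# (a Gaussian kernel sum). The point: autograd gives the gradient
# "for free", exactly as it does for a pykeops.torch routine.
M, N, D = 5, 7, 3
x = torch.randn(M, D, requires_grad=True)
y = torch.randn(N, D)
b = torch.randn(N, 1)

K = torch.exp(-((x[:, None, :] - y[None, :, :]) ** 2).sum(-1))  # (M, N) kernel matrix
out = K @ b                                                     # (M, 1) reduction over j
loss = (out ** 2).sum()
loss.backward()  # reverse-mode AD through the whole computational graph
assert x.grad.shape == (M, D)
```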

## An example

Coming back to our :ref:`previous example <formula.example>` where the formula

F(p,x,y,a)_i = \left(\sum_{j=1}^N (p - a_j)^2 \exp(x_i^u + y_j^u) \right)_{i=1,\cdots,M,\, u=1,2,3} \in \mathbb R^{M\times 3}

F = "Sum_Reduction(Square(Pm(0,1) - Vj(3,1)) * Exp(Vi(1,3) + Vj(2,3)), 1)"

was discussed, the symbolic expression

[grad_a F] = "Grad( Sum_Reduction(Square(Pm(0,1) - Vj(3,1)) * Exp(Vi(1,3) + Vj(2,3)), 1),
Vj(3,1), Vi(4,3) )"

allows us to compute the gradient of F with respect to (a_j) \in \mathbb R^N (= Vj(3,1)), applied to an arbitrary test vector e\in\mathbb R^{M\times 3} given as a fifth input Vi(4,3):

\left[ [\partial_{a} F(p,x,y,a)] (e)\right]_j = - \sum_{i=1}^M \sum_{u=1}^3 2(p - a_j) \exp(x_i^u + y_j^u)\, e^u_i \in \mathbb R.


With aliases, this computation simply reads:

p=Pm(0,1), x=Vi(1,3), y=Vj(2,3), a=Vj(3,1), e=Vi(4,3)
[grad_a F](e) = "Grad( Sum_Reduction(Square(p-a)*Exp(x+y), 1), a, e)"
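As a sanity check, the closed-form gradient above can be verified numerically with dense NumPy arrays standing in for the KeOps variables p=Pm(0,1), x=Vi(1,3), y=Vj(2,3), a=Vj(3,1) and e=Vi(4,3). The sizes M and N below are arbitrary, and the einsum-based evaluation is just an illustration, not what KeOps generates:

```python
import numpy as np

rng = np.random.default_rng(0)
M, N = 4, 6
p = rng.normal()               # scalar parameter
x = rng.normal(size=(M, 3))    # one 3-vector per index i
y = rng.normal(size=(N, 3))    # one 3-vector per index j
a = rng.normal(size=N)         # one scalar per index j
e = rng.normal(size=(M, 3))    # test vector in the output space R^{M x 3}

def F(a):
    # F(p, x, y, a)_i^u = sum_j (p - a_j)^2 exp(x_i^u + y_j^u)
    return np.einsum("j,iu,ju->iu", (p - a) ** 2, np.exp(x), np.exp(y))

# Closed form: [grad_a F](e)_j = -sum_{i,u} 2 (p - a_j) exp(x_i^u + y_j^u) e_i^u
grad = -2 * (p - a) * np.einsum("iu,ju,iu->j", np.exp(x), np.exp(y), e)

# Compare <grad, da> with <e, dF(a)(da)> via centered finite differences
# in a random direction da: the two must agree by the adjoint identity.
da = rng.normal(size=N)
eps = 1e-6
directional = np.sum(e * (F(a + eps * da) - F(a - eps * da))) / (2 * eps)
assert np.allclose(grad @ da, directional, atol=1e-4)
```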