====== Stochastic Optimization Techniques ======
Neural networks are often trained stochastically, i.e. using a method where the objective function changes at each iteration. This stochastic variation is due to the model being trained on different data during each iteration. This is motivated by (at least) two factors: First, the dataset used as training data is often too large to fit in memory and/or be optimized over efficiently. Second, the objective function is typically nonconvex, so using different data at each iteration can help prevent the model from settling in a local minimum. Furthermore, training neural networks is usually done using only the first-order gradient of the loss function with respect to the parameters. This is due to the large number of parameters present in a neural network, which for practical purposes prevents the computation of the Hessian matrix. Because vanilla gradient descent can diverge or converge incredibly slowly if its learning rate hyperparameter is set inappropriately, many alternative methods have been proposed which are intended to produce desirable convergence with less dependence on hyperparameter settings. These methods often effectively compute and utilize a preconditioner on the gradient, adaptively change the learning rate over time, or approximate the Hessian matrix. This document summarizes some of the more popular methods proposed recently; for a similar overview see ((Ruder, An overview of gradient descent optimization algorithms http://sebastianruder.com/optimizing-gradient-descent/index.html#gradientdescentoptimizationalgorithms)) or the documentation of Climin ((https://climin.readthedocs.org/en/latest/#optimizer-overview)), or ((Schaul, Antonoglou, Silver, Unit Tests for Stochastic Optimization)) for a comparison on some simple tasks.
In the following, we will use $\theta_t$ to denote the model parameters at iteration $t$, $\mathcal{L}(\theta_t)$ to denote the loss computed on the current batch of data, $\nabla \mathcal{L}(\theta_t)$ the gradient of that loss with respect to the parameters, and $\eta$ the learning rate. Operations on parameter vectors (squaring, square roots, division) are understood elementwise.
===== Stochastic Gradient Descent =====
Stochastic gradient descent (SGD) simply updates each parameter by subtracting the gradient of the loss with respect to that parameter, scaled by the learning rate $\eta$, a hyperparameter. If $\eta$ is too large the updates can diverge; if it is too small, convergence will be slow. The update is simply
\begin{align*} \theta_{t + 1} &= \theta_t - \eta \nabla \mathcal{L}(\theta_t) \end{align*}
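As a concrete reference for the update rules below, here is a minimal NumPy sketch of a single SGD step; the function name, signature, and the toy quadratic loss are illustrative assumptions, not part of any particular library.
<code python>
import numpy as np

def sgd_step(theta, grad, lr=0.01):
    """One SGD update: move against the gradient, scaled by the learning rate."""
    return theta - lr * grad

# Illustrative usage: minimize the quadratic loss L(theta) = ||theta||^2 / 2,
# whose gradient is simply theta.
theta = np.array([1.0, -2.0])
for _ in range(100):
    theta = sgd_step(theta, grad=theta, lr=0.1)
print(theta)  # approaches the minimum at [0, 0]
</code>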
===== Momentum =====
In SGD, the gradient $\nabla \mathcal{L}(\theta_t)$ often changes rapidly at each iteration due to the fact that the loss is being computed over different data. This is often partially mitigated by re-using the gradient step from the previous iteration, scaled by a momentum hyperparameter $\mu$, as follows:
\begin{align*} v_{t + 1} &= \mu v_t - \eta \nabla \mathcal{L}(\theta_t) \\ \theta_{t + 1} &= \theta_t + v_{t+1} \end{align*}
It has been argued that including the previous gradient step has the effect of approximating some second-order information about the gradient.
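A minimal sketch of the momentum update in the same assumed NumPy setting as above; the velocity buffer ''v'' must persist across iterations, and all names are illustrative.
<code python>
import numpy as np

def momentum_step(theta, v, grad, lr=0.01, mu=0.9):
    """One momentum update: the velocity accumulates a decaying sum of past gradients."""
    v = mu * v - lr * grad
    theta = theta + v
    return theta, v

# v should be initialized to zeros, e.g. v = np.zeros_like(theta), and threaded
# through the training loop alongside theta.
</code>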
===== Nesterov's Accelerated Gradient =====
In Nesterov's Accelerated Gradient (NAG), the gradient of the loss at each step is computed at $\theta_t + \mu v_t$ instead of $\theta_t$; that is, the momentum term is applied to the parameters before the gradient is evaluated, giving a "look-ahead" at where the parameters are about to move:
\begin{align*} v_{t + 1} &= \mu v_t - \eta \nabla\mathcal{L}(\theta_t + \mu v_t) \\ \theta_{t + 1} &= \theta_t + v_{t+1} \end{align*}
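A sketch of the NAG step under the same assumed NumPy setup; the only change from momentum is that the gradient must be evaluated at the look-ahead point $\theta_t + \mu v_t$, so the sketch takes a gradient function rather than a precomputed gradient.
<code python>
import numpy as np

def nag_step(theta, v, grad_fn, lr=0.01, mu=0.9):
    """One NAG update: evaluate the gradient at the look-ahead point theta + mu * v."""
    grad = grad_fn(theta + mu * v)
    v = mu * v - lr * grad
    theta = theta + v
    return theta, v

# grad_fn is any callable returning the (stochastic) gradient at a given point,
# e.g. lambda th: th for the toy quadratic loss used earlier.
</code>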
===== Adagrad =====
Adagrad effectively rescales the learning rate for each parameter according to the history of the gradients for that parameter. This is done by dividing each term in the gradient by the square root of the sum of squares of its historical gradient values. Rescaling in this way effectively lowers the learning rate for parameters which consistently receive large gradients; it also lowers the learning rate for all parameters over time, because the accumulated sum only grows. With the accumulator $g$ initialized to zero, the update is
\begin{align*} g_{t + 1} &= g_t + \nabla \mathcal{L}(\theta_t)^2 \\ \theta_{t + 1} &= \theta_t - \frac{\eta\nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t + 1}} + \epsilon} \end{align*}
where $\epsilon$ is a small constant included for numerical stability.
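The per-parameter accumulator is straightforward to express in NumPy; a minimal sketch (names and defaults are illustrative):
<code python>
import numpy as np

def adagrad_step(theta, g, grad, lr=0.01, eps=1e-8):
    """One Adagrad update: accumulate squared gradients and rescale the step per parameter."""
    g = g + grad ** 2                              # running sum of squared gradients
    theta = theta - lr * grad / (np.sqrt(g) + eps)
    return theta, g
</code>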
===== RMSProp =====
In its originally proposed form ((Hinton, Srivastava, and Swersky, "rmsprop: Divide the gradient by a running average of its recent magnitude")), RMSProp is very similar to Adagrad. The only difference is that the $g_t$ term is computed as an exponentially decaying moving average of the squared gradient instead of an accumulated sum, so that $g_t$ estimates the (uncentered) second moment of recent gradients and the effective learning rate no longer shrinks monotonically over time:
\begin{align*} g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ \theta_{t + 1} &= \theta_t - \frac{\eta\nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t + 1}} + \epsilon} \end{align*}
In the original lecture slides where it was proposed, $\gamma$ is set to $0.9$.
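A sketch of this basic RMSProp update; the decay constant follows the value mentioned above, while the remaining names and defaults are only illustrative.
<code python>
import numpy as np

def rmsprop_step(theta, g, grad, lr=0.001, gamma=0.9, eps=1e-8):
    """One RMSProp update: g is a decaying average of squared gradients."""
    g = gamma * g + (1.0 - gamma) * grad ** 2
    theta = theta - lr * grad / (np.sqrt(g) + eps)
    return theta, g
</code>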
Alternatively, in ((Graves, "Generating Sequences with Recurrent Neural Networks")), a first-order moment approximator $m_t$ is also maintained. It is included in the denominator so that the step is effectively normalized by an estimate of the gradient's standard deviation, $\sqrt{g_{t+1} - m_{t+1}^2}$, rather than by the uncentered second moment alone; a momentum term is included as well:
\begin{align*} m_{t + 1} &= \gamma m_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t) \\ g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ v_{t + 1} &= \mu v_t - \frac{\eta \nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t+1} - m_{t+1}^2 + \epsilon}} \\ \theta_{t + 1} &= \theta_t + v_{t + 1} \end{align*}
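A sketch of this variant, combining the two moving averages with a momentum buffer; the function name and default values are assumptions for illustration, not taken from the paper.
<code python>
import numpy as np

def graves_rmsprop_step(theta, m, g, v, grad, lr=1e-4, gamma=0.95, mu=0.9, eps=1e-4):
    """One step of the RMSProp variant with a first-moment estimate and momentum."""
    m = gamma * m + (1.0 - gamma) * grad                 # decaying mean of gradients
    g = gamma * g + (1.0 - gamma) * grad ** 2            # decaying mean of squared gradients
    v = mu * v - lr * grad / np.sqrt(g - m ** 2 + eps)   # normalize by estimated std. dev.
    theta = theta + v
    return theta, m, g, v
</code>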
===== Adadelta =====
Adadelta ((Zeiler, "Adadelta: An Adaptive Learning Rate Method")) uses the same exponentially decaying moving average estimate of the gradient second moment $g_t$ as RMSProp. It also maintains an exponentially decaying moving average $x_t$ of the squared parameter updates, and the square root of this quantity takes the place of the global learning rate $\eta$, so that no learning rate hyperparameter needs to be set:
\begin{align*} g_{t + 1} &= \gamma g_t + (1 - \gamma) \nabla \mathcal{L}(\theta_t)^2 \\ v_{t + 1} &= -\frac{\sqrt{x_t + \epsilon} \nabla \mathcal{L}(\theta_t)}{\sqrt{g_{t+1} + \epsilon}} \\ x_{t + 1} &= \gamma x_t + (1 - \gamma) v_{t + 1}^2 \\ \theta_{t + 1} &= \theta_t + v_{t + 1} \end{align*}
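A sketch of the Adadelta step; note that there is no learning rate argument, only the decay constant and $\epsilon$ (names and defaults are illustrative).
<code python>
import numpy as np

def adadelta_step(theta, g, x, grad, gamma=0.95, eps=1e-6):
    """One Adadelta update: the RMS of past updates plays the role of the learning rate."""
    g = gamma * g + (1.0 - gamma) * grad ** 2          # second moment of gradients
    v = -np.sqrt(x + eps) * grad / np.sqrt(g + eps)    # update, with no global learning rate
    x = gamma * x + (1.0 - gamma) * v ** 2             # second moment of updates
    theta = theta + v
    return theta, g, x
</code>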
===== Adam =====
Adam is somewhat similar to Adagrad/Adadelta/RMSProp in that it computes a decayed moving average of the gradient and squared gradient (first and second moment estimates) at each time step. It differs mainly in two ways: First, the first-order moment moving average coefficient is decayed over time. Second, because the first and second order moment estimates are initialized to zero, some bias correction is used to counteract the resulting bias towards zero. The use of the first and second order moments ensures that, in most cases, the step size is approximately $\pm\eta$ and bounded in magnitude by $\eta$; as $\theta_t$ approaches a minimum and the gradient estimate becomes less certain, the step size shrinks further:
\begin{align*} m_{t + 1} &= \gamma_1 m_t + (1 - \gamma_1) \nabla \mathcal{L}(\theta_t) \\ g_{t + 1} &= \gamma_2 g_t + (1 - \gamma_2) \nabla \mathcal{L}(\theta_t)^2 \\ \hat{m}_{t + 1} &= \frac{m_{t + 1}}{1 - \gamma_1^{t + 1}} \\ \hat{g}_{t + 1} &= \frac{g_{t + 1}}{1 - \gamma_2^{t + 1}} \\ \theta_{t + 1} &= \theta_t - \frac{\eta \hat{m}_{t + 1}}{\sqrt{\hat{g}_{t + 1}} + \epsilon} \end{align*}
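A sketch of the Adam update with bias correction, following the equations above; the time-dependent decay of $\gamma_1$ is omitted, and the names and default values are illustrative assumptions rather than prescriptions.
<code python>
import numpy as np

def adam_step(theta, m, g, grad, t, lr=0.001, gamma1=0.9, gamma2=0.999, eps=1e-8):
    """One Adam update at iteration t (starting at 0): decayed moments plus bias correction."""
    m = gamma1 * m + (1.0 - gamma1) * grad          # first moment estimate
    g = gamma2 * g + (1.0 - gamma2) * grad ** 2     # second moment estimate
    m_hat = m / (1.0 - gamma1 ** (t + 1))           # bias-corrected first moment
    g_hat = g / (1.0 - gamma2 ** (t + 1))           # bias-corrected second moment
    theta = theta - lr * m_hat / (np.sqrt(g_hat) + eps)
    return theta, m, g
</code>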
===== ESGD =====
((Dauphin, Vries, Chung and Bengio, "RMSProp and equilibrated adaptive learning rates for non-convex optimization"))
===== Adasecant =====
((Gulcehre and Bengio, "Adasecant: Robust Adaptive Secant Method for Stochastic Gradient"))
===== vSGD =====
((Schaul, Zhang, LeCun, "No More Pesky Learning Rates"))
===== Rprop =====
((Riedmiller and Braun, "A Direct Adaptive Method for Faster Backpropagation Learning: The RPROP Algorithm"))