Review: Gradient Descent
In the third step of machine learning, we need to solve an optimization problem over the model parameters:
$$\theta^{*} = \arg\min_{\theta} \mathcal{L}(\theta)$$
Suppose $\theta = \{\theta_1, \theta_2\}$:
$$\nabla L(\theta) =
\left[
\begin{matrix}
\frac{\partial L}{\partial \theta_1} \\
\frac{\partial L}{\partial \theta_2}
\end{matrix}
\right]
$$
The gradient descent process:
$$
\theta^1 = \theta^0 - \eta \nabla L(\theta^{0}) \\
\theta^2 = \theta^1 - \eta \nabla L(\theta^{1}) \\
\cdots
$$
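The update rule above can be sketched on a toy quadratic loss (the loss function and all values here are hypothetical, chosen only to illustrate the iteration):

```python
import numpy as np

# Toy loss L(theta) = (theta_1 - 3)^2 + (theta_2 + 1)^2, minimized at (3, -1).
def grad(theta):
    # Gradient vector [dL/dtheta_1, dL/dtheta_2].
    return np.array([2 * (theta[0] - 3), 2 * (theta[1] + 1)])

eta = 0.1                      # learning rate
theta = np.array([0.0, 0.0])   # theta^0, the initial point
for _ in range(100):
    # theta^{t+1} = theta^t - eta * grad L(theta^t)
    theta = theta - eta * grad(theta)

print(theta)  # approaches the minimizer (3, -1)
```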
Learning Rate
Set the learning rate carefully.
Adaptive Learning Rate
Popular and simple idea: reduce the learning rate by some factor every few epochs, e.g. $\eta^{t} = \frac{\eta}{\sqrt{t + 1}}$.
But a single learning rate cannot be one-size-fits-all; giving different parameters different learning rates is better!
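The decay schedule $\eta^{t} = \frac{\eta}{\sqrt{t + 1}}$ can be written directly (the base rate of 0.5 is a hypothetical choice):

```python
# Decay schedule: eta^t = eta / sqrt(t + 1).
eta0 = 0.5  # hypothetical base learning rate

def decayed_lr(t, eta0=eta0):
    # Learning rate used at epoch t (t = 0, 1, 2, ...).
    return eta0 / (t + 1) ** 0.5

print([round(decayed_lr(t), 3) for t in range(4)])  # [0.5, 0.354, 0.289, 0.25]
```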
Adagrad
Adagrad divides each parameter's learning rate by the square root of the sum of squares of all that parameter's previous derivatives, so every parameter ends up with its own learning rate.
Adagrad's computation process:
$$\theta^{t+1} = \theta^{t} - \frac{\eta^{t}}{\sigma^{t}}g^{t}, \quad \sigma^{t} = \sqrt{\frac{1}{t+1}\sum_{i=0}^{t}(g^i)^2}$$
Since $\eta^{t} = \frac{\eta}{\sqrt{t+1}}$, the $\sqrt{t+1}$ factors cancel, so:
$$\theta^{t+1} = \theta^{t} - \frac{\eta}{\sqrt{\sum_{i=0}^{t}(g^i)^2}}g^{t}$$
Larger Gradient, Larger Step (Why Does Adagrad Work?)
In Adagrad's update rule, the $g^{t}$ in the numerator says that a larger gradient means a larger step (a large gradient suggests we are far from the optimum, so the step should be larger); but the $\sqrt{\sum_{i=0}^{t}(g^i)^2}$ in the denominator says that a larger gradient means a smaller step. Isn't this a contradiction?
As shown in the figure above, for $x_0$ the best step size is $|x_0 + \frac{b}{2a}|$, which lands directly on the optimal solution $-\frac{b}{2a}$. Writing it as $|x_0 + \frac{b}{2a}| = \frac{|2ax_0 + b|}{2a}$, the numerator $|2ax_0 + b|$ is exactly the first derivative of the loss function at $x_0$. So for a single-variable optimization problem, it is reasonable for the step size to be proportional to the magnitude of the derivative (points with a larger derivative are farther from the optimum). This conclusion, however, only holds in the single-variable case; with multiple variables, a larger gradient does not necessarily mean being farther from the optimum.
Comparing $c$ and $a$ in the figure, $a$ is farther from its optimum, so its step should be larger than $c$'s; yet $\text{gradient}(c) > \text{gradient}(a)$, so $a$ would be updated more slowly. Recall that the denominator of the best step $\frac{|2ax_0 + b|}{2a}$, namely $2a$, is exactly the second derivative of the loss function at $x_0$. So the best step should not only be proportional to the first derivative but also inversely proportional to the second derivative; only the ratio $\frac{\text{first derivative}}{\text{second derivative}}$ reflects the true distance to the optimum. Therefore:
$$\text{best step} = \frac{|\text{first derivative}|}{\text{second derivative}}$$
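This best-step claim can be checked on a concrete quadratic (the coefficients and starting point below are hypothetical):

```python
# For a quadratic L(x) = a x^2 + b x + c, a step of size
# |L'(x0)| / L''(x0) taken downhill from any x0 lands exactly
# on the minimizer -b / (2a).
a, b, c = 2.0, -4.0, 1.0    # hypothetical coefficients; minimizer at x = 1
x0 = 5.0
first = 2 * a * x0 + b      # L'(x0)
second = 2 * a              # L''(x0)
x1 = x0 - first / second    # one best-size step downhill
print(x1)  # prints 1.0, i.e. -b / (2a)
```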
Adagrad exploits exactly this best step: it approximates the second derivative by sampling the first derivative over time, via $\sqrt{\sum_{i=0}^{t}(g^i)^2}$, as shown in the figure below:
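The per-parameter rule above can be sketched on a toy loss (the loss function and all hyperparameter values are hypothetical):

```python
import numpy as np

# Sketch of Adagrad: theta^{t+1} = theta^t - eta * g^t / sqrt(sum_i (g^i)^2),
# with the accumulator kept separately for each parameter.
def adagrad(grad_fn, theta, eta=1.0, steps=200, eps=1e-8):
    accum = np.zeros_like(theta)          # running sum of squared gradients
    for _ in range(steps):
        g = grad_fn(theta)
        accum += g ** 2
        theta = theta - eta * g / (np.sqrt(accum) + eps)
    return theta

# Toy loss L(theta) = theta_1^2 + 10 * theta_2^2, minimized at (0, 0);
# the two coordinates have very different curvatures.
grad_fn = lambda th: np.array([2 * th[0], 20 * th[1]])
theta_final = adagrad(grad_fn, np.array([3.0, 3.0]))
print(theta_final)  # both coordinates approach 0
```

Note that both coordinates converge at the same rate despite the 10x difference in curvature: dividing by the accumulated gradient magnitude makes the effective step scale-invariant, which is the adaptive behavior described above.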
Stochastic Gradient Descent
Feature Scaling
Gradient Descent Theory
Given a loss function, we cannot find the optimal solution in one step. But using the Taylor series, we can find, within a sufficiently small neighborhood, the point that minimizes the loss function, update the parameters to that point, and then repeat the search in the new neighborhood.
Single-Variable Taylor Series
$$h(x) = \sum_{k=0}^{\infty}\frac{h^{(k)}(x_0)}{k!}(x - x_0)^k = h(x_0) + h'(x_0)(x - x_0) + \cdots$$
When $x \rightarrow x_0$, we have
$$h(x) \approx h(x_0) + h'(x_0)(x - x_0)$$
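The first-order approximation can be checked numerically; here $h(x) = \sin(x)$ is a hypothetical choice of function:

```python
import math

# Check h(x) ≈ h(x0) + h'(x0)(x - x0) for h(x) = sin(x), h'(x) = cos(x).
h = math.sin
dh = math.cos
x0 = 1.0

for dx in (0.1, 0.01, 0.001):
    approx = h(x0) + dh(x0) * dx
    err = abs(h(x0 + dx) - approx)
    print(dx, err)  # the error shrinks like dx^2 as x -> x0
```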
Multivariable Taylor Series
$$h(x, y) = h(x_0, y_0) + \frac{\partial h(x_0, y_0)}{\partial x}(x - x_0) + \frac{\partial h(x_0, y_0)}{\partial y}(y - y_0) + \cdots$$
When $(x, y) \rightarrow (x_0, y_0)$, we have
$$h(x, y) \approx h(x_0, y_0) + \frac{\partial h(x_0, y_0)}{\partial x}(x - x_0) + \frac{\partial h(x_0, y_0)}{\partial y}(y - y_0)$$
Back to Formal Derivation
Within a sufficiently small red neighborhood, by the Taylor series we have:
$$L(\theta) \approx L(a, b) + \frac{\partial L(a, b)}{\partial \theta_1}(\theta_1 - a) + \frac{\partial L(a, b)}{\partial \theta_2}(\theta_2 - b)$$
Let:
$$u = \frac{\partial L(a, b)}{\partial \theta_1}, \ v = \frac{\partial L(a, b)}{\partial \theta_2}$$
$$L(\theta) \approx L(a, b) + u(\theta_1 - a) + v(\theta_2 - b) \tag{1}$$
Since $L(a, b)$ is a constant independent of $\theta$, the $(\theta_1, \theta_2)$ that minimizes $L(\theta)$ satisfies:
$$
\left[
\begin{matrix}
\theta_1 \\
\theta_2
\end{matrix}
\right]=
\left[
\begin{matrix}
a \\
b
\end{matrix}
\right]-\eta
\left[
\begin{matrix}
u \\
v
\end{matrix}
\right]
$$
because $L(\theta)$ is minimized when $(\theta_1 - a, \theta_2 - b)$ points in the direction opposite to $(u, v)$, with $\eta$ scaling the step to the radius of the neighborhood.
This gives us the gradient descent formula (where $a, b$ are the initial parameter values and $\eta$ is the learning rate):
$$
\left[
\begin{matrix}
\theta_1 \\
\theta_2
\end{matrix}
\right]=
\left[
\begin{matrix}
a \\
b
\end{matrix}
\right]-\eta
\left[
\begin{matrix}
\frac{\partial L(a, b)}{\partial \theta_1} \\
\frac{\partial L(a, b)}{\partial \theta_2}
\end{matrix}
\right]
$$
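The key step of the derivation, that the linearized loss over a small circle is minimized by moving exactly opposite the gradient, can be verified by brute force (the partial derivatives $u, v$ and the radius below are hypothetical values):

```python
import math

# On a circle of radius eta around (a, b), the linearized loss is
# L(a, b) + u*(theta_1 - a) + v*(theta_2 - b). Sample directions on the
# circle and confirm the minimum is reached opposite the gradient (u, v).
u, v = 2.0, -1.0     # partial derivatives at (a, b)
eta = 0.01           # radius of the neighborhood

best_angle, best_val = None, float("inf")
for k in range(3600):                    # sample directions on the circle
    phi = 2 * math.pi * k / 3600
    d1, d2 = eta * math.cos(phi), eta * math.sin(phi)
    val = u * d1 + v * d2                # change in the linearized loss
    if val < best_val:
        best_angle, best_val = phi, val

g_norm = math.hypot(u, v)
print(best_val)   # close to -eta * ||(u, v)||, the opposite-gradient direction
```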
The formula above holds only within a sufficiently small neighborhood, so in theory the learning rate should be infinitesimally small; in practice, it just needs to be small enough. Taking the second-order term of the Taylor series into account would in theory allow a slightly larger learning rate, but the extra computation usually isn't worth it.
More Limitations of Gradient Descent