Part 4. Gradient Descent #4

XuSShuai opened this issue Nov 21, 2018
Review: Gradient Descent

In the third step of machine learning we need to solve an optimization problem over the model parameters:
$$\theta^{*} = \arg\min_{\theta} L(\theta)$$
Suppose $\theta = \{\theta_1, \theta_2\}$; then the gradient is
$$\nabla L(\theta) =
\left[
\begin{matrix}
\frac{\partial L}{\partial \theta_1} \\
\frac{\partial L}{\partial \theta_2}
\end{matrix}
\right]
$$
The gradient descent procedure:
$$
\theta^1 = \theta^0 - \eta \nabla L(\theta^{0}) \\
\theta^2 = \theta^1 - \eta \nabla L(\theta^{1}) \\
\cdots
$$
[image]
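
A minimal sketch of this update loop in Python (the toy quadratic loss, starting point, learning rate, and step count below are illustrative assumptions, not from the notes):

```python
import numpy as np

def gradient_descent(grad, theta0, eta=0.05, steps=100):
    """Vanilla gradient descent: theta^{t+1} = theta^t - eta * grad(theta^t)."""
    theta = np.asarray(theta0, dtype=float)
    for _ in range(steps):
        theta = theta - eta * grad(theta)
    return theta

# Assumed toy loss L(theta) = theta_1^2 + 10 * theta_2^2, minimized at (0, 0).
grad = lambda th: np.array([2 * th[0], 20 * th[1]])
print(gradient_descent(grad, theta0=[3.0, -2.0]))  # approaches [0, 0]
```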

Learning Rate

Set the learning rate carefully: if it is too large the loss may oscillate or even blow up, if it is too small training becomes needlessly slow.
[image]

Adaptive Learning Rate

Popular & simple idea: reduce the learning rate by some factor every few epochs, e.g. $\eta^{t} = \frac{\eta}{\sqrt{t + 1}}$.
But a single learning rate cannot be one-size-fits-all; giving different parameters different learning rates works better!
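
A tiny illustration of that decay schedule (the base rate $\eta = 0.1$ is an assumed value):

```python
eta = 0.1  # assumed base learning rate
decayed = [eta / (t + 1) ** 0.5 for t in range(5)]
print(decayed)  # [0.1, 0.0707..., 0.0577..., 0.05, 0.0447...]
```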

Adagrad

Each parameter's learning rate is divided by the square root of the sum of squares of that parameter's previous derivatives, so every parameter ends up with its own learning rate.

The Adagrad computation

$$
\begin{aligned}
w^1 &= w^0 - \frac{\eta^0}{\sigma^0}g^{0}, &\quad \sigma^0 &= \sqrt{(g^0)^2} \\
w^2 &= w^1 - \frac{\eta^1}{\sigma^1}g^{1}, &\quad \sigma^1 &= \sqrt{\tfrac{1}{2}\left((g^0)^2+(g^1)^2\right)} \\
w^3 &= w^2 - \frac{\eta^2}{\sigma^2}g^{2}, &\quad \sigma^2 &= \sqrt{\tfrac{1}{3}\left((g^0)^2+(g^1)^2+(g^2)^2\right)} \\
&\cdots \\
w^{t+1} &= w^{t} - \frac{\eta^t}{\sigma^t}g^{t}, &\quad \sigma^t &= \sqrt{\tfrac{1}{t+1}\sum_{i=0}^{t}(g^i)^2}
\end{aligned}
$$
where
$$\eta^{t} = \frac{\eta}{\sqrt{t+1}}, \quad g^{t} = \frac{\partial L(w^t)}{\partial w}$$

Therefore

$$w^{t+1} = w^{t} - \frac{\eta^t}{\sigma^t}g^{t} = w^{t} - \frac{\frac{\eta}{\sqrt{t+1}}}{\sqrt{\frac{1}{t+1}\sum_{i=0}^{t}(g^i)^2}}g^{t} = w^{t} - \frac{\eta}{\sqrt{\sum_{i=0}^{t}(g^i)^2}}g^{t}$$
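
A minimal Adagrad sketch in Python following the simplified update above (the toy loss, starting point, base learning rate, and the small `eps` term are assumptions added for illustration):

```python
import numpy as np

def adagrad(grad, w0, eta=1.0, steps=200, eps=1e-8):
    """Adagrad: w^{t+1} = w^t - eta / sqrt(sum_i (g^i)^2) * g^t, per parameter."""
    w = np.asarray(w0, dtype=float)
    sum_sq = np.zeros_like(w)            # running sum of squared gradients
    for _ in range(steps):
        g = grad(w)
        sum_sq += g ** 2
        w = w - eta / (np.sqrt(sum_sq) + eps) * g   # eps avoids division by zero
    return w

# Assumed toy loss L(w) = w_1^2 + 100 * w_2^2: very different curvature per coordinate.
grad = lambda w: np.array([2 * w[0], 200 * w[1]])
print(adagrad(grad, w0=[3.0, -2.0]))   # each coordinate gets its own effective step size
```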

Larger Gradient, Larger Step (why does Adagrad work)?

In the Adagrad update rule, the factor $g^{t}$ in the numerator says that the larger the gradient, the larger the update step (a large gradient means we are far from the optimal solution, so the step should be larger); but the $\sqrt{\sum_{i=0}^{t}(g^i)^2}$ in the denominator says that the larger the gradients, the smaller the step. Isn't there a contradiction here?

[image]

As the figure above shows (a quadratic loss $y = ax^2 + bx + c$), the best update step from $x_0$ is $|x_0 + \frac{b}{2a}|$, which lands directly on the optimal solution $-\frac{b}{2a}$. The numerator of $|x_0 + \frac{b}{2a}| = \frac{|2ax_0 + b|}{2a}$ is exactly the first derivative of the loss at $x_0$, so for a single-variable optimization problem it is reasonable for the step size to be proportional to the derivative (points with a larger derivative are farther from the optimum). This conclusion, however, only holds for a single variable; with multiple variables, a larger gradient does not necessarily mean being farther from the optimal solution.

[image]

Comparing $c$ with $a$ in the figure, $a$ is farther from its optimum, so its update step should be larger than $c$'s; yet $\text{gradient}(c) > \text{gradient}(a)$, so $a$ would clearly be updated more slowly. Recall that the denominator of the best step $|x_0 + \frac{b}{2a}| = \frac{|2ax_0 + b|}{2a}$ is $2a$, which is exactly the second derivative of the loss at $x_0$. So the best step should not only be proportional to the first derivative but also inversely proportional to the second derivative; only the ratio $\frac{|\text{first derivative}|}{\text{second derivative}}$ reflects the true distance to the optimal solution, hence
$$\text{best step} = \frac{|\text{first derivative}|}{\text{second derivative}}$$
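
A quick numerical check of this claim on an assumed quadratic $y = ax^2 + bx + c$ (coefficients and starting point chosen arbitrarily):

```python
# For y = a*x^2 + b*x + c, the minimum is at x* = -b / (2a).
a, b = 0.5, 3.0           # assumed coefficients (c does not affect the minimizer)
x0 = 4.0                  # assumed starting point

first = 2 * a * x0 + b    # first derivative at x0
second = 2 * a            # second derivative (constant for a quadratic)

best_step = abs(first) / second
print(best_step)                   # 7.0
print(abs(x0 - (-b / (2 * a))))    # 7.0: exactly the distance to the minimum
```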

Adagrad exploits exactly this best step: it approximates the second derivative, without computing it, by $\sqrt{\sum_{i=0}^{t}(g^i)^2}$, which can be viewed as sampling the first derivatives, as shown below:
[image]

Stochastic Gradient Descent

[image]
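
The notes only contain a slide here; as a hedged illustration, the sketch below runs stochastic gradient descent, computing the gradient of a single randomly chosen example per update, on an assumed toy linear regression problem:

```python
import numpy as np

rng = np.random.default_rng(0)
X = rng.normal(size=(100, 2))                          # assumed toy inputs
y = X @ np.array([1.5, -3.0]) + 0.1 * rng.normal(size=100)

w = np.zeros(2)
eta = 0.01
for _ in range(1000):
    i = rng.integers(len(X))        # pick one example at random
    err = X[i] @ w - y[i]
    w -= eta * err * X[i]           # gradient of 0.5 * (x_i . w - y_i)^2
print(w)                            # close to the true weights [1.5, -3.0]
```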

Feature Scaling

[image]
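
The slide here is about giving the input features comparable scales; a common way to do this (an assumption, since the notes give no formula) is to standardize each feature to zero mean and unit variance:

```python
import numpy as np

def standardize(X):
    """Scale each feature (column) to zero mean and unit standard deviation."""
    mean = X.mean(axis=0)
    std = X.std(axis=0)
    return (X - mean) / std

X = np.array([[1.0, 1000.0],
              [2.0, 2000.0],
              [3.0, 3000.0]])     # assumed features with very different scales
print(standardize(X))             # each column now has mean 0 and std 1
```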

Gradient Descent Theory

Given a loss function, we cannot reach the optimal solution in one step. However, using the Taylor series we can, within a sufficiently small neighborhood, find the point that minimizes the loss function, update the parameters to that point, and then repeat the search in the new neighborhood.

  • single variable
    $$h(x) = \sum_{k=0}^{\infty}\frac{h^{(k)}(x_0)}{k!}(x - x_0)^k = h(x_0) + h^{'}(x_0)(x - x_0) + \cdots$$
    When $x \rightarrow x_0$, we have
    $$h(x) \approx h(x_0) + h^{'}(x_0)(x - x_0)$$
    [image]

  • multivariable Taylor series
    $$h(x, y) = h(x_0, y_0) + \frac{\partial h(x_0, y_0)}{\partial x}(x - x_0) + \frac{\partial h(x_0, y_0)}{\partial y}(y - y_0) + \cdots$$
    When $(x, y)\rightarrow (x_0, y_0)$, we have
    $$h(x, y) \approx h(x_0, y_0) + \frac{\partial h(x_0, y_0)}{\partial x}(x - x_0) + \frac{\partial h(x_0, y_0)}{\partial y}(y - y_0)$$
    (a quick numerical check of this approximation follows this list)
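
A quick numerical check of the first-order approximation for an assumed function $h(x, y) = \sin(x) + y^2$ around $(x_0, y_0) = (1, 2)$:

```python
import numpy as np

def h(x, y):
    return np.sin(x) + y ** 2      # assumed test function

x0, y0 = 1.0, 2.0
dh_dx, dh_dy = np.cos(x0), 2 * y0  # partial derivatives at (x0, y0)

for delta in (0.5, 0.1, 0.01):     # move closer and closer to (x0, y0)
    x, y = x0 + delta, y0 + delta
    approx = h(x0, y0) + dh_dx * (x - x0) + dh_dy * (y - y0)
    print(delta, abs(h(x, y) - approx))   # error shrinks roughly like delta^2
```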

Back to Formal Derivation

Within a sufficiently small neighborhood (the red circle in the figure below), the Taylor series gives:
$$L(\theta) \approx L(a, b) + \frac{\partial L(a, b)}{\partial \theta_1}(\theta_1 - a) + \frac{\partial L(a, b)}{\partial \theta_2}(\theta_2 - b)$$

[image]

Let:
$$u = \frac{\partial L(a, b)}{\partial \theta_1}, \ v = \frac{\partial L(a, b)}{\partial \theta_2}$$
$$L(\theta) \approx L(a, b) + u(\theta_1 - a) + v(\theta_2 - b) \tag{1}$$
The $(\theta_1, \theta_2)$ that minimizes $L(\theta)$ within the neighborhood satisfies (note that $L(a, b)$ is a constant, independent of $\theta$):
$$
\left[
\begin{matrix}
\theta_1 \\
\theta_2
\end{matrix}
\right]=
\left[
\begin{matrix}
a \\
b
\end{matrix}
\right]-\eta
\left[
\begin{matrix}
u \\
v
\end{matrix}
\right]
$$
This holds because $L(\theta)$ is smallest when the direction of $(\theta_1 - a, \theta_2 - b)$ is opposite to that of $(u, v)$: for a fixed step length, the inner product $u(\theta_1 - a) + v(\theta_2 - b)$ is minimized when the two vectors are antiparallel; here $\eta$ corresponds to the radius of the neighborhood.
This gives us the gradient descent formula (where $a$, $b$ are the initial values of the parameters and $\eta$ is the learning rate):
$$
\left[
\begin{matrix}
\theta_1 \\
\theta_2
\end{matrix}
\right]=
\left[
\begin{matrix}
a \\
b
\end{matrix}
\right]-\eta
\left[
\begin{matrix}
\frac{\partial L(a, b)}{\partial \theta_1} \\
\frac{\partial L(a, b)}{\partial \theta_2}
\end{matrix}
\right]
$$
The derivation above holds only within a sufficiently small neighborhood, so in theory the learning rate should be infinitesimal; in practice it just has to be small enough. Taking the second-order term of the Taylor series into account would in principle allow a somewhat larger learning rate, but doing so is usually not worth the extra cost.
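
As a sanity check of the "opposite direction" argument, the sketch below evaluates the linearized loss $u(\theta_1 - a) + v(\theta_2 - b)$ over many directions on a small circle and confirms the minimum is attained in the direction opposite to $(u, v)$ (the values of $u$, $v$, and the radius are arbitrary assumptions):

```python
import numpy as np

u, v = 2.0, -1.0             # assumed partial derivatives at (a, b)
radius = 0.1                 # radius of the small neighborhood

angles = np.linspace(0, 2 * np.pi, 3600, endpoint=False)
steps = radius * np.stack([np.cos(angles), np.sin(angles)], axis=1)

linearized = steps @ np.array([u, v])     # u*(theta1 - a) + v*(theta2 - b)
best = steps[np.argmin(linearized)]

print(best)                                     # ~ -radius * [u, v] / ||[u, v]||
print(-radius * np.array([u, v]) / np.hypot(u, v))
```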

More Limitations of Gradient Descent

[image]
[image]
