Update optimizers.rst
bfortuner committed Mar 13, 2018
1 parent d427544 commit 290e610
Showing 1 changed file with 0 additions and 11 deletions: docs/optimizers.rst
@@ -73,17 +73,6 @@ of the gradient on previous steps. This results in minimizing oscillations and faster convergence.
 
 .. math::
 
-    v_{dW} = \beta v_{dW} + (1 - \beta) dW \\
-    W = W - \alpha v_{dW}
-
-.. note::
-
-  - :math:`v` - the exponentially weighted average
-  - :math:`dW` - cost gradient with respect to current layer weight tensor
-  - :math:`W` - weight tensor
-  - :math:`\beta` - hyperparameter to be tuned
-  - :math:`\alpha` - the learning rate
-
     v_{dW} = \beta v_{dW} + (1 - \beta) \frac{\partial \mathcal{J} }{ \partial W } \\
     W = W - \alpha v_{dW}
 
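For reference, the update rule retained by this commit maps directly to code. Below is a minimal NumPy sketch of one momentum step; the function name ``momentum_update`` and the default values for ``alpha`` and ``beta`` are illustrative assumptions, not part of the documented text.

.. code-block:: python

    import numpy as np

    def momentum_update(W, dW, v_dW, alpha=0.01, beta=0.9):
        # v_dW is the exponentially weighted average of past gradients.
        v_dW = beta * v_dW + (1 - beta) * dW
        # Step the weights along the smoothed gradient direction.
        W = W - alpha * v_dW
        return W, v_dW

    # Toy usage with a random weight tensor (dW stands in for dJ/dW).
    W = np.random.randn(3, 3)
    v_dW = np.zeros_like(W)
    dW = np.random.randn(3, 3)
    W, v_dW = momentum_update(W, dW, v_dW)

Initializing ``v_dW`` to zero is a common convention; the equations above leave the initialization unspecified.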
