
Commit

a very small change. (#26)
I think that to solve for the gradient we need to iterate through the data points using the new weight (m) and bias (b) values instead of n values.
AyushSenapati authored and bfortuner committed Jul 23, 2018
1 parent d49e88f commit 98a1a01
Showing 1 changed file with 1 addition and 1 deletion.
docs/gradient_descent.rst (1 addition, 1 deletion)

@@ -60,7 +60,7 @@ The gradient can be calculated as:
     \frac{1}{N} \sum -2(y_i - (mx_i + b)) \\
   \end{bmatrix}
 
-To solve for the gradient, we iterate through our data points using our new :math:`m` and :math:`n` values and compute the partial derivatives. This new gradient tells us the slope of our cost function at our current position (current parameter values) and the direction we should move to update our parameters. The size of our update is controlled by the learning rate.
+To solve for the gradient, we iterate through our data points using our new :math:`m` and :math:`b` values and compute the partial derivatives. This new gradient tells us the slope of our cost function at our current position (current parameter values) and the direction we should move to update our parameters. The size of our update is controlled by the learning rate.
 
 
 .. rubric:: Code
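For context, the corrected paragraph describes the gradient-computation step of gradient descent for simple linear regression. Below is a minimal sketch of that step in Python, assuming an MSE cost and the same m, b, and learning-rate parameters as the surrounding docs; the function name update_weights and the argument lr are illustrative, not necessarily the file's actual code:

    import numpy as np

    def update_weights(m, b, X, y, lr):
        """One gradient descent step for y ≈ m*x + b under an MSE cost.

        Illustrative sketch; names are assumptions, not the docs' exact code.
        """
        N = len(X)
        # Partial derivatives from the equation shown in the diff context:
        #   df/dm = (1/N) * sum(-2 * x_i * (y_i - (m*x_i + b)))
        #   df/db = (1/N) * sum(-2 * (y_i - (m*x_i + b)))
        residuals = y - (m * X + b)  # computed from the current m and b, which is what the fix clarifies
        dm = (-2.0 / N) * np.sum(X * residuals)
        db = (-2.0 / N) * np.sum(residuals)
        # The learning rate controls the size of the update.
        m -= lr * dm
        b -= lr * db
        return m, b

Called in a loop, each step recomputes the gradient from the freshly updated m and b values, e.g. m, b = update_weights(m, b, X, y, lr=0.01).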
