Update sgd doc to insist on momentum buffer initial value (#92111)
Following the discussion in #91108
Pull Request resolved: #92111
Approved by: https://github.com/soumith, https://github.com/janeyx99
albanD authored and pytorchmergebot committed Jan 13, 2023
1 parent a26e5e2 commit 60e37a6
Showing 1 changed file with 4 additions and 0 deletions.
4 changes: 4 additions & 0 deletions torch/optim/sgd.py
@@ -88,6 +88,10 @@ class SGD(Optimizer):
\end{aligned}
The Nesterov version is analogously modified.
Moreover, the initial value of the momentum buffer is set to the
gradient value at the first step. This is in contrast to some other
frameworks that initialize it to all zeros.
"""

def __init__(self, params, lr=required, momentum=0, dampening=0,
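The behavior described in the added docstring can be checked directly: with momentum enabled, PyTorch seeds the momentum buffer with the first gradient rather than zeros, so the very first update equals a plain `lr * grad` step. A minimal sketch (assumes PyTorch is installed):

```python
import torch

# Single parameter with a manually set gradient.
p = torch.tensor([1.0], requires_grad=True)
opt = torch.optim.SGD([p], lr=0.1, momentum=0.9)

p.grad = torch.tensor([2.0])
opt.step()

# The buffer is initialized to the first gradient, not zeros,
# so the first update is p <- p - lr * grad = 1.0 - 0.1 * 2.0.
buf = opt.state[p]["momentum_buffer"]
print(buf)     # tensor([2.])
print(p.data)  # tensor([0.8000])
```

A framework that starts the buffer at zero would instead compute `buf = momentum * 0 + grad` scaled by `(1 - dampening)`, which diverges from PyTorch's result whenever `dampening != 0`.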
