Merge pull request #4291 from kmaehashi/fix-typos
fix typo backporp -> backprop
hvy committed Feb 7, 2018
2 parents 6c143be + cc115e7 commit ad3809c
Showing 4 changed files with 4 additions and 4 deletions.
chainer/function_node.py (2 changes: 1 addition & 1 deletion)
@@ -707,7 +707,7 @@ def grad(outputs, inputs, grad_outputs=None, grad_inputs=None, set_grad=False,
If you set loss scaling factor, gradients of loss values are to be
multiplied by the factor before backprop starts. The factor is
propagated to whole gradients in a computational graph along the
- backporp. The gradients of parameters are divided by the factor
+ backprop. The gradients of parameters are divided by the factor
just before the parameters are to be updated.
Returns:
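The docstring being corrected here (and repeated in the three files below) describes Chainer's loss-scaling flow: scale the gradient of the loss, let the factor ride along through backprop, then divide it out just before the parameter update. A minimal arithmetic sketch of that flow, with purely illustrative values that are not part of this commit:

```python
import numpy as np

loss_scale = 1024.0          # the loss scaling factor
grad_loss = np.float32(1.0)  # d(loss)/d(loss), the initial gradient

# 1. The gradient of the loss is multiplied by the factor before backprop starts.
grad_loss_scaled = grad_loss * loss_scale

# 2. The factor is carried along by every gradient in the computational graph,
#    which keeps very small gradients representable when training in low
#    precision. For a toy parameter w with d(loss)/dw = 3e-6:
dldw = np.float32(3e-6)
grad_w_scaled = dldw * grad_loss_scaled

# 3. Just before the parameter update, the gradient is divided by the factor,
#    recovering the true value.
grad_w = grad_w_scaled / loss_scale
assert np.isclose(grad_w, dldw)
```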
chainer/training/updaters/parallel_updater.py (2 changes: 1 addition & 1 deletion)
@@ -46,7 +46,7 @@ class ParallelUpdater(standard_updater.StandardUpdater):
If you set loss scaling factor, gradients of loss values are to be
multiplied by the factor before backprop starts. The factor is
propagated to whole gradients in a computational graph along the
- backporp. The gradients of parameters are divided by the factor
+ backprop. The gradients of parameters are divided by the factor
just before the parameters are to be updated.
"""
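As a usage sketch for this updater, here is how the loss scaling factor described above might be passed in a two-GPU setup. Everything below is illustrative (tiny model, random data, assumed GPU ids 0 and 1) and is not part of this commit:

```python
import numpy as np
from chainer import iterators, optimizers
from chainer.training import updaters
import chainer.links as L

# Tiny classifier and dataset, just to have something to update.
model = L.Classifier(L.Linear(3, 2))
optimizer = optimizers.SGD()
optimizer.setup(model)
data = [(np.random.rand(3).astype(np.float32), np.int32(0)) for _ in range(8)]
train_iter = iterators.SerialIterator(data, batch_size=4)

# The factor scales the loss gradient when backprop runs on the model copies;
# the scaled gradients are divided by the factor right before the update.
updater = updaters.ParallelUpdater(
    train_iter, optimizer,
    devices={'main': 0, 'second': 1},  # requires two GPUs to actually run
    loss_scale=256.0,
)
```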
chainer/training/updaters/standard_updater.py (2 changes: 1 addition & 1 deletion)
@@ -40,7 +40,7 @@ class StandardUpdater(_updater.Updater):
If you set loss scaling factor, gradients of loss values are to be
multiplied by the factor before backprop starts. The factor is
propagated to whole gradients in a computational graph along the
- backporp. The gradients of parameters are divided by the factor
+ backprop. The gradients of parameters are divided by the factor
just before the parameters are to be updated.
Attributes:
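For StandardUpdater, a single-device sketch under the same reading of the docstring; the model, dataset, and factor below are placeholders, not part of the commit:

```python
import numpy as np
from chainer import iterators, optimizers, training
from chainer.training import updaters
import chainer.links as L

model = L.Classifier(L.Linear(3, 2))
optimizer = optimizers.SGD()
optimizer.setup(model)
data = [(np.random.rand(3).astype(np.float32), np.int32(0)) for _ in range(8)]
train_iter = iterators.SerialIterator(data, batch_size=4)

# The factor scales the loss gradient when backprop starts and is divided
# out of each parameter gradient just before the optimizer updates it.
updater = updaters.StandardUpdater(train_iter, optimizer, loss_scale=512.0)
trainer = training.Trainer(updater, (2, 'epoch'), out='result')
trainer.run()
```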
chainer/variable.py (2 changes: 1 addition & 1 deletion)
@@ -892,7 +892,7 @@ def backward(self, retain_grad=False, enable_double_backprop=False,
training. If you set loss scaling factor, gradients of loss
values are to be multiplied by the factor before backprop
starts. The factor is propagated to whole gradients in a
- computational graph along the backporp. The gradients of
+ computational graph along the backprop. The gradients of
parameters are divided by the factor just before the parameters
are to be updated.
"""
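Finally, for Variable.backward, a minimal sketch of how the loss scaling keyword described by this docstring might be used; the docstring implies backward accepts a loss_scale argument, and the model, data, and factor here are illustrative only:

```python
import numpy as np
import chainer.functions as F
import chainer.links as L

model = L.Linear(3, 2)
x = np.random.rand(4, 3).astype(np.float32)
t = np.zeros(4, dtype=np.int32)

loss = F.softmax_cross_entropy(model(x), t)
model.cleargrads()
loss.backward(loss_scale=128.0)  # the initial gradient is 128 instead of 1

# model.W.grad now carries the factor; per the docstring, it is divided
# by the factor just before the parameter is actually updated.
print(model.W.grad)
```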
