Correcting some grammatical mistakes in the design docs #4378
Conversation
Thanks a lot! @abhinavarora @NorthStar
I'd say Figure 1., Figure 2. for consistency @abhinavarora otherwise all the changes are improvements.
Made changes as per feedback. @NorthStar and @dzhwinter, please review the changes again.
Some suggestions on the 1st paragraph. Pronouns were made confusing. Please make the quick fix and merge. LGTM.
paddle/framework/backward.md
Outdated

@@ -2,7 +2,7 @@
## Motivation

- In Neural Network, many model is solved by the the backpropagation algorithm(known as BP) at present. Technically it caculates the gradient of the loss function, then distributed back through the networks. Follows the chain rule, so we need a module chains the gradient operators/expressions together with to construct the backward pass. Every forward network needs a backward network to construct the full computation graph, the operator/expression's backward pass will be generated respect to forward pass.
+ In Neural Network, most models are solved by the the backpropagation algorithm(known as BP) at present. Technically, it calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. Hence we need a module that chains the gradient operators/expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator/expression's backward pass will be generated with respect to the forward pass.
At the moment, most neural network models are solved by the the backpropagation algorithm (known as BP). Technically, BP calculates the gradient of the loss function and propagates the gradient back through the network, following the chain rule. Hence we need a module that chains the gradient operators or expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator or expression's backward pass will be generated with respect to the forward pass.
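The mechanism this paragraph describes (chaining local gradients in reverse order, per the chain rule, to build a backward pass from a forward network) can be sketched in a few lines of Python. All names here are hypothetical illustrations, not Paddle's actual operator API:

```python
# Hypothetical sketch: each forward op records its computation and how
# to compute its local gradient; the backward pass chains the local
# gradients in reverse order, multiplying them per the chain rule.

class Op:
    def __init__(self, forward, local_grad):
        self.forward = forward        # y = f(x)
        self.local_grad = local_grad  # dy/dx evaluated at input x

def run_forward(ops, x):
    """Run the forward network, remembering each op's input."""
    inputs = []
    for op in ops:
        inputs.append(x)
        x = op.forward(x)
    return x, inputs

def run_backward(ops, inputs):
    """Chain local gradients in reverse order: the chain rule."""
    grad = 1.0  # d(output)/d(output)
    for op, x in zip(reversed(ops), reversed(inputs)):
        grad *= op.local_grad(x)
    return grad

# Example network computing y = (3x)^2, so dy/dx = 18x.
net = [Op(lambda x: 3 * x, lambda x: 3.0),
       Op(lambda x: x * x, lambda x: 2 * x)]
y, cache = run_forward(net, 2.0)   # y = 36.0
dydx = run_backward(net, cache)    # 18 * 2 = 36.0
```

The point of the sketch is the second loop: the backward pass is generated mechanically from the forward pass, which is exactly the module the Motivation section argues for.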
LGTM!
paddle/framework/backward.md
Outdated

@@ -2,7 +2,7 @@
## Motivation

- In Neural Network, many model is solved by the the backpropagation algorithm(known as BP) at present. Technically it caculates the gradient of the loss function, then distributed back through the networks. Follows the chain rule, so we need a module chains the gradient operators/expressions together with to construct the backward pass. Every forward network needs a backward network to construct the full computation graph, the operator/expression's backward pass will be generated respect to forward pass.
+ In Neural Network, most models are solved by the the backpropagation algorithm(known as **BP**) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. Hence we need a module that chains the gradient operators/expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator/expression's backward pass will be generated with respect to the forward pass.
by the the backpropagation => by the backpropagation
paddle/framework/backward.md
Outdated

</p>

- Because our framework finds variables accord to their names, we need to rename the output links. We add a suffix of number to represent its position in clockwise.
+ Because the framework finds variables according to their names, we need to rename the output links. We add an integer suffix to represent its position in the clockwise direction.

5. Part of Gradient is Zero.
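The renaming scheme under review (appending an integer suffix to duplicated output names so that name-based variable lookup stays unambiguous) can be sketched as follows. The `@` separator and the helper's name are assumptions made for illustration, not Paddle's real convention:

```python
# Hypothetical sketch: rename output links that share a name by
# appending an integer suffix in order of appearance, so that
# name-based lookup finds a unique variable for each link.

def rename_outputs(names):
    counts = {}
    renamed = []
    for name in names:
        if names.count(name) > 1:          # duplicated output name
            idx = counts.get(name, 0)
            counts[name] = idx + 1
            renamed.append(f"{name}@{idx}")  # '@' separator is assumed
        else:
            renamed.append(name)
    return renamed

print(rename_outputs(["w", "x", "x", "b"]))
```

Unique names pass through untouched; only the colliding links get suffixes, matching the behavior the diff describes.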
Maybe In our implement => In our implementation? I am not sure about the grammar error. :)
No description provided.