
Correcting some grammatical mistakes in the design docs #4378

Merged
abhinavarora merged 4 commits into PaddlePaddle:develop from fix_doc_typos on Sep 26, 2017

Conversation

abhinavarora
Contributor

No description provided.

dzhwinter previously approved these changes Sep 26, 2017
Contributor

@dzhwinter left a comment


Thanks a lot! @abhinavarora @NorthStar

@NorthStar
Contributor

I'd say Figure 1., Figure 2. for consistency, @abhinavarora; otherwise all the changes are improvements.

@abhinavarora
Contributor Author

abhinavarora commented Sep 26, 2017

Made changes as per feedback. @NorthStar and @dzhwinter, please review the changes again.

NorthStar previously approved these changes Sep 26, 2017
Contributor

@NorthStar left a comment


Some suggestions on the 1st paragraph: the pronouns were made confusing. Please make the quick fix and merge. LGTM.

@@ -2,7 +2,7 @@

## Motivation

-In Neural Network, many model is solved by the the backpropagation algorithm(known as BP) at present. Technically it caculates the gradient of the loss function, then distributed back through the networks. Follows the chain rule, so we need a module chains the gradient operators/expressions together with to construct the backward pass. Every forward network needs a backward network to construct the full computation graph, the operator/expression's backward pass will be generated respect to forward pass.
+In Neural Network, most models are solved by the the backpropagation algorithm(known as BP) at present. Technically, it calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. Hence we need a module that chains the gradient operators/expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator/expression's backward pass will be generated with respect to the forward pass.
Contributor

At the moment, most neural network models are solved by the the backpropagation algorithm (known as BP). Technically, BP calculates the gradient of the loss function and propagates the gradient back through the network, following the chain rule. Hence we need a module that chains the gradient operators or expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator or expression's backward pass will be generated with respect to the forward pass.
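
For readers of this thread, a minimal sketch of the chain-rule propagation that the paragraph above describes (illustrative only; the function names are hypothetical and this is not PaddlePaddle's backward module):

```python
# Hypothetical sketch: a backward pass built by chaining the gradient
# expression of each forward operator in reverse order (the chain rule).

def forward_square(x):          # forward op: y = x^2
    return x * x

def grad_square(x, dy):         # its gradient expression: dx = 2x * dy
    return 2 * x * dy

def forward_scale(y, k=3.0):    # forward op: z = k * y
    return k * y

def grad_scale(dz, k=3.0):      # its gradient expression: dy = k * dz
    return k * dz

x = 2.0
y = forward_square(x)           # forward pass: x -> y -> z
z = forward_scale(y)

dz = 1.0                        # gradient of the loss with respect to z
dy = grad_scale(dz)             # backward pass: gradient ops chained in reverse
dx = grad_square(x, dy)
print(dx)                       # 12.0, i.e. d(3*x^2)/dx evaluated at x = 2
```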

@NorthStar
Contributor

LGTM!

@@ -2,7 +2,7 @@

## Motivation

-In Neural Network, many model is solved by the the backpropagation algorithm(known as BP) at present. Technically it caculates the gradient of the loss function, then distributed back through the networks. Follows the chain rule, so we need a module chains the gradient operators/expressions together with to construct the backward pass. Every forward network needs a backward network to construct the full computation graph, the operator/expression's backward pass will be generated respect to forward pass.
+In Neural Network, most models are solved by the the backpropagation algorithm(known as **BP**) at present. Technically, BP calculates the gradient of the loss function, then propagates it back through the networks following the chain rule. Hence we need a module that chains the gradient operators/expressions together to construct the backward pass. Every forward network needs a backward network to construct the full computation graph. The operator/expression's backward pass will be generated with respect to the forward pass.
Contributor

by the the backpropagation => by the backpropagation


</p>

-Because our framework finds variables accord to their names, we need to rename the output links. We add a suffix of number to represent its position in clockwise.
+Because the framework finds variables according to their names, we need to rename the output links. We add an integer suffix to represent its position in the clockwise direction.

5. Part of Gradient is Zero.
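
As a side note for readers, a minimal sketch of the kind of integer-suffix renaming the changed sentence describes (illustrative only; the function and the suffix format are hypothetical, not the scheme used by PaddlePaddle):

```python
# Hypothetical sketch: append an integer suffix to output names that occur
# more than once, so a framework that looks variables up by name can
# distinguish the duplicated output links.
from collections import Counter, defaultdict

def rename_duplicate_outputs(output_names):
    """Suffix each repeated name with its order of appearance."""
    total = Counter(output_names)
    seen = defaultdict(int)
    renamed = []
    for name in output_names:
        if total[name] > 1:
            renamed.append(f"{name}@{seen[name]}")
            seen[name] += 1
        else:
            renamed.append(name)
    return renamed

print(rename_duplicate_outputs(["out", "out", "hidden"]))
# ['out@0', 'out@1', 'hidden']
```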

Contributor

Maybe In our implement => In our implementation? I am not sure about the grammar error. :)

abhinavarora merged commit 8635103 into PaddlePaddle:develop Sep 26, 2017
abhinavarora deleted the fix_doc_typos branch September 26, 2017 18:57