
Documentation for tf.math.add_n #27293

Merged
merged 10 commits on Apr 3, 2019

Conversation

shashvatshahi1998
Contributor

Modified `tf.math.add_n` (#25820) by adding an example and linking it to `tf.math.accumulate_n`.

@shashvatshahi1998
Contributor Author

Please review this, @dynamicwebpaige.

@mihaimaruseac mihaimaruseac changed the title from "tf.math.add_n #25820" to "Documentation for tf.math.add_n" on Mar 29, 2019
@shashvatshahi1998
Contributor Author

I have made the changes you asked for.

@mihaimaruseac
Collaborator

I still see the trailing whitespace line. Did you push the new commit?

@shashvatshahi1998
Contributor Author

https://github.com/tensorflow/tensorflow/blob/master/tensorflow/python/eager/backprop.py
You can check lines 215 and 216 in the original repo.

@rthadur rthadur self-assigned this Mar 29, 2019
@rthadur rthadur added the size:S (CL Change Size: Small) label Mar 29, 2019
@mihaimaruseac
Collaborator

mihaimaruseac commented Mar 29, 2019

The lines are the same if you ignore whitespace changes, but you do have trailing whitespace which you should remove; see below:
[screenshot "bp" highlighting the trailing whitespace]

This PR should not have tensorflow/python/eager/backprop.py as a changed file; please revert it to the version on master.
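For context, a minimal sketch (not from the thread) of how trailing whitespace like this can be located locally before pushing; it assumes the repository root as the working directory:

import re

# Flag every line in the touched file that ends in spaces or tabs.
path = "tensorflow/python/eager/backprop.py"
with open(path) as f:
    for lineno, line in enumerate(f, start=1):
        if re.search(r"[ \t]+$", line.rstrip("\n")):
            print(f"{path}:{lineno}: trailing whitespace")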

@mihaimaruseac
Collaborator

You can try running the following commands (make sure to copy them exactly, one by one):

git stash                        # set aside uncommitted local changes
git checkout master
git pull                         # bring master up to date
git checkout -                   # return to the PR branch
git checkout master -- tensorflow/python/eager/backprop.py   # restore this file from master
git commit -m "Fixed whitespace"
git stash pop                    # restore the stashed local changes
git push

@shashvatshahi1998
Contributor Author

@mihaimaruseac I have fixed that trailing whitespace issue in this PR as well.

@shashvatshahi1998
Contributor Author

Could somebody please review this now?

@shashvatshahi1998
Contributor Author

Please somebody review this PR

@shashvatshahi1998
Contributor Author

@mihaimaruseac please review

@shashvatshahi1998
Contributor Author

Please review this PR

@shashvatshahi1998
Contributor Author

@rthadur please assign a reviewer to this PR.

a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])
tf.math.add_n([a, b, a]) # [[7, 16], [10, 25]]

Member

Please remove this blank line and add one before Args:.
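For reference, a minimal sketch (not the PR's actual text) of a docstring layout consistent with this comment: no stray blank line inside the example block, and one blank line before Args::

def add_n(inputs, name=None):
  """Adds all input tensors element-wise.

  For example:

    a = tf.constant([[3, 5], [4, 8]])
    b = tf.constant([[1, 6], [2, 9]])
    tf.math.add_n([a, b, a])  # [[7, 16], [10, 25]]

  Args:
    inputs: A list of `Tensor` or `IndexedSlices` objects, each with the
      same shape and type.
    name: A name for the operation (optional).

  Returns:
    A `Tensor` of the same shape and type as the elements of `inputs`.
  """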

@@ -2786,7 +2786,17 @@ def add_n(inputs, name=None):
"""Adds all input tensors element-wise.

Converts `IndexedSlices` objects into dense tensors prior to adding.

tf.math.add_n performs same operation as'tf.math.accumulate_n' but it waits
Member

tf.math.add_n should have ` around it (e.g. `tf.math.add_n`). If you do that our doc generator will correctly add links between the pages.

Please also replace the single quotes around 'tf.math.accumulate_n' with ` too.

Finally, grammar: "performs the same" (add "the").

@@ -2786,7 +2786,17 @@ def add_n(inputs, name=None):
"""Adds all input tensors element-wise.

Converts `IndexedSlices` objects into dense tensors prior to adding.
Member

Please add a newline after this sentence.


tf.math.add_n performs same operation as'tf.math.accumulate_n' but it waits
for all of its inputs to be ready before beginning to sum, that consumes more
memory.
Member

I'd suggest adding more detail on why the mem usage is higher. For example:

This can increase memory if inputs are ready at different times, since minimum temporary storage is proportional to inputs size rather than the output size.
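To make the comparison concrete, a minimal sketch (not part of the diff; it assumes TensorFlow 2.x eager execution) showing that both APIs produce the same element-wise sum, so the difference discussed here is purely about when intermediate inputs must be held in memory:

import tensorflow as tf

a = tf.constant([[3, 5], [4, 8]])
b = tf.constant([[1, 6], [2, 9]])

# Both print [[7, 16], [10, 25]]. add_n waits for every input before it
# starts summing; accumulate_n was designed to sum inputs as they become ready.
print(tf.math.add_n([a, b, a]))
print(tf.math.accumulate_n([a, b, a]))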

@shashvatshahi1998
Contributor Author

@tomhennigan I am done with the changes; please review it now.

@shashvatshahi1998
Contributor Author

please review @tomhennigan

[[ 0 0 0 0]
[ 0 0 128 63]
[ 0 0 128 63]], shape=(3, 4), dtype=uint8)
In case ofconverting from a real data type to complex data type(e.g. tf.complex64,
Member

Please revert the changes in this file; they are not related.

for all of its inputs to be ready before beginning to sum, that consumes more
memory as inputs are not ready at different times, since minimum temporary storage
is proportional to the output size rather than the inputs size.
For example:
Member

I'd suggest putting a newline before "For example:", otherwise the documentation generator will put the "For example:" at the end of the previous sentence.

For reference, you can use gist.github.com to see how your markdown will end up being rendered. Here is the current version: https://gist.github.com/tomhennigan/d2789426d38bdbd9471e7e00555eced0


`tf.math.add_n` performs the same operation as `tf.math.accumulate_n` but it waits
for all of its inputs to be ready before beginning to sum, that consumes more
memory as inputs are not ready at different times, since minimum temporary storage
Member

that consumes more memory as inputs are not ready at different times

I'm not sure this makes sense. How about "This can consume more memory when inputs are ready at different times, since the minimum temporary storage required is proportional to the input size rather than the output size."

Contributor Author

Basically, the tf.math.accumulate_n documentation gives the opposite of this as the reason it is better than tf.math.add_n; you can check there. So I just adapted that statement to explain why tf.math.add_n is less preferable than accumulate_n.

Contributor Author

OK, I am adding what you just said anyway.

@shashvatshahi1998
Contributor Author

@tomhennigan please review it now

@shashvatshahi1998
Contributor Author

Please review it @tomhennigan

tomhennigan previously approved these changes Apr 3, 2019
@shashvatshahi1998
Contributor Author

@rthadur please add kokoro force run

@shashvatshahi1998
Contributor Author

@rthadur tests are still pending

@shashvatshahi1998
Contributor Author

Please, can somebody initialize the checks?

@rthadur rthadur added kokoro:force-run Tests on submitted change ready to pull PR ready for merge process labels Apr 3, 2019
@kokoro-team kokoro-team removed the kokoro:force-run Tests on submitted change label Apr 3, 2019
@shashvatshahi1998
Contributor Author

@tomhennigan can you please tell me why it is failing the Ubuntu sanity check?

@shashvatshahi1998
Contributor Author

@tomhennigan I think that error was because of overly long lines, so I reduced the line lengths.
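As context, a minimal sketch (not from the thread) of checking for lines over the 80-character limit that TensorFlow's Python style checks enforce; the file path is an assumption, since the failing file is not named here:

path = "tensorflow/python/ops/math_ops.py"  # assumed location of the add_n docstring
with open(path) as f:
    for lineno, line in enumerate(f, start=1):
        length = len(line.rstrip("\n"))
        if length > 80:
            print(f"{path}:{lineno}: {length} characters")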

@mihaimaruseac
Collaborator

Please keep tf.math. prefix

@rthadur
Contributor

rthadur commented Apr 3, 2019

Please keep tf.math. prefix

@mihaimaruseac I made the changes internally; please approve.

@tensorflow-copybara tensorflow-copybara merged commit bd5a966 into tensorflow:master Apr 3, 2019
tensorflow-copybara pushed a commit that referenced this pull request Apr 3, 2019
Labels
cla: yes, ready to pull (PR ready for merge process), size:S (CL Change Size: Small)

8 participants