[TF 2.0 API Docs] Docstring for tf.train.experimental.enable_mixed_precision_graph_rewrite #29249
Conversation
Great work! Just a few small things to tweak :)
Can you change the colours in the image as well? On my monitor the green and blue look fairly similar, so it's hard to distinguish them.
Force-pushed from 1129f0a to 85b7da7.
I have made all the requested changes!
@rthadur Can you make sure the links are updated when this is merged?
Force-pushed from 1d0c165 to 025c463.
Fixed the pylint 80-character line-limit errors.
Pushed the amended commit with the requested changes.
@tlkh, when enabling `enable_mixed_precision_graph_rewrite`, what data type should the model network use when defining the model?
If you wish to use the network outside of TensorFlow for FP16 inference, typically just converting all the data types to FP16 will work; special considerations are only needed for training. There are also other toolkits to help with inference optimization outside of TensorFlow, such as TensorRT. Note that only Volta and Turing GPUs have the FP16 units and Tensor Cores that benefit from mixed precision; older GPUs will not see a speedup, hence the feature is not enabled for them.

@recrusader, let's move this discussion to a new GitHub issue if you wish to raise any concerns or suggestions. Keep this thread for discussion specifically about this PR.
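As a minimal illustration of the inference point above (using NumPy rather than TensorFlow, with hypothetical weight shapes): converting trained FP32 weights to FP16 is a plain dtype cast and halves the memory footprint, with no loss scaling needed since no gradients are computed at inference time.

```python
import numpy as np

# Hypothetical FP32 weight matrix from a trained model.
weights_fp32 = np.random.randn(4, 4).astype(np.float32)

# For inference outside TensorFlow, a plain cast to FP16 is
# typically all that is needed.
weights_fp16 = weights_fp32.astype(np.float16)

print(weights_fp16.dtype)                          # float16
print(weights_fp16.nbytes == weights_fp32.nbytes // 2)  # True: half the memory
```

FP16 has a much narrower representable range than FP32, which is why training (where small gradient values can underflow) needs loss scaling while a forward pass usually does not.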
I asked this question in the official models repo, but no one could give me an answer. Thank you very much! I think I have the answer now.
@perfinion @martinwicke requesting review once again. Changes made:
Thank you!
Force-pushed from 02d2eea to 75f7b2b.
@MarkDaoust will this render correctly? I see there's no indent on the args and returns sections, I'm wondering whether the indent is required.
@martinwicke thanks for pointing that out. Let me fix that anyway.
Hi @tlkh,
It looks like you misunderstood @martinwicke's comment.
All the Args/Returns/Raises blocks need to be indented, or they will not be recognized by our linters and will render incorrectly on tensorflow.org (they will be interpreted as markdown, which will flatten them into a single paragraph).
Thanks.
Sorry about that! I fixed it.
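For illustration, a minimal sketch of the indentation MarkDaoust describes (the function and argument names here are placeholders, not the actual TF docstring): the Args/Returns headers sit at the docstring's base indent, and each field is indented beneath them so the doc generator picks them up as sections.

```python
def enable_rewrite_example(opt, loss_scale='dynamic'):
    """Wraps an optimizer for mixed precision (illustrative only).

    Args:
      opt: An optimizer instance to wrap.
      loss_scale: Either 'dynamic' or a fixed float scale factor.

    Returns:
      The (hypothetically) wrapped optimizer.
    """
    return opt

# The indented section headers are present in the docstring,
# so a linter or doc generator can recognize them.
print('Args:' in enable_rewrite_example.__doc__)     # True
print('Returns:' in enable_rewrite_example.__doc__)  # True
```

Without that indentation, markdown rendering would merge the field lines into a single flat paragraph, which is exactly the failure mode described above.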
@MarkDaoust Hmm, I've addressed the changes, but I'm not sure why GitHub is still complaining about "1 change requested". Do you need to approve it as well? Edit: never mind.
In response to #29241.
Improved the docstring for `tf.train.experimental.enable_mixed_precision_graph_rewrite`:
- `loss_scale` argument

A gist with the rendered docstring is here for ease of review.
Thank you, any feedback or criticism is welcome.
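Since the `loss_scale` argument is the focus of this docstring, here is a toy sketch of the "dynamic" loss-scaling policy that argument can select. All constants and the class itself are illustrative stand-ins, not TensorFlow's implementation: the scale grows after a run of overflow-free steps and is halved whenever gradients overflow.

```python
class ToyDynamicLossScale:
    """Illustrative model of a dynamic loss-scale policy (not TF's)."""

    def __init__(self, initial=2.0 ** 15, growth_interval=2000):
        self.scale = initial
        self.growth_interval = growth_interval
        self.good_steps = 0

    def update(self, grads_finite):
        if grads_finite:
            self.good_steps += 1
            if self.good_steps >= self.growth_interval:
                self.scale *= 2.0   # grow after a run of stable steps
                self.good_steps = 0
        else:
            self.scale /= 2.0       # overflow: halve and skip the step
            self.good_steps = 0

# Small parameters so the behaviour is visible in a few steps.
ls = ToyDynamicLossScale(initial=8.0, growth_interval=2)
ls.update(True)
ls.update(True)       # two finite steps in a row -> scale doubles
print(ls.scale)       # 16.0
ls.update(False)      # overflow -> scale halves
print(ls.scale)       # 8.0
```

The idea is that multiplying the loss by a large factor keeps small FP16 gradients from underflowing to zero, while the dynamic adjustment backs off whenever the scale is so large that gradients overflow.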