Bug Fixed in tf.custom_gradient API Document #58486
Merged: copybara-service merged 1 commit into tensorflow:master from SuryanarayanaY:tf_gradient_docstring_fix on Dec 28, 2022
Conversation
The tf.custom_gradient documentation contains the code snippet below (docstring lines 63 to 65), which throws `RuntimeError: tf.gradients is not supported when eager execution is enabled. Use tf.GradientTape instead.`:

```python
x = tf.constant(100.)
y = log1pexp(x)
dy_dx = tf.gradients(y, x)  # Will be NaN when evaluated.
```

To avoid the runtime error on 2.x versions, the snippet above needs to be replaced with the one below:

```python
with tf.GradientTape() as tape:
    tape.watch(x)
    y = log1pexp(x)
dy_dx = tape.gradient(y, x)  # Will be NaN when evaluated.
```

Please refer to the attached gist for the same: https://colab.research.google.com/gist/SuryanarayanaY/e692b30c318c42e86819bc153ab2ee77/tf_gradient.ipynb#scrollTo=YQoLZkmVgUok
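For context, here is a self-contained sketch of the failing call and the TF2 fix, using the log1pexp definitions that appear around the quoted lines in the tf.custom_gradient docstring (the helper name `log1pexp_naive` is mine, added to keep the two versions apart):

```python
import tensorflow as tf

# Naive definition from the docstring context: numerically unstable at
# large x, because tf.exp(100.) overflows float32 to inf.
def log1pexp_naive(x):
    return tf.math.log(1 + tf.exp(x))

x = tf.constant(100.)

# TF2/eager replacement for the tf.gradients call in the docstring.
with tf.GradientTape() as tape:
    tape.watch(x)              # x is a constant tensor, so watch it explicitly
    y = log1pexp_naive(x)
dy_dx = tape.gradient(y, x)    # NaN: the chain rule produces inf/inf

# Stable version via tf.custom_gradient, as shown later in the same docstring.
@tf.custom_gradient
def log1pexp(x):
    e = tf.exp(x)
    def grad(upstream):
        return upstream * (1 - 1 / (1 + e))
    return tf.math.log(1 + e), grad

with tf.GradientTape() as tape:
    tape.watch(x)
    y = log1pexp(x)
dy_dx_stable = tape.gradient(y, x)  # 1.0 thanks to the custom gradient
```

This shows why the docstring's replacement snippet matters: `tf.GradientTape` works under eager execution, and the comparison makes the NaN the comment refers to reproducible.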
Thanks for your pull request! It looks like this may be your first contribution to a Google open source project. Before we can look at your pull request, you'll need to sign a Contributor License Agreement (CLA). View the failed invocation of the CLA check for more information. For the most up-to-date status, view the checks section at the bottom of the pull request.
rohan100jain approved these changes on Nov 11, 2022
google-ml-butler (bot) added the kokoro:force-run (Tests on submitted change) and ready to pull (PR ready for merge process) labels on Nov 11, 2022
gbaned added the kokoro:force-run (Tests on submitted change) and ready to pull (PR ready for merge process) labels, and removed the ready to pull (PR ready for merge process) label, on Dec 2, 2022
gbaned added the kokoro:force-run (Tests on submitted change) and ready to pull (PR ready for merge process) labels, and removed the ready to pull (PR ready for merge process) label, on Dec 15, 2022
gbaned approved these changes on Dec 28, 2022
mihaimaruseac approved these changes on Dec 28, 2022
Labels
- cla: yes
- comp:ops (OPs related issues)
- ready to pull (PR ready for merge process)
- size:XS (CL Change Size: Extra Small)