
Attempt to make sense of reasoning for loss #20220

Merged
3 commits merged into tensorflow:master on Jun 22, 2018

Conversation

Naman-Bhalla
Contributor

I feel the document seems a bit weird. Adding "when" seems to make better sense of it. Open to suggestions on alternatives.
Thanks

@@ -362,7 +362,7 @@ model's loss. This is the
 that will be optimized.

 We can calculate the loss by calling @{tf.losses.sparse_softmax_cross_entropy}.
-The value returned by this function will be lowest, approximately 0,
+When the value returned by this function will be lowest, approximately 0,
Member
How about this:

The value returned by this function will be approximately 0 at lowest, when the probability of the correct class (at index `label`) is near 1.0.
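As an aside for readers following the discussion: the behavior being described can be sketched in plain Python. This is an illustrative re-implementation of the cross-entropy math (loss = -log of the softmax probability of the correct class), not TensorFlow's actual `tf.losses.sparse_softmax_cross_entropy` op; the function name and inputs below are made up for the example.

```python
import math

def sparse_softmax_cross_entropy(logits, label):
    # Numerically stable log-softmax: log p_i = logit_i - logsumexp(logits).
    m = max(logits)
    log_sum_exp = m + math.log(sum(math.exp(x - m) for x in logits))
    # Cross-entropy with a one-hot target at `label` is just -log p_label.
    return log_sum_exp - logits[label]

# When the logit for the correct class dominates, p_label is near 1.0
# and the loss is near its lowest value, approximately 0.
confident = sparse_softmax_cross_entropy([10.0, 0.0, 0.0], label=0)

# With uniform logits over 3 classes, p_label = 1/3 and the loss is log(3).
uncertain = sparse_softmax_cross_entropy([0.0, 0.0, 0.0], label=0)
```

This matches the sentence above: the loss bottoms out near 0 exactly when the predicted probability of the correct class approaches 1.0.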

Contributor Author

That sounds better. Thanks.
Should I update the PR with it?

Member

I just patched it.

@qlzh727 qlzh727 requested a review from MarkDaoust June 22, 2018 14:42
@qlzh727 qlzh727 self-assigned this Jun 22, 2018
@qlzh727 qlzh727 added the awaiting review Pull request awaiting review label Jun 22, 2018
Member

@MarkDaoust MarkDaoust left a comment


Thanks for helping us get this fixed.


@qlzh727 qlzh727 added awaiting testing (then merge) and removed awaiting review Pull request awaiting review labels Jun 22, 2018
@qlzh727 qlzh727 added the kokoro:force-run Tests on submitted change label Jun 22, 2018
@kokoro-team kokoro-team removed the kokoro:force-run Tests on submitted change label Jun 22, 2018
@qlzh727 qlzh727 merged commit 56357d0 into tensorflow:master Jun 22, 2018