Cross Entropy and Accuracy are not matched #37

Closed
jmren168 opened this issue Dec 12, 2017 · 2 comments

@jmren168

Hi,

For a two-class problem, I got the following result.
[screenshot: cross-entropy loss and accuracy during training]

The cross-entropy loss is very small, but I cannot get high accuracy.

I traced the code that computes the loss value:
loss = tf.reduce_mean(tf.nn.softmax_cross_entropy_with_logits(logits=score, labels=y))
and found that the score needs to be "unscaled log probabilities" (https://www.tensorflow.org/api_docs/python/tf/nn/softmax_cross_entropy_with_logits).

So it seems that you need to apply a log function to "score" before feeding it to the softmax cross-entropy function?

Please correct me if I'm wrong, and many thanks.

@kratzert
Owner

I'm pretty sure it is used correctly in my code; the documentation is just unclear here. See e.g. this StackOverflow topic, where you can find a more detailed explanation.
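For what it's worth, here is a minimal sketch of that usage (not the repository's actual code; the placeholder shapes and the two-class setup are assumptions). `score` stands for the raw output of the last fully connected layer, with no softmax or log applied beforehand, because the op applies the softmax internally:

```python
import tensorflow as tf

# Hypothetical placeholders for a two-class problem; the names `score` and `y`
# follow the snippet quoted above.
score = tf.placeholder(tf.float32, [None, 2])  # raw logits from the last FC layer
y = tf.placeholder(tf.float32, [None, 2])      # one-hot ground-truth labels

# The op applies the softmax internally, so `score` must stay unscaled:
# no softmax and no log before this call.
loss = tf.reduce_mean(
    tf.nn.softmax_cross_entropy_with_logits(logits=score, labels=y))

# Accuracy can be computed from the very same logits, since argmax is
# unchanged by the (monotonic) softmax.
correct = tf.equal(tf.argmax(score, 1), tf.argmax(y, 1))
accuracy = tf.reduce_mean(tf.cast(correct, tf.float32))
```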

@jmren168
Author

Hi there,

I solved the problem. It was caused by incorrect one-hot encoding. For a two-class problem, the label of the first class needs to be "0" and the label of the other class should be "1". See the correct results below, and many thanks for the explanation.

[screenshot: corrected loss and accuracy values]
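For reference, a minimal sketch of the one-hot encoding described above (the label array is illustrative data only): the integer class labels must be 0 and 1 so the one-hot columns line up with the two logits.

```python
import numpy as np

# Integer class labels for a two-class problem must be 0 and 1
# (not, say, 1 and 2), so the one-hot columns match the two output units.
labels = np.array([0, 1, 1, 0])  # illustrative data only

num_classes = 2
one_hot = np.eye(num_classes)[labels]
# one_hot -> [[1., 0.],
#             [0., 1.],
#             [0., 1.],
#             [1., 0.]]
```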
