difference between evaluate_preds and model.evaluate #16
Comments
Thanks for your question. Note that the …
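This reply is truncated above. One plausible reading, offered as an assumption rather than the author's confirmed answer: the loss reported by model.evaluate includes the L2 weight-regularization penalty (Keras adds all regularization losses to the objective in both training and test mode), whereas utils.evaluate_preds computes only the raw cross-entropy from the predictions. A minimal self-contained sketch of that effect, using a toy dense model rather than the actual GCN:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense
from keras.regularizers import l2

# Toy stand-in data and model (hypothetical; not the repo's GCN).
X = np.random.rand(100, 16).astype('float32')
y = np.eye(4)[np.random.randint(0, 4, 100)].astype('float32')

model = Sequential([Dense(4, activation='softmax', input_shape=(16,),
                          kernel_regularizer=l2(5e-4))])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Loss reported by evaluate(): cross-entropy PLUS the L2 penalty term.
eval_loss = model.evaluate(X, y, batch_size=len(X), verbose=0)

# Raw cross-entropy computed from the predictions alone, the way a
# helper like evaluate_preds would: no regularization term included.
preds = model.predict(X, batch_size=len(X))
raw_ce = np.mean(-np.log(np.clip(preds[y.astype(bool)], 1e-7, 1.0)))

print(eval_loss, raw_ce)  # eval_loss exceeds raw_ce by ~the L2 penalty
```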
Sorry for the late response. I think I solved this issue. I mistakenly thought that the argument …
To simplify further, we can also get rid of the loops over epochs and use …
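The dropped reference here is presumably to model.fit's epochs argument; that is my assumption, not confirmed by the thread. A toy sketch of the simplification:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy data standing in for the graph inputs (hypothetical shapes).
X = np.random.rand(100, 16).astype('float32')
y = np.eye(4)[np.random.randint(0, 4, 100)].astype('float32')

model = Sequential([Dense(4, activation='softmax', input_shape=(16,))])
model.compile(optimizer='adam', loss='categorical_crossentropy')

# Instead of a manual loop that calls fit() once per epoch...
# for epoch in range(10):
#     model.fit(X, y, epochs=1, shuffle=False, verbose=0)

# ...let fit() iterate internally via its epochs argument:
model.fit(X, y, epochs=10, shuffle=False, verbose=0)
```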
This sounds good, thanks for looking into this! Feel free to make a pull request if you think this might be helpful for other users as well.
Thanks for your excellent work. Your code is really helpful.
In your code for evaluating the GCN model, what confused me is the difference between utils.evaluate_preds (your implementation) and model.evaluate (the Keras API). Here are my changes to evaluate the GCN using the model.evaluate function: I added `accuracy` to model.compile for accuracy logging; the rest of the code stays the same:
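The code blocks themselves did not survive this capture, so here is a hedged reconstruction of the change being described, with toy placeholder data (X, y, val_mask) standing in for the actual variables from train.py; the sample_weight mask plays the role of the node mask:

```python
import numpy as np
from keras.models import Sequential
from keras.layers import Dense

# Toy placeholders (hypothetical): random features, one-hot labels, and
# a 0/1 mask selecting the validation nodes.
X = np.random.rand(100, 16).astype('float32')
y = np.eye(4)[np.random.randint(0, 4, 100)].astype('float32')
val_mask = np.zeros(100, dtype='float32')
val_mask[:30] = 1.0

model = Sequential([Dense(4, activation='softmax', input_shape=(16,))])

# The described change: add 'accuracy' to compile() so that evaluate()
# reports it alongside the loss.
model.compile(loss='categorical_crossentropy', optimizer='adam',
              metrics=['accuracy'])

# Full-batch evaluation; the per-sample weights restrict the loss (and,
# in Keras 2, the accuracy metric) to the masked-in samples.
val_loss, val_acc = model.evaluate(X, y, sample_weight=val_mask,
                                   batch_size=len(X), verbose=0)
print(val_loss, val_acc)
```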
And here are the outputs I got after 10 loops:
According to the Keras docs, regularization mechanisms such as Dropout and L1/L2 weight regularization are turned off at testing time.
So why is the loss returned by model.evaluate not exactly the same as the one from utils.evaluate_preds?
What I have tried:
I tried to implement the `categorical_crossentropy` loss function according to the Keras TensorFlow backend. Here is my code (see the sketch below), but the results of this function are exactly the same as those of utils.categorical_crossentropy.
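The code block is likewise missing, so here is a hedged reconstruction of such a function, following the Keras TensorFlow backend semantics (clip probabilities, then average -sum(t * log(p)) over samples). For one-hot labels this reduces to -log(p_true) per sample, which matches what a helper like utils.categorical_crossentropy measures, so identical results are expected:

```python
import numpy as np

def categorical_crossentropy_np(preds, labels, epsilon=1e-7):
    """NumPy re-implementation mirroring the Keras TF backend:
    clip probabilities away from 0 and 1, take -sum(t * log(p)) per
    sample, then average over samples. (The name and the exact clipping
    constant are my assumptions for this sketch.)"""
    preds = np.clip(preds, epsilon, 1.0 - epsilon)
    per_sample = -np.sum(labels * np.log(preds), axis=-1)
    return np.mean(per_sample)
```

Since neither this function nor utils.evaluate_preds adds the weight-regularization penalty, matching results between the two are consistent with the regularization term being the source of the gap.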