How does Keras calculate the metric? #11705
Comments
Hi, here is the quote from #5794:
Now people tend to do these callback implementations for metrics, but I think they are ugly and feel like hacks. In your example the validation data is processed twice: once by Keras itself and once by your callback. That is why I wrote a callback that does not pick the validation data from the model (
@Suncicie Did this help you?
@Suncicie, can you study @PhilipMay's suggestion and let us know. Thanks.
Thanks, I appreciate it. Now I get the idea: the f1_macro metric is always smaller than the Callback result, and the gap grows over the epochs. Indeed, the per-epoch average from model.compile(..., metrics=[f1_macro]) is misleading: in the first epochs the model has not yet fit the data carefully, so the f1-macro of the early batches is of course lower than after those batches. So f1_macro is misleading, and the Callback is correct.
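The gap described above can be reproduced directly: macro-F1 is not additive over batches, so the mean of per-batch scores (a stand-in for a compiled metric's running average) generally differs from one score over the whole set (what an epoch-end Callback reports). A small sketch with made-up labels, using sklearn per batch only for illustration:

```python
import numpy as np
from sklearn.metrics import f1_score

# Made-up labels for one "epoch" of 8 validation samples.
y_true = np.array([0, 0, 0, 1, 0, 1, 1, 1])
y_pred = np.array([0, 0, 1, 1, 0, 1, 1, 1])

# Mean of per-batch scores, roughly what a compiled metric's
# running average over the epoch reports.
batch_f1s = [f1_score(y_true[i:i + 4], y_pred[i:i + 4], average="macro")
             for i in (0, 4)]
mean_of_batches = float(np.mean(batch_f1s))

# One score over the whole set, as an epoch-end Callback computes it.
epoch_f1 = f1_score(y_true, y_pred, average="macro")

print(mean_of_batches, epoch_f1)  # the two disagree
```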
Closing this issue since it has been addressed. Feel free to reopen if you have any further questions. Thanks!
I'm trying to write a new f1-macro metric for the softmax output. I have tried two methods, and I can't understand why their results are different.
The first method: I defined an f1_macro function, then used it like this:
model.compile(..., metrics=[f1_macro])
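The metric's code is truncated above; a batch-wise f1-macro of the kind discussed in #5794 computes, per batch, something equivalent to this NumPy sketch. The function name, the 0.5 threshold, and the epsilon are my assumptions, not the poster's code; a real Keras metric would use keras.backend ops on tensors instead of NumPy:

```python
import numpy as np

def f1_macro(y_true, y_pred, eps=1e-7):
    """Batch-wise f1-macro: y_true is one-hot, y_pred softmax scores.
    eps mirrors K.epsilon() and guards against division by zero."""
    y_pred = (y_pred >= 0.5).astype(float)  # harden to 0/1, as K.round would
    tp = (y_true * y_pred).sum(axis=0)
    fp = ((1 - y_true) * y_pred).sum(axis=0)
    fn = (y_true * (1 - y_pred)).sum(axis=0)
    precision = tp / (tp + fp + eps)
    recall = tp / (tp + fn + eps)
    f1 = 2 * precision * recall / (precision + recall + eps)
    return f1.mean()  # macro average over classes
```

Keras evaluates a compiled metric on every batch and reports the running mean over the epoch, which is where the averaging behaviour discussed in this thread comes from.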
The second method: I used a Callback to compute f1-macro with sklearn's f1_score. The f1 computed by the f1_macro function is always lower than the Callback's result. The Callback is defined below:
and I use it with (X_valid, y_valid)
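The Callback's code is also truncated above; it typically looks like the following sketch. The class name F1Callback and the way the validation data is passed into the constructor are my assumptions, not the original code:

```python
import numpy as np
from keras.callbacks import Callback
from sklearn.metrics import f1_score

class F1Callback(Callback):
    """Epoch-level f1-macro over the whole validation set (illustrative)."""

    def __init__(self, X_valid, y_valid):
        super().__init__()
        self.X_valid = X_valid
        self.y_valid = y_valid  # integer class labels

    def on_epoch_end(self, epoch, logs=None):
        # Score the entire validation set at once, unlike the
        # per-batch running average that compiled metrics report.
        probs = self.model.predict(self.X_valid)
        preds = np.argmax(probs, axis=1)
        macro_f1 = f1_score(self.y_valid, preds, average="macro")
        print(" - val_f1_macro: %.4f" % macro_f1)
```

It would then be passed to fit, e.g. callbacks=[F1Callback(X_valid, y_valid)].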