This repository has been archived by the owner on Nov 3, 2022. It is now read-only.
I wonder if there are some problems with that definition:

- Does it work with one-hot-encoded data? In particular, I would have expected `axis` to be everything but the channel dimension, but it is just the channel dimension.
- Shouldn't it skip the background label (at least optionally)?
- I also wonder about the multiplication by `smooth` at the end.
- Furthermore, `smooth = 100` looks like a really arbitrary default.
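For reference, here is a minimal NumPy sketch of the variant I had in mind: reduce over the spatial axes (everything but batch and channel) instead of only the channel axis, and optionally drop the background channel. The function name, the `skip_background` flag, and the assumption that channel 0 is background are all my own illustration, not part of the keras-contrib code:

```python
import numpy as np

def soft_dice_loss(y_true, y_pred, skip_background=True, eps=1e-6):
    # Hypothetical sketch: y_true is one-hot, y_pred is a softmax output,
    # both with shape (batch, H, W, channels).
    # Reduce over the spatial axes, keeping batch and channel separate.
    axes = tuple(range(1, y_true.ndim - 1))  # e.g. (1, 2) for 2-D images
    intersection = np.sum(y_true * y_pred, axis=axes)
    denom = np.sum(y_true, axis=axes) + np.sum(y_pred, axis=axes)
    dice = (2.0 * intersection + eps) / (denom + eps)  # shape (batch, channels)
    if skip_background:
        # Assumes channel 0 is the background label.
        dice = dice[:, 1:]
    return 1.0 - dice.mean()
```

With this reduction a perfect prediction gives a per-class Dice of 1 for every class, so the loss goes to 0, and the per-class scores are averaged only over foreground classes when the background is skipped.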
Here is the jaccard loss implementation:
https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py

Why is jaccard loss preferred to dice loss?

Why do we apply `K.abs`? As I understand it, `y_true` and `y_pred` should already be in the [0, 1] range. https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py#L31

Why is the default `smooth=100`? How does the `smooth` parameter affect training? As I understand it, adding `smooth` to the denominator prevents division by zero (but why not `smooth = eps = 1e-6`?), but why is it also added to the numerator? And why do we multiply the loss by `smooth` at the end? https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py#L34