This repository has been archived by the owner on Nov 3, 2022. It is now read-only.

Question about jaccard loss #329

Open
mrgloom opened this issue Dec 13, 2018 · 1 comment

Comments


mrgloom commented Dec 13, 2018

Here is jaccard loss implementation:
https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py

Why is Jaccard loss preferred to Dice loss?

Why do we apply K.abs? As I understand it, y_true and y_pred should already be in the [0, 1] range.
https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py#L31

Why is the default smooth=100? How does the smooth parameter affect training?

As I understand it, adding smooth to the denominator prevents division by zero (why not smooth=eps=1e-6?), but why is it also added to the numerator?

Why do we multiply the loss by smooth at the end?
https://github.com/keras-team/keras-contrib/blob/master/keras_contrib/losses/jaccard.py#L34
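For concreteness, here is a small NumPy sketch, not the library code itself, just a mirror of the formula in the linked file, with made-up sample tensors. It shows that smooth shifts the intersection/union ratio toward 1 and that the trailing multiplication rescales the result:

```python
import numpy as np

def jaccard_distance(y_true, y_pred, smooth=100):
    # Mirror of the keras-contrib formula: (1 - (I + s) / (U + s)) * s,
    # reduced over the last (channel) axis, as in the linked code.
    intersection = np.sum(np.abs(y_true * y_pred), axis=-1)
    union = np.sum(np.abs(y_true) + np.abs(y_pred), axis=-1) - intersection
    jac = (intersection + smooth) / (union + smooth)
    return (1 - jac) * smooth

# Made-up tensors: one sample, three channels.
y = np.array([[1.0, 0.0, 1.0]])
p = np.array([[0.9, 0.1, 0.8]])

# A perfect prediction gives exactly zero loss for any smooth:
print(jaccard_distance(y, y))               # [0.]

# For this imperfect prediction the plain Jaccard distance is
# 1 - 1.7/2.1 ≈ 0.190; smooth changes both the ratio and the scale:
print(jaccard_distance(y, p, smooth=100))   # ≈ [0.392]
print(jaccard_distance(y, p, smooth=1e-6))  # ≈ [1.9e-07]
```

One plausible reading of the final `* smooth` is that it is a rescaling: a large smooth pushes the ratio close to 1 for every input, so without the multiplication the loss would be tiny everywhere; with a tiny smooth the same rescaling would instead make the loss all but vanish, as above.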


hmeine commented Feb 16, 2019

I wonder if there are some problems with that definition:

  1. Does it work with one-hot-encoded data? In particular, I would've expected axis to be everything but the channel dimension, but it is just the channel dimension.
  2. Shouldn't it skip the background label? (At least optionally)
  3. I also wondered about the multiplication at the end.
  4. Furthermore, smooth = 100 looks like a really arbitrary default.
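To illustrate point 1 with made-up one-hot data: reducing over only the channel axis, as the linked code does, yields a per-pixel value, whereas a per-class Jaccard index would reduce over every axis except the channel axis:

```python
import numpy as np

# Toy one-hot segmentation: 1 sample, 4 pixels, 2 classes.
y_true = np.array([[[1, 0], [1, 0], [0, 1], [0, 1]]], dtype=float)
y_pred = np.array([[[0.8, 0.2], [0.6, 0.4], [0.3, 0.7], [0.1, 0.9]]])

# Reducing over only the channel axis gives one value per pixel,
# not an overlap measure for the image:
inter_channel = np.sum(y_true * y_pred, axis=-1)
print(inter_channel.shape)  # (1, 4)

# Reducing over everything but the channel axis gives the expected
# per-class intersection (here 0.8+0.6=1.4 and 0.7+0.9=1.6):
inter_spatial = np.sum(y_true * y_pred, axis=(0, 1))
print(inter_spatial.shape)  # (2,)
```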
