
Jaccard Index & Dice coefficient remains zero throughout #23

Closed
BashCache opened this issue Jan 7, 2022 · 5 comments

Hello,
I have tried using the MultiRes U-Net architecture on a different dataset. My accuracy and loss behave well; however, the Jaccard index and Dice coefficient always remain zero. Could you please let me know why this might happen, and do you have any suggestions?


saskra commented Jan 7, 2022

One of several possible explanations for why accuracy differs so strongly from Jaccard/IoU or Dice/F1 could be that accuracy is calculated over all classes (e.g. foreground and background), whereas the latter two, for binary decisions, are always calculated only on the first of the two classes, by default the smaller one. If you have an unbalanced class ratio and accidentally use the wrong class as the first class, this can lead to misleading values. For this reason I prefer the Matthews correlation coefficient (MCC) or the mean Intersection over Union (mIoU), which corresponds to the Jaccard index averaged over all classes.
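As a quick illustration of that first point (made-up numbers and scikit-learn, not code from this repository), the binary Jaccard score can look very different depending on which of two unbalanced classes is treated as the positive one, while the macro average and the MCC take both classes into account:

	import numpy as np
	from sklearn.metrics import jaccard_score, matthews_corrcoef

	# Made-up, heavily unbalanced example: 90% background (0), 10% foreground (1).
	y_true = np.array([0] * 90 + [1] * 10)
	y_pred = np.array([0] * 88 + [1] * 2 + [0] * 5 + [1] * 5)

	# Binary Jaccard depends on which class is treated as "positive":
	print(jaccard_score(y_true, y_pred, pos_label=1))      # minority class: 5/12 ~ 0.42
	print(jaccard_score(y_true, y_pred, pos_label=0))      # majority class: 88/95 ~ 0.93
	print(jaccard_score(y_true, y_pred, average="macro"))  # mean IoU over both classes
	print(matthews_corrcoef(y_true, y_pred))               # accounts for both classes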

Another possible source of error has also been mentioned in #16: there, the two values are calculated on decimal numbers instead of the class labels 0 and 1 (or 1 and 2). I have therefore adjusted these lines:

"def dice_coef(y_true, y_pred):\n",

Now they look like this to me:

from tensorflow.keras import backend as k

def jaccard(y_true, y_pred):
	"""Calculate the Jaccard index during training and validation.

	Args:
		y_true: ground-truth labels
		y_pred: predicted labels
	"""
	# Round continuous predictions to hard 0/1 labels before computing the metric.
	y_true_f = k.round(k.flatten(y_true))
	y_pred_f = k.round(k.flatten(y_pred))
	intersection = k.sum(y_true_f * y_pred_f)
	union = k.sum(y_true_f + y_pred_f - y_true_f * y_pred_f)
	jacc = intersection / (union + k.epsilon())
	return jacc

(The author had good arguments for his variant, but on my own data those advantages did not show up.)
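For completeness, a Dice coefficient adjusted in the same way might look like this (my sketch following the same rounding pattern, not the repository's original implementation):

	def dice_coef(y_true, y_pred):
		"""Calculate the Dice coefficient on rounded (hard) 0/1 labels.

		Sketch following the same rounding approach as jaccard() above;
		not the repository's original implementation.
		"""
		y_true_f = k.round(k.flatten(y_true))
		y_pred_f = k.round(k.flatten(y_pred))
		intersection = k.sum(y_true_f * y_pred_f)
		return (2.0 * intersection) / (k.sum(y_true_f) + k.sum(y_pred_f) + k.epsilon())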

No idea if any of these will solve your problem, but it's certainly worth looking into.

@BashCache (Author)

Thank you @saskra for the immediate reply. I will try the methods described above for my model training.


saskra commented Feb 16, 2022

Out of curiosity, did my tips help?

@BashCache (Author)

Hi @saskra,
I tried the formula you mentioned here, and after rounding off the pixel values, the Jaccard index and Dice coefficient seem to be performing really well. Apologies, I was busy with some work and couldn't respond to the thread after trying it out. Thank you so much for resolving the issue :)

nibtehaz (Owner) commented Apr 3, 2023

I am thankful to @saskra for his contribution. I am closing the issue, as @BashCache's problem is resolved.

nibtehaz closed this as completed Apr 3, 2023