
Dice over all samples #67

Closed
ghost opened this issue Mar 4, 2018 · 5 comments

Comments

@ghost

ghost commented Mar 4, 2018

I am new to Keras. I am trying to use your code for brain tumor segmentation, but I am confused about the Dice metric. I can't tell whether this code measures the Dice for each image and then takes the mean over all samples, or measures the Dice over all samples at once. That is, given an input of shape (3000, 218, 218) with 3000 samples, does it calculate the Dice for each 218×218 image and then average over the 3000 samples, or does it flatten this huge matrix into one vector and measure the Dice over all samples at once?
Thanks

@jocicmarko
Owner

Hi, welcome to the Keras community :).
The code I provided calculates the Dice metric over the whole batch - it doesn't average over samples.
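To make the distinction concrete, here is a minimal NumPy sketch (an illustration, not the repository's exact code; `smooth` stands in for the smoothing constant the repo's `dice_coef` uses, assumed here to be 1):

```python
import numpy as np

def dice_batch(y_true, y_pred, smooth=1.0):
    # Flatten the entire batch into one vector: a single Dice score for all samples at once.
    yt, yp = y_true.ravel(), y_pred.ravel()
    intersection = np.sum(yt * yp)
    return (2.0 * intersection + smooth) / (np.sum(yt) + np.sum(yp) + smooth)

def dice_per_sample(y_true, y_pred, smooth=1.0):
    # One Dice score per image, then the mean over the batch.
    scores = []
    for yt, yp in zip(y_true, y_pred):
        yt, yp = yt.ravel(), yp.ravel()
        intersection = np.sum(yt * yp)
        scores.append((2.0 * intersection + smooth) / (np.sum(yt) + np.sum(yp) + smooth))
    return float(np.mean(scores))
```

The two generally disagree: for example, an all-empty mask pair pulls the per-sample mean toward 1 (because of the smoothing term) while barely affecting the batch-wide score.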

@ghost
Author

ghost commented Mar 5, 2018

Thanks for your response, I understand now. So, in this way, when I give it a validation set, it measures the Dice over the whole set!
I have another question! I rewrote your code using NumPy instead of Keras backend values. However, the Dice value is different. What do you think? Is there any advantage to using Keras variables over NumPy?

@jocicmarko
Owner

So, in this way, when I give it a validation set, it measures the Dice over the whole set!

Exactly, so this is not a reliable metric for reporting (in a paper for example). I suggest that for validation/test you calculate the Dice metric per image (batch size 1) and then average over those.
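The per-image reporting suggested above can be sketched as follows (a hypothetical helper, assuming binary ground-truth masks and sigmoid outputs thresholded at 0.5):

```python
import numpy as np

def report_dice(y_true, y_prob, threshold=0.5, smooth=1.0):
    # Evaluate as if batch size were 1: one Dice per image,
    # then mean and std across images for reporting.
    y_pred = (y_prob >= threshold).astype(np.float64)
    scores = []
    for yt, yp in zip(y_true, y_pred):
        yt, yp = yt.ravel(), yp.ravel()
        inter = np.sum(yt * yp)
        scores.append((2.0 * inter + smooth) / (np.sum(yt) + np.sum(yp) + smooth))
    return float(np.mean(scores)), float(np.std(scores))
```

Reporting the standard deviation alongside the mean also conveys how much the score varies from image to image, which the batch-wide number hides.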

I rewrote your code using NumPy instead of Keras values. However, the Dice value is different. What do you think? Is there any advantage to using Keras variables over NumPy?

How large is the difference? Some difference is normal, usually because NumPy defaults to float64 precision while TensorFlow uses float32. However, if the difference is large, the implementations might actually differ.
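The float32-vs-float64 effect is easy to reproduce in NumPy alone (a sketch on randomly generated data; the exact magnitude of the discrepancy depends on the inputs):

```python
import numpy as np

def dice(yt, yp, smooth=1.0):
    inter = np.sum(yt * yp)
    return (2.0 * inter + smooth) / (np.sum(yt) + np.sum(yp) + smooth)

rng = np.random.default_rng(0)
probs = rng.random((16, 218, 218))          # fake soft predictions
truth = rng.random((16, 218, 218)) > 0.5    # fake binary masks

# Same formula, different accumulation precision:
d64 = dice(truth.astype(np.float64), probs.astype(np.float64))
d32 = dice(truth.astype(np.float32), probs.astype(np.float32))
# d32 and d64 agree only to roughly float32 precision, not exactly.
```

A tiny mismatch at this level is expected; a large one points to a genuine difference between the two implementations.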

@ghost
Author

ghost commented Mar 5, 2018

For instance, for one epoch in Keras I get a Dice over all images in the validation set of about 0.0442, but when I save the model from exactly that epoch and run model.predict over the same validation set, the Dice value (NumPy) is around 0.0698!
The difference is actually high if you look at some other metrics, like precision:
Keras (precision: 0.3563)
NumPy (precision: 0.9464)

```python
from keras import backend as K

def precision(y_true, y_pred):
    # Keras backend version: runs on tensors during training.
    true_positives = K.sum(K.round(K.clip(y_true * y_pred, 0, 1)))
    predicted_positives = K.sum(K.round(K.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + K.epsilon())
    return precision
```

```python
import numpy as np

def precision(y_true, y_pred):
    # NumPy version of the same metric.
    true_positives = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))
    precision = true_positives / (predicted_positives + 1e-7)  # guards /0, like K.epsilon()
    return precision
```
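As a sanity check, a NumPy precision that clips the product `y_true * y_pred` and keeps an epsilon mirroring `K.epsilon()` can be exercised on toy data (`precision_np` and the arrays below are hypothetical illustrations):

```python
import numpy as np

def precision_np(y_true, y_pred, eps=1e-7):
    # TP = positions predicted positive that are truly positive;
    # precision = TP / (all predicted positives + eps).
    true_positives = np.sum(np.round(np.clip(y_true * y_pred, 0, 1)))
    predicted_positives = np.sum(np.round(np.clip(y_pred, 0, 1)))
    return true_positives / (predicted_positives + eps)

y_true = np.array([1., 1., 0., 0., 1.])
y_pred = np.array([1., 0., 1., 0., 1.])
p = precision_np(y_true, y_pred)  # TP = 2, predicted positives = 3, so p ≈ 2/3
```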

@ghost
Author

ghost commented Mar 6, 2018

I found some bugs in my code. The values are now more or less the same, with small differences that I think can be ignored:
dice (Keras): 0.002789
dice (NumPy): 0.002946
The other metrics behave similarly in the initial epochs.

@ghost ghost closed this as completed Mar 13, 2018