Dice over all samples #67
Hi, welcome to the Keras community :)
Thanks for your response. I understand now. So, in this way, when I give it a validation set, it measures the Dice over the whole set again!
Exactly, so this is not a reliable metric for reporting (in a paper for example). I suggest that for validation/test you calculate the Dice metric per image (batch size 1) and then average over those.
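The per-image reporting suggested above can be sketched like this (a minimal NumPy sketch; `dice_coef`, `mean_dice_per_image`, and the `smooth` term are my own illustrative names, not necessarily this repo's exact implementation):

```python
import numpy as np

def dice_coef(y_true, y_pred, smooth=1.0):
    # Dice coefficient for a single pair of binary masks.
    y_true = y_true.flatten()
    y_pred = y_pred.flatten()
    intersection = np.sum(y_true * y_pred)
    return (2.0 * intersection + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

def mean_dice_per_image(y_true_batch, y_pred_batch):
    # Compute Dice per image, then average over the set -- the
    # reporting scheme suggested above, instead of one Dice score
    # pooled over all pixels of all images at once.
    scores = [dice_coef(t, p) for t, p in zip(y_true_batch, y_pred_batch)]
    return float(np.mean(scores))
```

In practice you would run `model.predict` with batch size 1 (or iterate over the predictions) and feed each prediction/ground-truth pair to `dice_coef` before averaging.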
How large is the difference? It's normal that there is some difference, usually because NumPy defaults to float64 precision while TensorFlow uses float32 precision. However, if the difference is large, it might be that the implementations are different.
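The float32/float64 effect mentioned here is easy to reproduce in isolation (a toy demonstration, not taken from this repo):

```python
import numpy as np

# The same sum computed in float32 vs float64: the float32 result
# drifts because 0.1 is not exactly representable in float32 and
# rounding error accumulates. A metric computed inside TensorFlow
# (float32) and re-computed in NumPy (float64) can therefore
# legitimately differ by a small amount.
x = np.full(1_000_000, 0.1)
s64 = x.sum()                                      # float64 accumulation
s32 = x.astype(np.float32).sum(dtype=np.float32)   # float32 accumulation
print(abs(float(s32) - float(s64)))                # small but nonzero
```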
For instance, after 1 epoch in Keras I get a Dice over the whole validation set of about 0.0442, but when I save the model from exactly that epoch and run model.predict over the same validation set, I get a Dice value (computed in NumPy) of around 0.0698!
I found some bugs in my code. So basically the values are more or less the same, with small differences that I think can be ignored.
I am new to Keras. I am trying to use your code for brain tumor segmentation, but I am confused about the Dice metric. I can't tell whether this code measures the Dice for each image and then takes the mean over all samples, or measures the Dice over all samples at once. For example, given an input of shape (3000, 218, 218) with 3000 samples: does it calculate the Dice for each 218×218 image and then average over the 3000 samples, or does it flatten this large array into one vector and compute a single Dice over all samples at once?
Thanks
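The two interpretations asked about above really do give different numbers. A toy example (my own illustration with hypothetical data, not this repo's code):

```python
import numpy as np

def dice(y_true, y_pred, smooth=1.0):
    # Dice over whatever is passed in, flattened to one vector.
    inter = np.sum(y_true * y_pred)
    return (2.0 * inter + smooth) / (np.sum(y_true) + np.sum(y_pred) + smooth)

# Two toy 2x2 "images" with ground truth t and prediction p.
t = np.array([[[1, 1], [0, 0]], [[1, 0], [0, 0]]], dtype=float)
p = np.array([[[1, 0], [0, 0]], [[0, 0], [0, 0]]], dtype=float)

# Interpretation 1: one Dice over all pixels of all images at once.
pooled = dice(t, p)                                            # 0.6
# Interpretation 2: Dice per image, then the mean over images.
per_image = np.mean([dice(ti, pi) for ti, pi in zip(t, p)])    # 0.625

print(pooled, per_image)
```

When a Keras metric receives a whole batch, it typically sees the pooled version, which is why evaluating per image (batch size 1) and averaging gives a different, and for reporting purposes more meaningful, number.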