Dice losses computed for the whole batch? #22
Comments
Hi Fabian,
This behaviour is now fixed in commit: Thanks again for pointing it out!
Thanks both, I'm closing this issue. |
Hello, I'm having a little trouble understanding this. Under the generalized_dice_loss function, the score is calculated as:
Hi @shilpa-ananth,
Hey there,
when looking into the implementation of the dice losses, I noticed that they are not computed for each image in the batch individually but rather over the entire batch (if I understood that correctly). This is not mentioned in the corresponding paper ("Generalised Dice overlap as a deep learning loss function for highly unbalanced segmentations" by C. H. Sudre et al.).
Is that behavior intentional? Due to the nature of the Dice loss, computing it over the entire batch vs. computing it for each sample individually and then taking the mean is not equivalent.
Regards,
Fabian
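The non-equivalence Fabian describes is easy to demonstrate numerically. Below is a minimal sketch (not NiftyNet's actual code; the `dice` helper and the toy arrays are made up for illustration) comparing the Dice score of a flattened batch with the mean of per-sample Dice scores:

```python
import numpy as np

def dice(pred, target, eps=1e-6):
    # Soft Dice score: 2 * |P ∩ T| / (|P| + |T|)
    inter = np.sum(pred * target)
    return (2.0 * inter + eps) / (np.sum(pred) + np.sum(target) + eps)

# Two hypothetical samples with very different foreground sizes
pred1   = np.array([1.0, 1.0, 0.0, 0.0])
target1 = np.array([1.0, 0.0, 0.0, 0.0])   # small object, imperfect match
pred2   = np.ones(100)
target2 = np.ones(100)                     # large object, perfect match

# Dice computed over the entire batch at once
batch_dice = dice(np.concatenate([pred1, pred2]),
                  np.concatenate([target1, target2]))

# Mean of the per-sample Dice scores
mean_dice = (dice(pred1, target1) + dice(pred2, target2)) / 2.0

print(batch_dice, mean_dice)  # ≈ 0.995 vs. ≈ 0.833
```

The batch-wise score is dominated by the large, well-segmented object, while the per-sample mean weights the small object's error equally, which is why the two formulations can drive training toward different solutions.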