predicted mask format #21
During training, should I convert the output from the U-Net to a binary mask prior to feeding it into soft_dice_cldice? I used raw logits, but the training results are really bad. However, when I tried converting to a binary mask, I lost the gradient on the tensor.

Comments

Hi, thanks for reaching out. To make it short: you should not convert the output to a binary mask. Did you also train using only the dice loss (alpha=0)? What do your results look like in that case? :)

@jocpae Thanks for replying. I think I fixed this by applying only a sigmoid to the logits. You are right, I shouldn't convert the output to just 0s and 1s. I trained with alpha = 0.5, and found that the results are better when combining soft_dice_cldice with another loss such as BCE.

Good to hear :)
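The fix discussed in this thread can be sketched in a few lines. This is a minimal PyTorch illustration, not the repository's actual soft_dice_cldice code: `soft_dice` below is a generic soft Dice stand-in, and the shapes and 0.5 weighting are assumptions for the demo. It shows why a sigmoid on the logits keeps gradients flowing, while hard thresholding to a binary mask detaches the tensor from the autograd graph.

```python
import torch
import torch.nn.functional as F

def soft_dice(probs, target, eps=1e-6):
    # Soft Dice computed on probabilities, NOT a thresholded mask,
    # so the loss stays differentiable w.r.t. the logits.
    inter = (probs * target).sum()
    return 1.0 - (2.0 * inter + eps) / (probs.sum() + target.sum() + eps)

# Stand-ins for a U-Net's raw output and a ground-truth mask.
logits = torch.randn(1, 1, 8, 8, requires_grad=True)
target = (torch.rand(1, 1, 8, 8) > 0.5).float()

# Correct: sigmoid keeps the graph intact; combining the soft Dice term
# with BCE (here weighted 0.5 / 0.5, as in the thread) works as expected.
probs = torch.sigmoid(logits)
loss = 0.5 * soft_dice(probs, target) \
     + 0.5 * F.binary_cross_entropy_with_logits(logits, target)
loss.backward()
assert logits.grad is not None  # gradients reached the logits

# Wrong: thresholding is non-differentiable, so the binary mask carries
# no gradient history -- exactly the "lost the grad" problem above.
binary = (probs > 0.5).float()
assert not binary.requires_grad
```

Note that `binary_cross_entropy_with_logits` takes the raw logits directly (it applies the sigmoid internally, in a numerically stable way), while the soft Dice term takes the sigmoid probabilities.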