Class weights for Losses #554
After some searching, I have solved the problem by implementing class weights in DiceLoss. After the loss is calculated, I multiply it by the class weights and changed aggregate_loss to loss.sum(). The weights must be normalized. Here is the code:
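(The code block from this comment did not survive the export. Below is a minimal, hypothetical sketch of the approach described above — a per-class Dice loss multiplied by normalized class weights and aggregated with `loss.sum()`. This is not the author's exact code; the names `soft_dice_per_class` and `WeightedDiceLoss` are made up for illustration.)

```python
import torch
import torch.nn as nn


def soft_dice_per_class(y_pred, y_true, eps=1e-7):
    # y_pred, y_true: (N, C, H, W); y_pred already passed through softmax/sigmoid
    dims = (0, 2, 3)
    intersection = torch.sum(y_pred * y_true, dims)
    cardinality = torch.sum(y_pred + y_true, dims)
    return 1.0 - (2.0 * intersection + eps) / (cardinality + eps)  # shape (C,)


class WeightedDiceLoss(nn.Module):
    def __init__(self, class_weights=None):
        super().__init__()
        self.class_weights = class_weights  # iterable of length C, or None

    def forward(self, y_pred, y_true):
        loss = soft_dice_per_class(y_pred, y_true)  # per-class Dice loss, (C,)
        if self.class_weights is None:
            # Equal weights if none are given (each class counts the same)
            weights = torch.full_like(loss, 1.0 / loss.numel())
        else:
            weights = torch.as_tensor(self.class_weights, dtype=loss.dtype)
            weights = weights / weights.sum()  # normalize so weights sum to 1
        # Multiply each per-class loss by its weight, then aggregate with sum
        return (loss * weights).sum()
```

Because the weights are normalized to sum to 1, the weighted sum stays on the same scale as an unweighted mean over classes.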
Hi @augasur, this is lucky timing, as I still need to implement a weighted dice loss. Your model works well with the weighted dice loss, right? Thank you.
It seems to train more accurately when I reduce the background class weight. I will test more in the future. If you try it, please share your findings on how it changed your output, for better or worse.
I think this line is meant to add a weight for each class?
Each loss is multiplied by its class weight in this line:
I mean, aren't the two lines I mentioned the author's way of adding a weight for each class? In your case, every weight contributes equally, so I don't think your code is right.
If I read correctly, those two lines are used for masking non-empty classes, but they do not act as weights. I have implemented class weights like the code shown here, just a bit more efficiently: https://github.com/pytorch/pytorch/issues/1249#issuecomment-339904369. As for training, my model now achieves far better results than with the plain Dice loss.
I checked your code. For example, if I have 3 classes, the class weights from your code are (0.33, 0.33, 0.33). I don't think such weights will help the model learn from an imbalanced dataset.
If you pass class_weights = None, it assigns equal weights, in your case (0.33, 0.33, 0.33). But if you pass (0.45, 0.45, 0.1), the first two classes will have a much bigger impact on the loss computation than the last one. In my case, it works perfectly. BTW, you have to calculate the weights over the training dataset before you pass them; they are not recalculated at each step.
Yeah, if the weights are (0.45, 0.45, 0.1), that will change the loss function.
y_true.size(1) is the number of classes, right? So the returned weights all have the same ratio.
Oh sorry, I just noticed this line.
How can I compute the class weights before the forward pass? Thank you! Sorry again.
@augasur Can you share a tutorial on calculating the class weights? I guess the process is:
Is this proposed approach right? Thank you.
This line defines equal weights when your class_weights are None; otherwise the code would throw an exception. The weights can be calculated by counting how many pixels of each mask/class there are in your training dataset.
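A sketch of that counting step (the helper name `compute_class_weights` is hypothetical; this assumes masks are 2-D arrays of integer class indices, and uses inverse pixel frequency so rarer classes get larger weights, as in the linked PyTorch issue):

```python
import numpy as np


def compute_class_weights(masks, num_classes):
    """Count pixels per class over the whole training set, invert the
    frequencies (rarer class -> larger weight), and normalize to sum to 1."""
    counts = np.zeros(num_classes, dtype=np.float64)
    for mask in masks:  # each mask: 2-D array of class indices
        counts += np.bincount(mask.ravel(), minlength=num_classes)
    freqs = counts / counts.sum()
    weights = 1.0 / np.maximum(freqs, 1e-7)  # inverse frequency, guard empty classes
    return weights / weights.sum()           # normalize so the weights sum to 1
```

These weights are computed once, before training, and then passed to the loss; they are not updated per batch.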
Yes, you are right.
Sorry @augasur, I want to confirm this solution once more; I don't want to implement it the wrong way! Assume that after step 2 I get these ratios for 3 classes: (0.3, 0.5, 0.2). So I will compute class weight = (1/0.3, 1/0.5, 1/0.2). After I have the class weights, your code computes one more step, right?
Step 4: sum_weights = (1/0.3 + 1/0.5 + 1/0.2), then class_weights = (1/0.3) / sum_weights, etc. Is this step right? Thank you!
The class weights should be (0.3, 0.5, 0.2); a smaller value means the class has less influence on the loss. That line just normalizes the weights, so you can pass (3, 5, 2) instead of (0.3, 0.5, 0.2).
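For example, the normalization described here is just a division by the sum, so (3, 5, 2) and (0.3, 0.5, 0.2) are equivalent inputs:

```python
import torch

# Unnormalized weights (3, 5, 2) become (0.3, 0.5, 0.2) after dividing by
# their sum (10), which is all the normalization line does.
w = torch.tensor([3.0, 5.0, 2.0])
w = w / w.sum()
print(w)  # tensor([0.3000, 0.5000, 0.2000])
```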
Hi @augasur, I think if one class has a probability of 0.5, we should set its class weight to 1/0.5 = 2, instead of 0.5 × 10 = 5, because the loss needs to focus on the classes with lower probability, right?
Hello @augasur, I rewrote my weighted dice loss based on the original paper. You can test it!
Hi @augasur, I am a bit confused. I'm trying to use your weighted dice loss for my multiclass image segmentation.
The line `assert len(self.class_weights) == y_true.size(1)` fails: I have 8 classes, but my y_true.size(1) is 1, since my target is a grayscale image whose pixel values are the 8 class indices. Could you help me out with this?
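One way to resolve this (a sketch, assuming the weighted loss expects a one-hot (N, C, H, W) target; the helper name `to_one_hot` is hypothetical) is to expand the integer label map before calling the loss:

```python
import torch
import torch.nn.functional as F


def to_one_hot(mask, num_classes):
    # mask: (N, H, W) integer class indices -> (N, C, H, W) float one-hot target,
    # so y_true.size(1) equals the number of classes and the assert passes.
    return F.one_hot(mask.long(), num_classes).permute(0, 3, 1, 2).float()
```

After this conversion, y_true.size(1) is 8 and matches len(class_weights).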
Hi, I love using this library.
I have encountered a problem: my datasets are very imbalanced. They have multiple classes, but those classes take up less than 2% of the image space (they are mainly small objects); the rest is background, and Unet seems to fail to predict them accurately.
Using your segmentation_models library for TensorFlow, I was able to use class weights in the losses, and it increased the model's prediction accuracy.
Is it possible to use class weights with this library? Is there any code snippet?
Best Regards,
Augustas