RuntimeError: expected dtype Half but got dtype Long #25
As shown in lines 124-125 of u2net_train.py: inputs = inputs.type(torch.FloatTensor), labels = labels.type(torch.FloatTensor). Inputs, labels, and predictions are all FloatTensor. If you have changed the training code, please make sure you have converted your mask to FloatTensor as well. To use it for binary segmentation, just replace the path of the training data with your own data.
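A minimal sketch of that dtype conversion (the tensor names here are illustrative, not the repo's exact variables): masks loaded as integer tensors come out as Long, and must be cast to float before the BCE loss sees them.

```python
import torch

# Masks loaded from 0/255 PNGs or via torch.randint are Long (int64);
# BCE losses expect float targets, hence the cast in u2net_train.py.
mask = torch.randint(0, 2, (2, 1, 320, 320))  # dtype: torch.int64 (Long)
labels = mask.type(torch.FloatTensor)         # dtype: torch.float32
```

Skipping this cast is exactly what produces "expected dtype Half but got dtype Long"-style errors in the loss.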
@Nathanua I am trying to use it with fastai. How can I convert the output to obtain 0 for an unselected pixel and 1 for a selected pixel?
Just threshold it. "prediction = prediction > 0.5" should work.
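A minimal sketch of that thresholding step (the tensor here is a stand-in for the model's sigmoid output):

```python
import torch

prediction = torch.rand(1, 1, 4, 4)              # stand-in probability map in [0, 1]
binary = (prediction > 0.5).type(torch.uint8)    # 1 = selected pixel, 0 = background
```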
Thank you for all your help! I updated my code to cast the mask to FloatTensor. Now I am getting the next error:

547 def muti_bce_loss_fusion(d0, d1, d2, d3, d4, d5, d6, labels_v):
548     print(labels_v.shape)
--> 549     loss0 = bce_loss(d0, labels_v)
ValueError: Target size (torch.Size([2, 1002, 1002])) must be the same as input size (torch.Size([2, 1, 1002, 1002]))

The dimensions of the two tensors d0 and labels_v should be the same. You have to reshape your ground truth with a reshaping operation, e.g. np.reshape(gt, (2, 1, height, width)) in NumPy or a similar operation in PyTorch.
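The shape mismatch in the traceback is the ground truth missing its singleton channel dimension. A minimal PyTorch sketch of the fix (using unsqueeze rather than np.reshape; the tensor names follow the traceback):

```python
import torch

labels_v = torch.rand(2, 1002, 1002)  # ground truth: [batch, H, W]
d0 = torch.rand(2, 1, 1002, 1002)     # model output: [batch, 1, H, W]

labels_v = labels_v.unsqueeze(1)      # insert channel dim -> [2, 1, 1002, 1002]
assert labels_v.shape == d0.shape     # shapes now match for bce_loss
```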
Now the loss is working. However, I don't understand what the output of the model is. I tried to use the threshold as you said.
I am using the next code @Nathanua:

d0, d1, d2, d3, d4, d5, d6 = self.model(*self.xb)
self.pred = d0.clone()
self.pred = F.sigmoid(self.pred)
self.pred = normPRED(self.pred)
self.pred = self.pred > 0.5
self.pred = self.pred.type(torch.uint8)

I think if you didn't change the definition of u2net, then please remove the F.sigmoid() call in your code, because there is already a sigmoid at the end of our model. It should work then.
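A sketch of why the extra F.sigmoid() is harmful, assuming the unmodified model already applies sigmoid to d0: applying sigmoid a second time to values already in (0, 1) compresses everything into roughly (0.5, 0.73), which distorts the 0.5 threshold.

```python
import torch

logits = torch.randn(1, 1, 4, 4)
d0 = torch.sigmoid(logits)    # what the unmodified forward already returns, in (0, 1)
double = torch.sigmoid(d0)    # the bug: everything lands in (0.5, ~0.731)

pred = (d0 > 0.5).type(torch.uint8)  # correct: threshold the model output directly
```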
I removed the F.sigmoid at the forward:

return d0, d1, d2, d3, d4, d5, d6

I also changed bce_loss to:

bce_loss = nn.BCEWithLogitsLoss(size_average=True)

These changes are to allow the use of torch.cuda.amp.autocast and mixed-precision training.

All my code looks as follows:

One batch

Then you have to debug step by step. For example, output the pre-thresholded probability maps and see if they make sense. If yes, then try to debug the normalization and the thresholding function. A good way to debug is to output the intermediate variables and visually check the results. Best of luck.
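A sketch of the mixed-precision setup described above, with BCEWithLogitsLoss taking raw logits (the sigmoid is folded into the loss). The model and optimizer here are placeholders, not U-2-Net itself, and size_average is deprecated in newer PyTorch in favor of reduction='mean':

```python
import torch
import torch.nn as nn

bce_loss = nn.BCEWithLogitsLoss(reduction='mean')  # sigmoid + BCE in one numerically-stable op

model = nn.Conv2d(3, 1, 3, padding=1)              # placeholder for a logits-returning U-2-Net
opt = torch.optim.Adam(model.parameters())
use_cuda = torch.cuda.is_available()
scaler = torch.cuda.amp.GradScaler(enabled=use_cuda)

inputs = torch.rand(2, 3, 64, 64)
labels = torch.randint(0, 2, (2, 1, 64, 64)).float()

with torch.cuda.amp.autocast(enabled=use_cuda):
    logits = model(inputs)                          # forward returns logits, no sigmoid
    loss = bce_loss(logits, labels)

scaler.scale(loss).backward()
scaler.step(opt)
scaler.update()
```

BCEWithLogitsLoss is the autocast-safe choice here: plain BCELoss on sigmoid outputs is not safe under float16.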
Thank you for your help. I will debug it tomorrow. I don't know what is happening. Printing torch.max and torch.min of the sigmoid and the normalization looks okay.

<https://user-images.githubusercontent.com/41203448/82385336-24859d00-9a32-11ea-977b-522b0d72a581.png>

Loss is working.

Could you post the code that you used for transforming the prediction into the black-and-white images that are in the readme, please? Thank you very much for all your help!

Sorry, in our readme file, all the maps are probability maps, not thresholded ones. I have already sent you the code: prediction = prediction > 0.5. My suggestion is to VISUALLY look at the prediction results. If they make sense, you can check that your data format is correct and that the dice computation function gets correctly matched inputs in the correct format. Best of luck!
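A sketch of how such a probability map can be written out as a grayscale image: min-max normalize the prediction (mirroring the normPRED helper mentioned earlier in the thread) and scale it to 0-255. The Pillow call is left commented to keep the sketch dependency-free.

```python
import torch

def norm_pred(d):
    # min-max normalization of a probability map to [0, 1]
    ma, mi = torch.max(d), torch.min(d)
    return (d - mi) / (ma - mi)

pred = torch.rand(1, 320, 320)                          # stand-in probability map
img = (norm_pred(pred) * 255).squeeze().byte().numpy()  # uint8 grayscale array
# from PIL import Image
# Image.fromarray(img).save('probability_map.png')
```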
Debugging my code, I found that the problem was with the dice metric: that metric was taking an argmax of the prediction. However, the output of this model is just one channel, so there is no need to apply that. This model throws this warning:
How can I solve it? Thank you very much for your help @Nathanua.
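A sketch of a dice metric suited to this single-channel sigmoid output: threshold the prediction instead of taking an argmax, since there is only one foreground channel. The helper name and epsilon are illustrative.

```python
import torch

def dice(pred, target, eps=1e-7):
    # pred: probabilities in [0, 1]; target: 0/1 floats of the same shape
    pred = (pred > 0.5).float()
    inter = (pred * target).sum()
    return (2 * inter + eps) / (pred.sum() + target.sum() + eps)

p = torch.tensor([[0.9, 0.1], [0.8, 0.2]])
t = torch.tensor([[1.0, 0.0], [1.0, 0.0]])
score = dice(p, t)  # perfect overlap here, so the score is 1.0
```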
I guess that is possibly because you are using a different version of PyTorch. This is just a warning. It shouldn't be a big issue if it doesn't impact the performance.
I am trying to use this model for binary segmentation.
When I pass the mask as a Tensor to muti_bce_loss_fusion, I get this error:
What is the format of the output of the model? What is the expected format for the label?
How could I use this model for binary segmentation?