Size doesn't match when augmenting #102
Comments
It comes from the encoder. I printed the output and got shape 57×57. Should I adjust something in the encoder or decoder to get a suitable size?
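For context, segmentation encoders downsample the input (e.g. 224×224 in, 57×57 feature map out, the extra pixel coming from padding/ceil rounding), so predictions must be upsampled back to input resolution before any pixel-aligned operation. A minimal sketch of the usual fix, assuming `outs['pred']` is a logits tensor (shapes here are illustrative):

```python
import torch
import torch.nn.functional as F

# Hypothetical shapes: batch of 2, 21 classes, 57x57 encoder output for a 224x224 input.
pred = torch.randn(2, 21, 57, 57)   # stand-in for outs['pred']
h, w = 224, 224                     # original image size

# Upsample logits to the input resolution before pixel-aligned augmentation.
pred_up = F.interpolate(pred, size=(h, w), mode="bilinear", align_corners=False)
print(pred_up.shape)  # torch.Size([2, 21, 224, 224])
```

If the augmentation is applied at full resolution, upsampling like this avoids the 57-vs-224 mismatch entirely.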
Sorry, I cannot fully understand what you have encountered. What exactly does ['pre'] mean?
It is outs['pred'] from your code.
Thanks for the reply!
1. How many images are used to extract unreliable pixels for the contrastive loss?
2. You said that all features of unreliable pixels are stored in a memory bank, but I searched the code and cannot find where you add them to the memory bank list. Do you get them from dist?
Provide your script and log, please.
```yaml
trainer:  # Required.
saver:
criterion:
net:  # Required.
```
It seems you are running our code on your own dataset. I am not sure whether there are any issues in the dataset.
I tried with my own dataset and succeeded about 6 months ago. I want to understand your memory bank and dequeue_and_enqueue.
Please refer to the PyTorch documentation for understanding.
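For reference, a FIFO memory bank of the kind asked about can be sketched with plain tensor ops. This is a generic illustration, not the repository's exact `dequeue_and_enqueue`; the function name, feature dimension, and capacity below are made up:

```python
import torch

def dequeue_and_enqueue(bank, keys, capacity=256):
    """Append new unreliable-pixel features; drop the oldest beyond capacity."""
    bank = torch.cat([bank, keys], dim=0)   # enqueue new features
    if bank.size(0) > capacity:
        bank = bank[-capacity:]             # dequeue the oldest entries
    return bank

bank = torch.zeros(0, 64)                   # empty bank of 64-dim features
for _ in range(10):
    feats = torch.randn(50, 64)             # features gathered from one batch
    bank = dequeue_and_enqueue(bank, feats)
print(bank.shape)  # torch.Size([256, 64])
```

In a distributed setup, the per-GPU features are typically all-gathered (e.g. with `torch.distributed.all_gather`) before being enqueued, which may be the "dist" step you noticed.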
|
How can I extract negative samples?
https://github.com/Haochen-Wang409/U2PL/blob/main/u2pl/utils/loss_helper.py#L190 |
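The linked function draws negatives from low-confidence (unreliable) predictions. A hedged sketch of the general idea, selecting candidate negatives by per-pixel entropy; the threshold and shapes are illustrative, not the repository's exact values:

```python
import torch

# Hypothetical softmax probabilities for one 4x4 image with 5 classes.
prob = torch.softmax(torch.randn(1, 5, 4, 4), dim=1)

# Per-pixel entropy; high entropy means an unreliable prediction.
entropy = -(prob * torch.log(prob + 1e-10)).sum(dim=1)   # shape (1, 4, 4)

# Treat the top 20% highest-entropy pixels as candidate negatives.
thresh = torch.quantile(entropy.flatten(), 0.8)
neg_mask = entropy >= thresh                             # (1, 4, 4) bool mask
print(neg_mask.sum().item())  # number of candidate negative pixels
```

Features at the masked positions would then be pushed into the memory bank and contrasted against reliable anchors.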
Will not using the contrastive loss significantly affect model training?
Without contrastive loss, our method is equivalent to CutMix, which is expected to cause performance degradation. |
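Since the reply mentions CutMix, here is a minimal sketch of CutMix for a segmentation pair: a rectangular region of one sample is pasted into another, for both image and label. This is a simplified illustration (fixed box, no random sampling), not the repository's exact implementation:

```python
import torch

def cutmix(img_a, img_b, lbl_a, lbl_b, box):
    """Paste a rectangular region of sample b into sample a (image and label)."""
    y0, y1, x0, x1 = box
    img, lbl = img_a.clone(), lbl_a.clone()
    img[:, y0:y1, x0:x1] = img_b[:, y0:y1, x0:x1]
    lbl[y0:y1, x0:x1] = lbl_b[y0:y1, x0:x1]
    return img, lbl

# Toy 3x8x8 images with constant labels 0 and 1.
img_a, img_b = torch.zeros(3, 8, 8), torch.ones(3, 8, 8)
lbl_a = torch.zeros(8, 8, dtype=torch.long)
lbl_b = torch.ones(8, 8, dtype=torch.long)

img, lbl = cutmix(img_a, img_b, lbl_a, lbl_b, box=(2, 6, 2, 6))
print(lbl.sum().item())  # 16 label pixels copied from sample b
```

Applying the same box to image and label keeps the pseudo-labels aligned with the mixed image, which is what the CutMix baseline relies on.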
Hi, when I get ['pre'] from the model and augment it, I get:
RuntimeError: The size of tensor a (57) must match the size of tensor b (224) at non-singleton dimension 1
The dimensions of the output don't match the real images. Why? Does the error come from the decoder or the encoder of the model?