Zero-dimensional tensor concatenation problem #69
Comments
Hi, I got the same problem recently. I think it is connected to a newer version of PyTorch. What worked for me is replacing …
Regards
Hi Frank, sorry for the late response. I've tried your solution and it works. I'm not sure if …
Boxiao
Yeah, I came across the same issue and my PyTorch version is 0.4. Hope the author @Cysu can look into this.
Actually, this issue is fixed for me on 0.4.1.
@dem123456789 It works fine with PyTorch 0.3.0.
I'm having a similar problem, where I can't concatenate the elements in a list of zero-dimensional tensors:
Here are my terminal logs: …

Alright, so apparently I need to do …
By indexing single items of one-dimensional tensors you get zero-dimensional tensors, which cannot be concatenated. To force getting one-dimensional tensors you can slice with a range instead (e.g. x[i:i+1] rather than x[i]). Side note: I am not sure what you are doing in production, but element-wise multiplication in PyTorch is easily done with the * operator (or torch.mul).
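A quick illustration of the difference (a minimal sketch, assuming a recent PyTorch; x and y are placeholder tensors):

    import torch

    x = torch.tensor([0.1, 0.2, 0.3])    # a one-dimensional tensor
    print(x[1].shape)                    # torch.Size([])  -> zero-dimensional
    print(x[1:2].shape)                  # torch.Size([1]) -> one-dimensional

    # range slices stay one-dimensional, so they can be concatenated as usual
    print(torch.cat([x[0:1], x[2:3]]))   # tensor([0.1000, 0.3000])

    # side note: element-wise multiplication
    y = torch.tensor([2.0, 3.0, 4.0])
    print(x * y)                         # tensor([0.2000, 0.6000, 1.2000])
    print(torch.mul(x, y))               # same result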
Another option is to use unsqueeze to turn a 0-dim tensor into a 1-dim tensor:
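For example (a minimal sketch, assuming lst is a list of zero-dimensional tensors like the dist_ap printed in this issue):

    import torch

    lst = [torch.tensor(0.2895), torch.tensor(0.3334)]    # zero-dimensional tensors
    out = torch.cat([t.unsqueeze(0) for t in lst])        # each element becomes shape [1]
    print(out)                                            # tensor([0.2895, 0.3334])

    # torch.stack(lst) gives the same result for zero-dimensional elements
    print(torch.stack(lst))                               # tensor([0.2895, 0.3334])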
Well, it seems that earlier versions supported that operation, but from 0.4 you should unsqueeze the tensors that are the elements of the list:

    for i in range(len(lst)):
        lst[i] = lst[i].unsqueeze(0)

So the list should look like this:
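A minimal sketch with placeholder values (the original example list is not preserved here):

    import torch

    lst = [torch.tensor(0.2895), torch.tensor(0.3334)]   # zero-dimensional elements
    for i in range(len(lst)):
        lst[i] = lst[i].unsqueeze(0)
    print(lst)              # [tensor([0.2895]), tensor([0.3334])] -- each element is now 1-dim
    print(torch.cat(lst))   # tensor([0.2895, 0.3334]) -- torch.cat now succeeds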
It's useful.
Hi there,
Thank you for the code!
While training the ResNet50 model on the market1501 dataset, I got the following RuntimeError: …
The problem turned out to happen at this specific line of code in triplet.py:

    dist_ap = torch.cat(dist_ap)

I have printed out dist_ap, which is a Python list full of zero-dimensional tensors (each has a printed-out size of torch.Size([])). I used a batch size of 64, so the list has a length of 64:

    [tensor(0.2895, device='cuda:0'), tensor(0.3334, device='cuda:0'), tensor(0.3334, device='cuda:0'), tensor(0.3175, device='cuda:0'), tensor(0.3078, device='cuda:0'), tensor(0.3078, device='cuda:0'), tensor(0.3045, device='cuda:0'), tensor(0.3045, device='cuda:0'), tensor(0.2636, device='cuda:0'), tensor(0.2630, device='cuda:0'), tensor(0.2497, device='cuda:0'), tensor(0.2636, device='cuda:0'), tensor(0.2967, device='cuda:0'), tensor(0.2657, device='cuda:0'), tensor(0.2967, device='cuda:0'), tensor(0.2936, device='cuda:0'), tensor(0.3517, device='cuda:0'), tensor(0.2939, device='cuda:0'), tensor(0.3517, device='cuda:0'), tensor(0.3185, device='cuda:0'), tensor(0.3318, device='cuda:0'), tensor(0.3357, device='cuda:0'), tensor(0.3260, device='cuda:0'), tensor(0.3357, device='cuda:0'), tensor(0.2928, device='cuda:0'), tensor(0.2906, device='cuda:0'), tensor(0.2928, device='cuda:0'), tensor(0.2906, device='cuda:0'), tensor(0.1992, device='cuda:0'), tensor(0.2086, device='cuda:0'), tensor(0.2086, device='cuda:0'), tensor(0.2040, device='cuda:0'), tensor(0.2742, device='cuda:0'), tensor(0.2836, device='cuda:0'), tensor(0.3117, device='cuda:0'), tensor(0.3117, device='cuda:0'), tensor(0.2838, device='cuda:0'), tensor(0.2686, device='cuda:0'), tensor(0.2435, device='cuda:0'), tensor(0.2838, device='cuda:0'), tensor(0.3124, device='cuda:0'), tensor(0.3268, device='cuda:0'), tensor(0.3304, device='cuda:0'), tensor(0.3304, device='cuda:0'), tensor(0.2591, device='cuda:0'), tensor(0.2671, device='cuda:0'), tensor(0.2825, device='cuda:0'), tensor(0.2825, device='cuda:0'), tensor(0.3309, device='cuda:0'), tensor(0.2836, device='cuda:0'), tensor(0.3126, device='cuda:0'), tensor(0.3309, device='cuda:0'), tensor(0.3232, device='cuda:0'), tensor(0.3493, device='cuda:0'), tensor(0.3493, device='cuda:0'), tensor(0.3379, device='cuda:0'), tensor(0.3044, device='cuda:0'), tensor(0.3173, device='cuda:0'), tensor(0.3173, device='cuda:0'), tensor(0.3009, device='cuda:0'), tensor(0.2941, device='cuda:0'), tensor(0.3048, device='cuda:0'), tensor(0.3048, device='cuda:0'), tensor(0.2704, device='cuda:0')]
The values of the tensors themselves don't seem to be a problem, but the concatenation fails. Any idea what the problem is?
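For reference, the failure can be reproduced in isolation (a minimal sketch, assuming PyTorch 0.4 or newer; the values are placeholders):

    import torch

    dist_ap = [torch.tensor(0.2895), torch.tensor(0.3334), torch.tensor(0.3175)]
    torch.cat(dist_ap)
    # RuntimeError: zero-dimensional tensor (at position 0) cannot be concatenated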
Thank you very much.
Boxiao