Tensor size matching error #9
Dear authors, thanks for the great work! I'm trying to train on a custom dataset containing images and Pseudo Dense Representations (PDRs) of size H x W x 1, and I have changed the ResNet encoder's input dimension from 2 to 1 accordingly. However, I'm getting

`RuntimeError: The size of tensor a (10) must match the size of tensor b (15) at non-singleton dimension 3`

at `x = input_features[-1] + beam_features[-1]` in `indepth_decoder.py`. I guess it's related to scaling: the PDR is at its original scale, while the image is scaled down. However, your original `inputs["2channel"] = self.load_4beam_2channel(folder, frame_index, side, do_flip)` doesn't seem to involve any scaling down. Do you have any idea what might be the issue? Thanks!

Comments

Hi, thanks for the quick reply! The original size of my image is 360x480x3, and 360x480x1 for the PDR. While passing …

I think it probably comes from gen2channel.py. Why does your PDR only have one channel? Did you discard the confidence channel?

I've discarded the confidence channel just for experimentation. Thanks for your advice!

Yes indeed. After resizing the PDR the problem goes away. I was using a custom generation script and missed that part. Thank you so much!
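The shape mismatch discussed in this thread can be sketched with a toy example. This is not the repository's code: `encode` is a hypothetical stand-in for the ResNet encoder (which halves the spatial resolution at each stage), and the input sizes are chosen only to reproduce the reported 10-vs-15 mismatch at the deepest feature map.

```python
import numpy as np

def encode(x, num_stages=5):
    """Toy stand-in for a ResNet encoder: each stage halves H and W."""
    h, w = x.shape
    for _ in range(num_stages):
        h, w = h // 2, w // 2
    return np.zeros((h, w))  # shape of the deepest feature map

# Hypothetical sizes: the image is resized to the training resolution,
# but the PDR is left at its original 360x480 resolution.
image = np.zeros((256, 320))
pdr = np.zeros((360, 480))

f_img, f_pdr = encode(image), encode(pdr)
print(f_img.shape, f_pdr.shape)  # (8, 10) vs (11, 15): elementwise add fails

def resize_nearest(x, out_h, out_w):
    """Nearest-neighbour resize, enough for a label-like PDR channel."""
    rows = np.arange(out_h) * x.shape[0] // out_h
    cols = np.arange(out_w) * x.shape[1] // out_w
    return x[rows][:, cols]

# Fix suggested in the thread: resize the PDR to the image resolution
# before it enters the encoder, so both branches downsample in lockstep.
f_pdr_fixed = encode(resize_nearest(pdr, *image.shape))
assert f_img.shape == f_pdr_fixed.shape  # both (8, 10)
```

In a real pipeline the nearest-neighbour resize would typically be done with `torch.nn.functional.interpolate` or PIL at data-loading time, but the point is the same: both inputs must share a spatial resolution before the skip connections add their feature maps.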