Error while Running monodepth2 on Jetson Nano #38
Comments
Thanks for your interest in this repo, and thanks for the interesting issue! We have never seen this problem on our machines here, though we are using more 'traditional' GPUs. There is an issue with pytorch which seems to be similar. There they are using the Nvidia TX2, which seems to be similar to the Jetson Nano. Unfortunately, the solutions proposed there seem to involve modifying the pytorch source code, which is a bit of a pain.
To add to michael's answer, you might want to consider converting the model to TensorRT. It will likely fail due to the use of bilinear upsampling, but you could replace those layers with nearest-neighbor upsampling and fine-tune the models a bit.
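To illustrate the swap being suggested, here is a minimal sketch of replacing a bilinear upsample with a nearest-neighbor one in PyTorch. The tensor shape is illustrative, not taken from the repo:

```python
import torch
import torch.nn.functional as F

# Illustrative decoder feature map (shape is an assumption, not monodepth2's)
x = torch.randn(1, 64, 96, 320)

# Bilinear upsampling, as typically used for smooth interpolation
up_bilinear = F.interpolate(x, scale_factor=2, mode="bilinear", align_corners=True)

# Nearest-neighbor replacement -- note align_corners must be dropped,
# since it only applies to the interpolating modes
up_nearest = F.interpolate(x, scale_factor=2, mode="nearest")

print(up_bilinear.shape, up_nearest.shape)  # both torch.Size([1, 64, 192, 640])
```

Both calls produce outputs of the same shape, so the swap is a drop-in change; the difference is only in how border pixels are interpolated.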
Thank you for the suggestions! @mdfirman I had already seen that issue, but the problem is that the file in question doesn't seem to be anywhere on my machine (I was running into separate errors when building pytorch from source, so I used the prebuilt wheels provided at the link). @mrharicot I'll be sure to look into that.
@arjunkeerthi thanks, please do report back here so that other people using this repo on a Jetson don't run into the same problem.
@mdfirman Hi, I'm still working on the suggestion @mrharicot provided. I'm pretty new to this stuff so it may take me a while, but feel free to close this and I'll get back when I've gotten more done. Thanks.
Hi, sorry for the super late reply. I did try @mrharicot 's suggestion, and was able to successfully convert the encoder over to TensorRT. But reflection padding is not supported in TensorRT (only zero padding is included), so I was unable to convert the decoder. Since I'm busy with other components of our project, I didn't pursue this further, but I suppose it wouldn't be overly difficult to add support for this in TensorRT and then convert the decoder model as well. As a quick fix, though, changing the upsampling mode from "bilinear" to "nearest" lets the model run.
I didn't notice any glaring differences between the disparity images produced from these modes (I tested with "bilinear" on my personal laptop), but my testing was not very extensive. Using pytorch version 1.1.0 and torchvision version 0.3.0, the Nano took about 0.5 seconds per image. When running on a webcam, we were able to achieve roughly 4 fps. Hopefully this helps someone avoid a lot of headache in trying to get this running on the Jetson Nano.
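On the reflection-padding limitation mentioned above: a common pattern is a 3x3 conv block that can pad with either reflection or zeros, so zero padding can be swapped in for TensorRT export. This is an illustrative sketch, not the repo's exact code; the class name, flag, and shapes are assumptions:

```python
import torch
import torch.nn as nn

# Hypothetical 3x3 conv block with switchable padding. Reflection padding
# preserves border statistics but is not supported by TensorRT, so a
# use_refl=False variant uses zero padding instead for export.
class Conv3x3(nn.Module):
    def __init__(self, in_ch, out_ch, use_refl=True):
        super().__init__()
        self.pad = nn.ReflectionPad2d(1) if use_refl else nn.ZeroPad2d(1)
        self.conv = nn.Conv2d(in_ch, out_ch, 3)

    def forward(self, x):
        return self.conv(self.pad(x))

x = torch.randn(1, 16, 32, 32)  # illustrative input
refl = Conv3x3(16, 32, use_refl=True)
zero = Conv3x3(16, 32, use_refl=False)
print(refl(x).shape, zero(x).shape)  # both torch.Size([1, 32, 32, 32])
```

Since both variants keep the spatial size unchanged, switching the padding type doesn't alter the decoder's architecture, though a little fine-tuning may help recover any accuracy lost near image borders.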
Super, thank you for the update! |
Hi, I'm getting `ValueError: align_corners option can only be set with the interpolating modes: linear | bilinear | bicubic | trilinear`. Thanks!
Hi @remindchobits, sorry I forgot to add in my comment above that when switching to nearest neighbor, you need to take out the last argument (align_corners=True) since it can only be used with the modes listed in the error message given. |
Hi, @arjunkeerthi |
Hi, thanks a lot for this project. I am trying to use your work for a summer research project at my university and want to run monodepth2 on the Nvidia Jetson Nano. However, when I try to run the example command to test out the depth prediction for a single image, I get this error:
Weirdly, the file in question (SpatialUpSamplingBilinear.cu) doesn't exist anywhere on my machine. I installed pytorch using this guide: https://devtalk.nvidia.com/default/topic/1049071/jetson-nano/pytorch-for-jetson-nano/post/5324123/#5324123
(This might be a pytorch specific issue, in which case I'd be happy to close this)
Thank you!