Jetson Nano #25
@sakalauskas Awesome - I don't know how it gets down to 30 frames per second on normal CUDA using the Jetson reference apps... but I really like this :D Regards,
Good stuff! Thanks!
Has anyone tried this recently, or am I missing a step? I tested on an NVIDIA Jetson Nano 2GB Developer Kit and am getting the following:
Thanks!
@mloebl I think you might have deleted too many things in detector.go. Only three lines need to be commented out (e.g. see below; keep in mind that this file might have changed in the latest DOODS version, as at the time of writing I was on 2a850c9 HEAD).
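The referenced snippet was not captured here. The lines in question can be located like this before commenting them out by hand (the file path comes from the original post further down):

```sh
# Find the TensorFlow Lite references in detector.go (run from the doods repo root);
# the matching lines are the ones to comment out.
grep -n -i "tflite" detector/detector.go
```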
@sakalauskas Thank you for the quick reply! Same error with the file from you, so it may very well be a version issue; I'll try checking out that version specifically and see what happens.
That was it! 2a850c9 built fine for me, thank you! Now to start playing with it :)
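For anyone following along, pinning the build to that commit looks roughly like this (the upstream repo URL is assumed):

```sh
# Clone DOODS and check out the commit that is known to build on the Nano.
git clone https://github.com/snowzach/doods.git
cd doods
git checkout 2a850c9
```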
@sakalauskas Hopefully the last question: I got it up and running using this config.yaml:
Looks like I may be running out of memory on the GPU?
It runs for about 25 seconds and then:
Is there anything in the config.yaml I can adjust for this? Thank you!
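The config.yaml itself was not captured above. For reference, a minimal sketch of what a DOODS TensorFlow detector entry typically looks like (stock DOODS schema assumed; the detector name, model, and label paths are placeholders):

```yaml
doods:
  detectors:
    - name: default                # placeholder detector name
      type: tensorflow
      modelFile: models/faster_rcnn_inception_v2_coco_2018_01_28.pb
      labelFile: models/coco_labels.txt
      numConcurrent: 1             # keep concurrency low on a memory-starved board
```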
@mloebl Actually I get this error too (but I have the 4GB RAM Jetson). If the machine is loaded with other tasks and you then start the Docker image, it can't obtain the GPU memory. For some reason, when started on boot, the TensorFlow GPU device is created correctly. So as long as the Docker image is started at startup, the GPU device should be created successfully - I have been running it for months with no issues:
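The startup log was not captured above. A sketch of one way to have the container start at boot (the image tag and port are placeholders; the NVIDIA container runtime on JetPack is assumed):

```yaml
# docker-compose.yml sketch: launch DOODS at boot so it claims GPU memory early.
version: "2.3"                           # 2.x schema so the "runtime" key is accepted
services:
  doods:
    image: MYUSERNAME/doods:jetsonnano   # placeholder image tag
    runtime: nvidia                      # NVIDIA container runtime on L4T/JetPack
    restart: unless-stopped              # come back up automatically after a reboot
    ports:
      - "8080:8080"                      # DOODS default port
```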
@sakalauskas Got it; this is the 2GB board, and even with GDM disabled I'm still getting only about 178MB free, which sounds like the issue. Checking on RAM usage, DOODS alone is about 1GB, not counting the rest of the services running, so that makes sense. Guessing even 178MB is still not enough for it, as it's failing to process anything. Thanks again for the help; maybe I'll look into the Coral units, as I can just plug one into my HA box.
@mloebl I think you should try adding swap; it should help a bit since DOODS eats a lot of RAM (currently the Ubuntu machine running DOODS uses 2.8G/3.87G RAM and 1.9G/5.96G swap).
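A sketch of one common way to add a swap file on the Nano (the 4G size is just an example):

```sh
# Create and enable a 4 GB swap file.
sudo fallocate -l 4G /swapfile
sudo chmod 600 /swapfile
sudo mkswap /swapfile
sudo swapon /swapfile
# Make it persistent across reboots.
echo '/swapfile none swap sw 0 0' | sudo tee -a /etc/fstab
```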
I have taken inspiration and written a C++ version of DOODS that runs natively on a Jetson Nano. It's currently a proof-of-concept, and is based on this fantastic project. It should run on a 2GB Nano:
https://github.com/RichardPar/JetsonCUDA_DOODS Regards, |
This is great! I managed to get DOODS up and running on my Jetson Nano using TensorFlow. I used @sakalauskas's Dockerfile from the first post and the detector.go file from a few posts down. I'm running JetPack 4.3 (which comes with CUDA 10.0), and I did the git pull from master: b2a1c53. Docker Compose enables a production-like object detection service for my Home Assistant server:
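The compose file itself was not captured above. For context, a sketch of the Home Assistant side that points at a DOODS instance (the URL, detector name, and camera entity are placeholders):

```yaml
# Home Assistant configuration.yaml sketch for the doods image_processing platform.
image_processing:
  - platform: doods
    url: "http://jetson.local:8080"   # placeholder address of the DOODS server
    detector: default                 # placeholder detector name from the DOODS config
    source:
      - entity_id: camera.front_door  # placeholder camera entity
```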
Just to save us all the steps, do you mind pushing your Docker image to Docker Hub?
Actually I stopped using DOODS, as I moved on to running Shinobi on my Jetson Nano, so I don't even have the Docker image left. And there have probably been some changes in DOODS since I compiled it.
@bgulla I moved away from DOODS as well and reinstalled Ubuntu on the Jetson. I found https://github.com/blakeblackshear/frigate to be quite good. Sadly there is no hardware acceleration for the Jetson yet.
It took me quite some time to add support for the Jetson Nano, so I thought I would share my progress. This isn't an ideal/complete solution; maybe someone could build upon this or reuse it. Using this Docker image, detection time decreased from ~4 seconds to ~1 second with the faster_rcnn_inception_v2_coco_2018_01_28 model, as the processing was offloaded to the GPU.
The image is based on nvcr.io/nvidia/l4t-base, and I just used pre-built binaries where Bazel is needed.
For DOODS to compile, doods/detector/detector.go needs to be modified and the references to TensorFlow Lite removed/commented out before building the image. TensorFlow Lite is not really needed on the Jetson Nano, since we can use TensorFlow directly, so I did not bother adding TensorFlow Lite support.
Here is the Dockerfile to build the image. To build it, place Dockerfile.jetsonnano in the doods repo and run:
docker build -t MYUSERNAME/doods:jetsonnano -f Dockerfile.jetsonnano .
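Then a sketch of running the resulting image (the NVIDIA runtime flag on JetPack is assumed; the tag and port are placeholders):

```sh
# Run DOODS in the background with GPU access and automatic restart.
docker run -d --runtime nvidia --restart unless-stopped \
  -p 8080:8080 MYUSERNAME/doods:jetsonnano
```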