Jetson Nano ML install scripts, automated optimization of robotics detection models, and filter-based tracking of detections


Installing and setting up the new Nvidia Jetson Nano was surprisingly time-consuming and unintuitive: protobuf version conflicts, Tensorflow version mismatches, recompiling OpenCV with GPU support, getting models to run, optimizing them, and general chaos in the ranks.

This repository is my set of install tools to get the Nano up and running with a convincing and scalable demo for robot-centric uses; in particular, detection and semantic segmentation models capable of running in real time on a $100 robot. By convincing, I mean not using Nvidia's two-day startup model that you just compile and have magically working without any control. This setup gives you full control over which model to run and when.

In the repository, you'll find a few key things:

Install of dependencies

Getting the right versions of Tensorflow, protobuf, and friends to play well together on the Jetson Nano platform was a big hassle. Hopefully these scripts will help you.

This can be accomplished by running ./ in the root of this repository, which is also where all the models will be installed and linked.

Download of pretrained models for real-time detection

Scripts to automatically download pretrained Tensorflow inference graphs and checkpoints, then optimize them with TensorRT (which I found to be a must-have just to run these models on the Nano).

There's also nothing here that prevents you from using your own Tensorflow model, optimizing it with TensorRT via the same scripts, and deploying it as described below. I have retrained a model from the zoo and followed these same instructions with equal success (I really needed that additional glass-of-beer class for... reasons).

Execution of live detection with an attached MIPI-based camera

This will run the Argus streamer for a MIPI camera compatible with the Jetson Nano. A number of compatible cameras are available; I happen to use the Raspberry Pi v2.1 camera, simply because I had one around from another project and because it's shockingly high resolution for a $20 toy.
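On the Nano, a MIPI camera is typically opened through a GStreamer pipeline string. As a rough sketch (the actual pipeline used in src/ may differ; the element names assume JetPack 4.2+, where the Argus source element is nvarguscamerasrc, while older releases used nvcamerasrc):

```python
def gst_pipeline(width=1280, height=720, fps=30, flip_method=0):
    """Build a GStreamer capture string for a MIPI camera on the Nano.

    Sketch only: the Pi v2.1 camera supports up to 3280x2464, but
    720p keeps real-time detection feasible on the Nano.
    """
    return (
        "nvarguscamerasrc ! "
        "video/x-raw(memory:NVMM), width={w}, height={h}, "
        "format=NV12, framerate={f}/1 ! "
        "nvvidconv flip-method={flip} ! "
        "video/x-raw, format=BGRx ! videoconvert ! "
        "video/x-raw, format=BGR ! appsink"
    ).format(w=width, h=height, f=fps, flip=flip_method)
```

With an OpenCV build that includes GStreamer support, a string like this can be handed to cv2.VideoCapture(gst_pipeline(), cv2.CAP_GSTREAMER) to pull BGR frames.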

Filter-based tracking of detections

This uses a constant-velocity Kalman filter to track detections in the image frame and report stabilized detections based on the centroid. It handles two things. First, it deals with irregular detections, so that a few missed frames don't make an upstream application think a person disappeared into thin air for 57 ms. Second, it acts as a smoother: if an individual frame detects something erroneous (like an airplane rather than my ear), that single-frame detection isn't introduced into the system. For robotics applications it would be pretty bad if we saw an airplane in my living room.
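The idea above can be sketched as a minimal constant-velocity Kalman filter over the detection centroid. This is an illustration, not the code in src/: the state is [x, y, vx, vy], the measurement is the centroid (x, y), and the noise magnitudes are made-up placeholders you would tune for your camera and frame rate.

```python
def _matmul(a, b):
    """Multiply two matrices given as lists of lists."""
    return [[sum(a[i][k] * b[k][j] for k in range(len(b)))
             for j in range(len(b[0]))] for i in range(len(a))]

def _madd(a, b):
    return [[a[i][j] + b[i][j] for j in range(len(a[0]))]
            for i in range(len(a))]

def _t(a):
    return [list(row) for row in zip(*a)]

class CentroidKF:
    """Constant-velocity Kalman filter for one tracked centroid."""

    def __init__(self, x, y, dt=1.0):
        self.x = [[x], [y], [0.0], [0.0]]          # state estimate
        self.P = [[10.0 if i == j else 0.0 for j in range(4)]
                  for i in range(4)]               # state covariance
        self.F = [[1, 0, dt, 0],                   # constant-velocity
                  [0, 1, 0, dt],                   # transition model
                  [0, 0, 1, 0],
                  [0, 0, 0, 1]]
        self.H = [[1, 0, 0, 0],                    # we only observe
                  [0, 1, 0, 0]]                    # the centroid
        self.Q = [[0.01 if i == j else 0.0 for j in range(4)]
                  for i in range(4)]               # process noise (placeholder)
        self.R = [[1.0, 0.0], [0.0, 1.0]]          # measurement noise (placeholder)

    def predict(self):
        """Advance the state one frame; call this even on missed frames."""
        self.x = _matmul(self.F, self.x)
        self.P = _madd(_matmul(_matmul(self.F, self.P), _t(self.F)), self.Q)
        return self.x[0][0], self.x[1][0]

    def update(self, zx, zy):
        """Fold a measured centroid (zx, zy) into the estimate."""
        y = [[zx - self.x[0][0]], [zy - self.x[1][0]]]   # innovation
        S = _madd(_matmul(_matmul(self.H, self.P), _t(self.H)), self.R)
        det = S[0][0] * S[1][1] - S[0][1] * S[1][0]
        S_inv = [[S[1][1] / det, -S[0][1] / det],
                 [-S[1][0] / det, S[0][0] / det]]
        K = _matmul(_matmul(self.P, _t(self.H)), S_inv)  # Kalman gain
        self.x = _madd(self.x, _matmul(K, y))
        KH = _matmul(K, self.H)
        I_KH = [[(1.0 if i == j else 0.0) - KH[i][j] for j in range(4)]
                for i in range(4)]
        self.P = _matmul(I_KH, self.P)
        return self.x[0][0], self.x[1][0]
```

A track that misses a detection for a few frames keeps calling predict() and coasts on its velocity estimate, which is exactly what papers over the 57 ms dropout; a one-frame airplane only nudges the smoothed estimate instead of replacing it.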

Walk-through

The main live object detection program will run with no flags in a debug mode, printing statements about detections found and showing a visualization. The visualization includes bounding boxes around each object, where the line thickness is proportional to confidence. Example use, running an SSD Mobilenet v1 TRT-optimized model in debug mode:

python3 ssd_mobilenet_v1_trt_graph.pb True

The download script will be your pretrained-model savior. It fetches unoptimized pretrained models from the zoo and places them in the ./data directory alongside the MS COCO labels. After the download, it runs the TensorRT optimization over them and leaves you with a file named [model]_trt_graph.pb for use. Example use: [model]
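For reference, the confidence-proportional line thickness mentioned in the walk-through can be as simple as a linear mapping. This helper is hypothetical (the actual drawing code lives in src/), but it shows the idea:

```python
def box_thickness(confidence, max_thickness=6):
    """Map a detection confidence in [0, 1] to an integer line
    thickness in [1, max_thickness] for drawing bounding boxes.

    Hypothetical helper; max_thickness is an illustrative choice.
    """
    c = min(max(float(confidence), 0.0), 1.0)  # clamp stray scores
    return max(1, int(round(c * max_thickness)))
```

A high-confidence person gets a bold box; a marginal 10% detection gets a hairline one, which makes the debug visualization readable at a glance.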

Model options include:

  • ssd_mobilenet_v1_coco
  • ssd_mobilenet_v2_coco
  • ssd_inception_v2_coco

There are other models available, but given that the use case of this project is real-time detection in robotics, these are your main viable options. I make no warranty for other model choices.
