
NVIDIA’s JetPack 4.6 capabilities and how to use them with edgeIQ, the alwaysAI Computer Vision framework.

jetpack-46-hacky-hour

This repo is a companion to alwaysAI’s March 30th, 2022 Hacky Hour, which covers NVIDIA’s JetPack 4.6 capabilities and how to use them with edgeIQ, the alwaysAI Computer Vision framework. The repo provides a Python script, TensorRT models, and a DLA (Deep Learning Accelerator) monitoring tool that you can use in your projects.

The csi-camera script can be used as a starter application to jump-start your computer vision project. It is an object detection script that processes a video stream from an IMX477 camera (https://www.arducam.com/arducam-imx477-jetson-cameras/) and uses a CUDA backend to perform inferencing. The machine learning model used by the script is ssd_mobilenet_v1_coco_2018_01_28, a TensorFlow model created by Google using the COCO dataset. The model can be found in the alwaysAI Model Catalog (https://console.alwaysai.co/model-catalog?model=alwaysai/ssd_mobilenet_v1_coco_2018_01_28). The Model Catalog is a set of open source models that you can use to prototype your project until you are ready to train your own production model.

In the models folder of the repo we provide two TensorRT models, based on ssd_mobilenet_v1_coco_2018_01_28, that can be used with your Jetson Xavier NX GPU and DLA. TensorRT models allow your project to execute at the speeds needed for real-time applications. If you decide to upload the TensorRT models from the repo to your private catalog, the input stats needed for the model are: purpose = ObjectDetection, framework = TensorRT, architecture = SSD, size = 300x300, scalefactor = 1, crop = false.

In the tools folder we have a bash script you can execute on your Xavier NX to monitor usage of the DLAs; jtop (https://github.com/rbonghi/jetson_stats/wiki/jtop), the popular performance monitoring tool for Jetson devices, is not capable of measuring DLA usage. To run the bash script from the command line, enter the following command: watch -n 0.1 bash check_dla_usage.sh
The code in this repo was only tested on the Xavier NX; modifications may be necessary to run it on other Jetson devices like the Nano. The TensorRT models will only work on a Xavier NX: TensorRT models are device-specific by design, to take advantage of the hardware.
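For reference, CSI cameras on Jetson devices are typically read through a GStreamer pipeline built around the nvarguscamerasrc element. The helper below is a hedged sketch of such a pipeline string for an IMX477-style camera; the parameter names and default resolutions are illustrative and not taken from the repo’s script.

```python
def csi_pipeline(capture_width=1920, capture_height=1080,
                 display_width=960, display_height=540,
                 framerate=30, flip_method=0):
    """Build a GStreamer pipeline string for a Jetson CSI camera
    (e.g. an IMX477). All defaults are illustrative."""
    return (
        f"nvarguscamerasrc ! "
        f"video/x-raw(memory:NVMM), width={capture_width}, "
        f"height={capture_height}, framerate={framerate}/1 ! "
        f"nvvidconv flip-method={flip_method} ! "
        f"video/x-raw, width={display_width}, height={display_height}, "
        f"format=BGRx ! videoconvert ! "
        f"video/x-raw, format=BGR ! appsink"
    )

print(csi_pipeline())
```

A string like this can be passed to a GStreamer-capable reader, for example OpenCV’s cv2.VideoCapture(pipeline, cv2.CAP_GSTREAMER), to obtain BGR frames for inference.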

Repo Programs

Folder      Description
csi-camera  Program that uses the CSI IMX477 camera to perform object detection
models      TensorRT models to use with your project
tools       Bash script used to monitor DLA usage
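When uploading one of the TensorRT models to a private catalog, the input stats described above form a small metadata record. The sketch below expresses them in Python; the dict and its field names simply mirror the form fields listed earlier and are illustrative, not an alwaysAI API.

```python
# Illustrative metadata record for uploading a TensorRT model to a
# private catalog; field names mirror the input stats described above.
MODEL_STATS = {
    "purpose": "ObjectDetection",
    "framework": "TensorRT",
    "architecture": "SSD",
    "size": "300x300",   # network input resolution (width x height)
    "scalefactor": 1,    # no pixel scaling applied
    "crop": False,       # frames are resized, not cropped
}

# The "size" field implies every frame is resized to 300x300 before inference.
width, height = (int(v) for v in MODEL_STATS["size"].split("x"))
print(width, height)
```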

Setup

Running the csi-camera app requires an alwaysAI account. Head to the Sign up page if you don't have an account yet, then follow the instructions to install the alwaysAI tools on your development machine.

Next, create an empty project to be used with this app. When you clone this repo, you can run aai app configure within the repo directory and your new project will appear in the list.

Usage

Once the alwaysAI tools are installed on your development machine (or edge device if developing directly on it) you can run the following CLI commands:

To set up the target device & install path:

    aai app configure

To install the app to your target:

    aai app install

To start the app:

    aai app start
