
Deploying Watsor to Jetson Nano


Jetson Nano is a small computer with a GPU on board. The infrastructure for running neural networks to detect objects is available out of the box, which makes the device well suited for running Watsor.

The guide below explains how to deploy Watsor as a Python module. A Docker image for Jetson devices is also available.

Installation

  1. Install system dependencies.
sudo apt-get update && sudo apt-get install -y --no-install-recommends \
    python3-pip \
    python3-venv \
    libgeos-c1v5 libgeos-3.6.2
  2. Create a Python virtual environment and activate it. The new environment will inherit the system modules installed as part of JetPack.
python3 -m venv venv --system-site-packages

source venv/bin/activate
  3. Install Python tools.
python3 -m pip install --upgrade \
    pip \
    setuptools \
    wheel
  4. Install application dependencies.
export CPATH=$CPATH:/usr/local/cuda/targets/aarch64-linux/include
export LIBRARY_PATH=$LIBRARY_PATH:/usr/local/cuda/targets/aarch64-linux/lib

python3 -m pip install \
    PyYaml \
    cerberus \
    shapely \
    werkzeug \
    paho-mqtt \
    pycuda \
    https://github.com/google-coral/pycoral/releases/download/release-frogfish/tflite_runtime-2.5.0-cp36-cp36m-linux_aarch64.whl
  5. Install Watsor.
python3 -m pip install --no-deps \
    watsor
  6. Download the object detection model.
mkdir model \
    && wget -q https://github.com/asmirnou/todus/raw/models/ssd_mobilenet_v2_coco_2018_03_29.uff -O model/gpu.uff
  7. Install FFmpeg with accelerated decode. The Nano's GPU will be used for neural network inference, while video decoding will be performed through the L4T Multimedia API on dedicated video decoder hardware, freeing up the GPU. A few optional sanity checks for the whole installation follow below.
echo "deb https://repo.download.nvidia.com/jetson/ffmpeg main main" |  sudo tee -a /etc/apt/sources.list
sudo apt update
sudo apt install ffmpeg
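
Before moving on, a few optional sanity checks can confirm the key pieces are in place. These are a minimal sketch, assuming the virtual environment is still activated, that JetPack's TensorRT Python bindings are installed system-wide (visible in the venv thanks to --system-site-packages), and that FFmpeg was installed from the NVIDIA repository above:

# TensorRT and PyCUDA should import and see the Nano's GPU
python3 -c "import tensorrt; print('TensorRT', tensorrt.__version__)"
python3 -c "import pycuda.driver as cuda; cuda.init(); print(cuda.Device(0).name())"

# The downloaded model should be in place
ls -lh model/gpu.uff

# The NVIDIA build of FFmpeg should list the hardware decoders
ffmpeg -hide_banner -decoders | grep nvv4l2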

Running

Prepare the configuration file. When configuring the FFmpeg decoder, enable hardware acceleration as follows:

H264
- -c:v
-  h264_nvv4l2dec
- -i
- ...
H265
- -c:v
-  hevc_nvv4l2dec
- -i
- ...
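
For context, here is a minimal sketch of how the H.264 variant fits into config.yaml. It assumes the layout of Watsor's example configuration, where the decoder arguments are listed under ffmpeg: decoder: and the camera input is appended after the -i argument; the surrounding arguments (-hide_banner, rawvideo, rgb24) are illustrative and should be checked against your own configuration.

ffmpeg:
  decoder:
    - -hide_banner
    - -loglevel
    -  error
    - -c:v                 # hardware-accelerated H.264 decoder (use hevc_nvv4l2dec for H.265)
    -  h264_nvv4l2dec
    - -i                   # the camera input is expected to follow the '-i' argument
    - -f
    -  rawvideo
    - -pix_fmt
    -  rgb24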

The GPU supports half precision (also known as FP16), so we enable this mode at the first run to boost performance.

export TRT_FLOAT_PRECISION=16

python3 -m watsor.main_for_gpu --config config/config.yaml --model-path model/
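
If the HTTP server is enabled in config.yaml, a quick request confirms that Watsor is serving its web interface. This assumes port 8080, as used in the example configuration; adjust it to match yours:

curl -I http://localhost:8080/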

Running as a service

Create a new user (these commands require root privileges), making sure to use the correct full path to the installation directory:

addgroup -gid 1001 watsor
adduser -uid 1001 -gid 1001 -gecos watsor -home /opt/watsor --no-create-home --disabled-password watsor
usermod -a --groups video,plugdev watsor
chown -R watsor /opt/watsor

Create a file for a new service:

 sudo vi /etc/systemd/system/watsor@watsor.service

Add the following content, making sure to use the correct full path to the installation directory.

[Unit]
Description=Watsor
After=network-online.target
[Service]
Type=simple
User=%i
WorkingDirectory=/opt/watsor
ExecStart=/opt/watsor/venv/bin/python -m watsor.main_for_gpu --config config/watsor.yaml
[Install]
WantedBy=multi-user.target

Activate the systemd service:

sudo systemctl daemon-reload
sudo systemctl enable watsor@watsor.service --now

Now Watsor should be up and running.
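
To verify the service and follow its output:

sudo systemctl status watsor@watsor.service
sudo journalctl -u watsor@watsor.service -f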
