Edge AI TIOVX Apps reference guide

Edge AI TIOVX Apps demonstrates an interplay of TIOVX kernels and V4L2 elements to define typical capture/decode-inference-display/encode pipelines with zero buffer copies between the processing elements. Using intuitive user-config YAML files, the application launches intricate dataflows with multiple cameras and deep-learning network instances with ease. It provides seamless access to the device's underlying vision and deep-learning accelerators and stitches multiple processing elements together concurrently to meet high-throughput, real-time performance needs.

Edge AI TIOVX Apps imagines a new way of writing an OpenVX application by incorporating some of the best concepts from GStreamer. It makes it simpler to connect compatible nodes, exchange input/output data via pads, automatically create and manage shared buffer pools, and connect custom processing blocks as modules in a structured way.

The sections below introduce some of these concepts:

  1. Graph Object
  2. Node Object
  3. Pads
  4. Buffers and Buffer pools
  5. Modules

Graph Object

The Graph object is a wrapper around an OpenVX graph. Along with the OpenVX graph, it also stores additional information such as the list of nodes in the graph, the graph parameter indices, the OpenVX context, and so on.
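
For illustration, a minimal sketch of what such a wrapper could hold is shown below; all names, fields, and limits are hypothetical and do not reflect the actual structures in the repository.

    #include <VX/vx.h>

    /* Hypothetical sketch of a graph-object wrapper; field names and limits
     * are illustrative only and differ from the real edgeai-tiovx-apps code. */
    #define MAX_NODES        32
    #define MAX_GRAPH_PARAMS 16

    typedef struct {
        vx_context context;                        /* OpenVX context owning the graph  */
        vx_graph   graph;                          /* the underlying OpenVX graph      */
        void      *nodes[MAX_NODES];               /* node objects added to this graph */
        vx_uint32  num_nodes;
        vx_uint32  graph_params[MAX_GRAPH_PARAMS]; /* indices of exposed parameters    */
        vx_uint32  num_graph_params;
    } GraphObjSketch;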

Node Object

The Node object is a wrapper around an OpenVX node. Along with the OpenVX node, it has a sink-pad for each input and a source-pad for each output. Two node objects are connected via pads.
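
A similar hypothetical sketch of the node-object wrapper, again only an illustration of the idea rather than the real structure:

    #include <VX/vx.h>

    /* Hypothetical node-object wrapper, for illustration only. */
    #define MAX_PADS 16

    typedef struct {
        vx_node   node;                /* the underlying OpenVX node          */
        void     *sink_pads[MAX_PADS]; /* one sink-pad per input parameter    */
        vx_uint32 num_sink_pads;
        void     *src_pads[MAX_PADS];  /* one source-pad per output parameter */
        vx_uint32 num_src_pads;
    } NodeObjSketch;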

Pads

Pads represent the inputs and outputs of a node. There are two types of pads:

  • sink-pad - for input parameters
  • source-pad - for output parameters

A pad manages things like the node parameter index, the data objects for the corresponding node parameter, the number of channels, and so on. A source-pad can be linked to a sink-pad to connect two nodes, or left floating to expose it as a graph parameter.
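
The sketch below illustrates the idea of pads and linking; the PadSketch structure and link_pads() helper are hypothetical stand-ins, not the actual API of this layer.

    #include <VX/vx.h>

    /* Hypothetical pad structure and link helper, sketched for illustration;
     * the real edgeai-tiovx-apps API uses its own names and fields. */
    typedef enum { PAD_SINK, PAD_SRC } PadDirection;

    typedef struct Pad {
        PadDirection direction;    /* sink (input) or source (output)            */
        vx_uint32    param_index;  /* node parameter index this pad maps to      */
        vx_reference exemplar;     /* data object describing the pad's data      */
        vx_uint32    num_channels; /* number of channels carried by this pad     */
        struct Pad  *peer;         /* linked pad, or NULL if the pad is floating */
    } PadSketch;

    /* Connect a source-pad to a sink-pad; a source-pad left with peer == NULL
     * stays floating and would be exposed as a graph parameter instead. */
    vx_status link_pads(PadSketch *src, PadSketch *sink)
    {
        if ((src->direction != PAD_SRC) || (sink->direction != PAD_SINK))
            return VX_FAILURE;
        src->peer  = sink;
        sink->peer = src;
        return VX_SUCCESS;
    }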

Buffers and Buffer pools

The application needs to enqueue/dequeue buffers for all pads exposed as graph parameters. To make this easier, a pool of buffers is allocated for every floating pad. The application can acquire buffers from these buffer pools, use them, and release them back when done.
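
A minimal sketch of this per-frame cycle is shown below. The small array-based pool is purely illustrative; only the vxGraphParameterEnqueueReadyRef()/vxGraphParameterDequeueDoneRef() calls are the standard OpenVX graph-parameter pipelining APIs.

    #include <stddef.h>
    #include <VX/vx.h>
    #include <VX/vx_khr_pipelining.h>

    /* Sketch of per-frame buffer handling for one floating pad exposed as
     * graph parameter 0. The pool below is illustrative only. */
    #define POOL_SIZE 4

    typedef struct {
        vx_reference bufs[POOL_SIZE];   /* pre-allocated data objects        */
        vx_bool      in_use[POOL_SIZE]; /* vx_true_e while the graph owns it */
    } BufPoolSketch;

    static vx_reference pool_acquire(BufPoolSketch *pool)
    {
        for (int i = 0; i < POOL_SIZE; i++) {
            if (pool->in_use[i] == vx_false_e) {
                pool->in_use[i] = vx_true_e;
                return pool->bufs[i];
            }
        }
        return NULL;                    /* no free buffer available right now */
    }

    static void pool_release(BufPoolSketch *pool, vx_reference ref)
    {
        for (int i = 0; i < POOL_SIZE; i++)
            if (pool->bufs[i] == ref)
                pool->in_use[i] = vx_false_e;
    }

    static vx_status process_one_frame(vx_graph graph, BufPoolSketch *pool)
    {
        vx_reference in_ref = pool_acquire(pool);    /* take a free buffer */
        vx_reference done_ref;
        vx_uint32    num_done = 0;

        if (in_ref == NULL)
            return VX_FAILURE;

        /* Hand the buffer to the graph at parameter index 0, then wait for a
         * processed buffer to come back out. */
        vx_status status = vxGraphParameterEnqueueReadyRef(graph, 0, &in_ref, 1);
        if (status == VX_SUCCESS)
            status = vxGraphParameterDequeueDoneRef(graph, 0, &done_ref, 1, &num_done);

        if (num_done == 1u)
            pool_release(pool, done_ref);            /* return it to the pool */
        return status;
    }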

Modules

Each module is a wrapper around an OpenVX kernel, which encapsulates the code for

  1. Creating data objects for inputs and outputs
  2. Initializing pads based on the number of inputs and outputs
  3. Creating the OpenVX node

and exposes a simple config data structure to the user.
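
As an illustration of the idea, a hypothetical config for a scaler-style module might look like the sketch below; the real module configs in the repository have their own names and fields.

    #include <VX/vx.h>

    /* Purely illustrative config for a hypothetical scaler-style module: the
     * user fills a plain struct, and the module creates the data objects,
     * pads and the OpenVX node from it. Not the real edgeai-tiovx-apps API. */
    typedef struct {
        vx_uint32   input_width;
        vx_uint32   input_height;
        vx_uint32   output_width;
        vx_uint32   output_height;
        vx_uint32   num_channels; /* number of camera channels handled by the node */
        vx_df_image color_format; /* e.g. VX_DF_IMAGE_NV12                          */
    } ScalerCfgSketch;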

TEE Module

A source pad can connect to only one sink pad. Because of this limitation, one node output cannot be fed to multiple inputs directly. To enable this, a special module called TEE is introduced, which can replicate an input source pad into n source pads, as depicted below.

[Figure: TEE module replicating one source pad into n source pads]
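
In terms of the hypothetical pad sketch above, fanning one output out through a TEE could be expressed as follows; all names are again illustrative only.

    #include <VX/vx.h>

    /* Conceptual sketch of fanning out one node output through a TEE module.
     * PadSketch and link_pads() refer to the hypothetical pad sketch shown
     * earlier; none of these names belong to the real edgeai-tiovx-apps API. */
    typedef struct Pad PadSketch;                          /* from the pad sketch */
    vx_status link_pads(PadSketch *src, PadSketch *sink);  /* from the pad sketch */

    static vx_status fan_out_with_tee(PadSketch *producer_src,   /* single producer output */
                                      PadSketch *tee_sink,       /* TEE input              */
                                      PadSketch *tee_srcs[],     /* n replicated outputs   */
                                      PadSketch *consumer_sinks[],
                                      vx_uint32  num_outputs)
    {
        /* The producer feeds the TEE once ... */
        vx_status status = link_pads(producer_src, tee_sink);

        /* ... and each replicated TEE source pad feeds a different consumer. */
        for (vx_uint32 i = 0; (status == VX_SUCCESS) && (i < num_outputs); i++)
            status = link_pads(tee_srcs[i], consumer_sinks[i]);

        return status;
    }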

Application Flow

The figure below depicts the code flow of an OpenVX application written using this layer.

[Figure: application code flow]
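
A skeleton of this flow, assuming a single floating pad, is sketched below. The my_* helpers are placeholders for the module/graph APIs provided by this layer (see the API docs for the real names); only the vx* calls are standard OpenVX.

    #include <VX/vx.h>
    #include <VX/vx_khr_pipelining.h>

    /* Skeleton of the typical application flow on top of this layer.
     * Everything prefixed with my_ is a hypothetical placeholder, not the
     * real edgeai-tiovx-apps API; the vx* calls are standard OpenVX. */
    extern void        *my_create_graph_obj(vx_context context);
    extern vx_status    my_add_and_link_nodes(void *graph_obj);
    extern vx_graph     my_get_vx_graph(void *graph_obj);
    extern vx_reference my_acquire_buf(void *graph_obj, vx_uint32 pad_idx);
    extern void         my_release_buf(void *graph_obj, vx_uint32 pad_idx, vx_reference ref);
    extern void         my_delete_graph_obj(void *graph_obj);

    int main(void)
    {
        vx_context context = vxCreateContext();
        void      *gobj    = my_create_graph_obj(context);

        /* 1. Create node objects from module configs and link their pads. */
        my_add_and_link_nodes(gobj);

        /* 2. Verify the resulting OpenVX graph. */
        vx_graph graph = my_get_vx_graph(gobj);
        vxVerifyGraph(graph);

        /* 3. Run: for each frame, acquire a buffer for the floating pad,
         *    enqueue it, dequeue the processed buffer and release it. */
        for (int frame = 0; frame < 300; frame++) {
            vx_reference ref = my_acquire_buf(gobj, 0);
            vxGraphParameterEnqueueReadyRef(graph, 0, &ref, 1);

            vx_reference done;
            vx_uint32    num_done;
            vxGraphParameterDequeueDoneRef(graph, 0, &done, 1, &num_done);
            my_release_buf(gobj, 0, done);
        }

        /* 4. Tear down. */
        my_delete_graph_obj(gobj);
        vxReleaseContext(&context);
        return 0;
    }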

Non-TIOVX Modules

We also support some non-OpenVX modules for interacting with frameworks like V4L2 and DRM, optimally using the DMA-BUF feature of Linux. The figure below depicts the interaction between an OpenVX graph and the V4L2 framework via buffers.

[Figure: buffer exchange between an OpenVX graph and the V4L2 framework]
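
A minimal sketch of the V4L2 side of this exchange is shown below, assuming a single-planar capture device; the OpenVX-side import of the same DMA-BUF fd and all error handling are omitted. The ioctls used are standard V4L2.

    #include <string.h>
    #include <sys/ioctl.h>
    #include <linux/videodev2.h>

    /* Queue an externally allocated DMA-BUF (e.g. the backing store of an
     * OpenVX image) into a V4L2 capture device, so the driver writes directly
     * into that buffer with zero copies. Single-planar capture is assumed. */
    static int queue_dmabuf(int v4l2_fd, int dmabuf_fd, unsigned int index)
    {
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_DMABUF; /* buffer memory comes from a DMA-BUF fd        */
        buf.index  = index;
        buf.m.fd   = dmabuf_fd;          /* fd exported by the OpenVX buffer's allocator */

        return ioctl(v4l2_fd, VIDIOC_QBUF, &buf);
    }

    /* Block until the driver has filled one of the queued buffers and report
     * which one, so the matching OpenVX buffer can be enqueued into the graph. */
    static int dequeue_filled_buffer(int v4l2_fd, unsigned int *index_out)
    {
        struct v4l2_buffer buf;

        memset(&buf, 0, sizeof(buf));
        buf.type   = V4L2_BUF_TYPE_VIDEO_CAPTURE;
        buf.memory = V4L2_MEMORY_DMABUF;

        int ret = ioctl(v4l2_fd, VIDIOC_DQBUF, &buf);
        if (ret == 0)
            *index_out = buf.index;
        return ret;
    }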

Core APIs

Documentation of the APIs exposed to users by this layer can be found in the Edge AI TIOVX Apps API docs.

Directory Structure

.
├── apps
│       Contains OpenVX-based deep learning applications with different input and output options
│       like V4L2 capture, TIOVX capture, H.264 decode, etc., to display, encode to file, etc.
├── cmake
│       Build files
├── configs
│       YAML-based config files for the apps
│       Refer to the template config file for details on how to modify or write new configs
├── modules
│       A thin framework that abstracts some OpenVX details to make the application code
│       simpler. These modules can be used directly to write custom pipelines.
├── tests
│       Some simple tests for modules that also serve as examples of how to use
│       modules to write custom pipelines
└── utils
        Some utility functions used by apps and modules

Steps to compile

  1. Building on the target

    root@j7-evm:/opt# cd edgeai-tiovx-apps
    root@j7-evm:/opt/edgeai-tiovx-apps# export SOC=(j721e or j721s2 or j784s4 or j722s or am62a)
    root@j7-evm:/opt/edgeai-tiovx-apps# mkdir build
    root@j7-evm:/opt/edgeai-tiovx-apps# cd build
    root@j7-evm:/opt/edgeai-tiovx-apps/build# cmake ..
    root@j7-evm:/opt/edgeai-tiovx-apps/build# make -j2
  2. Please use the Edge AI App Stack for cross-compilation

Steps to run

  1. Go to the edgeai-tiovx-apps directory on the target under /opt

    root@j7-evm:/opt# cd edgeai-tiovx-apps
  2. Run the app

    root@j7-evm:/opt/edgeai-tiovx-apps# ./bin/Release/edgeai-tiovx-apps-main configs/linux/object_detection.yaml

Features Supported

  • RTOS Capture Display
  • V4L2 Capture, decode, encode
  • Linux Display using DRM
  • End-to-end multi-channel AI pipelines
  • SDE and DOF pipelines

Upcoming Features

  • OpenMAX integration for codec on QNX
  • OpenGL integration for GPU