Release 2023.0
Intel® Deep Learning Streamer Pipeline Framework Release 2023.0
Intel® Deep Learning Streamer (Intel® DL Streamer) Pipeline Framework is a streaming media analytics framework, based on the GStreamer* multimedia framework, for creating complex media analytics pipelines. It ensures pipeline interoperability and provides optimized media and inference operations, using the Intel® Distribution of OpenVINO™ Toolkit Inference Engine backend, across Intel® architecture CPUs and integrated GPUs.
This release includes Intel® DL Streamer Pipeline Framework elements that enable video and audio analytics capabilities (e.g., object detection, classification, audio event detection), and other elements to build end-to-end optimized pipelines in the GStreamer* framework.
The complete solution leverages:
- Open source GStreamer* framework for pipeline management
- GStreamer* plugins for input and output such as media files and real-time streaming from camera or network
- Video decode and encode plugins, either CPU-optimized plugins or GPU-accelerated plugins based on VAAPI
- Deep learning models converted from training frameworks such as TensorFlow* and Caffe*, available from the Open Model Zoo (OMZ)
- The following elements in the Pipeline Framework repository:
Element | Description |
---|---|
gvadetect | Performs object detection on a full frame or region of interest (ROI) using object detection models such as YOLOv4, MobileNet SSD, and Faster R-CNN. Outputs the ROIs for detected objects. |
gvaclassify | Performs object classification. Accepts the ROI as an input and outputs classification results with the ROI metadata. |
gvainference | Runs deep learning inference on a full-frame or ROI using any model with an RGB or BGR input. |
gvaaudiodetect | Performs audio event detection using the AclNet model. |
gvatrack | Performs object tracking using zero-term or imageless tracking algorithms. Assigns unique object IDs to the tracked objects. |
gvametaaggregate | Aggregates inference results from multiple pipeline branches. |
gvametaconvert | Converts the metadata structure to the JSON format. |
gvametapublish | Publishes the JSON metadata to MQTT or Kafka message brokers or files. |
gvapython | Provides a callback to execute user-defined Python functions on every frame. Can be used for metadata conversion, inference post-processing, and other tasks. |
gvawatermark | Overlays the metadata on the video frame to visualize the inference results. |
gvafpscounter | Measures frames per second across multiple streams in a single process. |
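As an illustration of how these elements compose, a minimal detection pipeline might look like the following gst-launch-1.0 sketch. This is a hypothetical example: the input file and OpenVINO™ IR model path are placeholders, not part of this release.

```shell
# Hypothetical sketch: decode a file, detect objects, overlay results,
# publish JSON metadata to a file, and report FPS. Paths are placeholders.
gst-launch-1.0 \
  filesrc location=input.mp4 ! decodebin ! \
  gvadetect model=/path/to/model.xml device=CPU ! \
  gvawatermark ! videoconvert ! \
  gvametaconvert format=json ! \
  gvametapublish method=file file-path=out.json ! \
  gvafpscounter ! fakesink sync=false
```

Elements such as gvaclassify or gvatrack can be inserted after gvadetect in the same way, since they consume the ROI metadata that gvadetect attaches to each frame.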
For details on supported platforms, please refer to the System Requirements section.
For installing Pipeline Framework from prebuilt binaries or Docker*, or for building the binaries from source, please refer to the Intel® DL Streamer Pipeline Framework installation guide.
New in this Release
Title | High-level description |
---|---|
Compatibility with OpenVINO™ Toolkit 2023.0 | Pipeline Framework has been updated to use the 2023.0.0 version of the OpenVINO™ Toolkit |
Intel® Data Center GPU Flex Series PV support | Validated on Intel® Data Center GPU Flex Series 140 and 170 with pipelines/models/videos from the Intel® DL Streamer Pipeline Zoo, Pipeline Zoo Models, and Pipeline Zoo Media repositories. Tested with the latest GPU Linux release (https://dgpu-docs.intel.com/releases/production_682.14_20230804.html) |
Updated to FFmpeg 5.1.3 | Updated FFmpeg from 5.1 to 5.1.3 |
New media analytics model support | Added support for the DeepSORT object tracking model |
Changed in this Release
Deprecation Notices
- Ubuntu 20.04 is no longer actively supported.
- See the full list of currently deprecated properties in this table
- YOLOv2 is no longer a supported model
Known Issues
Issue | Issue Description |
---|---|
Intermittent accuracy fails with YOLOv5m and YOLOv5s | Object detection pipelines using YOLOv5m and YOLOv5s show intermittent inconsistency between runs |
VAAPI memory with decodebin | If you use decodebin together with the vaapi-surface-sharing preprocessing backend, set a caps filter with "video/x-raw(memory:VASurface)" after decodebin to avoid issues with pipeline initialization |
Artifacts on sycl_meta_overlay | Running inference results visualization on GPU via sycl_meta_overlay may produce partially drawn bounding boxes and labels |
Preview Architecture 2.0 samples | Preview Architecture 2.0 samples have known issues with inference results |
Memory growth with meta_overlay | Some combinations of meta_overlay and encoders can lead to memory growth |
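The decodebin workaround in the table above can be expressed as a caps filter inserted immediately after decodebin. This is a hedged sketch: the input file and model path are placeholders, and the pipeline assumes a GPU device with the vaapi-surface-sharing preprocessing backend.

```shell
# Hypothetical sketch: force VASurface memory caps after decodebin so that
# the vaapi-surface-sharing preprocessing backend initializes correctly.
gst-launch-1.0 \
  filesrc location=input.mp4 ! decodebin ! \
  "video/x-raw(memory:VASurface)" ! \
  gvadetect model=/path/to/model.xml device=GPU \
            pre-process-backend=vaapi-surface-sharing ! \
  fakesink sync=false
```

Without the caps filter, decodebin may negotiate system-memory output, which conflicts with the surface-sharing backend during pipeline initialization.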
Fixed issues
Issue # | Issue Description | Fix | Affected platforms |
---|---|---|---|
336 | YOLOv5 models trained with rectangular (unequal width and height) input resolutions were not handled correctly | Fixed layout handling in YOLO post-processing. | All |
System Requirements
Please refer to Intel® DL Streamer documentation.
Installation Notes
There are several installation options for Pipeline Framework:
- Install Pipeline Framework from pre-built Debian packages
- Pull and run Docker image
- Build Pipeline Framework from source code
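For the Docker option, the typical commands look like the following sketch. The image name and tag are assumptions for illustration; check the installation guide for the exact image to use.

```shell
# Pull the Intel® DL Streamer image (name/tag assumed; see the install guide).
docker pull intel/dlstreamer:latest

# Start an interactive container, mounting a local media folder (placeholder path).
docker run -it --rm -v /path/to/videos:/videos intel/dlstreamer:latest
```

Inside the container, gst-launch-1.0 pipelines with the gva* elements can be run directly against the mounted media.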
For more detailed instructions please refer to Intel® DL Streamer Pipeline Framework installation guide.
Samples
The samples folder in the Intel® DL Streamer Pipeline Framework repository contains command-line, C++, and Python examples.
Legal Information
No license (express or implied, by estoppel or otherwise) to any intellectual property rights is granted by this document.
Intel disclaims all express and implied warranties, including without limitation, the implied warranties of merchantability, fitness for a particular purpose, and non-infringement, as well as any warranty arising from course of performance, course of dealing, or usage in trade.
This document contains information on products, services and/or processes in development. All information provided here is subject to change without notice. Contact your Intel representative to obtain the latest forecast, schedule, specifications and roadmaps.
The products and services described may contain defects or errors which may cause deviations from published specifications. Current characterized errata are available on request.
Intel and the Intel logo are trademarks of Intel Corporation in the U.S. and/or other countries.
*Other names and brands may be claimed as the property of others.
© 2023 Intel Corporation.