Samples are simple applications that demonstrate how to use Intel® DL Streamer. The samples are available in the `/opt/intel/dlstreamer/samples` directory.

Samples are separated into several categories:
- gst_launch command-line samples (these construct a GStreamer pipeline via the gst-launch-1.0 command-line utility)
  - Face Detection And Classification Sample - constructs an object detection and classification pipeline with the gvadetect and gvaclassify elements to detect faces and estimate age, gender, emotions, and landmark points
  - Audio Event Detection Sample - constructs an audio event detection pipeline with the gvaaudiodetect element and uses the gvametaconvert and gvametapublish elements to convert audio event metadata with inference results into JSON format and print it to standard output
  - Vehicle and Pedestrian Tracking Sample - demonstrates object tracking via the gvatrack element
  - Human Pose Estimation Sample - demonstrates human pose estimation with full-frame inference via the gvaclassify element
  - Metadata Publishing Sample - demonstrates how the gvametaconvert and gvametapublish elements are used to convert metadata with inference results into JSON format and publish it to a file or a Kafka/MQTT message bus
  - gvapython Sample - demonstrates pipeline customization with the gvapython element and an application-provided Python script for inference post-processing
  - Action Recognition Sample - demonstrates action recognition via the video_inference bin element
  - Instance Segmentation Sample - demonstrates instance segmentation via the object_detect and object_classify bin elements
  - Detection with Yolo - demonstrates how to use publicly available Yolo models for object detection and classification
  - Deployment of Geti™ models - demonstrates how to deploy models trained with the Intel® Geti™ Platform for object detection and classification tasks
- C++ samples
  - Draw Face Attributes C++ Sample - constructs a pipeline and sets a C callback to access frame metadata and visualize inference results
- Python samples
  - Draw Face Attributes Python Sample - constructs a pipeline and sets a Python callback to access frame metadata and visualize inference results
- Benchmark
  - Benchmark Sample - measures overall performance of single-channel or multi-channel video analytics pipelines
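As an illustration of what the gst_launch category builds, the sketch below assembles a face detection and classification command line from the gvadetect, gvaclassify, and gvawatermark elements. This is a minimal sketch, not one of the shipped scripts; the model paths and input file are placeholders, so the command is only printed, not executed.

```shell
# Placeholder model paths and input; substitute real files before running.
DETECTION_MODEL=/path/to/face-detection.xml
CLASSIFICATION_MODEL=/path/to/age-gender.xml
INPUT=${1:-/path/to/video.mp4}

# Assemble a gst-launch command line similar to what the gst_launch samples run:
# decode the input, detect faces, classify them, draw results, and display.
PIPELINE="gst-launch-1.0 filesrc location=${INPUT} ! decodebin \
 ! gvadetect model=${DETECTION_MODEL} \
 ! gvaclassify model=${CLASSIFICATION_MODEL} \
 ! gvawatermark ! videoconvert ! autovideosink sync=false"

echo "${PIPELINE}"
```

Running the printed command requires an Intel® DL Streamer installation and the referenced model files.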
Samples with C/C++ code provide a `build_and_run.sh` shell script that builds the application via CMake before execution. Other samples (without C/C++ code) provide a `.sh` script that constructs and executes a gst-launch or Python command line.
Intel® DL Streamer samples use pre-trained models from the OpenVINO™ Toolkit Open Model Zoo. Before running the samples, run the `download_models.sh` script once to download all models required by the samples. The script is located in the samples top folder.
NOTE: To install all necessary requirements for the `download_models.sh` script, run these commands:

```
python3 -m pip install --upgrade pip
python3 -m pip install openvino-dev[onnx]
```
NOTE: To install all available frameworks, run this command:

```
python3 -m pip install openvino-dev[caffe,onnx,tensorflow2,pytorch,mxnet]
```
The first command-line parameter in Intel® DL Streamer samples specifies the input video and supports
- a local video file
- a web camera device (ex. `/dev/video0`)
- an RTSP camera (URL starting with `rtsp://`) or other streaming source (ex. URL starting with `http://`)
If the command-line parameter is not specified, most samples by default stream an example video from a predefined HTTPS link and therefore require an internet connection.
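The input handling described above can be sketched as a small shell helper. The `select_source` function and the default URL below are hypothetical, introduced only to illustrate how a sample might map its first parameter to a GStreamer source element:

```shell
# Hypothetical helper: map a sample's first parameter to a GStreamer source
# element. The default HTTPS URL is a placeholder, not the samples' real link.
select_source() {
  INPUT=${1:-https://example.com/sample-video.mp4}
  case ${INPUT} in
    /dev/video*)      SOURCE_ELEMENT="v4l2src device=${INPUT}" ;;   # web camera
    rtsp://*)         SOURCE_ELEMENT="urisourcebin uri=${INPUT}" ;; # RTSP camera
    http://*|https://*)
                      SOURCE_ELEMENT="urisourcebin uri=${INPUT}" ;; # HTTP(S) stream
    *)                SOURCE_ELEMENT="filesrc location=${INPUT}" ;; # local file
  esac
  echo "${SOURCE_ELEMENT}"
}

select_source /dev/video0
```

The source element then feeds the rest of the pipeline (decoding, inference, and so on).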
NOTE: Most samples set the property `sync=false` on the video sink element to disable real-time synchronization and run the pipeline as fast as possible. Change it to `sync=true` to run the pipeline at real-time speed.
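For example, the property sits on the sink element at the end of the command line; toggling one variable switches between the two modes. This is a sketch using a synthetic `videotestsrc` input rather than a real sample pipeline:

```shell
# Set SYNC=true to play at real-time speed; false runs as fast as possible.
SYNC=false
SINK="autovideosink sync=${SYNC}"

# Placeholder pipeline with a synthetic test source; only printed here.
echo "gst-launch-1.0 videotestsrc num-buffers=100 ! videoconvert ! ${SINK}"
```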
To run samples on a remote machine over SSH with X forwarding, force usage of `ximagesink` as the video sink first:

```
source ./force_ximagesink.sh
```