EVA: fast Edge Video Analytics

Demo Videos

  • σ: actual frame processing rate on NCS2
  • λ: the incoming video stream rate
  • μ: frame processing rate at no frame dropping
  • test video name: ETH-Sunnyday
  • Original Video (λ = 14 FPS): the ETH-Sunnyday test video at its native stream rate
  • Online Detection on one NCS2 (YOLOv3, σ is set to μ): slow detection processing rate; σ = 2.5, λ = 14, μ = 2.5
  • Online Detection on one NCS2 (YOLOv3, σ is set to λ): causes large random frame dropping; σ = 14, λ = 14, μ = 2.5
  • Online Detection on six NCS2s (YOLOv3, σ is set to λ): significantly reduces random frame dropping; σ = 14, λ = 14, μ = 14.8

(Demo GIFs: the ETH-Sunnyday original video and the three online detection runs above.)
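One way to read these rates: whenever frames must be consumed at σ FPS but the detector can only sustain μ FPS, the surplus frames are dropped. The back-of-the-envelope sketch below (the uniform-dropping estimate is an assumption of this sketch, not the project's scheduler) reproduces the three configurations above:

```python
def expected_drop_fraction(sigma: float, mu: float) -> float:
    """Fraction of frames that cannot be detected when frames are consumed
    at `sigma` FPS but the detector sustains only `mu` FPS.
    Assumes surplus frames are simply discarded."""
    if sigma <= 0:
        return 0.0
    return max(0.0, 1.0 - mu / sigma)

print(expected_drop_fraction(2.5, 2.5))    # 0.00 -> no dropping, but slow playback
print(expected_drop_fraction(14.0, 2.5))   # ~0.82 -> roughly 82% of frames dropped on 1 NCS2
print(expected_drop_fraction(14.0, 14.8))  # 0.00 -> 6 NCS2s keep pace with the 14 FPS stream
```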

Introduction

Deep Neural Network (DNN) trained object detectors are widely deployed in mission-critical systems for real-time video analytics at the edge, such as autonomous driving, video surveillance, and the Internet of smart cameras. A common performance requirement in these mission-critical edge services is near real-time latency for online object detection on edge devices. However, even with well-trained DNN object detectors, online detection quality at the edge may deteriorate for a number of reasons, such as the limited capacity of heterogeneous edge devices to run DNN object detection models, and detection quality degradation due to random frame dropping when the detection processing rate is significantly slower than the incoming video frame rate.

The first result from our project EVA addresses the above problems by exploiting multi-model, multi-device detection parallelism for fast object detection in edge systems with heterogeneous edge devices. First, we analyze the performance bottleneck of running a well-trained DNN model at the edge for real-time online object detection. We use offline detection as a reference model and examine the root cause by analyzing the mismatch among the incoming video streaming rate, the video processing rate for object detection, and the output rate for real-time detection visualization of the video stream. Second, we study performance optimizations that exploit multi-model detection parallelism. We show that the model-parallel detection approach can effectively speed up the detection processing rate (FPS), minimizing its disparity with the incoming video frame rate on heterogeneous edge devices. We evaluate the proposed approach using SSD300 and YOLOv3 (pre-trained DNN models) on benchmark videos with different stream rates. The results show that exploiting multi-model detection parallelism can speed up the online object detection processing rate and deliver near real-time object detection performance for efficient video analytics at the edge.
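To illustrate the fan-out idea in the paragraph above, the sketch below dispatches decoded frames to several detector instances (e.g., one YOLOv3 or SSD300 model loaded per NCS2), so the aggregate processing rate approaches the sum of the per-device rates. This is a minimal sketch under our own assumptions, not the EVA implementation; `detectors` stands for hypothetical per-device inference callables.

```python
import queue
import threading

def run_parallel_detection(frames, detectors):
    """Fan decoded frames out to multiple detector instances (one per device).

    `frames`: iterable of decoded frames; `detectors`: list of callables, each
    wrapping a model loaded on one accelerator. Threads are sufficient here
    because each call blocks on the device, not on the Python interpreter.
    """
    in_q, out_q = queue.Queue(maxsize=64), queue.Queue()

    def worker(detect):
        while True:
            item = in_q.get()
            if item is None:                 # sentinel: no more frames
                break
            idx, frame = item
            out_q.put((idx, detect(frame)))  # keep the frame index for re-ordering

    threads = [threading.Thread(target=worker, args=(d,)) for d in detectors]
    for t in threads:
        t.start()

    count = 0
    for idx, frame in enumerate(frames):
        in_q.put((idx, frame))               # workers pull from the shared queue
        count += 1
    for _ in threads:
        in_q.put(None)
    for t in threads:
        t.join()

    # Restore the original frame order before returning the detections.
    results = [out_q.get() for _ in range(count)]
    return [det for _, det in sorted(results)]
```

With k devices, the sustainable processing rate is roughly the sum of the per-device rates, which is consistent with the demo above where six NCS2 sticks reach μ ≈ 14.8 FPS while a single stick sustains about 2.5 FPS.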

This research is partially sponsored by Cisco, and our first joint paper below documents the results produced by June 2021:

  • Yanzhao Wu, Ling Liu, and Ramana Kompella. "Parallel Detection for Efficient Video Analytics at the Edge." To appear in the IEEE 2021 CogMI Special Session on Edge Analytics, Dec. 2021. Also available on arXiv: https://arxiv.org/abs/2107.12563.

This project builds on the following resources: