
Back | Next | Contents
Object Detection

Running the Live Camera Detection Demo

The detectnet.cpp / detectnet.py sample that we used previously can also be used for real-time camera streaming. The supported types of input streams include:

  • MIPI CSI cameras (csi://0)
  • V4L2 cameras (/dev/video0)
  • RTP/RTSP streams (rtsp://username:password@ip:port)
  • WebRTC streams (webrtc://@:port/stream_name)

For more information about video streams and protocols, please see the Camera Streaming and Multimedia page.
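For instance, a network stream can be substituted for a physical camera by passing its URI on the command line (the username, password, IP address, and port below are placeholders for your own stream):

$ ./detectnet.py rtsp://username:password@ip:port        # run detection on an RTSP feed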

Run the program with --help to see the full list of options; some of those specific to detectNet include:

  • --network flag which changes the detection model being used (the default is SSD-Mobilenet-v2).
  • --overlay flag which can be comma-separated combinations of box, labels, conf, and none
    • The default is --overlay=box,labels,conf which displays boxes, labels, and confidence values
  • --alpha value which sets the alpha blending value used during overlay (the default is 120).
  • --threshold value which sets the minimum threshold for detection (the default is 0.5).
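These flags can be combined in a single invocation; for example (the overlay selection and threshold value here are just illustrative choices):

$ ./detectnet.py --network=ssd-mobilenet-v2 --overlay=box,labels --threshold=0.4 csi://0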

Below are some typical scenarios for launching the program on a camera feed:

C++

$ ./detectnet csi://0                    # MIPI CSI camera
$ ./detectnet /dev/video0                # V4L2 camera
$ ./detectnet /dev/video0 output.mp4     # save to video file

Python

$ ./detectnet.py csi://0                 # MIPI CSI camera
$ ./detectnet.py /dev/video0             # V4L2 camera
$ ./detectnet.py /dev/video0 output.mp4  # save to video file

note: for example cameras to use, see these sections of the Jetson Wiki:
  • Nano: https://eLinux.org/Jetson_Nano#Cameras
  • Xavier: https://eLinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
  • TX1/TX2: developer kits include an onboard MIPI CSI sensor module (OV5693)

Visualization

Displayed in the OpenGL window is the live camera stream, overlaid with bounding boxes around the detected objects. Note that the SSD-based models currently have the highest performance. One example is the coco-dog model, which detects dogs in the camera feed.

If the desired objects aren't being detected in the video feed, or you're getting spurious detections, try lowering or raising the detection threshold with the --threshold parameter (the default is 0.5), as in the example below.
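For example, to keep lower-confidence detections (the value 0.35 here is just an illustrative choice):

$ ./detectnet.py --threshold=0.35 csi://0    # keep detections with confidence >= 0.35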

Next, we'll cover creating the code for a camera detection app in Python.

Next | Coding Your Own Object Detection Program
Back | Detecting Objects from Images

© 2016-2019 NVIDIA | Table of Contents