
Back | Next | Contents
Semantic Segmentation

Running the Live Camera Segmentation Demo

The segnet.cpp / segnet.py sample that we used previously can also be used for real-time camera streaming. The types of supported cameras include:

  • MIPI CSI cameras (csi://0)
  • V4L2 cameras (/dev/video0)
  • RTP/RTSP streams (rtsp://username:password@ip:port)

For more information about video streams and protocols, please see the Camera Streaming and Multimedia page.

Run the program with --help to see a full list of options - some of them specific to segNet include:

  • optional --network flag changes the segmentation model being used (see available networks)
  • optional --visualize flag accepts mask and/or overlay modes (default is overlay)
  • optional --alpha flag sets the alpha blending value for the overlay (default is 120)
  • optional --filter-mode flag accepts point or linear sampling (default is linear)
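To build intuition for what the `--alpha` flag does, here is a rough sketch of 8-bit alpha blending in plain NumPy. This is an illustration of the blending math only, not the library's actual implementation; the `overlay()` helper and the toy frame/mask values are hypothetical:

```python
import numpy as np

def overlay(frame, mask_color, alpha=120):
    """Blend a colorized segmentation mask over a camera frame.
    alpha is an 8-bit value (0-255), matching the --alpha flag's range."""
    a = alpha / 255.0
    return (a * mask_color + (1.0 - a) * frame).astype(np.uint8)

# toy 2x2 RGB frame (mid-gray) and a solid red class mask
frame = np.full((2, 2, 3), 128, dtype=np.uint8)
mask = np.zeros((2, 2, 3), dtype=np.uint8)
mask[..., 0] = 255

blended = overlay(frame, mask, alpha=120)
print(blended[0, 0])  # red channel pulled toward 255, green/blue toward 0
```

With the default alpha of 120, the mask contributes roughly 47% of each output pixel, so class colors remain visible without hiding the camera image underneath.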

Below are some typical scenarios for launching the program - see this table for the models available to use.

C++

$ ./segnet --network=<model> csi://0                    # MIPI CSI camera
$ ./segnet --network=<model> /dev/video0                # V4L2 camera
$ ./segnet --network=<model> /dev/video0 output.mp4     # save to video file

Python

$ ./segnet.py --network=<model> csi://0                 # MIPI CSI camera
$ ./segnet.py --network=<model> /dev/video0             # V4L2 camera
$ ./segnet.py --network=<model> /dev/video0 output.mp4  # save to video file

note: for example cameras to use, see these sections of the Jetson Wiki:
             - Nano:  https://eLinux.org/Jetson_Nano#Cameras
             - Xavier: https://eLinux.org/Jetson_AGX_Xavier#Ecosystem_Products_.26_Cameras
             - TX1/TX2: developer kits include an onboard MIPI CSI sensor module (OV5693)

Visualization

The OpenGL window displays the live camera stream overlaid with the segmentation output, alongside the solid segmentation mask for clarity. Here are some examples of it being used with different models that are available to try:
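Conceptually, the mask view is produced by upsampling the network's low-resolution grid of class IDs to the display size and colorizing it; `--filter-mode=point` corresponds to nearest-neighbor sampling, which keeps hard class boundaries. A minimal NumPy sketch of that idea, assuming a hypothetical 2x2 class grid and an illustrative color palette (none of these names come from the library's API):

```python
import numpy as np

# hypothetical 2x2 grid of class IDs produced by the network,
# far smaller than the camera frame
class_grid = np.array([[0, 1],
                       [1, 2]], dtype=np.uint8)

def upsample_point(grid, scale):
    """Nearest-neighbor ('point') upsampling: each cell becomes a scale x scale block."""
    return np.repeat(np.repeat(grid, scale, axis=0), scale, axis=1)

# illustrative color table: class ID -> RGB
palette = np.array([[0, 0, 0],      # class 0, e.g. background
                    [0, 255, 0],    # class 1
                    [255, 0, 0]],   # class 2
                   dtype=np.uint8)

mask_big = upsample_point(class_grid, 4)         # 8x8 grid of class IDs
mask_rgb = palette[mask_big]                     # 8x8x3 colorized mask

frame = np.full((8, 8, 3), 128, dtype=np.uint8)  # stand-in camera frame
side_by_side = np.hstack([frame, mask_rgb])      # overlay | mask layout
print(side_by_side.shape)  # (8, 16, 3)
```

Linear filtering would instead interpolate the colorized mask between grid cells, trading the blocky class edges for smoother gradients.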

# C++
$ ./segnet --network=fcn-resnet18-mhp csi://0

# Python
$ ./segnet.py --network=fcn-resnet18-mhp csi://0

# C++
$ ./segnet --network=fcn-resnet18-sun csi://0

# Python
$ ./segnet.py --network=fcn-resnet18-sun csi://0

# C++
$ ./segnet --network=fcn-resnet18-deepscene csi://0

# Python
$ ./segnet.py --network=fcn-resnet18-deepscene csi://0

Feel free to experiment with the different models and resolutions for indoor and outdoor environments.

Next, we're going to introduce the concepts of Transfer Learning and train some example DNN models on our Jetson using PyTorch.

Next | Transfer Learning with PyTorch
Back | Segmenting Images from the Command Line

© 2016-2019 NVIDIA | Table of Contents