Vision20

Nanovision code developed in the 2019 off-season and 2020 preseason

Introduction

Nanovision is a new concept for the team where we utilize the new Jetson Nano as a vision co-processor instead of a Raspberry Pi. It is slightly more expensive but tremendously more powerful as a processor, and this allows us to do more video/vision processing on the robot.

Hardware Outline

The Jetson Nano Development Board

The hardware for the Nanovision setup is the off-the-shelf Jetson Nano board, available for $99:

The Jetson Nano Development Kit

The NVIDIA website has a wealth of resources available for the boards and full access requires you sign up for a free account.

SD Card Storage

The board requires a high-quality micro-SD card to be its "disk" storage, and for that we use the following cards:

128GB SD Cards

Given the extremely tough environment of living in an FRC robot, we don't recommend skimping on the cards or other hardware parts, or they may fail on you in a match.

Robot Capable Power Supply

In order to survive being part of an FRC robot, you need to provide very solid, unwavering power to this co-processor, and you do this by wiring the board as a "custom circuit". It should have 18-gauge power wires coming from a snap-breaker protected circuit to a local power regulator on the board. Power on a robot is a very messy environment, and the Jetson Nano needs a steady 5VDC supply. For this we use the following buck-boost regulator board from Pololu, for about $15.

Buck/Boost Power Module

This allows the board to tolerate the huge power swings seen in a defensive robot power system. This regulator can only supply about 10W to the Jetson Nano, so you can't use the board's maximum capabilities without a beefier power supply.

Camera Modules

One cool part of the JetsonNano is that it is hardware-compatible with Raspberry Pi camera modules and the clones out there. This gives you a really nice collection of MIPI-based camera modules to choose from, and the flexibility to pick modules with different types of lenses for different situations. We use the following basic wide-angle drive camera on the hardware MIPI camera port in our setups:

Jetson Nano / Raspberry Pi Wide Angle Camera

The JetsonNano can also work with any USB/UVC-compatible camera module; it has 4x USB 3.0 ports for cameras or other peripherals, and it has the horsepower to capture, compress, process, and transmit many video streams simultaneously.

Development Case

We also recommend getting a development system for the desktop in addition to any systems you embed into your robot setup. For these there is an excellent desktop case that helps with experimentation and camera development in a nice, clean package:

Jetson Nano Development Case

Software Outline

The JetsonNano is an Ubuntu-based machine running Linux on the ARMv8 ("64-bit ARM") architecture. Because of this, it has an enormous and powerful collection of open-source software available for it. The details of software setup are included in our notes directory.

The JetsonNano includes many specialized hardware "processing accelerators" that allow you to do more than the main processor can do alone. These are accessed through libraries that support the open-source GStreamer-1.0 system.

GStreamer Software

By using open-source and custom NVIDIA GStreamer-1.0 plugins, you can create video capture and processing pipelines on your JetsonNano setups.

Our first, basic setups for the JetsonNano are simply bash scripts that use gst-launch-1.0 to launch a GStreamer pipeline of software modules that take advantage of the JetsonNano hardware to capture, compress, and transmit video to our driver station very efficiently.

GStreamer is also available for Windows, so we load GStreamer onto our driver station and have GStreamer scripts that allow us to view the streams from our JetsonNano in real time.

Python / OpenCV 4.x

The Jetson JetPack 4.3 version ships with OpenCV 4.x and Python bindings. In order to use these tools successfully, you need to install some additional key development packages. These packages get you development tools, dependencies, and utilities for python3:

sudo apt-get install python3-pip

This will install several packages including python3-dev.

sudo apt-get install ipython3 python3-numpy

This will install a powerful, flexible interactive python3 shell and a python3 numerical library.
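Before going further, it's worth confirming that the JetPack OpenCV build is visible to python3 and was compiled with GStreamer support, which the pipelines below rely on. A minimal check (the exact version strings will vary by JetPack release):

import re
import cv2

# JetPack 4.3 ships OpenCV 4.x; confirm the Python bindings load:
print("OpenCV version:", cv2.__version__)

# The GStreamer-based capture/output pipelines used below require an
# OpenCV build with GStreamer enabled; look it up in the build info:
info = cv2.getBuildInformation()
match = re.search(r"GStreamer:\s*(.+)", info)
print("GStreamer support:", match.group(1).strip() if match else "not found")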

We can then use Python, OpenCV, and the integration of OpenCV with GStreamer to create Python applications that create input (and output) GStreamer pipelines for capturing data from the primary MIPI camera or web cameras, apply OpenCV vision processing and overlay drawing to the images in Python, and then send the results to the output pipeline to have them compressed by accelerated hardware and sent out for viewing.
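The exact pipeline strings depend on the camera, resolution, and encoder in use, and the program outline below deliberately leaves them as placeholders. As a hedged sketch only (the element chain is an assumption based on the standard JetPack 4.x NVIDIA GStreamer plugins, not taken from this repo), the strings might look like:

# Illustrative only: a capture pipeline for the MIPI camera using the
# NVIDIA nvarguscamerasrc element, converted to BGR for OpenCV:
capture_pipeline = ("nvarguscamerasrc ! "
                    "video/x-raw(memory:NVMM),width=1280,height=720,framerate=30/1 ! "
                    "nvvidconv ! video/x-raw,format=BGRx ! "
                    "videoconvert ! video/x-raw,format=BGR ! appsink")

# Illustrative only: hardware H.264 compression and RTP streaming out;
# port 5801 matches the driver-station playback pipeline later in this
# document, and the host address is a placeholder for your driver station:
output_pipeline = ("appsrc ! videoconvert ! video/x-raw,format=NV12 ! "
                   "omxh264enc ! h264parse ! rtph264pay ! "
                   "udpsink host=<driver-station-IP> port=5801")

Here nvarguscamerasrc, nvvidconv, and omxh264enc are the NVIDIA-accelerated elements; strings like these could fill the "< big string >" placeholders in the program outline below.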

The Python applications also make use of PyNetworkTables (the Python implementation of FRC NetworkTables), allowing our vision programs to provide a NetworkTables interface to the main robot control program.

The NetworkTables interface allows the operator to select camera modes and switch between overlays, and allows our vision code to send data to autonomy and operator-assist commands.

The repo includes the current source release of pynetworktables; you can install it by changing into the pynetworktables directory and running:

sudo python3 ./setup.py install

LibRealsense

If you want to use an Intel RealSense camera with the system, you also need to install a collection of dependencies and then download and build Intel's librealsense from GitHub to get all the building blocks you need.

The RealSense cameras connect over USB 3.0 and provide high-resolution color imagery with high-quality optics; the RS435 in particular also provides global-shutter depth images (3D point-cloud data).

Installing dependencies:

sudo apt-get install libxinerama-dev libxcursor-dev

Getting and installing librealsense sources:

git clone https://github.com/IntelRealSense/librealsense.git
cd librealsense
mkdir cmake-build
cd cmake-build
cmake ..
make
sudo make install

This will get you librealsense and the realsense utilities for Intel RealSense cameras for NanoVision.
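If the Python bindings were also enabled at build time (cmake .. -DBUILD_PYTHON_BINDINGS=true, which the commands above do not do by default), a short pyrealsense2 check can confirm the camera delivers depth frames. This is a minimal sketch under that assumption, not code from this repo:

import pyrealsense2 as rs

# Assumes librealsense was built with -DBUILD_PYTHON_BINDINGS=true so
# the pyrealsense2 module is importable.
pipeline = rs.pipeline()
config = rs.config()
# Request a 640x480 depth stream at 30 fps from the RealSense camera:
config.enable_stream(rs.stream.depth, 640, 480, rs.format.z16, 30)
pipeline.start(config)
try:
    frames = pipeline.wait_for_frames()
    depth = frames.get_depth_frame()
    # Sample the depth (in meters) at the center of the image:
    print("Center depth:", depth.get_distance(320, 240))
finally:
    pipeline.stop()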

Vision Processing program outline:

import numpy as np
import cv2
from networktables import NetworkTables

Set up capture

Capture video from gstreamer pipeline:

capture_pipeline = "< big string with gstreamer input pipeline> ! videoconvert ! video/x-raw,format=(string)BGR ! appsink"

capture = cv2.VideoCapture(capture_pipeline, cv2.CAP_GSTREAMER)

Set up output

Send output video to gstreamer pipeline:

output_pipeline = "appsrc ! videoconvert ! video/x-raw,format=(string)NV12 < big string with gstreamer output pipeline>"

Set up the rate and frame size of the output pipeline here as well.

# fourcc 0 means raw frames are pushed into the appsrc pipeline, at 30 fps, 640x360:
output = cv2.VideoWriter(output_pipeline, cv2.CAP_GSTREAMER, 0, 30.0, (640, 360))

if capture.isOpened() and output.isOpened():
    print("Capture and output pipelines opened")
else:
    print("Problem creating pipelines...")

Connect to network tables server:

NetworkTables.initialize(server='127.0.0.1')

visionTable = NetworkTables.getTable("Vision")

Main vision processing loop:

while True:
    # Capture a frame:
    ret, frame = capture.read()
    # Skip this iteration if no frame was delivered:
    if not ret:
        continue

# Do image processing, etc. operations in OpenCV
# ...

# Draw onto the frame using OpenCV
cv2.line(frame, (320,0), (320,360), (50,100,0),2)


# Send frame to compression pipeline:
output.write(frame)

# Check network tables for commands or inputs...
# Send some data to the network table or read input from it.
# For example:
    visionTable.putNumberArray("targetPos", [-1])
mode = visionTable.getString("VisionMode", 'default')
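If the loop is given an exit condition (for example, breaking when a NetworkTables flag changes), the pipelines should be released on the way out so GStreamer shuts down cleanly; a minimal sketch:

# Assumed exit path (not shown above): release both ends of the
# pipeline so GStreamer tears everything down cleanly on shutdown.
capture.release()
output.release()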

Networking Requirements

WiFi must be turned off and disabled.

In the IPv4 settings:

IP address = 10.10.73.10
netmask = 255.0.0.0

Connect to the robot over Ethernet when testing the camera setup.

Setting Up Video on the Driver Station

You need to download and install the complete runtime version of GStreamer for Windows first. The video playback works by creating and configuring standard GStreamer video components. You can find GStreamer here: https://gstreamer.freedesktop.org/

To test that the Jetson is functional, first adjust the network settings, then:

1. Connect the Jetson to the robot Ethernet (switch) and power it from the PDP

2. Connect cameras to the Jetson (USB)

3. On a laptop (not the DS), open a terminal and run ssh team1073@10.10.73.3
to connect to the JetsonNano and ensure it is working. The password is "team1073".

4. On the DS, go to "c:\gstreamer\1.0\x86_64\bin" and run the following
(which can be copied from the windowsplay.bat files):
   	gst-launch-1.0 -v udpsrc port=5801 ! "application/x-rtp, media=(string)video, clock-rate=(int)90000, encoding-name=(string)H264, payload=(int)96" ! rtph264depay ! h264parse ! avdec_h264 ! timeoverlay ! autovideosink

5. If the pipeline runs, drag windowsplay.bat and windowsplay2.bat to the UPPER RIGHT CORNER of the DS
screen; the two files are in the scripts folder under users/team1073/FRCWorkspace/vision20. If the
pipeline does not run, fix it and then complete this step.

Depending on the setup, the GStreamer pipeline may need to be modified. This will only be necessary if the type of camera changes, the video feed is not correctly oriented, or the resolution needs to be altered.

To run, double-click the programs while connected to the robot's radio. Ensure that the Jetson is properly wired and has a stable Ethernet connection.

Setting up video servers to start automatically

The autostart directory contains several systemd service unit files. These need to be copied to /etc/systemd/system:

sudo cp autostart/*.service /etc/systemd/system/

Now reload configurations:

sudo systemctl daemon-reload

Now enable the services you want to auto-start:

sudo systemctl enable <service-name>

You can then check on a service using:

systemctl status <service-name>

Note that the autostart files assume you are using user team1073 and that you have checked out the vision repository on the nano system at: /home/team1073/Projects/vision20
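The exact contents of these unit files live in the autostart directory. As a hedged illustration of the usual shape of such a unit (the description and script name below are placeholders, while the user and repository path follow the note above):

[Unit]
Description=NanoVision video service (illustrative example)
After=network.target

[Service]
User=team1073
WorkingDirectory=/home/team1073/Projects/vision20
ExecStart=/usr/bin/python3 /home/team1073/Projects/vision20/<vision-script>.py
Restart=on-failure

[Install]
WantedBy=multi-user.target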
