This repository pairs with the tutorial on docs.nav2.org showcasing vision-only navigation using Nav2 and the NVIDIA Isaac Perceptor SDK. It replaces lidars and active depth cameras entirely, using only stereo cameras for mapping, localization, and collision avoidance. Please see the tutorial for more information. Click on the image below to see a video demonstration of the system in action!
The repository contains the scripts, example maps, graphics, and software version information used for the tutorial.
This project relies on the NVIDIA Nova Carter project, which uses the Segway Nova Carter robot powered by an NVIDIA Jetson AGX Orin to conduct vision-based navigation. This robot was preconfigured by Open Navigation in coordination with NVIDIA Robotics to use ROS 2 and Nav2, so we use those established configurations for this technology demonstration. That project contains the robot-specific bringup and Perceptor configurations, using its NVIDIA Nova reference platform for camera capture and synchronization.
To adapt this demonstration to another platform, create a new `my_robot_nav` package which:
- Launches the robot hardware drivers for accepting commands, bringing up sensors, providing the TF transformation tree, etc.
- Launches Isaac Perceptor, usually via the `isaac_ros_perceptor_bringup` package's `perceptor_general.launch.py` or `perceptor_rgbd.launch.py`
- Launches Nav2 with the appropriate configurations (e.g. AMCL removed in favor of cuVSLAM, costmap configurations for nvblox)
Use these launch files in place of `nova_carter_bringup/launch/navigation.launch.py` used in this package. More information is provided in the tutorial -- happy visual navigating!
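As a starting point, the three steps above can be sketched as a single top-level ROS 2 launch file. This is a minimal illustration, not part of the tutorial: the `my_robot_bringup` package, its `drivers.launch.py`, and the `nav2_params.yaml` path are placeholder names you would substitute with your own robot's files.

```python
# Hypothetical my_robot_nav/launch/navigation.launch.py sketch.
# my_robot_bringup, drivers.launch.py, and nav2_params.yaml are
# illustrative placeholders for your robot's own files.
import os

from ament_index_python.packages import get_package_share_directory
from launch import LaunchDescription
from launch.actions import IncludeLaunchDescription
from launch.launch_description_sources import PythonLaunchDescriptionSource


def generate_launch_description():
    # 1. Robot hardware drivers: command interface, sensors, TF tree
    drivers = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('my_robot_bringup'),
                         'launch', 'drivers.launch.py')))

    # 2. Isaac Perceptor: cuVSLAM localization and nvblox reconstruction
    #    from the stereo cameras
    perceptor = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('isaac_ros_perceptor_bringup'),
                         'launch', 'perceptor_general.launch.py')))

    # 3. Nav2 with parameters tuned for vision-only operation
    #    (no AMCL; costmaps fed by nvblox)
    nav2 = IncludeLaunchDescription(
        PythonLaunchDescriptionSource(
            os.path.join(get_package_share_directory('nav2_bringup'),
                         'launch', 'navigation_launch.py')),
        launch_arguments={
            'params_file': os.path.join(
                get_package_share_directory('my_robot_nav'),
                'params', 'nav2_params.yaml'),
        }.items())

    return LaunchDescription([drivers, perceptor, nav2])
```

Keeping the three includes in one file mirrors the role `nova_carter_bringup/launch/navigation.launch.py` plays for the Nova Carter, so the rest of the tutorial applies unchanged.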

