ORP is a point cloud-based visual object detector that allows vision to be easily integrated into robotic manipulation pipelines.
Installation
- Download the repository from GitHub

```shell
cd catkin_ws/src
git clone https://github.com/UTNuclearRobotics/orp.git
```
- Install dependencies
You may need to install camera drivers depending on what type of camera you are using.
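One common way to resolve the ROS dependencies declared in a package's `package.xml` is `rosdep`. This is a sketch of the standard invocation, assuming a conventional catkin workspace layout with ORP cloned under `src/`:

```shell
# Run from the root of your catkin workspace. This installs system
# dependencies declared by every package under src/, including ORP.
# -r continues past failures; -y answers yes to install prompts.
rosdep install --from-paths src --ignore-src -r -y
```

Camera drivers (e.g. for an RGB-D sensor) are typically not declared as package dependencies, so you may still need to install those separately as noted above.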
- Build your catkin workspace

```shell
cd catkin_ws
catkin_make
source devel/setup.bash
```

- Launch the example

```shell
roslaunch orp example.launch
```
ORP was developed in 2015 in conjunction with UT Austin's Amazon Picking Challenge team and the Nuclear and Applied Robotics Group. We continued to improve the package and add features as we used it throughout multiple projects, including pick-and-place demos and human-robot interaction studies. It was further refined during additional research at UT Austin's Socially Intelligent Machines Lab.
We found that robotics groups (especially research labs) tend to re-create simple perception code over and over, often poorly wrapping Point Cloud Library functions in ways that are brittle and hard to reuse. ORP's focus is on flexibility and ease of configuration so that it can be quickly adapted to your needs.