
Overview

stretch_deep_perception provides demonstration code that uses open deep learning models to perceive the world.

This code depends on the stretch_deep_perception_models repository (https://github.com/hello-robot/stretch_deep_perception_models), which should be installed under ~/stretch_user/ on your Stretch robot.

Getting Started Demos

There are two demonstrations for you to try.

Face Detection Demo

First, try running the face detection demonstration via the following command:

ros2 launch stretch_deep_perception stretch_detect_faces.launch.py 

RViz should show you the robot, the point cloud from the camera, and information about detected faces. If it detects a face, it should show a 3D planar model of the face and 3D facial landmarks. These deep learning models come from OpenCV and the Open Model Zoo (https://github.com/opencv/open_model_zoo).
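To give a rough sense of how a detection in the camera image becomes a 3D visualization like this, the sketch below deprojects a detected face's pixel location to a camera-frame 3D point with the standard pinhole camera model. The intrinsics and pixel values are made-up placeholders, not values from the Stretch camera or this package's code.

```python
# Hypothetical sketch: deproject a detected face's pixel location to a 3D
# point using the pinhole camera model. The intrinsics below are made-up
# placeholders, not values read from the Stretch robot's camera.

def deproject(u, v, depth_m, fx, fy, cx, cy):
    """Convert a pixel (u, v) with depth in meters to a 3D camera-frame point."""
    x = (u - cx) * depth_m / fx
    y = (v - cy) * depth_m / fy
    return (x, y, depth_m)

# Example: a face center detected at pixel (320, 240) with 1.5 m depth,
# using placeholder intrinsics.
point = deproject(320, 240, 1.5, fx=600.0, fy=600.0, cx=320.0, cy=240.0)
print(point)  # (0.0, 0.0, 1.5)
```

Applying this to each 2D facial landmark with its depth value is one simple way to obtain the kind of 3D landmark points shown in RViz.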

You can use the keyboard_teleop commands in the terminal where you ran the launch command to move the robot's head around so that the camera can see your face.

             i (tilt up)

j (pan left)               l (pan right)

             , (tilt down)

Pan left and pan right are in terms of the robot's left and the robot's right.
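The key bindings above can be sketched as a simple key-to-motion mapping. The step size, joint limits, and sign conventions below are made-up placeholders for illustration; the real values come from the Stretch driver, not from this sketch.

```python
# Hypothetical sketch of keyboard teleop for the head: each key nudges the
# head pan or tilt by a fixed step, clamped to placeholder joint limits.
# Step size, limits, and sign conventions here are assumptions.

STEP = 0.1  # radians per key press (placeholder)
LIMITS = {"pan": (-3.9, 1.5), "tilt": (-1.5, 0.4)}  # placeholder limits

KEY_TO_MOTION = {
    "i": ("tilt", +STEP),  # tilt up
    ",": ("tilt", -STEP),  # tilt down
    "j": ("pan", +STEP),   # pan toward the robot's left
    "l": ("pan", -STEP),   # pan toward the robot's right
}

def apply_key(pose, key):
    """Return a new head pose after applying one teleop key press."""
    joint, delta = KEY_TO_MOTION[key]
    lo, hi = LIMITS[joint]
    new_pose = dict(pose)
    new_pose[joint] = min(hi, max(lo, pose[joint] + delta))
    return new_pose

pose = {"pan": 0.0, "tilt": 0.0}
pose = apply_key(pose, "i")
print(pose)  # {'pan': 0.0, 'tilt': 0.1}
```

Clamping to joint limits means holding a key at the end of the range simply leaves the head where it is instead of commanding an unreachable pose.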

Now shut down everything that was launched by pressing q and then Ctrl-C in the terminal.

Object Detection Demo

Second, try running the object detection demo, which uses the tiny YOLO v5 object detection network (https://pytorch.org/hub/ultralytics_yolov5/). RViz will display planar detection regions. Detection class labels will be printed to the terminal.

ros2 launch stretch_deep_perception stretch_detect_objects.launch.py
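As a rough illustration of the kind of post-processing behind the terminal output, the sketch below filters YOLO-style detections by a confidence threshold and collects their class labels. The detections, threshold, and field names are made-up examples, not the package's actual data structures.

```python
# Hypothetical sketch of post-processing YOLO-style detections: keep boxes
# at or above a confidence threshold and report their class labels, roughly
# what the demo's printed terminal output reflects. All values are made up.

CONF_THRESHOLD = 0.5  # placeholder threshold

detections = [
    {"label": "person", "confidence": 0.91, "box": (40, 30, 200, 380)},
    {"label": "cup",    "confidence": 0.62, "box": (310, 220, 360, 290)},
    {"label": "chair",  "confidence": 0.31, "box": (400, 100, 620, 420)},
]

def confident_labels(dets, threshold=CONF_THRESHOLD):
    """Return class labels for detections at or above the threshold."""
    return [d["label"] for d in dets if d["confidence"] >= threshold]

print(confident_labels(detections))  # ['person', 'cup']
```

Low-confidence detections (here the chair at 0.31) are dropped before anything is displayed or printed.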

License

For license information, please see the LICENSE files.