This project implements a robotic dog that can search an unfamiliar environment for a target object (a "ball") and return it to a specified location using laser-sensor and camera data. As a byproduct of our implementation, the robot also maps the environment as it explores. Using an OccupancyGrid paired with a laser sensor, a novel frontier-exploration algorithm, OpenCV-based object detection and tracking with a camera, and a PD controller for the object-retrieval movement patterns, we demonstrate that our robot dog can complete the specified task autonomously.
Authors: Eric Lu, Jordan Kirkbride, Julian Wu, Wendell Wu
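The abstract above mentions a PD controller for the retrieval movement patterns. As an illustrative sketch only (not this project's exact implementation), a minimal PD controller that could turn the robot toward a tracked target might look like the following; the class name, gains `kp` and `kd`, and the error convention are all assumptions:

```python
class PDController:
    """Minimal proportional-derivative controller sketch.

    Computes a command from the current error (e.g. the angular
    offset of the ball in the camera frame) and its rate of change.
    Gains are illustrative, not the project's tuned values.
    """

    def __init__(self, kp=1.0, kd=0.1):
        self.kp = kp
        self.kd = kd
        self.prev_error = None

    def step(self, error, dt):
        # Derivative term is the finite difference of the error;
        # on the first call there is no history, so it is zero.
        if self.prev_error is None or dt <= 0:
            derivative = 0.0
        else:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.kd * derivative


# Example: as the error shrinks, the command shrinks too.
ctrl = PDController(kp=0.5, kd=0.05)
cmd1 = ctrl.step(1.0, 0.1)  # first step: purely proportional -> 0.5
cmd2 = ctrl.step(0.5, 0.1)  # P term 0.25, D term 0.05 * (-5.0) -> 0.0
```

In a ROS node, `step` would typically be called once per control loop with the error taken from the camera or laser data, and its output published as an angular velocity command.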
These setup instructions assume that you have already set up the ROS development environment through Docker.
To download the program to your local PC environment:
- Navigate to your ROS dev directory and run `docker-compose up --build`.
- On a second terminal, navigate to your ROS dev directory and enter rosbash with `docker-compose exec ros bash`.
- Run `cd ~/catkin_ws/src`, then run `catkin_create_pkg cs81-final-project std_msgs rospy` to create a new catkin package with the necessary dependencies.
- Run `cd ~/catkin_ws`, then run `catkin_make` to update the packages in the catkin workspace.
- Copy the contents of this repository into the `~/catkin_ws/src/cs81-final-project` directory.
- Run `chmod +x group8_final.py` to grant the Python file executable permissions.
- Run `apt-get install ros-melodic-nav2d-tutorials` to install the 2D simulator.
- Run `sudo apt-get install ros-melodic-rviz` to install rviz.
To run the program locally on your PC:
- Navigate to your ROS development directory and run `docker-compose up --build`.
- On a second terminal, navigate to your ROS dev directory and enter rosbash with `docker-compose exec ros bash`.
- Run `source /opt/ros/melodic/setup.bash`.
- Run `roscore` to start the ROS master.
- On a third terminal, navigate to your ROS dev directory and enter rosbash with `docker-compose exec ros bash`, run `cd ~/catkin_ws/src`, run `export TURTLEBOT3_MODEL=waffle`, and then run `roslaunch turtlebot3_gazebo turtlebot3_world.launch`.
- On a fourth terminal, navigate to your ROS dev directory and enter rosbash with `docker-compose exec ros bash`, run `cd ~/catkin_ws/src`, and run `rviz`.
- Open your web browser to `localhost:8080/vnc.html` and click Connect.
- In gazebo: add a sphere shape with radius 0.125 m and RGBA color (1, 0, 0, 1) for the ambient, diffuse, specular, and emissive components.
- In gazebo: insert/change the world model to one of your choice. We used `turtlebot3_plaza` and `turtlebot3_square` from `/opt/ros/melodic/share/turtlebot3_gazebo/models`.
- On rviz: Go to "Panels" in the menu bar, select Add New Panel, then choose Tool Properties. This should bring up a sidebar on the left.
- On rviz: In the left sidebar, click Global Options > Fixed Frame > change to "odom"
- On rviz: In the left sidebar, click add > By display type > Map. Then change its topic name to "map"
- On rviz: In the left sidebar, click add > By display type > PointCloud. Then change its topic name to "point_cloud", and size to 0.25 m (so that it can be more easily seen in rviz)
- On a fifth terminal, navigate to your ROS dev directory and enter rosbash with `docker-compose exec ros bash`, run `cd ~/catkin_ws/src`, and run `rosrun cs81-final-project group8_final.py`.
- The robot should now be moving in the VNC viewer in the browser, the OccupancyGrid map and PointCloud visualizations should appear in rviz, and an OpenCV (cv2) window should display the robot's camera feed.
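The OccupancyGrid map shown in rviz is what drives the frontier-exploration step. As a rough sketch of the general idea (not this project's specific algorithm), frontier cells are free cells adjacent to unknown cells; the values follow the ROS `nav_msgs/OccupancyGrid` convention of 0 = free, 100 = occupied, -1 = unknown:

```python
def find_frontier_cells(grid):
    """Return (row, col) pairs of free cells bordering unknown space.

    `grid` is a 2-D list using the ROS OccupancyGrid convention:
    0 = free, 100 = occupied, -1 = unknown.
    """
    rows, cols = len(grid), len(grid[0])
    frontiers = []
    for r in range(rows):
        for c in range(cols):
            if grid[r][c] != 0:
                continue  # only free cells can be frontiers
            # Check the 4-connected neighbors; any unknown neighbor
            # makes this free cell a frontier candidate.
            for dr, dc in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                nr, nc = r + dr, c + dc
                if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == -1:
                    frontiers.append((r, c))
                    break
    return frontiers


# A tiny map: free space on the left, unknown space on the right.
demo = [
    [0,   0, -1],
    [0, 100, -1],
    [0,   0,  0],
]
# find_frontier_cells(demo) -> [(0, 1), (2, 2)]
```

An explorer would then pick one of these cells (for example, the nearest one) as the next navigation goal, repeating until no frontiers remain and the map is complete.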