A mapping and deep learning object recognition project for use with the Turtlebot (June to 8th September 2017).
- Produce a 2D occupancy grid map and 3D point cloud using RTAB-Map
- Autonomously navigate an unknown environment
- Detect and identify objects in a room
- Calibrate camera to mask textures
- Turtlebot 2
- A working USB cable to connect the Kobuki base and the laptop
- A laptop running Ubuntu 16.04 and ROS Kinetic
- A camera, either:
  - Xbox Kinect v2, connected to the 12V 5A socket on the Turtlebot. (Note: the existing cable for this is makeshift and will need a permanent solution with proper parts.)
  - Zed Camera, connected to the laptop
- Install the Turtlebot packages (substituting kinetic for indigo; some packages may not be available for Kinetic) following the instructions found here:
http://wiki.ros.org/turtlebot/Tutorials/indigo/Turtlebot%20Installation
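In practice the Kinetic installation is usually just apt packages. A hedged sketch (package names assumed from the standard ROS Kinetic repositories by swapping the distro in the Indigo tutorial; not all may exist for Kinetic):

```shell
# Assumed Kinetic package names -- verify with: apt-cache search ros-kinetic-turtlebot
sudo apt-get update
sudo apt-get install ros-kinetic-turtlebot ros-kinetic-turtlebot-apps \
    ros-kinetic-turtlebot-interactions ros-kinetic-kobuki-ftdi
```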
- Install rtabmap_ros following the instructions found here:
- Install the exploration package:
cd ~/catkin_ws/src
git clone https://github.com/bnurbekov/Turtlebot_Navigation
cd ..
catkin_make
- Install the Google Cloud SDK following the instructions found here:
- Enable the Google Vision API and install its client library following the instructions found here:
- Clone this repository and run catkin_make in a catkin workspace:
cd ~/catkin_ws/src
git clone https://github.com/mcgeorgiev/terrapin-ros
cd ..
catkin_make
- (OPTIONAL) A TensorFlow model will need to be trained and placed in ~/terrapin-ros/src/tensorflow/tf_files, following the instructions found here: https://codelabs.developers.google.com/codelabs/tensorflow-for-poets/#0
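The codelab leaves a retrained_labels.txt alongside the retrained graph in tf_files, and classification produces one softmax score per label. A minimal sketch of how those scores are paired with labels and ranked (the function name and the example labels/scores are illustrative, not taken from this project's code):

```python
def top_labels(labels, scores, n=3):
    """Pair each label with its score and return the n best, descending."""
    ranked = sorted(zip(labels, scores), key=lambda pair: pair[1], reverse=True)
    return ranked[:n]

# One line per class in retrained_labels.txt (illustrative values):
labels = ["chair", "table", "door", "bin"]
# Softmax output from running the retrained graph on an image (dummy values):
scores = [0.07, 0.81, 0.02, 0.10]

print(top_labels(labels, scores, n=2))  # [('table', 0.81), ('bin', 0.1)]
```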
a) Kinect v2:
b) Zed Camera:
- https://www.stereolabs.com/blog/index.php/2015/09/07/use-your-zed-camera-with-ros/ (including the SDK installation instructions)
cd ~/catkin_ws/src/terrapin-ros
pip install -r requirements.txt
- Ensure that the catkin workspace directory is sourced for all terminals used. Usually:
source ~/catkin_ws/devel/setup.bash
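To avoid sourcing by hand in every terminal, the line can be appended to ~/.bashrc once (workspace path assumed to be ~/catkin_ws):

```shell
# One-time convenience: every new terminal then sources the overlay automatically.
echo "source ~/catkin_ws/devel/setup.bash" >> ~/.bashrc
```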
- Run the launch file specific to your camera, either:
roslaunch terrapin-ros kinect.launch
OR
roslaunch terrapin-ros zed.launch
This will launch the turtlebot_bringup, rtabmap_ros, RTAB-Map visualisation, camera-specific and depthimage_to_laserscan nodes.
- Run the object detection programme:
rosrun terrapin-ros stream.py kinect2
OR
rosrun terrapin-ros stream.py zed
- Run RViz:
...
- Run the frontier exploration nodes:
rosrun final_project control.py
rosrun final_project mapping.py
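The exploration package's internals aren't documented here, but frontier exploration generally means finding free cells that border unknown space in the occupancy grid and driving towards them. A minimal sketch of frontier detection (numpy; cell values follow the ROS OccupancyGrid convention of -1 unknown, 0 free, 100 occupied; the function name is illustrative, not from the package):

```python
import numpy as np

def frontier_cells(grid):
    """Return (row, col) indices of free cells with a 4-connected unknown neighbour."""
    free = grid == 0
    unknown = grid == -1
    near_unknown = np.zeros(grid.shape, dtype=bool)
    near_unknown[1:, :] |= unknown[:-1, :]   # unknown cell above
    near_unknown[:-1, :] |= unknown[1:, :]   # unknown cell below
    near_unknown[:, 1:] |= unknown[:, :-1]   # unknown cell to the left
    near_unknown[:, :-1] |= unknown[:, 1:]   # unknown cell to the right
    return np.argwhere(free & near_unknown)

grid = np.array([[0,   0, -1],
                 [0, 100, -1],
                 [0,   0,  0]])
print(frontier_cells(grid))  # frontiers at (0, 1) and (2, 2)
```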
- The Turtlebot should start mapping! Alternatively, autonomous navigation can be replaced with tele-operation: instead of running the frontier exploration nodes, run:
roslaunch turtlebot_teleop keyboard_teleop.launch
- A calibration tool can be run which will create a text file with calibration details, used to mask out the selected area (e.g. flooring). Run:
python calibration.py
then point the camera at the area and press 'q' or 'p'.
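The details of calibration.py aren't shown here, but masking a calibrated texture typically amounts to sampling per-channel bounds from the selected region and suppressing every pixel that falls inside them. A sketch of that idea (numpy; the function name and bounds are illustrative assumptions, not this project's API):

```python
import numpy as np

def mask_by_range(image, lower, upper):
    """Boolean mask of pixels whose every channel lies within [lower, upper].

    image: H x W x C array; lower/upper: per-channel bounds sampled from
    the calibrated region (e.g. the floor). Matching pixels would be
    masked out before object detection.
    """
    lower = np.asarray(lower)
    upper = np.asarray(upper)
    in_range = (image >= lower) & (image <= upper)
    return in_range.all(axis=-1)

# Toy 1x3 RGB "image": only the middle pixel falls inside the bounds.
img = np.array([[[200, 10, 10], [120, 118, 115], [5, 5, 250]]])
print(mask_by_range(img, lower=[100, 100, 100], upper=[140, 140, 140]))
```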