# Competition 1
The goal of this project is to demonstrate how a TurtleBot can pursue another TurtleBot as well as evade a pursuing TurtleBot. Evasion uses the simulated laser scanner, whereas pursuit uses object detection through computer vision.
## Requirements

- TurtleBot with an attached camera
- For the follower node, an NVIDIA GPU and CUDA 8+ are required.
## Dependencies

- Base code (provided by email prior to the competition) for tracking a given ROI: https://github.com/pirobot/rbx1/blob/indigo-devel/rbx1_apps/nodes/object_tracker.py
- Object detection: https://github.com/pjreddie/darknet
For the YOLO detector, a weights file and a cfg file are provided in the `weights` and `cfg` folders respectively. If you wish to modify these, change the params to point at your own cfg/weights files.
## Building

Run `catkin_make` in the src directory to build the YOLO detector.
## Running in simulation

```bash
roslaunch comp_1 multiple_turtlebot_world.launch

# Evader:
roslaunch comp_1 sim_evader.launch

# Follower:
roslaunch comp_1 sim_follower.launch
```
## Running on the TurtleBot

```bash
roslaunch turtlebot_bringup minimal.launch
roslaunch turtlebot_bringup 3dsensors.launch

# Evader:
roslaunch comp_1 comp_evader.launch

# Follower:
roslaunch comp_1 comp_follower.launch
```
## Concepts and code
Evader: The evader does a random walk, turning at set intervals while using the simulated laser scanner to avoid obstacles.
From `evader.py`:

```python
# line 48:
if min(ranges[100:540]) < 1 or rospy.Time.now() > state_change_time:
    driving_forward = False
    state_change_time = rospy.Time.now() + rospy.Duration(5)
    twist.linear.x = 0
    if sum(ranges[0:320]) < sum(ranges[320:639]):
        turn_left = True
    else:
        turn_right = True
```
The slice `ranges[100:540]` keeps the robot from looking too wide, so that it can still pass through a narrow path. When the robot needs to turn, it always turns toward the direction with more open space.
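The decision logic above can be sketched as a standalone function (a simplified restatement of the snippet, not the actual evader node; `ranges` stands in for the 640-beam LaserScan ranges array):

```python
def choose_action(ranges, obstacle_dist=1.0):
    """Return 'forward', 'left', or 'right' for a 640-beam laser scan."""
    # Only the central beams (100:540) are checked, so the robot can
    # still slip through narrow gaps without reacting to its sides.
    if min(ranges[100:540]) >= obstacle_dist:
        return "forward"
    # Turn toward whichever half of the scan has more total open space.
    if sum(ranges[0:320]) < sum(ranges[320:639]):
        return "left"
    return "right"
```

With this convention, a clear central corridor keeps the robot driving forward; otherwise it turns toward the half of the scan that sums to more free range.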
Follower: The follower takes in an ROI and attempts to keep the ROI in the center of the camera image. In this competition we use a YOLO detector, which runs inference with a trained model and returns ROIs from the camera feed. More info can be found at https://pjreddie.com/darknet/yolo/
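The centering behaviour can be illustrated with a simple proportional rule (a hedged sketch only; the gain, dead zone, and function name are illustrative and not taken from `object_tracker.py`):

```python
def angular_cmd(roi_x, roi_width, image_width, gain=2.0, dead_zone=0.05):
    """Angular velocity that rotates the camera toward the ROI's center."""
    roi_center = roi_x + roi_width / 2.0
    # Normalized horizontal offset in [-0.5, 0.5];
    # negative means the ROI sits left of the image center.
    offset = (roi_center - image_width / 2.0) / float(image_width)
    if abs(offset) < dead_zone:  # close enough to centered: stop turning
        return 0.0
    return -gain * offset        # turn toward the ROI
```

A dead zone like this keeps the robot from oscillating when the ROI is already near the image center.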
From `object_tracker.py`:

```python
# line 179
if float(roi_area) / float(image_area) < 0.3:
    self.move_cmd.linear.x = 0.5
```
Although these variables were originally defined as areas, we ended up using the widths of the ROI and the image: the robot moves forward along the x axis whenever the ROI is narrower than 30% of the image's width.
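That width-based rule can be sketched in isolation (the 0.3 threshold and 0.5 speed come from the snippet above; the function itself is illustrative):

```python
def linear_cmd(roi_width, image_width, threshold=0.3, speed=0.5):
    """Forward velocity: approach the target until its ROI looks big enough."""
    # A narrow ROI means the target is far away, so keep driving toward it.
    if float(roi_width) / float(image_width) < threshold:
        return speed
    return 0.0
```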
From `yolo_detector.cpp`:

```cpp
// line 19
sub = it.subscribe(camera, 1, &YoloDetector::imageCallback, this);
pub = private_nh.advertise<sensor_msgs::RegionOfInterest>("roi", 1);
```
We subscribe to the camera topic and run the YOLO detection algorithm in the image callback. The detections are then used to build an ROI that is published to the `/roi` topic.
```cpp
// lines 108-141
nodeHandle.getParam("class_names", classNames);
...
std::string weightsPath =
    ros::package::getPath("comp_1") + "/weights/" + weightsModel;
...
nodeHandle.param("cfg_model", cfgModel, std::string("yolo-arvp.cfg"));
...
std::string configPath =
    ros::package::getPath("comp_1") + "/cfg/" + cfgModel;
```
The above code uses ROS parameters to set the weights model, the cfg for the model, and the class names.
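In plain Python, the resulting path resolution reads roughly like this (an illustrative analog of the roscpp code above; `params`, `get_package_path`, and the `weights_model` key are assumptions, not names from the repository):

```python
def resolve_model_paths(params, get_package_path):
    """Build absolute cfg/weights paths from ROS-style parameters."""
    # Default mirrors the C++ snippet's nodeHandle.param(...) call.
    cfg_model = params.get("cfg_model", "yolo-arvp.cfg")
    weights_model = params["weights_model"]   # assumed parameter key
    base = get_package_path("comp_1")         # stand-in for ros::package::getPath
    return base + "/cfg/" + cfg_model, base + "/weights/" + weights_model
```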
The evasion part of this competition went much better than the following. We evaded fairly well, given that the other teams lost track of us most of the time. The simulated laser scanner proved useful for obstacle avoidance, preventing collisions with the walls and boxes littered around the arena in the later rounds.
Tracking, on the other hand, was much more disappointing. We lost track of the other robot whenever we got either too close or too far, and were unable to keep track of another robot for a whole minute.
An improvement to both the evader and the follower would be to lower the position of the camera. In one round the evader bumped into another robot because the camera could not scan low enough to see something directly in front of it. The same goes for the follower: if the other robot was too close, its base would fall out of the camera's view and we would lose the track.
Example of the program running in Gazebo: on the right, rqt displays a debug image of the ROI being returned for `object_tracker.py`.