Programming a Real Self-Driving Car for the UDACITY Nanodegree
| Image | Name | Nickname | Email |
|---|---|---|---|
| ![]() | Camilo Gordillo | Camilo | camigord@gmail.com |
| ![]() | Stefano Salati | Stefano | stef.salati@gmail.com |
| ![]() | Stefan Rademacher | Stefan | rademacher@outlook.com |
| ![]() | Thomas Grelier | Thomas | masto.grelier@gmail.com |
The objective of this project was to write ROS nodes implementing the core functionality of an autonomous vehicle system: traffic light detection, waypoint planning, and control.
The car's behaviour is regulated by a finite state machine (FSM) with two states: normal driving and braking.
The car drives normally when no traffic light is detected within a range of 100 m or when the detected traffic light is green. This is achieved by setting the speed of all waypoints ahead to their default values.
As soon as a red traffic light is detected, the system computes the minimum braking distance to check whether braking is possible. If it is, a braking deceleration, dependent on the current speed and the distance to the traffic light, is applied: the speed of all waypoints between the car and the traffic light is set so that it decreases gently to zero at the light, and the speeds of all waypoints beyond the traffic light are set to zero.
One scenario is worth noting: when a red light is detected, the car starts braking gently; if the light turns green while the car is still braking, the car switches back to normal driving and accelerates to normal speed. This replicates the usual human behaviour of not waiting until the last moment to brake, but easing off the throttle as soon as a red light is seen.
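The braking-state speed shaping described above can be sketched as follows. This is a minimal illustration, not the project's actual node code: the helper name and the deceleration constant are assumptions; the physics is the standard constant-deceleration relation v = sqrt(2·a·d).

```python
import math

MAX_DECEL = 1.0  # assumed comfortable deceleration in m/s^2 (illustrative value)

def decelerate_waypoint_speeds(default_speeds, distances_to_light):
    """Return target speeds that decrease gently to zero at the stop line.

    default_speeds: default target speed of each waypoint ahead of the car (m/s)
    distances_to_light: distance from each waypoint to the traffic light (m);
                        zero or negative means at or beyond the light.
    """
    targets = []
    for v_default, d in zip(default_speeds, distances_to_light):
        if d <= 0.0:
            # waypoints at or beyond the light get speed zero
            targets.append(0.0)
        else:
            # sqrt(2*a*d) is the speed from which a constant deceleration a
            # brings the car to rest after distance d; never exceed the default
            targets.append(min(math.sqrt(2.0 * MAX_DECEL * d), v_default))
    return targets
```

For example, with an 11 m/s default speed, a waypoint 50 m before the light is capped at 10 m/s, one 2 m before it at 2 m/s, and waypoints past the light at 0.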
Throttle and brake are then controlled by a PID controller, tuned with the following parameters:
| Parameter | Value |
|---|---|
| VEL_PID_P | 0.8 |
| VEL_PID_I | 0.0001 |
| VEL_PID_D | 0.01 |
We assume that there is only one traffic light color in Carla's field of view at a time. Thus the detection with the highest score is selected to identify the traffic light color. If no detection has a score higher than 50%, the classifier function returns "UNKNOWN". The detection itself is done using a Convolutional Neural Network: we used the TensorFlow Object Detection API with detection models that are pre-trained on the COCO dataset. To be exact, we used two different models, "ssd_inception_v2_coco" and "faster_rcnn_inception_v2_coco".
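The selection logic can be sketched as follows. The function name and the label map are illustrative assumptions; `scores` and `classes` stand for the per-detection arrays that the TensorFlow Object Detection API returns for one image.

```python
SCORE_THRESHOLD = 0.5  # minimum confidence to accept a detection

# class ids are assumed to match the label map used at training time
LABELS = {1: 'GREEN', 2: 'RED', 3: 'YELLOW'}

def classify(scores, classes):
    """Pick the single highest-scoring detection; below threshold -> UNKNOWN."""
    if not scores:
        return 'UNKNOWN'
    best = max(range(len(scores)), key=lambda i: scores[i])
    if scores[best] < SCORE_THRESHOLD:
        return 'UNKNOWN'
    return LABELS.get(classes[best], 'UNKNOWN')
```

Because only one light color is assumed visible at a time, taking the single best detection is sufficient and any lower-scoring boxes are ignored.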
Our dataset used for the training of the classifier consists of:
- 280 images from the Udacity Simulator
- 710 images from the training bag that was recorded on the Udacity self-driving car
All images of the dataset were labeled manually. The dataset then had to be converted into the TFRecord format.
We have trained two classifiers: one for the simulator and one for the real world. The training was done using the AWS Deep Learning AMI and the following configuration parameters:
- num_classes: 4 ('Green','Red','Yellow','Unknown')
- num_steps: 10000
- max_detections_per_class: 10
The remaining parameters were taken from the sample configuration files provided by the TensorFlow team (https://github.com/tensorflow/models/tree/master/research/object_detection/samples/configs).
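For illustration, these parameters map onto the pipeline config roughly as in the abridged sketch below; the structure follows the sample configs, and every omitted field would come from them unchanged.

```
model {
  ssd {
    num_classes: 4            # Green, Red, Yellow, Unknown
    post_processing {
      batch_non_max_suppression {
        max_detections_per_class: 10
      }
    }
  }
}
train_config {
  num_steps: 10000
}
```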
We have then validated the trained classifiers on 20 "unseen" images. Both classifiers (simulator and real world) correctly detected and classified all traffic lights on the test images.
- Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. [Ubuntu downloads can be found here](https://www.ubuntu.com/download/desktop).
- If using a Virtual Machine to install Ubuntu, use the following configuration as a minimum:
  - 2 CPUs
  - 2 GB system memory
  - 25 GB of free hard drive space

  The Udacity-provided virtual machine already has ROS and Dataspeed DBW installed, so you can skip the next two steps if you are using it.
- Follow these instructions to install ROS:
  - [ROS Kinetic](http://wiki.ros.org/kinetic/Installation/Ubuntu) if you have Ubuntu 16.04.
  - [ROS Indigo](http://wiki.ros.org/indigo/Installation/Ubuntu) if you have Ubuntu 14.04.
- Install [Dataspeed DBW](https://bitbucket.org/DataspeedInc/dbw_mkz_ros). On a workstation that already has ROS installed, use the [One Line SDK Install (binary)](https://bitbucket.org/DataspeedInc/dbw_mkz_ros/src/81e63fcc335d7b64139d7482017d6a97b405e250/ROS_SETUP.md?fileviewer=file-view-default).
- Download the [Udacity Simulator](https://github.com/udacity/CarND-Capstone/releases).
[Install Docker](https://docs.docker.com/engine/installation/), then build the docker container:

```bash
docker build . -t capstone
```

Run the docker container:

```bash
docker run -p 4567:4567 -v $PWD:/capstone -v /tmp/log:/root/.ros/ --rm -it capstone
```

To set up port forwarding, please refer to the [instructions from term 2](https://classroom.udacity.com/nanodegrees/nd013/parts/40f38239-66b6-46ec-ae68-03afd8a601c8/modules/0949fca6-b379-42af-a919-ee50aa304e6a/lessons/f758c44c-5e40-4e01-93b5-1a82aa4e044f/concepts/16cf4a78-4fc7-49e1-8621-3450ca938b77).
1. Clone the project repository

```bash
git clone https://github.com/camigord/System-Integration-Project
```

2. Install python dependencies

```bash
cd CarND-Capstone
pip install -r requirements.txt
```

3. Build the project

```bash
cd ros
catkin_make
source devel/setup.sh
```

We have developed two classifiers: the first one is an SSD, the second an RCNN. Simulation works well with both of them.
- To launch the project with the SSD classifier, type the following command. This is also used if no model parameter is specified or if the original launch file (which doesn't specify this parameter) is used.

```bash
roslaunch launch/styx.launch model:='frozen_inference_graph_simulation_ssd.pb'
```

- To launch the project with the RCNN classifier, type the following command:

```bash
roslaunch launch/styx.launch model:='frozen_inference_graph_simulation_rcnn.pb'
```

Then launch the simulator.
1. Download the [training bag](https://s3-us-west-1.amazonaws.com/udacity-selfdrivingcar/traffic_light_bag_file.zip) that was recorded on the Udacity self-driving car.
2. Unzip the file

```bash
unzip traffic_light_bag_file.zip
```

3. Play the bag file

```bash
rosbag play -l traffic_light_bag_file/traffic_light_training.bag
```

4. Launch the project in site mode

```bash
cd CarND-Capstone/ros
roslaunch launch/site.launch model:='frozen_inference_graph_real.pb'
```