This is our Capstone project repo for the final project of the Udacity Self-Driving Car Nanodegree. The goal of this project is to enable a car to drive itself around a controlled track, recognizing and responding to the traffic lights it encounters.
We implement speed and steering using a PID controller with angular velocity as the goal. The throttle/brake logic is implemented as follows:
```python
# If positive difference: throttle = nonzero, brake = 0
# Else: throttle = 0, brake = nonzero
if (del_vel > 0) and (current_velocity < target_velocity):
    throttle = self.throttle_limit * (1.0 - (current_velocity / target_velocity))
    brake = 0
else:
    throttle = 0
    brake = torque
```
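The PID controller driving this logic can be sketched as follows. This is a minimal illustration only; the class name, gains, and reset behavior are assumptions, not the repo's actual implementation:

```python
class PID(object):
    """Minimal PID controller sketch: output = kp*e + ki*integral(e) + kd*de/dt."""

    def __init__(self, kp, ki, kd):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        # Accumulate the integral term over the timestep
        self.integral += error * dt
        # Derivative term is zero on the very first step
        derivative = 0.0 if self.prev_error is None else (error - self.prev_error) / dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * derivative
```

In the throttle/brake logic above, the error fed into such a controller would be the velocity difference `del_vel`.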
The waypoint_updater finds the next waypoints, starting from the waypoint closest to the vehicle's current position and orientation. It then generates the final_waypoints with the velocity defined in the launch file.
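The closest-waypoint lookup can be sketched like this. The function name, the plain-tuple waypoint representation, and the linear scan are illustrative assumptions; a real node would read waypoints from the `styx_msgs/Lane` message and might use a KD-tree for speed:

```python
import math

def next_waypoint_index(waypoints, x, y, yaw):
    """Return the index of the closest waypoint ahead of the vehicle.

    waypoints: list of (x, y) tuples; x, y, yaw: current vehicle pose.
    """
    # Closest waypoint by squared Euclidean distance
    closest = min(range(len(waypoints)),
                  key=lambda i: (waypoints[i][0] - x) ** 2 + (waypoints[i][1] - y) ** 2)
    # If that waypoint is behind us (negative projection onto the heading
    # vector), advance to the next waypoint along the track
    wx, wy = waypoints[closest]
    heading_dot = (wx - x) * math.cos(yaw) + (wy - y) * math.sin(yaw)
    if heading_dot < 0:
        closest = (closest + 1) % len(waypoints)
    return closest
```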
When a traffic_waypoint is published, waypoint_updater generates a smooth deceleration velocity profile for the final_waypoints using a spline. To keep deceleration from taking too long, a MAX_BRAKE_DISTANCE parameter is defined: we only start to decelerate once the distance between the current location and the stopping waypoint is less than MAX_BRAKE_DISTANCE.
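One way to realize such a profile is a single cubic Hermite segment (a simple spline) with zero slope at both ends, so velocity eases from the current speed down to zero at the stop line. This sketch is illustrative only; the function name and signature are not the repo's actual API:

```python
def decel_velocity(v0, dist_to_stop, max_brake_distance):
    """Target velocity as a function of remaining distance to the stop waypoint."""
    if dist_to_stop >= max_brake_distance:
        return v0      # still outside the braking zone: keep current speed
    if dist_to_stop <= 0.0:
        return 0.0     # at or past the stop waypoint: full stop
    # Cubic Hermite ease: smooth (zero-slope) at both v0 and 0
    t = dist_to_stop / max_brake_distance
    return v0 * (3.0 * t ** 2 - 2.0 * t ** 3)
```

Evaluating this at each of the final_waypoints yields a velocity profile that blends smoothly from cruising speed to a stop.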
To save bandwidth, waypoint_loader only publishes base_waypoints at a rate of 1 Hz, and stops publishing after 5 seconds.
Traffic Light Detection
We trained machine learning models to detect traffic lights in camera images, so that we know when the vehicle is approaching a light and can publish that information on a ROS topic.
Traffic Light Classification
The traffic light classifier is based on SqueezeNet, an architecture that has seen a lot of success on object detection problems. The advantage of SqueezeNet over other networks is that it provides AlexNet-level accuracy with 50 times fewer parameters. Images are preprocessed and resized to 300x300 before being fed to the network. For the model to be effective in this scenario, the hyperparameters have to be chosen carefully. For instance, our number of classes is set to 3 (Red, Yellow, and Green, excluding no light). We set our learning rate to 1e-4 and the number of epochs to 2000. Lastly, we handle the data in batch sizes of
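The preprocessing step and hyperparameters above can be sketched as follows. The constant names, the nearest-neighbour resize, and the [0, 1] scaling are assumptions for illustration; a real pipeline would likely resize with OpenCV or PIL:

```python
import numpy as np

# Hypothetical names for the hyperparameters quoted in the text
INPUT_SIZE = (300, 300)
NUM_CLASSES = 3        # Red, Yellow, Green (no-light excluded)
LEARNING_RATE = 1e-4
EPOCHS = 2000

def preprocess(image):
    """Nearest-neighbour resize to INPUT_SIZE and scale pixels to [0, 1].

    image: HxWx3 uint8 array. Returns a 300x300x3 float32 array.
    """
    h, w = image.shape[:2]
    rows = np.arange(INPUT_SIZE[0]) * h // INPUT_SIZE[0]
    cols = np.arange(INPUT_SIZE[1]) * w // INPUT_SIZE[1]
    resized = image[rows][:, cols]
    return resized.astype(np.float32) / 255.0
```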
Be sure that your workstation is running Ubuntu 16.04 Xenial Xerus or Ubuntu 14.04 Trusty Tahr. Ubuntu downloads can be found here.
If using a Virtual Machine to install Ubuntu, use the following configuration as minimum:
- 2 CPUs
- 2 GB system memory
- 25 GB of free hard drive space
The Udacity provided virtual machine has ROS and Dataspeed DBW already installed, so you can skip the next two steps if you are using this.
Follow these instructions to install ROS
- Use this option to install the SDK on a workstation that already has ROS installed: One Line SDK Install (binary)
Download the Udacity Simulator.
- Clone the project repository
```bash
git clone https://github.com/davidawad/Capstone.git
```
- Install python dependencies
```bash
cd Capstone
pip install -r requirements.txt
```
- Make and run styx
```bash
cd ros
catkin_make
source devel/setup.sh
roslaunch launch/styx.launch
```
- Run the simulator
Real world testing
- Download the training bag that was recorded on the Udacity self-driving car
- Unzip the file
- Play the bag file
```bash
rosbag play -l traffic_light_bag_files/loop_with_traffic_light.bag
```
- Launch your project in site mode
```bash
cd Capstone/ros
roslaunch launch/site.launch
```