cruise-control/CarND-Capstone
Self-Driving Car Final Project Report

Team Name: Cruise Control

  _____            _             _____            _             _
 / ____|          (_)           / ____|          | |           | |
| |     _ __ _   _ _ ___  ___  | |     ___  _ __ | |_ _ __ ___ | |
| |    | '__| | | | / __|/ _ \ | |    / _ \| '_ \| __| '__/ _ \| |
| |____| |  | |_| | \__ \  __/ | |___| (_) | | | | |_| | | (_) | |
 \_____|_|   \__,_|_|___/\___|  \_____\___/|_| |_|\__|_|  \___/|_|_

Team Member Names:

Garrett Pitcher garrett.pitcher@gmail.com
Mitch Maifeld udacity@maifeld.name
Hanqiu Jiang hanq.jiang@gmail.com
Shaun Cosgrove shaun.cosgrove@bogglingtech.com
Wei Guo guoweist@foxmail.com

System Architecture

Initial Architecture

Notes

Generated graph of the resulting nodes, topics and pub-sub details

Perception

Traffic light classification is based on a Single Shot MultiBox Detector (SSD) deep neural network, which both classifies each object and predicts its bounding box. For greater classification accuracy, we trained two models: one on simulator images and one on images taken from a run around the course on Carla (extracted from a rosbag provided by Udacity). These images were extracted and labeled using the VIA tool (see below). A pre-trained model from the TensorFlow detection model zoo was re-trained with the TensorFlow Object Detection API. The underlying MobileNet architecture trades some accuracy for high efficiency, enabling us to detect traffic lights at a high frame rate.
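The detector's raw output still has to be post-processed: detections below a confidence threshold are discarded and class IDs are mapped to light states. The snippet below is a minimal sketch of that step; the class-ID mapping and the threshold value are illustrative assumptions, not the exact values used in the project.

```python
# Hypothetical mapping from SSD class IDs to traffic light states.
LIGHT_STATES = {1: "red", 2: "yellow", 3: "green"}

def filter_detections(boxes, scores, classes, min_score=0.5):
    """Keep detections with score >= min_score.

    boxes   -- list of [ymin, xmin, ymax, xmax] in normalized coordinates
    scores  -- list of confidence scores in [0, 1]
    classes -- list of integer class IDs
    Returns a list of (state, score, box) tuples, highest score first.
    """
    kept = [
        (LIGHT_STATES.get(c, "unknown"), s, b)
        for b, s, c in zip(boxes, scores, classes)
        if s >= min_score
    ]
    return sorted(kept, key=lambda d: d[1], reverse=True)
```

In practice the planner only needs the state of the single most confident detection, so the first tuple of the returned list is usually what gets published.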

Because no suitable machine-learning acceleration hardware was available to the development team, we chose to classify only a reduced subset of video frames. The planner is designed with this limitation in mind, and the end result is a dependable traffic light detection mechanism that balances accuracy against performance.
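One simple way to implement this throttling is to run the classifier on every Nth camera frame and reuse the cached result in between. The sketch below illustrates the idea; the skip interval and the classifier interface are assumptions for illustration, not the project's actual code.

```python
class ThrottledClassifier:
    """Run an expensive classifier on every `interval`-th frame,
    returning the cached result for the frames in between."""

    def __init__(self, classify_fn, interval=3):
        self.classify_fn = classify_fn  # expensive per-frame classifier
        self.interval = interval
        self.frame_count = 0
        self.last_result = None

    def __call__(self, frame):
        # Only invoke the real classifier on every `interval`-th frame.
        if self.frame_count % self.interval == 0:
            self.last_result = self.classify_fn(frame)
        self.frame_count += 1
        return self.last_result
```

The staleness this introduces is bounded by the skip interval times the camera period, which is why the planner must be built with the latency in mind.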

The vehicle's point of view as it approaches a set of traffic lights

Detection in simulator:

Detection in rosbags:

Planning

The planner merges high-level information about the world, the current and past states of the vehicle, and what the vehicle currently perceives, and generates both a position and a velocity track for the vehicle to follow. Based on the vehicle's current position, it selects a set of target world waypoints to visit and, using a spline fitting utility, creates intermediate target waypoints (with associated velocities) for the vehicle to follow. This ensures that the vehicle always tracks a smooth path and re-tracks to the center of the lane if it ever drifts off the target. The design can be easily extended to support lane keeping, lane changing, or obstacle avoidance, which would allow operation in a multi-lane environment with other traffic. Traffic lights are handled by determining which set of lights the vehicle must obey next and 'looking' at their state as the vehicle approaches. Based on the dynamic interaction of the vehicle and the traffic lights, the autopilot assesses the intersection state and chooses an appropriate behavior: prepare to stop, stop, or proceed.
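The intermediate-waypoint step can be sketched as a cubic spline fit through the selected base waypoints, parameterized by distance along the path. Here scipy's `CubicSpline` stands in for the Atsushi Sakai spline library actually used, and the waypoint coordinates are made-up values.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def densify_waypoints(xs, ys, n_points=50):
    """Fit a cubic spline through sparse (x, y) waypoints,
    parameterized by cumulative chord length, and sample
    n_points evenly spaced intermediate waypoints."""
    xs, ys = np.asarray(xs, float), np.asarray(ys, float)
    # Cumulative distance along the coarse path is the spline parameter.
    ds = np.hypot(np.diff(xs), np.diff(ys))
    s = np.concatenate([[0.0], np.cumsum(ds)])
    sx, sy = CubicSpline(s, xs), CubicSpline(s, ys)
    s_fine = np.linspace(0.0, s[-1], n_points)
    return sx(s_fine), sy(s_fine)
```

Parameterizing by arc length rather than by x keeps the fit well-defined even when the road curves back on itself; the target velocity for each intermediate waypoint can then be assigned along the same parameter.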

RVIZ was used to visualize the generated way-points from the planner.

Control

A PID controller is used for the throttle and a PD controller for the brake. We began tuning by using the on/off relay method to determine the time periods to reach maximum desired speed and to decelerate to a stop. We used these as inputs to the Ziegler-Nichols PID ratios to initialize our tuning, then manually refined the proportional, integral, and derivative gains to reach the response shown in the figure below. The plot, created in rqt_plot, shows the throttle PID's recovery stability. The blue line is the commanded throttle setting; the red line is the vehicle's response. Prior to 282 in the figure, the vehicle was manually driven above the desired velocity. When manual control was released at 282, the automatic throttle held zero until the vehicle coasted back to the desired speed, beginning at about 289 and reaching steady state at 291. At 297, we switched to manual again and applied the brakes to reach a stop. At 300, we returned to automatic control, and the vehicle applied full throttle (as it would from a stop light) to clear the intersection and quickly reach the desired speed. It reached that speed near 315 with minimal overshoot and a quick return to the set point.
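The tuning procedure above can be sketched in code: the relay test yields an ultimate gain Ku and oscillation period Tu, the classic Ziegler-Nichols table converts those into initial PID gains, and a simple PID loop applies them. The controller below is a minimal sketch with output clamping to the throttle range, not the project's actual implementation.

```python
def ziegler_nichols_pid(ku, tu):
    """Classic Ziegler-Nichols gains for a full PID controller:
    Kp = 0.6*Ku, Ki = 1.2*Ku/Tu, Kd = 0.075*Ku*Tu."""
    return 0.6 * ku, 1.2 * ku / tu, 0.075 * ku * tu

class PID:
    def __init__(self, kp, ki, kd, output_limits=(0.0, 1.0)):
        self.kp, self.ki, self.kd = kp, ki, kd
        self.low, self.high = output_limits
        self.integral = 0.0
        self.prev_error = None

    def step(self, error, dt):
        """Advance the controller one timestep; returns the clamped command."""
        self.integral += error * dt
        derivative = 0.0
        if self.prev_error is not None:
            derivative = (error - self.prev_error) / dt
        self.prev_error = error
        out = self.kp * error + self.ki * self.integral + self.kd * derivative
        # Clamp to the actuator range (e.g. throttle in [0, 1]).
        return min(self.high, max(self.low, out))
```

The Ziegler-Nichols gains are only a starting point; as described above, the final gains were refined by hand against the observed step response.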

Libraries & Acknowledgements

Cubic Spline library by Atsushi Sakai

Tensorflow Object Detection

An extremely approachable article, Raccoon Training by Dat Tran, guided our initial steps into the TensorFlow object detection utilities. Tools from his repository were used or modified to create the protobuf binaries for training the object detector.

VGG Image Annotator (VIA)
