Behavioral Cloning

Using Keras to train a deep neural network for predicting steering angles based on camera input. Trained on a Unity3D simulator.

TL;DR: Watch the Video - Gone in 60 Seconds!

This video contains subtitles mentioning salient points and challenges encountered during the project. Overcoming these is no laughing matter! 😉

YouTube Video


Goal

The goals of this project are the following:

  • Use the simulator to collect data of good driving behavior
  • Build a convolutional neural network in Keras that predicts steering angles from images
  • Train and validate the model with a training and validation set
  • Test that the model successfully drives around track one without leaving the road

Files

My project includes the following files:

  • model.py - the script to create and train the model
  • drive.py - for driving the car in autonomous mode
  • model.h5 - a trained convolutional neural network with weights. Download
  • utils.py - utilities shared across the module
  • settings.py - settings shared across the module
  • plotter.py - plots histograms, predictions, loss curves, etc.
  • run.sh - cleanup, train, validate, plot/visualize
  • install.sh - install dependencies
  • README.md - description of the development process (this file)
  • Udacity Dataset - Track1 Dataset Used for Training. Download here
  • Unity3D Simulator - Github Repo. Download MacOS

The repository includes all files required to run the simulator in autonomous mode.

Code Quality

Functional Code

Using the Udacity-provided simulator and my drive.py file, the car can be driven autonomously around the track by executing:

$ python drive.py model.h5

Comments inline with code

The model.py file contains the code for training and saving the convolutional neural network. The file shows the pipeline I used for training and validating the model, and it contains detailed comments explaining how the code works.

Model Architecture & Preprocessing

Architecture: NVIDIA End-to-End Deep Learning Network

My model consists of a convolutional neural network with 3x3 filter sizes and depths between 32 and 128 (model.py).

The model includes ReLU layers to introduce nonlinearity.
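A minimal Keras sketch consistent with that description: 66x200 grayscale input (see Image Preprocessing below), 3x3 convolutions with depths from 32 to 128, ReLU activations, and the 10% dropout discussed under Controlling Overfitting. The exact layer stack is in model.py; treat this as an approximation.

from keras.models import Sequential
from keras.layers import Conv2D, Dense, Dropout, Flatten

HEIGHT, WIDTH = 66, 200   # preprocessed input size

model = Sequential([
    # 3x3 convolutions, depths 32 -> 64 -> 128, strided to shrink the feature maps
    Conv2D(32, (3, 3), strides=(2, 2), activation='relu',
           input_shape=(HEIGHT, WIDTH, 1)),
    Conv2D(64, (3, 3), strides=(2, 2), activation='relu'),
    Conv2D(128, (3, 3), strides=(2, 2), activation='relu'),
    Flatten(),
    # fully connected head regressing a single (scaled) steering angle
    Dense(100, activation='relu'),
    Dropout(0.1),   # 10% dropout
    Dense(50, activation='relu'),
    Dense(1),
])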

Objective, Loss Function and Hyperparameter Tuning

I used Mean Squared Error (MSE) as the loss function, which is appropriate for training the model to follow the recorded steering angles closely without matching them exactly. It was important not to let the loss get very close to zero: a near-zero loss indicates memorization, and thus overfitting.

The optimization landscape is non-convex, so there were instances where training would not converge. Retraining reinitializes the weights randomly and provides another shot at convergence. The model used the Adam optimizer, so the learning rate was not tuned manually. I initially used Stochastic Gradient Descent, but Adam is generally a better choice in practice.
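In Keras, that setup amounts to a compile step along these lines (Adam with its default learning rate):

from keras.optimizers import Adam

model.compile(optimizer=Adam(), loss='mse')   # MSE loss, default Adam learning rate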

Controlling Overfitting

The model contains Dropout layers in order to reduce overfitting; the dropout rate was set to 10%.

The loss function and predictions were carefully monitored to ensure the loss did not drop too low and the predictions were not a perfect match, which guards against overfitting. The model was ultimately tested by running it through the simulator and verifying that the vehicle could stay on the track.
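For illustration, such monitoring can be done from the History object that model.fit() returns; plotter.py serves this purpose in the repo, but the snippet below is a generic sketch rather than its actual code:

import matplotlib.pyplot as plt

# `history` is the object returned by model.fit(...)
plt.plot(history.history['loss'], label='training loss')
plt.plot(history.history['val_loss'], label='validation loss')
plt.xlabel('epoch')
plt.ylabel('MSE loss')
plt.legend()
plt.show()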

Image Preprocessing

The image data is preprocessed in the model using the following techniques (a sketch follows the list):

  • RGB to Grayscale
  • Crop to ROI Quadrilateral
  • Resize to a smaller WIDTH = 200, HEIGHT = 66
  • Dynamic Range Adjustment
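A rough OpenCV sketch of this pipeline; the crop below is a rectangular stand-in for the ROI quadrilateral and the min-max stretch a stand-in for the range adjustment, since the actual coordinates and method live in settings.py and model.py:

import cv2

WIDTH, HEIGHT = 200, 66   # target size

def preprocess(img_rgb, roi_top=60, roi_bottom=140):
    gray = cv2.cvtColor(img_rgb, cv2.COLOR_RGB2GRAY)   # RGB -> grayscale
    cropped = gray[roi_top:roi_bottom, :]              # crop to region of interest
    small = cv2.resize(cropped, (WIDTH, HEIGHT))       # shrink to 200x66
    # dynamic range adjustment: stretch intensities to the full 0-255 range
    return cv2.normalize(small, None, 0, 255, cv2.NORM_MINMAX)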

Steering Angle Preprocessing

The steering angles were scaled up by a factor of STEERING_MULTIPLIER = 100.
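In code this is just a multiply on the way in and a divide on the way out; `raw_angles` and `frame` below are placeholder names, and the constant presumably lives in settings.py:

STEERING_MULTIPLIER = 100

y = raw_angles * STEERING_MULTIPLIER                        # scale labels before training
angle = model.predict(frame)[0, 0] / STEERING_MULTIPLIER    # undo the scaling in drive.py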

Training Strategy

Building an Overfitted Model with Minimal Data

The overall strategy for deriving a model architecture was to initially overfit the model on three critical images before building a regularized model for the entire track. This saves time and validates the approach early. I look at this as a lean design, or MVP, approach.

I used each of the following three images twice in the training set (3x2 = 6) and twice in the validation set (50% validation split), making a total of 12 images in the initial dataset.

Recovery: Extreme Left of Lane

center_2017_01_16_18_49_00_738

Drive Straight: Center of Lane

center_2017_01_16_18_49_02_100

Recovery: Extreme Right of Lane

center_2017_01_16_18_49_04_959

I ran this for 30 epochs to achieve satisfactory loss convergence.
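A sketch of that run, assuming `images` and `angles` hold the three preprocessed frames and their labels; np.tile keeps every frame in both halves because Keras takes the validation split from the end of the arrays:

import numpy as np

X = np.tile(images, (4, 1, 1, 1))   # 3 frames x 4 copies = 12 samples
y = np.tile(angles, 4)
model.fit(X, y, validation_split=0.5, epochs=30)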

Loss Function 30 Epochs

The predictions came out close to the labels but not a perfect match, which is ideal!

Predictions for 12 Frames

Building a Regularized Model with Augmented Data

The next step was to run the model on the entire training dataset (full track). The provided Udacity dataset had 8k images. The label distribution was quite asymmetric and unbalanced (first histogram below). I used horizontal flipping to make it symmetric (second histogram), and finally histogram equalization to balance the training dataset (third histogram). I also added a random 10% noise to the steering angles for each image, which further helps avoid overfitting.
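A sketch of these augmentation and balancing steps, assuming X is the image array (N x 66 x 200 x 1) and y the steering angles; the bin count and per-bin cap are illustrative, not the project's actual values:

import numpy as np

# 1. Horizontal flip: mirror each image along its width and negate the angle
X_aug = np.concatenate([X, X[:, :, ::-1]])
y_aug = np.concatenate([y, -y])

# 2. Histogram equalization of the labels: cap the sample count per angle bin
edges = np.linspace(y_aug.min(), y_aug.max(), 25)
bins = np.digitize(y_aug, edges)
keep = np.concatenate([
    np.random.choice(np.where(bins == b)[0],
                     size=min((bins == b).sum(), 200),  # illustrative per-bin cap
                     replace=False)
    for b in np.unique(bins)
])
X_bal, y_bal = X_aug[keep], y_aug[keep]

# 3. Add ~10% random noise to the angles to discourage memorization
y_bal = y_bal * (1 + np.random.uniform(-0.1, 0.1, size=len(y_bal)))

Capping the per-bin counts throws away redundant frames from the overrepresented bins, which in driving data are typically the near-zero, straight-ahead angles.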

Raw Data Histogram: Asymmetric and Unbalanced

Horizontally Flipped Data Histogram: Symmetric But Unbalanced

Fully Processed with Histogram Equalization: Symmetric and Balanced

Loss Function 5 Epochs

Finally, the balanced dataset was randomly shuffled before being fed into the model.
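One way to do that shuffle with NumPy (continuing the placeholder names from the sketch above) so images and labels stay paired:

perm = np.random.permutation(len(y_bal))   # one shared permutation
X_bal, y_bal = X_bal[perm], y_bal[perm]    # images and angles stay aligned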

Once the dataset was balanced, the vehicle was able to drive autonomously around the track without leaving the road.

Predictions for 120 Frames

Acknowledgements & References

  • Sagar Bhokre - for project skeleton and constant support
  • Caleb Kirksey - for his motivating company during the course of this project
  • Mohan Karthik - for an informative blog post motivating dataset balancing
  • Paul Hearty - for valuable project tips provided on Udacity forums that saved time
  • Andrew Ayers, Ashish Singh and Kalyanramu Vemishetty - for their excellent questions and for permission to share them
