
Behavioral Cloning

Overview

This repository contains starting files for the Behavioral Cloning Project.

In this project, you will use what you've learned about deep neural networks and convolutional neural networks to clone driving behavior. You will train, validate and test a model using Keras. The model will output a steering angle to an autonomous vehicle.

We have provided a simulator where you can steer a car around a track for data collection. You'll use image data and steering angles to train a neural network and then use this model to drive the car autonomously around the track.

To meet specifications, the project will require submitting five files:

  • model.py (script used to create and train the model)
  • drive.py (script to drive the car - feel free to modify this file)
  • model.h5 (a trained Keras model)
  • a report writeup file (either markdown or pdf)
  • video.mp4 (a video recording of your vehicle driving autonomously around the track for at least one full lap)

This README file describes how to output the video in the "Details About Files In This Directory" section.

The Project

The goals / steps of this project are the following:

  • Use the simulator to collect data of good driving behavior
  • Design, train and validate a model that predicts a steering angle from image data
  • Use the model to drive the vehicle autonomously around the first track in the simulator. The vehicle should remain on the road for an entire loop around the track.
  • Summarize the results with a written report

Dependencies

This lab requires:

  • CarND Term1 Starter Kit

The lab environment can be created with the CarND Term1 Starter Kit; see the starter kit repository for setup details.

The following resources can be found in this github repository:

  • drive.py
  • video.py
  • writeup_template.md

The simulator can be downloaded from the classroom. In the classroom, we have also provided sample data that you can optionally use to help train your model.

Details About Files In This Directory

drive.py

Usage of drive.py requires you have saved the trained model as an h5 file, i.e. model.h5. See the Keras documentation for how to create this file using the following command:

model.save(filepath)

Once the model has been saved, it can be used with drive.py using this command:

python drive.py model.h5

The above command will load the trained model and use the model to make predictions on individual images in real-time and send the predicted angle back to the server via a websocket connection.
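As a rough illustration of the prediction step described above, the core of it amounts to loading the saved model and predicting a steering angle from a single camera frame. The sketch below is not the actual drive.py (which also handles the simulator's websocket telemetry); the file path and helper name are purely illustrative:

import numpy as np
from PIL import Image
from keras.models import load_model

# Load the trained network saved by model.py.
model = load_model('model.h5')

def predict_steering(image_path):
    """Predict a steering angle for a single dashboard image (illustrative helper)."""
    image = np.asarray(Image.open(image_path))   # H x W x 3 RGB array
    image = image[None, ...]                     # add a batch dimension
    return float(model.predict(image, batch_size=1))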

Note: There is a known issue where the local system's locale settings swap "." and "," as the decimal separator when using drive.py. When this happens, the predicted steering values can end up clipped to their max/min values. If this occurs, a known fix is to add "export LANG=en_US.utf8" to the bashrc file.

Saving a video of the autonomous agent

python drive.py model.h5 run1

The fourth argument, run1, is the directory in which to save the images seen by the agent. If the directory already exists, it'll be overwritten.

ls run1

[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_424.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_451.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_477.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_528.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_573.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_618.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_697.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_723.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_749.jpg
[2017-01-09 16:10:23 EST]  12KiB 2017_01_09_21_10_23_817.jpg
...

The image file name is a timestamp of when the image was seen. This information is used by video.py to create a chronological video of the agent driving.
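As a rough sketch of how a chronological video can be assembled from these timestamped frames (the provided video.py may differ in its details; moviepy is assumed to be available):

import glob
from moviepy.editor import ImageSequenceClip

# Sorting by filename works because each name is a timestamp,
# so lexicographic order equals chronological order.
frames = sorted(glob.glob('run1/*.jpg'))
clip = ImageSequenceClip(frames, fps=60)   # 60 is the default FPS mentioned below
clip.write_videofile('run1.mp4')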

video.py

python video.py run1

Creates a video based on images found in the run1 directory. The name of the video will be the name of the directory followed by '.mp4', so, in this case the video will be run1.mp4.

Optionally, one can specify the FPS (frames per second) of the video:

python video.py run1 --fps 48

This will run the video at 48 FPS. The default FPS is 60.

Rubric Points

Here I will consider the rubric points individually and describe how I addressed each point in my implementation.


Files Submitted & Code Quality

1. Submission includes all required files and can be used to run the simulator in autonomous mode

My project includes the following files:

  • model.py containing the script to create and train the model
  • drive.py for driving the car in autonomous mode
  • model.h5 containing a trained convolution neural network
  • writeup_report.md or writeup_report.pdf summarizing the results

2. Submission includes functional code

Using the Udacity provided simulator and my drive.py file, the car can be driven autonomously around the track by executing

python drive.py model.h5

3. Submission code is usable and readable

The model.py file contains the code for training and saving the convolution neural network. The file shows the pipeline I used for training and validating the model, and it contains comments to explain how the code works.

Model Architecture and Training Strategy

1. An appropriate model architecture has been employed

  • I decided to test the model provided by NVIDIA, as suggested by Udacity. The model architecture is described by NVIDIA here. As input, this model takes images of shape (66, 200, 3), whereas our dashboard/training images are of size (160, 320, 3). I decided to keep the rest of the architecture the same. The model architecture:

[NVIDIA model architecture diagram]

The model includes ReLU layers to introduce nonlinearity, and the data is normalized in the model using a Keras Lambda layer.
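Below is a minimal Keras sketch of such an NVIDIA-style network (Keras 2 layer names). The exact filter counts, dense-layer sizes, and dropout placement are assumptions based on the published NVIDIA architecture rather than a literal copy of model.py:

from keras.models import Sequential
from keras.layers import Lambda, Cropping2D, Conv2D, Flatten, Dense, Dropout

model = Sequential()
# Normalize pixel values to roughly [-0.5, 0.5] inside the model.
model.add(Lambda(lambda x: x / 255.0 - 0.5, input_shape=(160, 320, 3)))
# Crop away sky and hood (see "Creation of the Training Set" below).
model.add(Cropping2D(cropping=((70, 25), (0, 0))))
# NVIDIA-style convolutional stack with ReLU nonlinearities.
model.add(Conv2D(24, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(36, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(48, (5, 5), strides=(2, 2), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Conv2D(64, (3, 3), activation='relu'))
model.add(Flatten())
model.add(Dropout(0.25))   # single dropout layer, rate 0.25 (see next section)
model.add(Dense(100, activation='relu'))
model.add(Dense(50, activation='relu'))
model.add(Dense(10, activation='relu'))
model.add(Dense(1))        # single output: the steering angle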

2. Attempts to reduce overfitting in the model

The model contains one dropout layer with a rate of 0.25 in order to reduce overfitting.

The model was trained and validated on different data sets to ensure that the model was not overfitting. The model was tested by running it through the simulator and ensuring that the vehicle could stay on the track.

3. Model parameter tuning

  • Number of epochs: 10
  • Optimizer: Adam
  • Learning rate: default 0.001
  • Validation data split: 0.20
  • Batch size: 32
  • Loss function: MSE (mean squared error), which is well suited to this regression problem; the corresponding compile/fit configuration is sketched below.
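Taken together, and continuing from the model sketch above, these settings correspond roughly to the following compile/fit calls (Keras 2 argument names; X_train and y_train stand for the image and steering-angle arrays and are illustrative):

from keras.optimizers import Adam

# Adam with its default learning rate of 0.001, MSE loss for regression.
model.compile(optimizer=Adam(lr=0.001), loss='mse')

# 10 epochs, batch size 32, 20% of the data held out for validation.
model.fit(X_train, y_train,
          validation_split=0.20,
          shuffle=True,
          epochs=10,
          batch_size=32)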

4. Appropriate training data

Training data was chosen to keep the vehicle driving on the road. I used a combination of center lane driving, recovering from the left and right sides of the road ...

For details about how I created the training data, see the next section.

Model Architecture and Training Strategy

1. Solution Design Approach

  • I used skimage to load the images; skimage reads images in RGB format by default, so no conversion to RGB is needed.

  • Since a single steering angle is associated with three camera images, we introduce a correction factor for the left and right images, because the recorded steering angle corresponds to the center camera.

  • I decided to introduce a correction factor of 0.2

  • For the left images I increase the steering angle by 0.2 and for the right images I decrease the steering angle by 0.2

  • To combat overfitting, I modified the model and introduced a dropout layer with rate 0.25

  • I decided to shuffle the images so that the order in which they appear doesn't matter to the CNN

  • Augmenting the data: I decided to flip the images horizontally and adjust the steering angles accordingly; I used numpy to flip the images with np.fliplr(image).

  • After flipping, the steering angle is multiplied by -1 to obtain the steering angle for the flipped image.

  • With this approach, we generate 6 images for each entry in the .csv file (3 original and 3 flipped), as shown in the sketch below.
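The bullet points above translate into roughly the following per-sample logic. The CSV column order assumed here is center, left, right, steering (the simulator's driving_log.csv layout), and the variable names are illustrative:

import csv
import numpy as np
from skimage import io

CORRECTION = 0.2  # steering correction for the side cameras

images, angles = [], []
with open('driving_log.csv') as f:
    for line in csv.reader(f):
        # Assumed columns: center, left, right image paths, then steering.
        # (A header row, if present, would need to be skipped.)
        center, left, right = line[0], line[1], line[2]
        steering = float(line[3])

        for path, angle in ((center, steering),
                            (left, steering + CORRECTION),
                            (right, steering - CORRECTION)):
            image = io.imread(path)          # skimage reads RGB by default
            images.append(image)
            angles.append(angle)
            # Augmentation: horizontal flip with the angle negated.
            images.append(np.fliplr(image))
            angles.append(-angle)

X_train = np.array(images)
y_train = np.array(angles)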

At the end of the process, the vehicle is able to drive autonomously around the track without leaving the road.

2. Final Model Architecture

The final model architecture (model.py lines 18-24) consisted of a convolutional neural network with the following layers and layer sizes.

Here is a visualization of the architecture (note: visualizing the architecture is optional according to the project rubric)

[Final model architecture visualization]

3. Creation of the Training Set & Training Process

To capture good driving behavior, I first recorded two laps on track one using center lane driving. Here is an example image of center lane driving:

[Example image of center lane driving]

I then recorded the vehicle recovering from the left and right sides of the road back to center, so that the vehicle would learn to handle the situations a real driver deals with on the road and act accordingly. Then I repeated this process on track two in order to get more data points.

To augment the data set, I also flipped the images and their corresponding angles, which increases the number of data points and covers turns in both directions.

After the collection process, I initially had 6209 data points; after adding the flipped images, I had 12418. I then preprocessed the data by cropping away the unnecessary parts of each image (70 pixels from the top and 25 pixels from the bottom) using Keras' Cropping2D layer. I finally randomly shuffled the data set and put 20% of the data into a validation set.
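As an illustrative alternative to the validation_split argument used in the fit sketch earlier, the shuffle and 20% split can be made explicit with scikit-learn (continuing from the loading sketch above; the random seed is arbitrary):

from sklearn.model_selection import train_test_split

# Randomly shuffle the samples and hold out 20% for validation.
X_train, X_valid, y_train, y_valid = train_test_split(
    X_train, y_train, test_size=0.20, shuffle=True, random_state=42)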

I used this training data for training the model. The validation set helped determine whether the model was over- or under-fitting. The ideal number of epochs was 20, as evidenced by the gradual decrease in both training and validation loss. I used an Adam optimizer so that manually tuning the learning rate wasn't necessary.
