# CarND-Behavior-Cloning-Project-P3

## Introduction

The goal of the Behavior Cloning project is to build a (deep) neural network that reproduces human steering behavior on a test track. The network is trained and tested on image data from a simulator, which can be downloaded from https://d17h27t6h515a5.cloudfront.net/topher/2016/November/5831f3a4_simulator-windows-64/simulator-windows-64.zip

## Components

The project consists of the following components:

- data_utility.py, which contains the preprocessing steps and the generator logic described below
- model.py, which contains the network architecture and the training routine
- model.json, which is generated by model.py and contains the network architecture
- model.h5, which is generated by model.py and contains the trained weights
- drive.py, which reads model.json and model.h5 to reconstruct the model and outputs steering angles based on "live" images from the simulator (a reconstruction sketch follows this list)
- Training data
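
For reference, the model reconstruction in drive.py can follow the standard Keras pattern sketched below. This is a minimal sketch based on the component descriptions above; the exact argument handling in drive.py may differ.

```python
from keras.models import model_from_json

# Rebuild the architecture from the JSON description written by model.py ...
with open('model.json', 'r') as f:
    model = model_from_json(f.read())

# ... then load the trained weights into it.
model.load_weights('model.h5')

# The reconstructed model maps a single simulator image to a steering angle.
model.compile(optimizer='adam', loss='mse')
```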

## Data acquisition

The model was trained on the training data provided by Udacity, which can be downloaded from https://d17h27t6h515a5.cloudfront.net/topher/2016/December/584f6edd_data/data.zip. This data was augmented with additional images taken at critical locations, such as the curve right after the first bridge, where the car consistently left the road. The training mode of the provided simulator was used to record the additional data (see https://classroom.udacity.com/nanodegrees/nd013/parts/fbf77062-5703-404e-b60c-95b78b2f3f9e/modules/6df7ae49-c61c-4bb2-a23e-6527e69209ec/lessons/46a70500-493e-4057-a78e-b3075933709d/concepts/e942ed4b-0119-4361-b9a6-8962b533d29f for more detailed instructions).

The retraining sequence for critical locations can be summarized as follows:

- Positioning the car at the location and orientation where it drove off the road
- Turning the wheel towards the middle of the road while the car is not moving
- Turning on recording
- Collecting images while the car is still not moving
- Adding the additional images to the original training set (see the merging sketch after this list)
- Retraining
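
The simulator writes each recording session to a driving_log.csv plus an IMG folder. A minimal sketch of merging a recovery session into the original data, assuming the standard simulator log layout (the paths and column names are illustrative and not taken from data_utility.py):

```python
import pandas as pd

# Column layout of the simulator's driving_log.csv (assumed standard layout).
columns = ['center', 'left', 'right', 'steering', 'throttle', 'brake', 'speed']

# The Udacity data ships with a header row; freshly recorded logs do not.
original = pd.read_csv('data/driving_log.csv')
recovery = pd.read_csv('recovery/driving_log.csv', header=None, names=columns)

# Append the recovery samples and write a combined log used for retraining.
combined = pd.concat([original, recovery], ignore_index=True)
combined.to_csv('data/driving_log_combined.csv', index=False)
```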

## Preprocessing

Each training sample consists of three images taken simultaneously from the right, center, and left cameras of the simulator car, together with the corresponding steering angle as the target value. To triple the available training data, the left, center, and right images of each sample are treated as separate training samples. Since the right and left images represent off-center positions, their corresponding steering values are modified by a constant offset that steers the car back towards the center of the road (see the sketch below).
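
A minimal sketch of this augmentation; the offset value of 0.2 and the function name are illustrative assumptions, not the actual constants in data_utility.py:

```python
STEERING_OFFSET = 0.2  # assumed correction constant; tuned empirically in practice

def expand_sample(center_img, left_img, right_img, steering):
    """Turn one simulator sample (three camera images, one angle) into three samples.

    The left camera sees the road as if the car were shifted left, so its target
    angle is nudged to the right (+offset); the right camera gets the opposite
    correction.
    """
    return [
        (center_img, steering),
        (left_img,   steering + STEERING_OFFSET),
        (right_img,  steering - STEERING_OFFSET),
    ]
```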

## Network structure

The model is based on the network architecture described in the NVIDIA paper "End to End Learning for Self-Driving Cars" (https://arxiv.org/abs/1604.07316).

The structure of the network can be summarized as follows:


| Layer (type) | Output Shape | Param # | Connected to |
|---|---|---|---|
| cropping2d_1 (Cropping2D) | (None, 138, 320, 3) | 0 | cropping2d_input_1[0][0] |
| resizing | (None, 66, 200, 3) | 0 | cropping2d_1[0][0] |
| normalization | (None, 66, 200, 3) | 0 | resizing[0][0] |
| convolution2d_1 (Convolution2D) | (None, 31, 98, 24) | 1824 | normalization[0][0] |
| spatialdropout2d_1 (SpatialDropout2D) | (None, 31, 98, 24) | 0 | convolution2d_1[0][0] |
| convolution2d_2 (Convolution2D) | (None, 14, 47, 36) | 21636 | spatialdropout2d_1[0][0] |
| spatialdropout2d_2 (SpatialDropout2D) | (None, 14, 47, 36) | 0 | convolution2d_2[0][0] |
| convolution2d_3 (Convolution2D) | (None, 5, 22, 48) | 43248 | spatialdropout2d_2[0][0] |
| spatialdropout2d_3 (SpatialDropout2D) | (None, 5, 22, 48) | 0 | convolution2d_3[0][0] |
| convolution2d_4 (Convolution2D) | (None, 3, 20, 64) | 27712 | spatialdropout2d_3[0][0] |
| spatialdropout2d_4 (SpatialDropout2D) | (None, 3, 20, 64) | 0 | convolution2d_4[0][0] |
| convolution2d_5 (Convolution2D) | (None, 1, 18, 64) | 36928 | spatialdropout2d_4[0][0] |
| spatialdropout2d_5 (SpatialDropout2D) | (None, 1, 18, 64) | 0 | convolution2d_5[0][0] |
| flatten_1 (Flatten) | (None, 1152) | 0 | spatialdropout2d_5[0][0] |
| dropout_1 (Dropout) | (None, 1152) | 0 | flatten_1[0][0] |
| activation_1 (Activation) | (None, 1152) | 0 | dropout_1[0][0] |
| dense_1 (Dense) | (None, 100) | 115300 | activation_1[0][0] |
| dropout_2 (Dropout) | (None, 100) | 0 | dense_1[0][0] |
| dense_2 (Dense) | (None, 50) | 5050 | dropout_2[0][0] |
| dense_3 (Dense) | (None, 10) | 510 | dense_2[0][0] |
| dropout_3 (Dropout) | (None, 10) | 0 | dense_3[0][0] |
| dense_4 (Dense) | (None, 1) | 11 | dropout_3[0][0] |

Total params: 252,219
Trainable params: 252,219
Non-trainable params: 0


The first three layers are preprocessing steps: a cropping layer to reduce noise, followed by a resizing layer and a normalization layer that maps the pixel values to the range between -0.5 and 0.5. The following layers reproduce the NVIDIA model, augmented with several dropout layers to prevent overfitting. A sketch of the preprocessing layers in Keras is shown below.
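
A minimal sketch of how these preprocessing layers can be expressed in Keras. The crop margins, resize implementation, and normalization formula are assumptions consistent with the output shapes in the table above, not copied from model.py:

```python
import tensorflow as tf
from keras.models import Sequential
from keras.layers import Cropping2D, Lambda

model = Sequential()

# Crop 22 pixel rows (sky and car hood) from each 160x320x3 simulator image,
# giving the 138x320x3 shape shown in the table. The 20/2 top/bottom split
# is an assumption for illustration.
model.add(Cropping2D(cropping=((20, 2), (0, 0)), input_shape=(160, 320, 3)))

# Resize to the 66x200 input expected by the NVIDIA architecture (TF 1.x API).
model.add(Lambda(lambda x: tf.image.resize_images(x, (66, 200))))

# Normalize pixel values from [0, 255] to [-0.5, 0.5].
model.add(Lambda(lambda x: x / 255.0 - 0.5))

# ... the NVIDIA convolutional and dense layers (with dropout) follow here.
```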

## Training strategy

The data is randomly split into 19200 training and 4800 validation samples. Training and validation data are fed to the network in batches of 128 images by a generator that loads images on demand to preserve memory. The weights are optimized with an Adam optimizer (learning rate 0.001), using mean squared error as the loss metric. Five training epochs proved to be enough for the current setup; beyond that, the loss did not decrease significantly. A sketch of the generator and training call follows.
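
A minimal sketch of the generator and the training call, using Keras 1.x-style arguments. The helper names, the 80/20 split, and the image loader are illustrative assumptions, not the actual code from data_utility.py or model.py:

```python
import numpy as np
import matplotlib.image as mpimg
from sklearn.utils import shuffle
from sklearn.model_selection import train_test_split

def load_image(path):
    # Stand-in image loader; data_utility.py may read and preprocess differently.
    return mpimg.imread(path)

def batch_generator(samples, batch_size=128):
    """Yield (images, angles) batches forever, loading images only when needed."""
    while True:
        samples = shuffle(samples)
        for offset in range(0, len(samples), batch_size):
            batch = samples[offset:offset + batch_size]
            images = np.array([load_image(path) for path, _ in batch])
            angles = np.array([angle for _, angle in batch])
            yield images, angles

# `samples` is assumed to be a list of (image_path, steering_angle) pairs built
# from the driving log; split it 80/20 into training and validation sets.
train_samples, validation_samples = train_test_split(samples, test_size=0.2)

# Adam (default learning rate 0.001) with mean squared error as the loss.
model.compile(optimizer='adam', loss='mse')

# Keras 1.x-style generator training: 19200/4800 samples per epoch, 5 epochs.
model.fit_generator(batch_generator(train_samples),
                    samples_per_epoch=19200,
                    nb_epoch=5,
                    validation_data=batch_generator(validation_samples),
                    nb_val_samples=4800)
```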

## Live testing

To test the model on the simulated track, follow these steps:

- Start the simulator by double-clicking the .exe
- Select a graphics setting and click "Play"
- Select the first track (on the left) and click "Autonomous mode"
- Open a command line and cd into the project folder (i.e. the folder where drive.py, model.json and model.h5 are located)
- Enter `python drive.py model.json`
- Watch
