CarND-Semantic-Segmentation

Self-Driving Car Engineer Nanodegree Program

Goals

The goal of this project is to train a deep neural network to perform semantic segmentation, the task of assigning every pixel in an image to a class. This is important for autonomous vehicles because it allows precise identification of the roadway, vehicles, pedestrians, and other objects encountered while driving.

Solution

An extremely powerful DNN architecture for semantic segmentation is the fully convolutional network (FCN), which combines a downsampling "encoder" front half with an upsampling "decoder" back half. I used FCN-8, which uses VGG16 as the encoder. The final VGG16 layer is connected to a 1x1 convolutional layer, followed by a series of transposed convolutional layers with skip connections (element-wise addition) from earlier VGG16 layers. L2 regularization helps reduce overfitting.

This architecture allows semantic information that is derived from the convolution and pooling operations to be combined with spatial location information from the skip connections, giving the network the ability to combine classification and location.
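For reference, the decoder can be built roughly along these lines in TensorFlow 1.x. This is a minimal sketch of the structure described above, not the exact code in main.py; the kernel sizes and the regularizer strength of 1e-3 are illustrative choices.

    import tensorflow as tf

    def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
        """Build an FCN-8-style decoder on top of the VGG16 encoder outputs."""
        reg = tf.contrib.layers.l2_regularizer(1e-3)

        # 1x1 convolutions reduce each VGG output to num_classes channels
        conv7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, padding='same',
                                 kernel_regularizer=reg)
        conv4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, padding='same',
                                 kernel_regularizer=reg)
        conv3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, padding='same',
                                 kernel_regularizer=reg)

        # Upsample 2x and add the skip connection from layer 4
        up1 = tf.layers.conv2d_transpose(conv7, num_classes, 4, strides=2,
                                         padding='same', kernel_regularizer=reg)
        skip1 = tf.add(up1, conv4)

        # Upsample 2x again and add the skip connection from layer 3
        up2 = tf.layers.conv2d_transpose(skip1, num_classes, 4, strides=2,
                                         padding='same', kernel_regularizer=reg)
        skip2 = tf.add(up2, conv3)

        # Final 8x upsampling back to the input resolution
        return tf.layers.conv2d_transpose(skip2, num_classes, 16, strides=8,
                                          padding='same', kernel_regularizer=reg)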

Results

With hyperparameters [epochs, batch_size, keep_prob, learning_rate] = [6, 5, 0.5, 0.001], the network trains successfully on the KITTI dataset (see below for sample images). Overall it performs quite well, although it struggles with some situations, such as shadows on the road. Increasing the number of training epochs (to 24) improves this in most, though not all, cases.

To try to further improve the network's performance, I added data augmentation in the form of left-right image flips, using numpy's fliplr() function (added in get_batches_fn() in helper.py). For the same hyperparameters, this didn't significantly change the loss. Unfortunately, it appears to give worse performance on a handful of examples, notably the one with heavy shadows on the road.
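The flip can be added inside the batch generator along these lines. This is a minimal sketch: the image and gt_image names follow the helper.py batching loop, and the 50% flip probability is an illustrative choice rather than the exact scheme used.

    import numpy as np

    # Inside the batch loop of get_batches_fn() in helper.py, after the camera
    # image and its ground-truth mask have been loaded and resized:
    if np.random.rand() < 0.5:          # flip roughly half of the samples
        image = np.fliplr(image)        # mirror the camera image left-right
        gt_image = np.fliplr(gt_image)  # mirror the label mask to match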

The next step for improving the network accuracy is probably to train for more epochs, and then to investigate other data-augmentation methods such as noise addition and brightness variation.

[Sample images: columns show results after 6 epochs, 24 epochs, and 24 epochs with augmentation; rows show a good-performance case, a sidewalk-confusion case, and a shadow-confusion case]

Update

Based on feedback from the project reviewers, it's clear that I was implementing L2 regularization incorrectly in TensorFlow: it's necessary to manually add an additional term to the loss representing the L2 loss. Having done that, here are the modified training loss plot and sample images. It's clear that regularization improves the results, particularly when using a multiplier less than 1.0.
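In TensorFlow 1.x this amounts to collecting the regularization losses and adding them to the cross-entropy loss. A minimal sketch, assuming the decoder layers were built with kernel_regularizer set and that cross_entropy_loss already holds the softmax cross-entropy:

    # Gather the L2 terms created by the kernel_regularizer arguments
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    reg_constant = 0.1  # regularization multiplier (0.1 = "weak", 1.0 = "strong")
    loss = cross_entropy_loss + reg_constant * tf.reduce_sum(reg_losses)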

[Training loss plot: black = strong L2 regularization, blue = weak L2 regularization]

Example images are summarized below. For all three categories, the hyperparameters are [epochs, batch_size, keep_prob, learning_rate] = [24, 5, 0.5, 0.001], and data augmentation (L/R flip) is used. The left column uses no regularization, the middle column strong L2 regularization (multiplier = 1.0), and the right column weak L2 regularization (multiplier = 0.1).

[Example images: no regularization | strong regularization | weak regularization]

Setup Information

GPU

main.py will check that you are using a GPU. If you don't have a GPU on your system, you can use AWS or another cloud computing platform.
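The check is along these lines (a sketch of the kind of test main.py performs; the exact warning text may differ):

    import warnings
    import tensorflow as tf

    # Warn if TensorFlow cannot see a GPU; training on CPU would be very slow.
    if not tf.test.gpu_device_name():
        warnings.warn('No GPU found. Please use a GPU to train your network.')
    else:
        print('Default GPU device: {}'.format(tf.test.gpu_device_name()))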

Frameworks and Packages

Make sure you have the following installed:

  • Python 3
  • TensorFlow
  • NumPy
  • SciPy

You may also need the Python Imaging Library (PIL) for SciPy's imresize function.

Dataset

Download the KITTI Road dataset from here. Extract the dataset into the data folder. This will create the folder data_road with all the training and test images.

Start

Implement

Implement the code in the main.py module indicated by the "TODO" comments. Sections tagged "OPTIONAL" are not required to complete the project.

Run

Run the following command to run the project:

python main.py

Note: If running this in a Jupyter Notebook, system messages, such as those regarding test status, may appear in the terminal rather than the notebook.

Example Outputs

Here are examples of sufficient and insufficient output from a trained network:

[Example images: sufficient result vs. insufficient result]

Submission

  1. Ensure you've passed all the unit tests.
  2. Ensure you pass all points on the rubric.
  3. Submit the following in a zip file.
  • helper.py
  • main.py
  • project_tests.py
  • Newest inference images from runs folder (all images from the most recent run)

Tips

  • The link for the frozen VGG16 model is hardcoded into helper.py. The model can be found here.
  • The model is not vanilla VGG16, but a fully convolutional version that already contains the 1x1 convolutions replacing the fully connected layers. Please see this post for more information. A summary of additional points follows.
  • The original FCN-8s was trained in stages. The authors later uploaded a version that was trained all at once to their GitHub repo. The version in the GitHub repo has one important difference: the outputs of pooling layers 3 and 4 are scaled before they are fed into the 1x1 convolutions. Some students have found that the model learns much better with the scaling layers included (see the sketch after this list). The model may not converge substantially faster, but it may reach a higher IoU and accuracy.
  • When adding L2 regularization, setting a regularizer in the arguments of the tf.layers calls is not enough. Regularization loss terms must be manually added to your loss function; otherwise regularization is not actually applied.
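On the pooling-layer scaling point above, the scaling can be applied before the 1x1 convolutions along these lines. This is a sketch; the constants 0.0001 and 0.01 are the values reported for the at-once FCN-8s model and should be treated as a starting point.

    # Scale the pooling-layer outputs before the 1x1 convolutions, as in the
    # at-once FCN-8s model (constants taken from that repository's description).
    pool3_scaled = tf.multiply(vgg_layer3_out, 0.0001, name='pool3_scaled')
    pool4_scaled = tf.multiply(vgg_layer4_out, 0.01, name='pool4_scaled')
    # pool3_scaled / pool4_scaled then feed the 1x1 convolutions in place of
    # the raw layer outputs.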

Why Layer 3, 4 and 7?

In main.py, you'll notice that layers 3, 4 and 7 of VGG16 are utilized in creating skip layers for a fully convolutional network. The reasons for this are contained in the paper Fully Convolutional Networks for Semantic Segmentation.

In section 4.3, and further under the header "Skip Architectures for Segmentation" and in Figure 3, the authors note that these layers correspond to predictions upsampled by 8x, 16x, and 32x, respectively. Combining all three in their FCN-8s was the most effective architecture they found.
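For reference, these outputs are pulled out of the frozen VGG16 graph by tensor name. A minimal sketch, assuming the tensor names used by the starter code:

    # Load the frozen VGG16 model and grab the tensors used by the decoder.
    tf.saved_model.loader.load(sess, ['vgg16'], vgg_path)
    graph = tf.get_default_graph()
    image_input = graph.get_tensor_by_name('image_input:0')
    keep_prob = graph.get_tensor_by_name('keep_prob:0')
    layer3_out = graph.get_tensor_by_name('layer3_out:0')
    layer4_out = graph.get_tensor_by_name('layer4_out:0')
    layer7_out = graph.get_tensor_by_name('layer7_out:0')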

Optional sections

Within main.py, there are a few optional sections you can choose to implement; they are not required for the project.

  1. Train and perform inference on the Cityscapes Dataset. Note that project_tests.py is not currently set up to unit test this alternate dataset; helper.py will also need alterations, along with changes to num_classes and input_shape in main.py. Cityscapes is a much more extensive dataset, with segmentation of 30 different classes (compared to road vs. not road on KITTI) on either 5,000 finely annotated images or 20,000 coarsely annotated images.
  2. Add image augmentation. You can use some of the augmentation techniques you may have used on Traffic Sign Classification or Behavioral Cloning, or look into additional methods for more robust training!
  3. Apply the trained model to a video. This project only involves performing inference on a set of test images, but you can also try to utilize it on a full video.

Using GitHub and Creating Effective READMEs

If you are unfamiliar with GitHub, Udacity has a brief GitHub tutorial to get you started. Udacity also provides a more detailed free course on git and GitHub.

To learn about README files and Markdown, Udacity provides a free course on READMEs as well.

GitHub also provides a tutorial about creating Markdown files.
