Identifying roads in images using Fully Convolutional Neural Networks

Semantic Segmentation


The goal of this project is to identify roads in images using semantic segmentation, i.e., pixel-wise classification. A Fully Convolutional Network (FCN) is trained to label each pixel of a given image as road or not-road.


Frameworks and Packages

Make sure you have the following installed: Python 3, TensorFlow, NumPy, and SciPy.

You may also need the Python Imaging Library (PIL) for SciPy's imresize function.


Dataset

Download the Kitti Road dataset from here. Extract the dataset in the data folder. This will create the folder data_road with all the training and test images.



Implement the code in the main.py module indicated by the "TODO" comments. Sections indicated with the "OPTIONAL" tag are not required to complete the project.


Run the following command to run the project:

python main.py



The FCN used in this project consists of two parts: an encoder and a decoder.

FCN-8 Encoder

The encoder network is based on the FCN-8 architecture developed at UC-Berkeley. The encoder for FCN-8 is the VGG16 model pretrained on ImageNet for classification. The fully-connected layers are replaced by 1-by-1 convolutions. Here’s an example of going from a fully-connected layer to a 1-by-1 convolution in TensorFlow:

num_classes = 2
# Replace a fully-connected classification layer:
output = tf.layers.dense(input, num_classes)
# with an equivalent 1-by-1 convolution, which preserves spatial information:
output = tf.layers.conv2d(input, num_classes, 1, strides=(1, 1))
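For context, here is a minimal sketch of how the pretrained, fully convolutional VGG16 might be loaded and its layer tensors retrieved. The 'vgg16' tag and the tensor names are assumptions based on the project walkthrough; check the actual graph if they differ.

import tensorflow as tf

def load_vgg(sess, vgg_path):
    # Load the saved, fully convolutional VGG16 model into the session.
    tf.saved_model.loader.load(sess, ['vgg16'], vgg_path)
    graph = tf.get_default_graph()
    # Tensor names below are assumed from the project walkthrough.
    image_input = graph.get_tensor_by_name('image_input:0')
    keep_prob = graph.get_tensor_by_name('keep_prob:0')
    layer3_out = graph.get_tensor_by_name('layer3_out:0')
    layer4_out = graph.get_tensor_by_name('layer4_out:0')
    layer7_out = graph.get_tensor_by_name('layer7_out:0')
    return image_input, keep_prob, layer3_out, layer4_out, layer7_out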

FCN-8 Decoder

To build the decoder portion of FCN-8, the encoder output is upsampled back to the original image size. The shape of the tensor after the final transposed convolution layer is 4-dimensional: (batch_size, original_height, original_width, num_classes). A single transposed convolution layer can be implemented as follows:

# conv2d_transpose takes (inputs, filters, kernel_size, ...); a kernel size
# of 4 with a stride of 2 doubles the spatial resolution.
output = tf.layers.conv2d_transpose(input, num_classes, 4, strides=(2, 2))
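Putting the pieces together, a full FCN-8 decoder with skip connections might look like the following minimal sketch. The input tensor names, kernel sizes, and the l2 scale are illustrative assumptions, not the project's exact code:

import tensorflow as tf

def layers(vgg_layer3_out, vgg_layer4_out, vgg_layer7_out, num_classes):
    reg = tf.contrib.layers.l2_regularizer(1e-3)  # illustrative scale

    # 1x1 convolutions reduce each encoder output to num_classes channels.
    conv7 = tf.layers.conv2d(vgg_layer7_out, num_classes, 1, kernel_regularizer=reg)
    conv4 = tf.layers.conv2d(vgg_layer4_out, num_classes, 1, kernel_regularizer=reg)
    conv3 = tf.layers.conv2d(vgg_layer3_out, num_classes, 1, kernel_regularizer=reg)

    # Upsample 2x and add the skip connection from layer 4.
    up1 = tf.layers.conv2d_transpose(conv7, num_classes, 4, strides=(2, 2),
                                     padding='same', kernel_regularizer=reg)
    skip1 = tf.add(up1, conv4)

    # Upsample 2x again and add the skip connection from layer 3.
    up2 = tf.layers.conv2d_transpose(skip1, num_classes, 4, strides=(2, 2),
                                     padding='same', kernel_regularizer=reg)
    skip2 = tf.add(up2, conv3)

    # Final 8x upsampling back to the original image resolution.
    return tf.layers.conv2d_transpose(skip2, num_classes, 16, strides=(8, 8),
                                      padding='same', kernel_regularizer=reg)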


The FCN was trained on the Kitti dataset using TensorFlow's Adam optimizer. A total of 50 epochs with a batch size of 10 images, a learning rate of 0.001, and a dropout rate of 50% proved sufficient to produce reasonable segmentation performance. As shown in the following plot, the model's loss largely flattens beyond 40 epochs.
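As a sketch, the loss and training operation might be wired up as below; the function and variable names are illustrative, not the project's exact code:

import tensorflow as tf

def optimize(nn_last_layer, correct_label, learning_rate, num_classes):
    # Flatten predictions and labels to (num_pixels, num_classes) so that a
    # standard softmax cross-entropy loss can be applied pixel-wise.
    logits = tf.reshape(nn_last_layer, (-1, num_classes))
    labels = tf.reshape(correct_label, (-1, num_classes))
    cross_entropy_loss = tf.reduce_mean(
        tf.nn.softmax_cross_entropy_with_logits(logits=logits, labels=labels))
    # Adam with the learning rate above (0.001); the 50% dropout is applied
    # by feeding keep_prob=0.5 during training.
    train_op = tf.train.AdamOptimizer(learning_rate).minimize(cross_entropy_loss)
    return logits, train_op, cross_entropy_loss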

Test Samples

The following images show the classifier's output on out-of-sample test images; most road pixels are correctly classified.


  • The link for the frozen VGG16 model is hardcoded into helper.py. The model can be found here.
  • The model is not vanilla VGG16, but a fully convolutional version which already contains the 1x1 convolutions that replace the fully connected layers. Please see this post for more information. A summary of additional points follows.
  • The original FCN-8s was trained in stages. The authors later uploaded a version trained all at once to their GitHub repo. That version has one important difference: the outputs of pooling layers 3 and 4 are scaled before they are fed into the 1x1 convolutions. Some students have found that the model learns much better with the scaling layers included; it may not converge substantially faster, but it may reach a higher IoU and accuracy.
  • When adding l2-regularization, setting a regularizer in the arguments of the tf.layers functions is not enough: regularization loss terms must be manually added to your loss function, otherwise regularization is not actually applied. See the sketch after this list.
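The last two points might be implemented along these lines; the scale factors are the ones used in the authors' all-at-once FCN-8s, while the function and tensor names are illustrative assumptions:

import tensorflow as tf

def scaled_skip_inputs(vgg_layer3_out, vgg_layer4_out):
    # Scale the pooling outputs before their 1x1 convolutions, using the
    # factors from the authors' GitHub version (pool3: 0.0001, pool4: 0.01).
    pool3_scaled = tf.multiply(vgg_layer3_out, 0.0001, name='pool3_out_scaled')
    pool4_scaled = tf.multiply(vgg_layer4_out, 0.01, name='pool4_out_scaled')
    return pool3_scaled, pool4_scaled

def total_loss(cross_entropy_loss):
    # kernel_regularizer only registers the l2 penalty terms in a collection;
    # they must be added to the loss explicitly or they have no effect.
    # (Assumes at least one regularizer was registered in the graph.)
    reg_losses = tf.get_collection(tf.GraphKeys.REGULARIZATION_LOSSES)
    return cross_entropy_loss + tf.add_n(reg_losses)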

Why Layers 3, 4 and 7?

In main.py, you'll notice that layers 3, 4 and 7 of VGG16 are utilized in creating skip layers for a fully convolutional network. The reasons for this are explained in the paper Fully Convolutional Networks for Semantic Segmentation.

In section 4.3, and further under the header "Skip Architectures for Segmentation" and in Figure 3, the authors note that these layers provide for 8x, 16x and 32x upsampling, respectively: the output of layer 3 is at 1/8 of the input resolution, layer 4 at 1/16, and layer 7 at 1/32. Combining all three in their FCN-8s was the most effective architecture they found.

Optional sections

Within main.py, there are a few optional sections you can also choose to implement; they are not required for the project.

  1. Train and perform inference on the Cityscapes Dataset. Note that project_tests.py is not currently set up to unit test this alternate dataset, and helper.py will also need alterations, along with changing num_classes and input_shape in main.py. Cityscapes is a much more extensive dataset, with segmentation of 30 different classes (compared to road vs. not-road on KITTI) on either 5,000 finely annotated images or 20,000 coarsely annotated images.
  2. Add image augmentation. You can use some of the augmentation techniques you may have used on the Traffic Sign Classification or Behavioral Cloning projects, or look into additional methods for more robust training! A minimal sketch follows this list.
  3. Apply the trained model to a video. This project only involves performing inference on a set of test images, but you can also try to utilize it on a full video.
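For the augmentation idea in item 2, here is a minimal sketch using NumPy. A horizontal flip and brightness jitter are just two common choices, and the function is a hypothetical helper, not part of the project code:

import numpy as np

def augment(image, gt_image):
    # Flip the image and its label mask together half of the time.
    if np.random.rand() < 0.5:
        image = np.fliplr(image)
        gt_image = np.fliplr(gt_image)
    # Random brightness jitter on the image only (labels are unaffected).
    factor = np.random.uniform(0.8, 1.2)
    image = np.clip(image.astype(np.float32) * factor, 0, 255).astype(np.uint8)
    return image, gt_image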

Using GitHub and Creating Effective READMEs

If you are unfamiliar with GitHub, Udacity has a brief GitHub tutorial to get you started. Udacity also provides a more detailed free course on git and GitHub.

To learn about README files and Markdown, Udacity provides a free course on READMEs, as well.

GitHub also provides a tutorial about creating Markdown files.
