Semantic Segmentation

Goals of the Project

The goal of the project is to label the pixels of a road in images using a Fully Convolutional Network (FCN).

Dataset

The dataset used was the Kitti Road dataset. Extract the dataset into the data folder. This will create the folder data_road with all the training and test images. This folder is not included in the GitHub repo.
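The extraction step can be scripted with Python's standard zipfile module. This is a minimal sketch, not part of the project code; the archive name data_road.zip is an assumption about the downloaded file:

```python
import os
import zipfile

def extract_dataset(zip_path, data_dir="data"):
    """Extract the Kitti Road archive into the data folder.

    After extraction, data_dir should contain the 'data_road'
    folder with the training and test images.
    """
    os.makedirs(data_dir, exist_ok=True)
    with zipfile.ZipFile(zip_path) as archive:
        archive.extractall(data_dir)
    return os.path.join(data_dir, "data_road")

# Example (assuming the downloaded archive is named data_road.zip):
# extract_dataset("data_road.zip")
```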

Results

Here are some example images inferred by the network on unseen test images. It paints green every pixel where road is detected:

(three example inference images)
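Painting the detected road green amounts to blending a translucent green layer into the image wherever the predicted mask is set. A minimal NumPy sketch of that overlay step (the function name and alpha value are illustrative, not taken from the project code):

```python
import numpy as np

def overlay_road(image, road_mask, alpha=0.5):
    """Blend a translucent green layer onto pixels classified as road.

    image:     HxWx3 uint8 RGB image
    road_mask: HxW boolean array, True where the network predicts road
    """
    out = image.astype(np.float32)
    green = np.array([0.0, 255.0, 0.0])
    # Blend only the road pixels; non-road pixels are left untouched.
    out[road_mask] = (1 - alpha) * out[road_mask] + alpha * green
    return out.astype(np.uint8)
```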

Approach Description

In this project I used an FCN-8 architecture to classify traffic scene images at the pixel level. Here we train on two classes: 'Drivable Road' and 'Non Road'. The architecture of my network is based on the paper Fully Convolutional Networks for Semantic Segmentation. I started from the pre-trained VGG16 CNN, replaced the final fully connected layer with 1x1 convolutions, and upsampled the result so the network can infer back at image resolution. I also use skip layers so that the network does not lose finer-resolution information. For details about the implementation, please read the referenced paper.
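The decoder described above (1x1 score convolutions, upsampling, and skip fusion from the pool4 and pool3 layers) can be sketched in NumPy. This is only an illustration of the FCN-8 wiring, not the project's TensorFlow graph: nearest-neighbour upsampling stands in for the learned transposed convolutions, and all names and shapes are assumptions:

```python
import numpy as np

NUM_CLASSES = 2  # 'Drivable Road' vs. 'Non Road'

def conv_1x1(feat, weights):
    """Per-pixel linear map from feature channels to class scores --
    what replacing the fully connected layer with a 1x1 conv does."""
    h, w, c = feat.shape
    return (feat.reshape(h * w, c) @ weights).reshape(h, w, -1)

def upsample(scores, factor):
    """Nearest-neighbour upsampling, standing in for the learned
    transposed convolutions of the real network."""
    return scores.repeat(factor, axis=0).repeat(factor, axis=1)

def fcn8_decoder(layer7, pool4, pool3, rng):
    """Fuse coarse scores with the pool4 and pool3 skip connections,
    then upsample back to the input resolution (FCN-8s wiring)."""
    w7 = rng.normal(0.0, 0.01, (layer7.shape[2], NUM_CLASSES))
    w4 = rng.normal(0.0, 0.01, (pool4.shape[2], NUM_CLASSES))
    w3 = rng.normal(0.0, 0.01, (pool3.shape[2], NUM_CLASSES))
    x = upsample(conv_1x1(layer7, w7), 2) + conv_1x1(pool4, w4)  # 2x up, add pool4 skip
    x = upsample(x, 2) + conv_1x1(pool3, w3)                     # 2x up, add pool3 skip
    return upsample(x, 8)                                        # 8x up to full resolution
```

Without the two skip additions, the scores would be upsampled 32x in one jump and the output would be far coarser.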

The Python script 'main.py' implements the main part of the program: building the network, training it, and then inferring predictions on the test data. It uses helper functions defined in 'helper.py' and test methods defined in 'project_tests.py'.

The best results were achieved by training for 50 epochs with a learning rate of 1e-4, the Adam optimizer, and a cross-entropy loss. The loss was around 1.5 at the beginning of training and around 0.02 at the end. The new layers defined on top of VGG16 were initialized from a random normal distribution (standard deviation 0.01), and an L2 regularizer with a scale of 1e-3 was applied to them. The dropout probability of neural connections in VGG was set to 0.5.
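The project computes this loss inside its TensorFlow graph; as a sketch of what the pixel-wise cross-entropy objective measures, here is a self-contained NumPy version (a hypothetical helper, not the project's code):

```python
import numpy as np

def softmax_cross_entropy(logits, labels):
    """Mean pixel-wise softmax cross-entropy.

    logits: (num_pixels, num_classes) raw class scores
    labels: (num_pixels,) integer class ids (e.g. 0 = non-road, 1 = road)
    """
    # Subtract the row max for numerical stability before exponentiating.
    shifted = logits - logits.max(axis=1, keepdims=True)
    log_probs = shifted - np.log(np.exp(shifted).sum(axis=1, keepdims=True))
    # Negative log-probability of the correct class, averaged over pixels.
    return -log_probs[np.arange(len(labels)), labels].mean()
```

With two classes and completely uninformative scores, the loss starts near ln 2 per pixel and falls toward zero as the correct class gains probability mass, matching the downward trend reported above.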

Training for more epochs (up to 100) and changing the L2 regularizer did not further improve the results or decrease the loss.

You can find the complete results (all inferred images from the test set) in the runs folder.

Setup

Frameworks and Packages

Make sure you have the following installed:

Run

Run the project with the following command:

python main.py

Note: if running this in a Jupyter Notebook, system messages, such as those regarding test status, may appear in the terminal rather than in the notebook.