
Sensor fusion module for 3D Object Tracking. The project combines Lidar and camera data to estimate the distance to the preceding vehicle.

SFND 3D Object Tracking

alt text

Project overview

Below, I address each point in the project rubric.

The overall architecture of the project is shown in the following image: alt text

FP.1 Match 3D Objects

The first step in implementing this project was keypoint matching. Using a combination of detectors and descriptors to extract keypoints from a sequence of images, we match keypoints between two consecutive images; these are the keypoint matches. In addition, the YOLO deep-learning detector provides bounding boxes for every object in each image.
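
As an illustration, here is a minimal OpenCV sketch of detecting, describing and matching keypoints between two consecutive images. The ORB detector/descriptor and brute-force Hamming matching are illustrative choices only, not necessarily the combination used in this repository.

```cpp
// Example only: detect, describe and match keypoints between two consecutive
// images with OpenCV. ORB + brute-force Hamming matching are illustrative
// choices, not necessarily the combination used in this repository.
#include <vector>
#include <opencv2/core.hpp>
#include <opencv2/features2d.hpp>

std::vector<cv::DMatch> matchKeypoints(const cv::Mat &imgPrev, const cv::Mat &imgCurr,
                                       std::vector<cv::KeyPoint> &kptsPrev,
                                       std::vector<cv::KeyPoint> &kptsCurr)
{
    cv::Ptr<cv::Feature2D> detector = cv::ORB::create();
    cv::Mat descPrev, descCurr;
    detector->detectAndCompute(imgPrev, cv::noArray(), kptsPrev, descPrev);
    detector->detectAndCompute(imgCurr, cv::noArray(), kptsCurr, descCurr);

    // Brute-force matching with Hamming distance (suitable for binary descriptors).
    cv::BFMatcher matcher(cv::NORM_HAMMING, /*crossCheck=*/true);
    std::vector<cv::DMatch> matches;
    matcher.match(descPrev, descCurr, matches); // queryIdx -> previous, trainIdx -> current
    return matches;
}
```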

The goal of this step is to match the bounding boxes detected by YOLO between consecutive frames. This is done by applying the following steps (a code sketch follows the list):

  • Looping over the keypoint matches between the previous and current frame and determining which keypoint belongs to which bounding box
  • Storing the bounding box ids in a multimap; a multimap is used because it allows multiple pairs with the same key
  • Looping over all bounding boxes in the current frame and counting, for each one, how many matches point to each bounding box in the previous frame
  • Selecting the bounding box pair with the maximum number of shared matches as the best match
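
A minimal sketch of this bounding-box matching step is shown below. The `DataFrame`/`BoundingBox` stand-ins, their field names, and the orientation of the match indices (queryIdx → previous frame, trainIdx → current frame) are assumptions based on the Udacity starter code, not exact excerpts from this repository.

```cpp
// Sketch of FP.1: count how many keypoint matches fall into each
// (previous box, current box) pair and keep the pair with the most votes.
// DataFrame/BoundingBox are minimal stand-ins; field names and the index
// orientation (queryIdx -> previous, trainIdx -> current) are assumptions.
#include <map>
#include <vector>
#include <opencv2/core.hpp>

struct BoundingBox { int boxID; cv::Rect roi; };
struct DataFrame   { std::vector<cv::KeyPoint> keypoints; std::vector<BoundingBox> boundingBoxes; };

void matchBoundingBoxes(const std::vector<cv::DMatch> &matches,
                        std::map<int, int> &bbBestMatches, // prev boxID -> curr boxID (order assumed)
                        const DataFrame &prevFrame, const DataFrame &currFrame)
{
    std::multimap<int, int> candidates; // key: current boxID, value: previous boxID

    for (const auto &match : matches)
    {
        const cv::Point2f prevPt = prevFrame.keypoints[match.queryIdx].pt;
        const cv::Point2f currPt = currFrame.keypoints[match.trainIdx].pt;

        for (const auto &prevBox : prevFrame.boundingBoxes)
        {
            if (!prevBox.roi.contains(prevPt))
                continue;
            for (const auto &currBox : currFrame.boundingBoxes)
            {
                if (currBox.roi.contains(currPt))
                    candidates.insert({currBox.boxID, prevBox.boxID});
            }
        }
    }

    // For every current box, pick the previous box that received the most votes.
    for (const auto &currBox : currFrame.boundingBoxes)
    {
        std::map<int, int> counts; // prev boxID -> number of shared keypoint matches
        auto range = candidates.equal_range(currBox.boxID);
        for (auto it = range.first; it != range.second; ++it)
            ++counts[it->second];

        int bestPrevBoxID = -1, bestCount = 0;
        for (const auto &kv : counts)
        {
            if (kv.second > bestCount)
            {
                bestCount = kv.second;
                bestPrevBoxID = kv.first;
            }
        }
        if (bestPrevBoxID >= 0)
            bbBestMatches[bestPrevBoxID] = currBox.boxID;
    }
}
```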

FP.2 Compute Lidar-based TTC

Computing the TTC from Lidar measurements starts with filtering the Lidar points. We want to eliminate points that lie in lanes other than the ego lane, as well as points reflected off the ego vehicle's hood, so we keep only points whose y values place them in the ego lane. For the x direction, we average the x values of the remaining points, which gives a robust estimate of the distance to the closest surface of the preceding vehicle. This is done for both the previous and the current frame. With these two distances we apply the TTC formula for the constant-velocity model, as sketched after the image below.

alt text
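
The following is a minimal sketch of the constant-velocity computation, TTC = d1 · Δt / (d0 − d1). The `LidarPoint` struct, the 4 m ego-lane width, and the use of the mean x value are assumptions based on the description above.

```cpp
// Sketch of the constant-velocity Lidar TTC: TTC = d1 * dT / (d0 - d1).
// The LidarPoint struct, the 4 m ego-lane width and the use of the mean x
// value are assumptions based on the description above.
#include <cmath>
#include <vector>

struct LidarPoint { double x, y, z, r; }; // x forward, y lateral (assumed layout)

// Mean x of all points inside the ego lane (|y| <= laneWidth / 2).
double averageLaneX(const std::vector<LidarPoint> &points, double laneWidth = 4.0)
{
    double sum = 0.0;
    int count = 0;
    for (const auto &p : points)
    {
        if (std::abs(p.y) <= laneWidth / 2.0)
        {
            sum += p.x;
            ++count;
        }
    }
    return count > 0 ? sum / count : 0.0;
}

double computeTTCLidar(const std::vector<LidarPoint> &prevPoints,
                       const std::vector<LidarPoint> &currPoints,
                       double frameRate)
{
    const double dT = 1.0 / frameRate;          // time between two frames
    const double d0 = averageLaneX(prevPoints); // distance in the previous frame
    const double d1 = averageLaneX(currPoints); // distance in the current frame

    if (std::abs(d0 - d1) < 1e-6)
        return NAN; // distance is unchanged; TTC is undefined under this model

    // Constant-velocity model
    return d1 * dT / (d0 - d1);
}
```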

FP.3 Associate Keypoint Correspondences with Bounding Boxes

For this step we loop over the keypoint matches between the previous and the current frame to determine which current-frame keypoints are contained in the bounding box region of interest. Because there are outliers among the keypoint matches, we filter them using a mean-distance threshold: we compute the mean distance between matched keypoints, scale it by 0.75, and keep only the matches whose distance is below that threshold, as sketched below.
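
A sketch of this clustering and filtering step is shown below, assuming the `BoundingBox` fields from the Udacity starter code and the 0.75 scaling described above.

```cpp
// Sketch of FP.3: collect keypoint matches whose current keypoint lies inside
// the bounding box ROI, then reject outliers using 0.75 * mean distance.
// The BoundingBox fields are assumed from the Udacity starter code.
#include <vector>
#include <opencv2/core.hpp>

struct BoundingBox
{
    cv::Rect roi;
    std::vector<cv::DMatch> kptMatches;
};

void clusterKptMatchesWithROI(BoundingBox &boundingBox,
                              const std::vector<cv::KeyPoint> &kptsPrev,
                              const std::vector<cv::KeyPoint> &kptsCurr,
                              const std::vector<cv::DMatch> &kptMatches)
{
    // Matches whose current-frame keypoint falls inside the ROI.
    std::vector<cv::DMatch> candidates;
    double distSum = 0.0;
    for (const auto &match : kptMatches)
    {
        const cv::KeyPoint &kpCurr = kptsCurr[match.trainIdx];
        if (boundingBox.roi.contains(kpCurr.pt))
        {
            const cv::KeyPoint &kpPrev = kptsPrev[match.queryIdx];
            distSum += cv::norm(kpCurr.pt - kpPrev.pt);
            candidates.push_back(match);
        }
    }
    if (candidates.empty())
        return;

    // Keep only matches whose displacement is below the scaled mean distance.
    const double threshold = 0.75 * distSum / candidates.size();
    for (const auto &match : candidates)
    {
        const double dist = cv::norm(kptsCurr[match.trainIdx].pt - kptsPrev[match.queryIdx].pt);
        if (dist < threshold)
            boundingBox.kptMatches.push_back(match);
    }
}
```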

FP.4 Compute Camera-based TTC

After associating keypoint correspondences with bounding boxes, we perform the TTC computation for the camera data; a sketch of a typical distance-ratio approach is shown below.
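
The sketch below estimates the camera-based TTC from ratios of distances between keypoint pairs in the current and previous frame. The use of the median ratio and the minimum pixel distance are assumptions taken from the standard course approach, not confirmed details of this repository.

```cpp
// Sketch of a camera-based TTC using ratios of distances between keypoint
// pairs in the current and previous frame; the median ratio and the minimum
// pixel distance are assumptions, not confirmed details of this repository.
#include <algorithm>
#include <cmath>
#include <limits>
#include <vector>
#include <opencv2/core.hpp>

double computeTTCCamera(const std::vector<cv::KeyPoint> &kptsPrev,
                        const std::vector<cv::KeyPoint> &kptsCurr,
                        const std::vector<cv::DMatch> &kptMatches,
                        double frameRate)
{
    if (kptMatches.size() < 2)
        return NAN;

    std::vector<double> distRatios;
    for (auto it1 = kptMatches.begin(); it1 != kptMatches.end() - 1; ++it1)
    {
        const cv::KeyPoint &kpOuterCurr = kptsCurr[it1->trainIdx];
        const cv::KeyPoint &kpOuterPrev = kptsPrev[it1->queryIdx];
        for (auto it2 = it1 + 1; it2 != kptMatches.end(); ++it2)
        {
            const cv::KeyPoint &kpInnerCurr = kptsCurr[it2->trainIdx];
            const cv::KeyPoint &kpInnerPrev = kptsPrev[it2->queryIdx];

            const double distCurr = cv::norm(kpOuterCurr.pt - kpInnerCurr.pt);
            const double distPrev = cv::norm(kpOuterPrev.pt - kpInnerPrev.pt);

            const double minDist = 100.0; // ignore keypoint pairs that are too close together
            if (distPrev > std::numeric_limits<double>::epsilon() && distCurr >= minDist)
                distRatios.push_back(distCurr / distPrev);
        }
    }
    if (distRatios.empty())
        return NAN;

    // The median ratio suppresses the influence of outlier matches.
    std::sort(distRatios.begin(), distRatios.end());
    const double medianRatio = distRatios[distRatios.size() / 2];
    if (std::abs(1.0 - medianRatio) < std::numeric_limits<double>::epsilon())
        return NAN;

    const double dT = 1.0 / frameRate;
    return -dT / (1.0 - medianRatio);
}
```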

FP.5 Performance Evaluation 1

The main errors in the Lidar-based TTC computation come from the characteristics of the Lidar sensor. Lidar points are sometimes reflected off the hood of the ego vehicle; these points make it appear as if there is an obstacle very close to the sensor, and they need to be filtered out. Other issues come from unstable Lidar points returned by the reflective surfaces of the preceding vehicle.

In the last three images of the tested sequence we see a jump in the Lidar TTC calculation.

alt text

alt text

alt text

FP.6 Performance Evaluation 2

After running all detector-descriptor combinations to compare their performance, we see that detectors such as HARRIS and ORB produce unreliable results. HARRIS performed the worst due to its poor keypoint detection. Detectors such as FAST, SIFT and AKAZE produce the most stable results, with no oscillations of the TTC measurement between frames. Other combinations produce occasional jumps in the TTC calculation between frames.

Dependencies for Running Locally

Basic Build Instructions

  1. Clone this repo.
  2. Make a build directory in the top level project directory: `mkdir build && cd build`
  3. Compile: `cmake .. && make`
  4. Run it: `./3D_object_tracking`
