Self-Driving Car Engineer Nanodegree Program
In this project I implemented an Extended Kalman filter to perform sensor fusion, estimating the state of a moving object of interest from noisy lidar and radar measurements.
With sensor fusion implemented correctly, this implementation obtains RMSE values lower than the tolerance outlined in the project rubric: the px, py, vx, vy output coordinates must have an RMSE <= [.11, .11, 0.52, 0.52].
Simulation results are recorded in the following YouTube videos:
Video with Dataset 1, Video with Dataset 2
The main flow is in `main.cpp`, and the algorithm implementation is in:
- `FusionEKF.cpp`: value initialization and the measurement/prediction flow of the sensor fusion.
- `KalmanFilter.cpp`: the Kalman filter implementation for lidar and radar data.
- `tools.cpp`: helper functions to calculate the Jacobian and RMSE.
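For reference, the two helpers in `tools.cpp` can be sketched as follows. This is a simplified illustration using plain `std::vector` instead of the Eigen types the project actually uses:

```cpp
#include <cmath>
#include <cstddef>
#include <vector>

// Element-wise RMSE over a sequence of 4D estimates vs. ground truth.
std::vector<double> CalculateRMSE(const std::vector<std::vector<double>>& est,
                                  const std::vector<std::vector<double>>& gt) {
  std::vector<double> rmse(4, 0.0);
  if (est.empty() || est.size() != gt.size()) return rmse;
  for (std::size_t i = 0; i < est.size(); ++i) {
    for (std::size_t j = 0; j < 4; ++j) {
      const double d = est[i][j] - gt[i][j];
      rmse[j] += d * d;  // accumulate squared residuals per component
    }
  }
  for (std::size_t j = 0; j < 4; ++j) rmse[j] = std::sqrt(rmse[j] / est.size());
  return rmse;
}

// Jacobian of the radar measurement h(x) = [rho, phi, rho_dot] with
// respect to the state [px, py, vx, vy]; returns a 3x4 matrix.
std::vector<std::vector<double>> CalculateJacobian(double px, double py,
                                                   double vx, double vy) {
  const double c1 = px * px + py * py;  // rho^2
  const double c2 = std::sqrt(c1);      // rho
  const double c3 = c1 * c2;            // rho^3
  if (c1 < 1e-9) {                      // guard against division by zero
    return std::vector<std::vector<double>>(3, std::vector<double>(4, 0.0));
  }
  return {{px / c2, py / c2, 0.0, 0.0},
          {-py / c1, px / c1, 0.0, 0.0},
          {py * (vx * py - vy * px) / c3, px * (vy * px - vx * py) / c3,
           px / c2, py / c2}};
}
```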
`KalmanFilter.cpp` illustrated (taken from the Udacity course video):

`FusionEKF.cpp` illustrated (taken from the Udacity course video):
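The predict step pictured above can be sketched as a constant-velocity model. This is a simplification with a plain struct instead of the project's Eigen vectors, and it omits the covariance update:

```cpp
#include <cmath>

// 4D state: position (px, py) and velocity (vx, vy).
struct State {
  double px, py, vx, vy;
};

// Constant-velocity predict step x' = F x, where F folds in the
// elapsed time dt: position advances by velocity * dt.
State Predict(const State& x, double dt) {
  return {x.px + dt * x.vx, x.py + dt * x.vy, x.vx, x.vy};
}
```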
- git clone this repository.
- Install uWebSocketIO by running `install-ubuntu.sh` or `install-mac.sh`.
- Build main program: Once the install for uWebSocketIO is complete, the main program can be built and run by doing the following from the project top directory.
- `mkdir build`
- `cd build`
- `cmake ..`
- `make`
- `./ExtendedKF`
- Download and run the simulator: this project uses the Term 2 Simulator, which can be downloaded here
- `H_laser_` should be properly initialized; otherwise it will be filled with random values, causing an RMSE larger than 0.2.
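For illustration, the lidar measurement matrix is a fixed projection from the 4D state to the 2D lidar measurement. This sketch uses a plain array; the project itself uses an Eigen `MatrixXd`:

```cpp
// H_laser_ projects the state [px, py, vx, vy] onto the lidar
// measurement [px, py]: lidar observes position only, not velocity.
const double H_laser[2][4] = {
    {1.0, 0.0, 0.0, 0.0},
    {0.0, 1.0, 0.0, 0.0},
};
```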
- When dealing with radar data, the angle should be normalized in `void KalmanFilter::UpdateEKF(const VectorXd &z)` when computing `VectorXd y = z - hofx;`. Otherwise the estimation will go wrong, as shown in this video.
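One common way to normalize the angle component of the residual is to wrap it into [-pi, pi]; a minimal sketch (the helper name here is mine, not necessarily the project's):

```cpp
#include <cmath>

constexpr double kPi = 3.14159265358979323846;

// Wrap an angle into [-pi, pi] so the radar residual y = z - h(x)
// never jumps by ~2*pi when phi crosses the +/-pi boundary.
double NormalizeAngle(double phi) {
  while (phi > kPi) phi -= 2.0 * kPi;
  while (phi < -kPi) phi += 2.0 * kPi;
  return phi;
}
```

In `UpdateEKF`, this would be applied to the angle component of `y` after computing `z - hofx`, before the Kalman gain update.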