This project is part of Udacity's Self-driving Car Engineer Nanodegree program.
The extended Kalman filter (EKF) is the nonlinear version of the Kalman filter which linearizes about an estimate of the current mean and covariance. In the case of well-defined transition models, the EKF has been considered[1] the de facto standard in the theory of nonlinear state estimation, navigation systems, and GPS. The extended Kalman filter is also widely used in self-driving cars and sensor fusion.
The goal of this project is to implement an Extended Kalman Filter (EKF) and use the EKF with noisy LiDAR and RADAR measurements to estimate the state of a moving object of interest.
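For reference, the EKF predict and update steps in standard textbook notation (a general form, not this project's exact variable names) are:

$$
\begin{aligned}
\text{Predict:}\quad & x' = F x, \qquad P' = F P F^\top + Q \\
\text{Update:}\quad & y = z - h(x'), \qquad S = H_j P' H_j^\top + R, \qquad K = P' H_j^\top S^{-1} \\
& x = x' + K y, \qquad P = (I - K H_j) P'
\end{aligned}
$$

where $H_j$ is the Jacobian of the nonlinear measurement function $h$ evaluated at the predicted state. This linearization via the Jacobian is what distinguishes the EKF from the standard Kalman filter.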
The video below shows what the simulator looks like when my C++ program is using its Kalman filter to track the object.
Notes:
- Lidar measurements are red circles
- Radar measurements are blue circles with an arrow pointing in the direction of the observed angle
- Estimation markers are green triangles
The simulator provides the program with the measurement data (either lidar or radar), and the program feeds back the measured estimation marker and RMSE values from its Kalman filter.
The file obj_pose-laser-radar-synthetic-input.txt contains the RADAR and LiDAR data. Here is a sample extract:
...
R 2.990916e+00 2.176679e-01 5.191807e+00 1477010443450000 2.939043e+00 6.127858e-01 5.193090e+00 8.066803e-02 1.553247e-02 6.893328e-02
L 3.012223e+00 6.370455e-01 1477010443500000 3.198690e+00 6.172666e-01 5.191470e+00 9.854147e-02 1.897914e-02 7.578466e-02
R 3.593878e+00 1.354522e-01 5.161753e+00 1477010443550000 3.458253e+00 6.226855e-01 5.189627e+00 1.181798e-01 2.276837e-02 8.262407e-02
L 3.893650e+00 3.117930e-01 1477010443600000 3.717722e+00 6.291305e-01 5.187542e+00 1.395764e-01 2.689958e-02 8.945044e-02
R 4.255547e+00 1.648397e-01 5.433327e+00 1477010443650000 3.977082e+00 6.366893e-01 5.185194e+00 1.627238e-01 3.137210e-02 9.626268e-02
L 4.309346e+00 5.785637e-01 1477010443700000 4.236322e+00 6.454494e-01 5.182560e+00 1.876140e-01 3.618523e-02 1.030597e-01
...
Each row represents a sensor measurement where the first column tells you if the measurement comes from radar (`R`) or lidar (`L`).

For a row containing radar data, the columns are: `sensor_type`, `rho_measured`, `phi_measured`, `rhodot_measured`, `timestamp`, `x_groundtruth`, `y_groundtruth`, `vx_groundtruth`, `vy_groundtruth`, `yaw_groundtruth`, `yawrate_groundtruth`.
For a row containing lidar data, the columns are: `sensor_type`, `x_measured`, `y_measured`, `timestamp`, `x_groundtruth`, `y_groundtruth`, `vx_groundtruth`, `vy_groundtruth`, `yaw_groundtruth`, `yawrate_groundtruth`.
Whereas radar has three measurements (`rho`, `phi`, `rhodot`), lidar has two measurements (`x`, `y`).
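The per-line format described above can be read with a small parser. This is a hypothetical standalone sketch, not the project's actual input-handling code; the `Measurement` struct and `ParseLine` function are illustrative names.

```cpp
#include <cassert>
#include <cmath>
#include <sstream>
#include <string>
#include <vector>

// Hypothetical sketch: parse one whitespace-separated line from
// obj_pose-laser-radar-synthetic-input.txt following the column layout above.
struct Measurement {
  char sensor_type;                 // 'R' (radar) or 'L' (lidar)
  std::vector<double> values;       // 3 values for radar, 2 for lidar
  long long timestamp;              // timestamp in microseconds
  std::vector<double> groundtruth;  // x, y, vx, vy, yaw, yawrate
};

Measurement ParseLine(const std::string &line) {
  std::istringstream iss(line);
  Measurement m;
  iss >> m.sensor_type;
  // Radar rows carry (rho, phi, rhodot); lidar rows carry (x, y).
  int n_meas = (m.sensor_type == 'R') ? 3 : 2;
  for (int i = 0; i < n_meas; ++i) {
    double v;
    iss >> v;
    m.values.push_back(v);
  }
  iss >> m.timestamp;
  // Remaining columns are ground-truth values.
  double g;
  while (iss >> g) m.groundtruth.push_back(g);
  return m;
}
```

Running this on the first radar row of the sample extract yields three measurement values, the timestamp, and six ground-truth columns.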
My program uses the measurement values and timestamp in its Kalman filter algorithm. Ground truth, which represents the actual path the bicycle took, is used for calculating root mean squared error (RMSE). We do not need to worry about the yaw and yaw rate ground truth values for this project.

The simulator acts as a client, and the C++ program I developed acts as a server. They communicate using uWebSocketIO.
The main project source files are in the src folder.

- main.cpp: communicates with the simulator, receives data measurements, calls a function to run the Kalman filter, and calls a function to calculate RMSE
- FusionEKF.cpp: initializes the filter, calls the `predict()` function, and calls the `update()` function
- kalman_filter.cpp: defines the `predict()` function, the `update()` function for LiDAR, and the `update()` function for RADAR
- tools.cpp: contains functions to calculate RMSE and the Jacobian matrix
The RMSE values for `px`, `py`, `vx`, and `vy` should be less than or equal to these values: `[.11, .11, 0.52, 0.52]`.
I verified over multiple runs of the simulator that my RMSE values were less than the specified threshold values. These are the values I got:
| px | py | vx | vy |
|---|---|---|---|
| 0.0973 | 0.0855 | 0.4513 | 0.4399 |
The three main steps for programming a Kalman filter, which my EKF follows, are:
- initializing the Kalman filter variables
- predicting where our object is going to be after a time step Δt
- updating where our object is based on sensor measurements
Then the prediction and update steps repeat in a loop.
To measure how well our Kalman filter performs, we then calculate root mean squared error comparing the Kalman filter results with the provided ground truth.
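The RMSE calculation lives in tools.cpp. Here is a sketch of the same computation written with `std::vector` instead of Eigen so it is self-contained; the `CalculateRMSE` signature here is an assumption, not the project's exact interface.

```cpp
#include <cassert>
#include <cmath>
#include <vector>

// Sketch of an RMSE calculation like the one in tools.cpp, using plain
// std::vector instead of Eigen. Each inner vector is one state estimate
// (px, py, vx, vy); ground truth has the same shape.
std::vector<double> CalculateRMSE(
    const std::vector<std::vector<double>> &estimations,
    const std::vector<std::vector<double>> &ground_truth) {
  std::vector<double> rmse(4, 0.0);
  // Guard against empty or mismatched inputs.
  if (estimations.empty() || estimations.size() != ground_truth.size())
    return rmse;
  for (std::size_t i = 0; i < estimations.size(); ++i)
    for (std::size_t j = 0; j < 4; ++j) {
      double d = estimations[i][j] - ground_truth[i][j];
      rmse[j] += d * d;  // accumulate squared residuals per component
    }
  for (std::size_t j = 0; j < 4; ++j)
    rmse[j] = std::sqrt(rmse[j] / estimations.size());  // mean, then root
  return rmse;
}
```

For example, two estimates each off by 0.1 in `px` and exact elsewhere give an RMSE of 0.1 in the first component and 0 in the rest.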
These three steps (initialize, predict, update) plus calculating RMSE encapsulate the entire extended Kalman filter project.
My implementation follows this general sensor fusion flow:
Here is a brief overview of what happens when the simulator and program are running:
- main.cpp reads in the data and sends a sensor measurement to FusionEKF.cpp.
- FusionEKF.cpp takes the sensor data and initializes and updates variables. The Kalman filter equations are not in this file. FusionEKF.cpp has a variable called `ekf_`, which is an instance of a `KalmanFilter` class. The `ekf_` instance holds the matrix and vector values. The program also uses the `ekf_` instance to call the `predict()` and `update()` functions.
- The `KalmanFilter` class is defined in kalman_filter.cpp and kalman_filter.h. The `KalmanFilter` class contains functions for the prediction and update steps.
This animated GIF provides a zoomed-out view:
The `ProcessMeasurement()` function in FusionEKF.cpp implements the code to handle the first measurement:

```cpp
void FusionEKF::ProcessMeasurement(const MeasurementPackage &measurement_pack)
{
  if (!is_initialized_)
  {
    // first measurement
    ekf_.x_ = VectorXd(4);
    ekf_.x_ << 1, 1, 1, 1;
    ...
```
The `MeasurementPackage` class contains an `enum` called `SensorType` that specifies two sensor types, `LASER` and `RADAR`.
```cpp
class MeasurementPackage {
public:
  enum SensorType {
    LASER,
    RADAR
  } sensor_type_;

  long long timestamp_;

  Eigen::VectorXd raw_measurements_;
};
```
The `ProcessMeasurement()` function in FusionEKF.cpp handles both RADAR and LiDAR measurements:
```cpp
if (measurement_pack.sensor_type_ == MeasurementPackage::RADAR)
{
  double rho = measurement_pack.raw_measurements_(0);
  double phi = measurement_pack.raw_measurements_(1);
  double rhodot = measurement_pack.raw_measurements_(2);
  // polar to cartesian - r * cos(angle) for x and r * sin(angle) for y
  ekf_.x_ << rho * cos(phi), rho * sin(phi), rhodot * cos(phi), rhodot * sin(phi);
}
else if (measurement_pack.sensor_type_ == MeasurementPackage::LASER)
{
  // Initialize state.
  ekf_.x_ << measurement_pack.raw_measurements_(0), measurement_pack.raw_measurements_(1), 0.0, 0.0;
}
```
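The radar update step needs the inverse conversion: the nonlinear measurement function `h(x)` that maps the cartesian state back to polar coordinates. A sketch under the same state layout (the `CartesianToPolar` helper is an illustrative name, not the project's exact function):

```cpp
#include <cassert>
#include <cmath>
#include <tuple>

// Sketch of the nonlinear radar measurement function h(x): maps the
// cartesian state (px, py, vx, vy) to polar (rho, phi, rhodot).
std::tuple<double, double, double> CartesianToPolar(double px, double py,
                                                    double vx, double vy) {
  double rho = std::sqrt(px * px + py * py);  // range
  double phi = std::atan2(py, px);            // bearing in (-pi, pi]
  // Guard against division by zero when the object is at the origin.
  double rhodot = (rho > 1e-6) ? (px * vx + py * vy) / rho : 0.0;  // range rate
  return {rho, phi, rhodot};
}
```

In the radar update, the residual `y = z - h(x')` should also have its angle component normalized back into `(-pi, pi]`, since `atan2` and the raw measurement can differ by a full turn.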
My EKF algorithm avoids unnecessary calculations. I attempt to strike a balance between comprehension, stability, robustness, security, and speed.
This project employs Google's C++ style guide.
- cmake >= 3.5
- All OSes: click here for installation instructions
- make >= 4.1 (Linux, Mac), 3.81 (Windows)
- Linux: make is installed by default on most Linux distros
- Mac: install Xcode command line tools to get make
- Windows: Click here for installation instructions
- gcc/g++ >= 5.4
- Linux: gcc / g++ is installed by default on most Linux distros
- Mac: same deal as make - install Xcode command line tools
- Windows: recommend using MinGW
- uWebSockets
- Set up and install uWebSocketIO
- Note: the branch `e94b6e1` is the version of `uWebSocketIO` that works with the Udacity simulator
- Clone this repo.
- Make a build directory: `mkdir build && cd build`
- Compile: `cmake .. && make`
  - On Windows, you may need to run: `cmake .. -G "Unix Makefiles" && make`
- Run it: `./ExtendedKF`
- Udacity. https://www.udacity.com.
- Wikipedia. Kalman Filter.
- Wikipedia. Extended Kalman Filter
- James Teow, 2018. Understanding Kalman Filters with Python.
- Luc Jaulin, 2019. Mobile Robotics, 2nd Edition. Chapter 7: Kalman Filter. Wiley.
- Jean-Claude Bertein, Roger Ceschi, 2007. Discrete Stochastic Processes and Optimal Filtering. Chapter 7: The Kalman Filter. ISTE.
- Bruce P. Gibbs, 2011. Advanced Kalman Filtering, Least-Squares and Modeling: A Practical Handbook. Chapter 8: Kalman Filtering.
- Rangesh, A. and Trivedi, M.M., 2019. No Blind Spots: Full-Surround Multi-Object Tracking for Autonomous Vehicles using Cameras & LiDARs. IEEE Transactions on Intelligent Vehicles, 4(4), pp.588-599.
- Chavez-Garcia, R.O. and Aycard, O., 2015. Multiple sensor fusion and classification for moving object detection and tracking. IEEE Transactions on Intelligent Transportation Systems, 17(2), pp.525-534.
- Shantaiya, S., Verma, K. and Mehta, K., 2015. Multiple object tracking using Kalman filter and optical flow. European Journal of Advances in Engineering and Technology, 2(2), pp.34-39.
- Jeong, J.M., Yoon, T.S. and Park, J.B., 2014, September. Kalman filter based multiple objects detection-tracking algorithm robust to occlusion. In 2014 Proceedings of the SICE Annual Conference (SICE) (pp. 941-946). IEEE.
- Chen, X., Wang, X. and Xuan, J., 2018. Tracking multiple moving objects using unscented Kalman filtering techniques. arXiv preprint arXiv:1802.01235.
- Himmelsbach, M., Mueller, A., Lüttel, T. and Wünsche, H.J., 2008, October. LIDAR-based 3D object perception. In Proceedings of 1st international workshop on cognition for technical systems (Vol. 1).
- Hwang, S., Kim, N., Choi, Y., Lee, S. and Kweon, I.S., 2016, August. Fast multiple objects detection and tracking fusing color camera and 3D LIDAR for intelligent vehicles. In 2016 13th International Conference on Ubiquitous Robots and Ambient Intelligence (URAI) (pp. 234-239). IEEE.
- Arya Senna Abdul Rachman, A., 2017. 3D-LIDAR Multi Object Tracking for Autonomous Driving: Multi-target Detection and Tracking under Urban Road Uncertainties.