A SLAM algorithm written in C++ for ego-motion estimation and environment reconstruction with an RGB-D camera.
Video demonstration:
First, make sure you have the following packages installed:
- OpenCV 3
- PCL >= 1.8
- G2O
- Eigen 3
- SuiteSparse
- Spdlog
Then clone this repo into a local folder and run the good ol' build steps:
```shell
mkdir build
cd build
cmake ..
make
```
The executable will be placed in the bin/ folder of the project root.
We recommend evaluating the algorithm with the TUM RGB-D dataset. Our program reads the associated image sequence generated by the dataset's associate.py script.
To use our algorithm, simply run:

```shell
RGBDSlamApp path_to_associate_txt start_seq end_seq parameters.txt
```
Explanation:
- `path_to_associate_txt`: the associated image sequence file generated by associate.py
- `start_seq`/`end_seq`: the image sequence range used for reconstruction
- `parameters.txt`: program configuration (see below)
We have provided a template parameter file in misc/parameters.txt. Its content is straightforward, but there are a few things to note before you begin:
- If you are using images captured with your own Kinect / Xtion / etc., you must fill in the calibrated camera intrinsics beforehand.
- Standard feature detectors (SIFT, SURF, ORB, FAST) and descriptors (SIFT, SURF, ORB, BRISK) are supported.
Currently the interface with ROS has not been implemented, so online SLAM is not yet supported.