!!!!This is an early version of MultiCol-SLAM!!!!

MultiCol-SLAM

Author: Steffen Urban (urbste at googlemail.com).

MultiCol-SLAM is a multi-fisheye camera SLAM system. We adapt the SLAM system proposed in ORB-SLAM and ORB-SLAM2 and extend it for use with fisheye and multi-fisheye camera systems.

News

  • 25/10/2016 added paper: Paper
  • See a video here: VIDEO

The novel methods and concepts included in this new version are:

  • MultiKeyframes
  • Generic camera model (Scaramuzza's polynomial model).
  • MultiCol - a generic method for bundle adjustment for multi-camera systems.
  • a hyper-graph (g2o) formulation of MultiCol
  • dBRIEF and mdBRIEF: a distorted and an online-learned, masked version of BRIEF, respectively
  • Multi-camera loop closing
  • minimal (non-)central absolute pose estimation (3 pts) instead of EPnP, which is non-minimal (6 pts)

In terms of performance the following things were modified:

  • exchanged all transformations and vectors from cv::Mat to cv::Matx and cv::Vec
  • changed matrix access (descriptors, images) from .at<>() to .ptr()
  • set termination criteria for bundle adjustment and pose estimation using g2o::SparseOptimizerTerminateAction (see the sketch below)
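
For reference, a minimal sketch of how such a termination action can be attached to a g2o optimizer (illustrative only; the threshold, iteration count and optimizer setup are assumptions, not the values used in MultiCol-SLAM):

#include <g2o/core/sparse_optimizer.h>
#include <g2o/core/sparse_optimizer_terminate_action.h>

void configureOptimizer(g2o::SparseOptimizer& optimizer)
{
    // ... solver, vertices and edges are assumed to be set up elsewhere ...

    // Stop early once the relative gain per iteration drops below a threshold.
    auto* terminateAction = new g2o::SparseOptimizerTerminateAction();
    terminateAction->setGainThreshold(1e-6);        // assumed value, for illustration
    optimizer.addPostIterationAction(terminateAction);

    optimizer.initializeOptimization();
    optimizer.optimize(100);                        // upper bound on iterations
}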

A paper on the proposed SLAM system will follow. Here are some short descriptions of how the multi-camera integration works.

The MultiCol model is explained extensively in the paper given below. Here we briefly recapitulate the content: The MultiCol model is given by:
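
In LaTeX notation (a sketch; p_i denotes the homogeneous object point, M_t the rig pose at time t and M_c the fixed pose of camera c within the rig):

m'_{itc} = \pi_c\left(M_c^{-1} \, M_t^{-1} \, p_i\right)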

The indices denote object point i, observed at time t, in camera c. The camera projection is given by \pi, and we choose a general projection function, making this model applicable to a variety of prevalent (central) cameras, such as perspective, fisheye and omnidirectional.

For a single camera, we can omit the matrix M_c. This yields the classic collinearity equations, depicted in the following figure. Each observation m' then has two indices, i.e. t and i.
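
In the same notation (again a sketch), the single-camera case reduces to:

m'_{it} = \pi\left(M_t^{-1} \, p_i\right)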

To handle multi-camera systems, the body frame is introduced, i.e. a frame that describes the motion of the multi-camera rig:

If we are optimizing the exterior orientation of our multi-camera system, we are actually looking for an estimate of matrix M_t. Now each observation has three indices.

The graphical representation of MultiCol can be realized in a hyper-graph and g2o can be used to optimize vertices of this graph:
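
As an illustration of this formulation, below is a hypothetical sketch of a MultiCol reprojection edge written as a g2o hyper-edge (BaseMultiEdge) connecting three vertices: the body pose M_t, the camera-to-rig pose M_c and the object point p_i. The class name, the vertex types and the placeholder pinhole projection are assumptions for illustration, not the types actually used in MultiCol-SLAM:

#include <Eigen/Core>
#include <Eigen/Geometry>
#include <g2o/core/base_multi_edge.h>
#include <g2o/types/slam3d/vertex_se3.h>
#include <g2o/types/slam3d/vertex_pointxyz.h>

// Hypothetical MultiCol reprojection edge: error between an observed 2D point
// and the projection of p_i through M_t (body pose) and M_c (camera-in-rig pose).
class EdgeMultiColProjection
    : public g2o::BaseMultiEdge<2, Eigen::Vector2d>   // 2D image measurement
{
public:
    EdgeMultiColProjection() { resize(3); }           // three connected vertices

    void computeError() override
    {
        const auto* Mt = static_cast<const g2o::VertexSE3*>(_vertices[0]);
        const auto* Mc = static_cast<const g2o::VertexSE3*>(_vertices[1]);
        const auto* pi = static_cast<const g2o::VertexPointXYZ*>(_vertices[2]);

        // p_cam = M_c^{-1} * M_t^{-1} * p_i
        const Eigen::Vector3d p_cam =
            Mc->estimate().inverse() * (Mt->estimate().inverse() * pi->estimate());

        _error = _measurement - project(p_cam);
    }

    bool read(std::istream&) override { return false; }
    bool write(std::ostream&) const override { return false; }

private:
    // Placeholder pinhole projection; MultiCol-SLAM uses the generic polynomial model.
    Eigen::Vector2d project(const Eigen::Vector3d& p) const
    {
        return Eigen::Vector2d(p.x() / p.z(), p.y() / p.z());
    }
};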

1. Related Publications:

@Article{UrbanMultiColSLAM16,
  Title={{MultiCol-SLAM} - A Modular Real-Time Multi-Camera SLAM System},
  Author={Urban, Steffen and Hinz, Stefan},
  Journal={arXiv preprint arXiv:1610.07336},
  Year={2016}
}
@Article{UrbanMultiCol2016,
  Title = {{MultiCol Bundle Adjustment: A Generic Method for Pose Estimation, Simultaneous Self-Calibration and Reconstruction for Arbitrary Multi-Camera Systems}},
  Author = {Urban, Steffen and Wursthorn, Sven and Leitloff, Jens and Hinz, Stefan},
  Journal = {International Journal of Computer Vision},
  Year = {2016},
  Pages = {1--19}
 }
@Article{urban2015improved,
  Title = {{Improved Wide-Angle, Fisheye and Omnidirectional Camera Calibration}},
  Author = {Urban, Steffen and Leitloff, Jens and Hinz, Stefan},
  Journal = {ISPRS Journal of Photogrammetry and Remote Sensing},
  Year = {2015},
  Pages = {72--79},
  Volume = {108},
  Publisher = {Elsevier},
}

2. Requirements

  • C++11 compiler

  • As the accuracy and speed of the SLAM system also depend on the hardware, we advise you to run the system on a strong CPU. We mainly tested the system on a laptop with an i7-3630QM @ 2.4GHz and 16 GB of RAM running Windows 7 x64, so anything above that should be fine.

3. Camera calibration

We use a rather generic camera model, and thus MultiCol-SLAM should work with any prevalent central camera. To calibrate your cameras, follow the instructions at link. The system expects a calibration file with the following structure:

# Camera Parameters. Adjust them!
# Camera calibration parameters camera back
Camera.Iw: 754
Camera.Ih: 480

# hyperparameters
Camera.nrpol: 5
Camera.nrinvpol: 12

# forward polynomial f(\rho)
Camera.a0: -209.200757992065
Camera.a1: 0.0 
Camera.a2: 0.00213741670953883
Camera.a3: -4.2203617319086e-06
Camera.a4: 1.77146086919594e-08

# backward polynomial rho(\theta)
Camera.pol0: 293.667187375663
.... and the rest pol1-pol10
Camera.pol11: 0.810799620714366

# affine matrix
Camera.c: 0.999626131079017
Camera.d: -0.0034775192597376
Camera.e: 0.00385134991673147

# principal point
Camera.u0: 392.219508388648
Camera.v0: 243.494438476351
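
For reference, these parameters enter Scaramuzza's polynomial model roughly as follows (a sketch; see the calibration paper cited above for the exact definitions). An image point m' = (u', v') is first mapped onto the idealized sensor plane using the affine matrix and the principal point,

(u, v)^T = A^{-1} \left( (u', v')^T - (u_0, v_0)^T \right),   with A = [c, d; e, 1]

and with \rho = \sqrt{u^2 + v^2} the corresponding viewing ray is (u, v, f(\rho))^T, where f(\rho) = a_0 + a_1 \rho + a_2 \rho^2 + a_3 \rho^3 + a_4 \rho^4. The inverse polynomial \rho(\theta) with the coefficients pol0 ... pol11 maps the incidence angle \theta back to the image radius and is used for fast projection.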

You can find example files in ./Examples/Lafida
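
A minimal sketch of reading such a file with OpenCV's cv::FileStorage (the key names match the example above; the file name and error handling are illustrative, and this is not the project's actual loader):

#include <opencv2/core.hpp>
#include <iostream>
#include <string>
#include <vector>

int main()
{
    // "calib_cam_back.yaml" is a placeholder name; the file must be valid OpenCV YAML.
    cv::FileStorage fs("calib_cam_back.yaml", cv::FileStorage::READ);
    if (!fs.isOpened())
    {
        std::cerr << "Could not open calibration file" << std::endl;
        return 1;
    }

    int Iw = 0, Ih = 0, nrpol = 0, nrinvpol = 0;
    fs["Camera.Iw"] >> Iw;
    fs["Camera.Ih"] >> Ih;
    fs["Camera.nrpol"] >> nrpol;
    fs["Camera.nrinvpol"] >> nrinvpol;

    // Forward polynomial coefficients a0 ... a{nrpol-1}
    std::vector<double> a(nrpol, 0.0);
    for (int i = 0; i < nrpol; ++i)
        fs["Camera.a" + std::to_string(i)] >> a[i];

    // Backward polynomial coefficients pol0 ... pol{nrinvpol-1}
    std::vector<double> pol(nrinvpol, 0.0);
    for (int i = 0; i < nrinvpol; ++i)
        fs["Camera.pol" + std::to_string(i)] >> pol[i];

    // Affine matrix and principal point
    double c = 1.0, d = 0.0, e = 0.0, u0 = 0.0, v0 = 0.0;
    fs["Camera.c"] >> c;
    fs["Camera.d"] >> d;
    fs["Camera.e"] >> e;
    fs["Camera.u0"] >> u0;
    fs["Camera.v0"] >> v0;

    std::cout << "Image size: " << Iw << " x " << Ih << std::endl;
    return 0;
}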

4. Multi-camera calibration

You can find example files in ./Examples/Lafida TODO

5. Dependencies:

Pangolin

For visualization. Get the instructions here: Pangolin

OpenCV 3

At least OpenCV 3.0 is required. The library can be found at: OpenCV.

Eigen 3

Required by g2o. Version 3.2.9 is included in the ./ThirdParty folder. Other versions can be found at Eigen.

OpenGV

OpenGV can be found at: OpenGV. It is also included in the ./ThirdParty folder. We use OpenGV for re-localization (GP3P) and relative orientation estimation during initialization (Stewenius).

DBoW2 and g2o

As in ORB-SLAM2, we use modified versions of DBoW2 and g2o for place recognition and optimization, respectively. Both are included in the ./ThirdParty folder. The original versions can be found here: DBoW2, g2o.

6. Build MultiCol-SLAM:

Ubuntu:

This is tested with Ubuntu 16.04. Before you build MultiCol-SLAM, you have to build and install OpenCV and Pangolin. This can for example be done by running the following:

Build Pangolin:

sudo apt-get install libglew-dev cmake
git clone https://github.com/stevenlovegrove/Pangolin.git
cd Pangolin
mkdir build
cd build
cmake -DCPP11_NO_BOOST=1 ..
make -j

Build OpenCV 3.1

This is just a suggestion on how to build OpenCV 3.1. There are plenty of options, and some of the packages might be optional.

sudo apt-get install libgtk2.0-dev pkg-config libavcodec-dev libavformat-dev libswscale-dev python-dev python-numpy libtbb2 libtbb-dev libjpeg-dev libpng-dev libtiff-dev libjasper-dev libdc1394-22-dev
git clone https://github.com/Itseez/opencv.git
cd opencv
mkdir build
cd build
cmake -D CMAKE_BUILD_TYPE=RELEASE -D WITH_CUDA=OFF ..
make -j
sudo make install

This will take some time...

Build MultiCol-SLAM

git clone https://github.com/urbste/MultiCol-SLAM.git MultiCol-SLAM
cd MultiCol-SLAM
chmod +x build.sh
./build.sh

Run build.sh for the rest. This will create a library and an executable multi_col_slam_lafida, which you can run as shown in Section 7 below.

Windows:

This description assumes that you are familiar with building libraries using cmake and Visual Studio. At least Visual Studio 2013 is required. The first step is to build Pangolin.

  • Download or clone Pangolin.
  • Run cmake to create a VS project in $PATH_TO_PANGOLIN$/build.
  • You can add options if you like, but the most basic set of options is sufficient for MultiCol-SLAM and should build without issues.
  • Open build/Pangolin.sln, switch to Release and build the solution (ALL_BUILD)

Next build OpenCV 3.1.

  • Download or clone OpenCV 3.1.
  • Run cmake to create a VS project in $PATH_TO_OpenCV$/build.
  • You might want to switch off building the CUDA libraries, as this takes a long time
  • Open $PATH_TO_OpenCV$/build/OpenCV.sln, switch to Release and build the solution. Then open the folder CMakeTargets, right-click on INSTALL and select Build.

Now download or clone MultiCol-SLAM. Next build DBoW2:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/DBoW2/build
  • If you get configuration errors, you likely did not set the OpenCV_DIR
  • Set this path to $PATH_TO_OpenCV$/build/install and run Generate
  • Open $MultiCol-SLAM_PATH$/ThirdParty/DBoW2/build/DBoW2.sln, switch to Release and build the solution

Next build g2o:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/g2o/build
  • If you get any errors, you might want to set EIGEN3_INCLUDE_DIR. Either set it to your own version of Eigen or use the version provided in the ThirdParty folder, $MultiCol-SLAM_PATH$/ThirdParty/Eigen.
  • Hit Generate and you will get the solution g2o.sln. Open it, select Release and build the solution.

In a last step, we will build OpenGV. Unfortunately this takes quite some time under Windows (hours).

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/ThirdParty/OpenGV/build
  • Open $MultiCol-SLAM_PATH$/ThirdParty/OpenGV/build/OpenGV.sln, switch to Release and build the solution

Finally, we can build MultiCol-SLAM:

  • Run cmake to create a VS project in $MultiCol-SLAM_PATH$/build
  • If you get any errors:
  • Set OpenCV_DIR: $PATH_TO_OpenCV$/build/install
  • Set Pangolin_DIR: $PATH_TO_PANGOLIN$/build/src
  • Open $MultiCol-SLAM_PATH$/build/MultiCol-SLAM.sln and build the solution (ALL_BUILD)

7. Run examples

By now you should have compiled all libraries and MultiCol-SLAM. If everything went well, you will find an executable in the folder ./Examples/Lafida/Release if you are running Windows and ./

First, download the indoor dynamic dataset: dataset. Then extract it, e.g. to the folder

$HOME$/Downloads/IndoorDynamic

The executable multi_col_slam_lafida expects four paths: the path to the vocabulary file, the path to the settings file, the path to the calibration files, and the path to the images. In our example, we could run MultiCol-SLAM as follows:

./Examples/Lafida/multi_col_slam_lafida ./Examples/small_orb_omni_voc_9_6.yml  ./Examples/Lafida/Slam_Settings_indoor1.yaml ./Examples/Lafida/ $HOME$/Downloads/IndoorDynamic

Important: To evaluate the trajectory you will need to transform the result into the rigid body coordinate system. The transformation matrix MCS_to_Rigid_body can be found here.
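
A minimal sketch of applying such a transform to each estimated pose before evaluation (MCS_to_Rigid_body, the 4x4 pose representation and the multiplication order are assumptions; check the dataset documentation for the exact convention):

#include <opencv2/core.hpp>
#include <vector>

// Map every pose estimated by MultiCol-SLAM into the rigid-body coordinate system.
std::vector<cv::Matx44d> toRigidBodyFrame(
    const std::vector<cv::Matx44d>& slamPoses,   // estimated trajectory, one 4x4 pose per frame
    const cv::Matx44d& MCS_to_Rigid_body)        // transform provided with the dataset
{
    std::vector<cv::Matx44d> out;
    out.reserve(slamPoses.size());
    for (const auto& T : slamPoses)
        out.push_back(T * MCS_to_Rigid_body);    // assumed right-multiplication
    return out;
}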
