README {#mainpage}

Welcome to OKVIS: Open Keyframe-based Visual-Inertial SLAM.

This is the author's implementation of [1] and [3], with more results in [2].

[1] Stefan Leutenegger, Simon Lynen, Michael Bosse, Roland Siegwart and Paul Timothy Furgale. Keyframe-based visual–inertial odometry using nonlinear optimization. The International Journal of Robotics Research, 2015.

[2] Stefan Leutenegger. Unmanned Solar Airplanes: Design and Algorithms for Efficient and Robust Autonomous Operation. Doctoral dissertation, 2014.

[3] Stefan Leutenegger, Paul Timothy Furgale, Vincent Rabaud, Margarita Chli, Kurt Konolige, Roland Siegwart. Keyframe-Based Visual-Inertial SLAM using Nonlinear Optimization. In Proceedings of Robotics: Science and Systems, 2013.

Note that the codebase provided here comes free of charge and without any warranty. This is bleeding-edge research software.

Also note that the quaternion standard has been adapted to match Eigen/ROS; some of the related mathematical derivations in [1,2,3] therefore do not match the implementation here.
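As a reminder of what the Eigen/ROS convention implies, here is a minimal Hamilton quaternion product in plain Python. This is illustrative only and not part of OKVIS; components are ordered (w, x, y, z) here for readability, whereas Eigen's constructor takes (w, x, y, z) but its internal coefficient storage is (x, y, z, w).

```python
def quat_mul(p, q):
    """Hamilton product p * q, each quaternion given as (w, x, y, z)."""
    pw, px, py, pz = p
    qw, qx, qy, qz = q
    return (pw*qw - px*qx - py*qy - pz*qz,
            pw*qx + px*qw + py*qz - pz*qy,
            pw*qy - px*qz + py*qw + pz*qx,
            pw*qz + px*qy - py*qx + pz*qw)

# The identity quaternion leaves any quaternion unchanged.
identity = (1.0, 0.0, 0.0, 0.0)
```

Formulas written in the JPL convention (as in parts of [1,2,3]) compose in the opposite order, which is the mismatch the note above refers to.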

If you publish work that relates to this software, please cite at least [1].


The 3-clause BSD license (see file LICENSE) applies.

How do I get set up?

This is a pure CMake project.

You will need to install the following dependencies:

  • CMake,

      sudo apt-get install cmake
  • google-glog + gflags,

      sudo apt-get install libgoogle-glog-dev
  • BLAS & LAPACK,

      sudo apt-get install libatlas-base-dev
  • Eigen3,

      sudo apt-get install libeigen3-dev
  • SuiteSparse and CXSparse,

      sudo apt-get install libsuitesparse-dev
  • Boost,

      sudo apt-get install libboost-dev libboost-filesystem-dev
  • OpenCV 2.4-3.0: follow the instructions on http://opencv.org/ or install via

      sudo apt-get install libopencv-dev
  • Optional: use the package with the Skybotix VI sensor. Note that this requires a system install, not just a ROS package install. Also note that Skybotix OSX support is experimental (check out the feature/osx branch).

      git clone https://github.com/ethz-asl/libvisensor.git
      cd libvisensor

then download and expand the archive:

wget https://www.doc.ic.ac.uk/~sleutene/software/okvis-1.1.3.zip
unzip okvis-1.1.3.zip && rm okvis-1.1.3.zip

Or, if you have been granted repository access, clone via SSH:

git clone git@github.com:ethz-asl/okvis.git

or via HTTPS:

git clone https://github.com/ethz-asl/okvis.git

Building the project

To change the cmake build type for the whole project use:

mkdir build && cd build
cmake -DCMAKE_BUILD_TYPE=Release ..
make -j8

NOTE: if you want to use the library in another project, install it (to the default prefix or a location of your choice) so that its dependencies can be resolved:

make install

Running the demo application

You will find a demo application in okvis_apps. It can process datasets in the ASL/ETH format.

In order to run a minimal working example, follow the steps below:

  1. Download a dataset of your choice from http://projects.asl.ethz.ch/datasets/doku.php?id=kmavvisualinertialdatasets, e.g. MH_01_easy/. You will find a corresponding calibration / estimator configuration in the config folder.

  2. Run the app as

     ./okvis_app_synchronous path/to/okvis/config/config_fpga_p2_euroc.yaml path/to/MH_01_easy/mav0/

Outputs and frames

In terms of coordinate frames and notation,

  • W denotes the OKVIS World frame (z up),
  • C_i denotes the i-th camera frame,
  • S denotes the IMU sensor frame,
  • B denotes a (user-specified) body frame.

The output of the okvis library is the pose T_WS as a position r_WS and quaternion q_WS, followed by the velocity in World frame v_W and gyro biases (b_g) as well as accelerometer biases (b_a). See the example application to get an idea on how to use the estimator and its outputs (callbacks returning states).
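The position r_WS and unit quaternion q_WS together define the homogeneous transform T_WS. A minimal sketch of assembling it, in plain Python for illustration (not the OKVIS API; the quaternion is taken here as (w, x, y, z)):

```python
def transform_WS(r, q):
    """Build the 4x4 transform T_WS from position r = (x, y, z)
    and unit quaternion q = (w, x, y, z)."""
    w, x, y, z = q
    # Standard rotation matrix of a Hamilton unit quaternion.
    R = [[1 - 2*(y*y + z*z), 2*(x*y - w*z),     2*(x*z + w*y)],
         [2*(x*y + w*z),     1 - 2*(x*x + z*z), 2*(y*z - w*x)],
         [2*(x*z - w*y),     2*(y*z + w*x),     1 - 2*(x*x + y*y)]]
    T = [row + [t] for row, t in zip(R, r)]
    T.append([0.0, 0.0, 0.0, 1.0])
    return T
```

A point expressed in S maps to W via p_W = T_WS * p_S, which is the convention the frame subscripts above encode.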

Configuration files

The config folder contains example configuration files. Please read the documentation of the individual parameters in the yaml file carefully. You have various options to trade-off accuracy and computational expense as well as to enable online calibration.

HEALTH WARNING: calibration

If you would like to run the software/library on your own hardware setup, be aware that good results (or any results at all) can only be obtained with appropriate calibration of the

  • camera intrinsics,
  • camera extrinsics (poses relative to the IMU),
  • IMU noise parameters.

To perform a calibration yourself, we recommend the following:

Using the library

Here's a minimal example of your CMakeLists.txt to build a project using OKVIS.

cmake_minimum_required(VERSION 2.8)

set(OKVIS_INSTALLATION <path/to/install>) # point to installation

# require OpenCV
find_package( OpenCV COMPONENTS core highgui imgproc features2d REQUIRED )
include_directories(BEFORE ${OpenCV_INCLUDE_DIRS}) 

# require okvis
find_package( okvis 1.1 REQUIRED)

# require brisk
find_package( brisk 2 REQUIRED)

# require ceres
find_package( Ceres REQUIRED )

# require OpenGV
find_package(opengv REQUIRED)

# VISensor, if available
find_package(VISensor QUIET)
if(VISensor_FOUND)
  message(STATUS "Found libvisensor.")
else()
  message(STATUS "libvisensor not found")
endif()

# now continue with your project-specific stuff...

Contribution guidelines


The developers will be happy to assist you and to consider bug reports / feature requests. However, questions that can be answered by reading this document will be ignored. Please contact s.leutenegger@imperial.ac.uk.