
Direct 3D model-based object tracking with event camera by motion interpolation

Event-based, Direct Camera Tracking

This is the code corresponding to the ICRA'24 paper Direct 3D model-based object tracking with event camera by motion interpolation by Yufan Kang, Guillaume Caron, Ryoichi Ishikawa, Adrien Escande, Kevin Chappellet, Ryusuke Sagawa, Takeshi Oishi.

We also provide the paper, poster, dataset and video. If you use any of this code, please cite the following publication:

@InProceedings{Kang24icra,
  author        = {Yufan Kang and Guillaume Caron and Ryoichi Ishikawa and Adrien Escande and Kevin Chappellet and Ryusuke Sagawa and Takeshi Oishi},
  title         = {Direct 3D model-based object tracking with event camera by motion interpolation},
  booktitle     = {{IEEE} Int. Conf. Robot. Autom. (ICRA)},
  year          = 2024
}

Table of Contents

  1. Overview
  2. Installation
  3. Running an Example

1. Overview

We propose a method for 6-DoF object pose tracking with an event camera and a 3D model. To enable reliable and accurate tracking of objects, we use a new event representation and predict brightness increment images with motion interpolation. This implementation is based on the direct event camera tracker proposed in Event-based, Direct Camera Tracking from a Photometric 3D Map using Nonlinear Optimization (Bryner et al., ICRA 2019).
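The brightness increment images mentioned above accumulate, at each pixel, the signed contributions of all events in a time window (the formulation used by Bryner et al.). A minimal sketch of that accumulation, where the contrast threshold C and the array shapes are illustrative assumptions rather than values from this implementation:

```python
import numpy as np

def brightness_increment_image(events, height, width, C=0.1):
    """Accumulate event polarities into a brightness increment image.

    events: iterable of (x, y, polarity) tuples with polarity in {+1, -1}.
    C: assumed per-event contrast threshold (illustrative value).
    """
    dL = np.zeros((height, width), dtype=np.float64)
    for x, y, p in events:
        dL[y, x] += p * C  # each event contributes +/- C at its pixel
    return dL

# Two positive events and one negative event at pixel (x=3, y=2)
img = brightness_increment_image([(3, 2, +1), (3, 2, +1), (3, 2, -1)],
                                 height=4, width=5)
# img[2, 3] is approximately +0.1 (net one positive event)
```

Tracking then compares such measured increment images against increments predicted from the rendered 3D model.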


Main Window

[Image: main window of the tracking software with poses and keyframes loaded.]

2. Installation

The software was originally developed and tested against ROS Noetic and Ubuntu 20.04.

Install ORFCV. Before running cmake, make sure the absolute path to the resource.cfg file is correctly set at line 1098 of code/src/OGRE/gcOgre.cpp.

Install catkin tools and vcstool if needed:

sudo apt install python3-catkin-tools python3-vcstool

Create a catkin workspace if not already done so:

mkdir -p catkin_ws/src
cd catkin_ws
catkin config --init --extend /opt/ros/noetic --cmake-args -DCMAKE_BUILD_TYPE=Release

Then clone the repository and pull in all its dependencies using vcs-import:

cd catkin_ws/src
git clone https://github.com/BluecatLi/direct_event_object_tracker.git
vcs-import < direct_event_object_tracker/dependencies.yaml

This will pull in the ROS dependencies listed in dependencies.yaml that are not part of the standard ROS distribution.

The following system dependencies are also required:

  • Qt 5.5 or higher (slightly older versions may also work)
  • GCC 5.4 or higher (must support the C++14 standard)
  • CMake 3.1 or higher
  • OpenCV 3
  • Eigen 3
  • yaml-cpp
  • PCL 1.7 (only for loading PLY files, could easily be replaced with assimp)
  • Boost
  • Sophus (bundled in this repository, as some slight adjustments to the code were required)
  • QCustomPlot
  • GLM

They are all available through the package management of Ubuntu and can be installed using apt:

sudo apt install build-essential qt5-default cmake libopencv-dev libeigen3-dev \
    libyaml-cpp-dev libpcl-dev ros-noetic-pcl-ros libboost-dev libqcustomplot-dev \
    libglm-dev libproj-dev dh-autoreconf

All the software is contained in a ROS package and can be built and run using catkin:

cd .. # should be in catkin_ws now
catkin build direct_event_object_tracker
source devel/setup.bash

To launch the tracker, you need a running roscore, so in one terminal run

roscore

In another terminal, run the tracker, passing the path to a configuration file. For example:

roscd direct_event_object_tracker
rosrun direct_event_object_tracker direct_event_object_tracker cfg/main.yaml

3. Running an Example

Download the dataset from https://www.cvl.iis.u-tokyo.ac.jp/~kyf/ICRA2024/.

Create a configuration file. You can use the example config from the repository and adjust the paths to the rosbag and the 3D model; the example config also contains an explanation of the parameters.
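For orientation, a config of this kind points the software at the rosbag, the 3D model, and an output directory. The sketch below is illustrative only: apart from export_dir (referenced in the tracking steps below), the key names are assumptions, so consult the example config in the repository for the actual parameters.

```yaml
# Illustrative sketch -- key names other than export_dir are assumptions;
# see the example config in the repository for the real parameter names.
bag_filename: /path/to/dataset/sequence.bag     # rosbag with event data
model_filename: /path/to/models/object.ply      # 3D model of the tracked object
export_dir: /path/to/results                    # where "track all" logs results
```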

Then, in your workspace run the software with this config:

roscd direct_event_object_tracker
rosrun direct_event_object_tracker direct_event_object_tracker path/to/config.yaml

To run the algorithm follow these steps:

  1. Load some events by clicking on "load events". You should see a grayscale "Event Frame" image on the right. If you get an error saying "start time not available", the currently selected time window lies outside the loaded dataset; increase "Current Time" by dragging the slider at the top.
  2. Generate a keyframe from the pose by clicking on "generate KF". This should fill the two rightmost columns.
  3. Set an initial pose by adjusting the 6-DoF pose with the buttons at the top left. Make sure the difference between the desired events and the rendered grayscale image is small.
  4. Run tracking by clicking "track step" (optimize only the current pose) or "track all" (generate a new keyframe after each optimization and continue tracking the rest of the dataset). "track all" logs results to the export_dir set in the config.

You can also run the optimization on just a single pyramid level with the "minimize" button. Clicking "plot" will generate a visualization of the error function.
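The pyramid levels behind the "minimize" button follow the usual coarse-to-fine idea: downsample the images, optimize at the coarsest level, then refine at finer ones. A toy illustration of building such a pyramid by averaging 2x2 blocks (not the tracker's actual code):

```python
import numpy as np

def build_pyramid(image, levels):
    """Build an image pyramid by averaging 2x2 blocks at each level
    (a toy stand-in for the tracker's multi-resolution optimization)."""
    pyramid = [image]
    for _ in range(levels - 1):
        img = pyramid[-1]
        h, w = img.shape[0] // 2 * 2, img.shape[1] // 2 * 2  # crop to even size
        coarse = img[:h, :w].reshape(h // 2, 2, w // 2, 2).mean(axis=(1, 3))
        pyramid.append(coarse)
    return pyramid  # pyramid[-1] is the coarsest level; optimize there first

img = np.arange(16, dtype=float).reshape(4, 4)
pyr = build_pyramid(img, levels=3)
# shapes: (4, 4), (2, 2), (1, 1)
```

Optimizing coarse levels first smooths the error function and widens the basin of convergence, which is why plotting the error on a single level can be instructive.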
