

LOLA Environment Perception Package v3: understanding the environment sensed by a Kinect-like sensor. Previous version: lepp2


The goal of the project is to approximate, in real time, arbitrary objects in the video feed of a 3D sensor using a number of basic geometric shapes, such as polygons, spheres, and cylinders.

The approximations can then be used as a basis for a robot control system, providing it with precise information on the positions of surfaces and obstacles.

As such, in order to meet the real-time requirement, some precision in the generated approximations is sacrificed, but with the goal of never under-approximating an object, so that the robot does not accidentally end up colliding with it.
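As a minimal sketch of this principle (the `Sphere` type and `overApproximate()` function below are made up for illustration, not lepp3's actual API), an obstacle cluster can be over-approximated by a bounding sphere whose radius reaches the farthest measured point:

```cpp
// Illustrative sketch only -- Sphere and overApproximate() are hypothetical,
// not part of lepp3. The cluster is over-approximated by a bounding sphere
// so that no measured point falls outside the approximation.
#include <algorithm>
#include <pcl/common/centroid.h>
#include <pcl/point_cloud.h>
#include <pcl/point_types.h>

struct Sphere {
  Eigen::Vector3f center;
  float radius;
};

Sphere overApproximate(const pcl::PointCloud<pcl::PointXYZ>& cluster) {
  Eigen::Vector4f centroid;
  pcl::compute3DCentroid(cluster, centroid);
  float radius = 0.0f;
  for (const auto& p : cluster.points) {
    // Distance from the centroid to each measured point.
    float d = (p.getVector3fMap() - centroid.head<3>()).norm();
    radius = std::max(radius, d);
  }
  // Taking the maximum distance guarantees that every point of the cluster
  // lies inside the sphere: the object may be over-approximated, but never
  // under-approximated.
  return {centroid.head<3>(), radius};
}
```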


As part of the research and development of humanoid robots at the Institute of Applied Mechanics of the Technical University of Munich, a subsystem for obstacle detection based on the sensory information provided by Kinect-like sensors was developed.

Initially, the system was based on analyzing pure depth maps (2D images where each pixel is additionally annotated with a depth measurement), as returned by the sensor. It subsequently moved to leveraging PCL, in the hope of simplifying, as well as improving, the subsystem by making use of the algorithms already implemented in PCL.

This project represents the latest evolution of that subsystem. It builds upon ideas previously implemented in the vision subsystem, improving the object approximation algorithms, implementing new approaches for obtaining the best possible approximations, and optimizing the efficiency and modularity of the previously existing codebase.

The repository is divided into different sections: the src/lepp3 directory contains the purely vision-related components, separated from the humanoid communication files and the external dependencies. This facilitates better code organization and modularity of the humanoid robot system.

The implementation of the robot communication subsystem is also found within this repository, in the src/lola tree. Most of the code there implements the communication logic between the robot control and the vision subsystems, such as obtaining the robot's kinematic parameters and sending the descriptions of detected surfaces and obstacles back to the control, according to previously specified communication protocols. It can be considered a good example of how the lepp3 library can be used within the context of a larger system.



Once the necessary dependencies have been obtained and installed, building is just:

```sh
mkdir build && cd build
cmake ..
make
```


The lola executable takes a single argument: the path of a config file to load. The config file specifies which components to enable and their parameters. The config directory contains several configuration files to get started.

```sh
./lola <config_file_path>
```

By adding or removing components in the config file, the executable can run anything from a simple live view of a connected camera, through playback or recording of data obtained from a 3D sensor, up to the complete system involving communication with a robot and/or other networked components (e.g. to perform AR visualization on a HoloLens).

See master-cfg.toml for an example of every available component and all the parameters that can be set. Note, however, that some components are mutually incompatible, so the master-cfg file cannot be used as-is.
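As a purely illustrative sketch, a minimal config might enable just a live camera source and an on-screen visualizer. The section and key names below are assumptions, not the real schema; consult master-cfg.toml for the actual names:

```toml
# Hypothetical config sketch -- section and key names here are assumptions;
# the real schema is documented by example in master-cfg.toml.
[source]
type = "live"            # grab frames from a connected 3D camera

[[observers]]
type = "Visualizer"      # render the approximated scene on screen
```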

To aid with testing and development, the src/lola/iface folder includes several tools for sending and receiving data for the various components involved when using a real robot (e.g. artificial kinematic data can be sent to lepp3, and lepp3's results can be broadcast to a mock receiver).


The lepp3 library is inherently modular and flexible. To get familiar with its different components, it helps to look at the structure used for environment modeling during experiments with the robot LOLA (the individual components can be activated and deactivated at will through the config file):

```dot
digraph G {
  size = "4,4";
  main        [label="Video Source / Pose Data", shape=box];
  input       [label="Input Filter", shape=box];
  detector    [label="SurfaceDetector", shape=box];
  visualizer  [label="Visualizer / Aggregator", shape=box];
  plane       [label="SurfaceFinder", shape=parallelogram];
  inlier      [label="Inlier Finder", shape=ellipse];
  gmm         [label="GMM Segmentation", shape=ellipse];
  euclidean   [label="Euclidean Segmentation", shape=ellipse];
  object      [label="Object Approximator", shape=ellipse];
  lowpass     [label="Euclidean Low-Pass-Filter", shape=ellipse];
  kalman      [label="Euclidean Kalman-Filter", shape=ellipse];
  surfcluster [label="SurfaceClusterer", shape=hexagon];
  surftracker [label="SurfaceTracker", shape=hexagon];
  convexhull  [label="ConvexHullDetector", shape=hexagon];

  main -> input [weight=8];
  input -> detector [weight=8];
  detector -> plane [weight=8, color=".7 .3 1.0", label="Plane Detection"];
  plane -> detector [weight=8, color=".7 .3 1.0"];
  detector -> surfcluster [weight=8, color=".3 .7 1.0", label="Surface Approximation"];
  surfcluster -> surftracker [weight=8, color=".3 .7 1.0"];
  surftracker -> convexhull [weight=8, color=".3 .7 1.0"];
  convexhull -> detector [weight=8, color=".3 .7 1.0"];
  detector -> inlier [weight=8, label="Obstacle Approximation"];
  inlier -> gmm [weight=8];
  inlier -> euclidean [weight=8];
  gmm -> object [weight=8];
  euclidean -> object [weight=8];
  object -> lowpass [weight=8, style=dotted];
  object -> kalman [weight=8, style=dotted];
  object -> visualizer [weight=8];
  lowpass -> visualizer [weight=8, style=dotted];
  kalman -> visualizer [weight=8, style=dotted];
}
```
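The components are loosely coupled: a source pushes each new frame to whichever observers the config file attaches to it, and those components process the frame and pass their results on. The following sketch shows the idea; the class names are purely illustrative, not lepp3's exact interfaces:

```cpp
// Illustrative sketch of observer-style chaining -- the names FrameData,
// FrameObserver, and VideoSource are assumptions, not lepp3's real classes.
#include <memory>
#include <vector>

struct FrameData { /* point cloud, pose data, detected surfaces/obstacles... */ };

class FrameObserver {
 public:
  virtual ~FrameObserver() = default;
  virtual void updateFrame(const FrameData& frame) = 0;
};

class VideoSource {
 public:
  void attachObserver(std::shared_ptr<FrameObserver> obs) {
    observers_.push_back(std::move(obs));
  }
  void notify(const FrameData& frame) {
    // Each attached component (filter, detector, visualizer...) receives the
    // frame in turn; the config file decides which components are attached.
    for (auto& obs : observers_) obs->updateFrame(frame);
  }
 private:
  std::vector<std::shared_ptr<FrameObserver>> observers_;
};
```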


The program contains a few tracepoints for tracelogging, using the LTTng tool (version 2.9 or higher). In order to activate tracelogging, you need to enable the corresponding flag when running cmake:
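The exact flag name is defined in the project's CMake files; the invocation below is only a hypothetical example, with an assumed flag name:

```sh
# Hypothetical flag name -- check the project's CMakeLists.txt for the real one.
cmake -DLEPP3_ENABLE_TRACING=ON ..
```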


Then, you need to run LTTng in another terminal before running lepp:

```sh
lttng create
lttng enable-events -u -a
lttng start
<Run lepp>
lttng stop
lttng destroy
```

A small script is available under src/script/ to extract some useful metrics from the resulting trace data (although any trace viewer capable of reading CTF trace data can be used as well). The script requires Python 3, babeltrace (apt-get install babeltrace libbabeltrace-ctf-dev python3-babeltrace), and the plotly Python library (pip3 install plotly).


```sh
python3 src/script/ <trace_directory> <output_name>
```

    <trace_directory> : Directory containing the actual trace data
    <output_name>     : Name to give the .html page containing the graph plots

For example:

```sh
python3 src/script/ ~/lttng-traces/auto-20171010-236640/ust/uid/1000/64-bit/ plot_output
```

The script will print some statistics about each event found in the trace, and create an .html page <output_name>.html containing a plot of each event across the duration of the trace (in frames).


The project is published under the terms of the MIT License.
