


Unofficial implementation of the paper[1]: Multiple People Tracking by Lifted Multicut and Person Re-identification

(Figure: tracking calculated by this library on the MOT16-11 video, using dmax=100 over 10 frames)


The software was developed on Ubuntu 16.04 and OSX with Python 3.5. The following libraries and tools are needed for this software to work correctly:

  • tensorflow (1.x+)
  • Keras (2.x+)
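The version requirements above can be sanity-checked before running anything; a minimal sketch (the `meets_minimum` helper is our own illustration, not part of this repository):

```python
def parse_version(v):
    """Parse a plain dotted version string such as '1.4.0' into a tuple."""
    return tuple(int(p) for p in v.split("."))

def meets_minimum(installed, required):
    """True if the installed version is at least the required one."""
    return parse_version(installed) >= parse_version(required)

# hedged usage: only runs the check if tensorflow is importable
try:
    import tensorflow as tf
    assert meets_minimum(tf.__version__.split("-")[0], "1.0.0"), \
        "tensorflow 1.x+ required"
except ImportError:
    print("tensorflow is not installed")
```

Tuple comparison handles multi-digit components correctly (e.g. 1.10 is newer than 1.9), which a plain string comparison would get wrong.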

Download source tree

Download the source code and its submodules using

git clone --recursive


When the above criteria are met, a simple install routine can be called inside the source root.


This script will create a text file called settings.txt. You will need this file when you are using the end-to-end algorithm.

Execute Code

Follow these steps to perform an end-to-end run on a video:
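The tracker expects the whole video X as a single (n, h, w, 3) array. A minimal sketch for stacking individual frame arrays into that shape (`stack_frames` is our own helper, not part of this library; how the frames are read from disk is up to you):

```python
import numpy as np

def stack_frames(frames):
    """Stack equally-sized (h, w, 3) frame arrays into one (n, h, w, 3) array."""
    if not frames:
        raise ValueError("no frames given")
    h, w, _ = frames[0].shape
    for i, f in enumerate(frames):
        if f.shape != (h, w, 3):
            raise ValueError(
                "frame %d has shape %s, expected %s" % (i, f.shape, (h, w, 3)))
    return np.stack(frames, axis=0)

# placeholder frames; in practice read them from the frame folder
# with an image library of your choice
frames = [np.zeros((480, 640, 3), dtype=np.uint8) for _ in range(10)]
X = stack_frames(frames)  # shape (10, 480, 640, 3)
```

The explicit shape check catches the common mistake of mixing frames of different resolutions, which would otherwise fail later inside the pipeline.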

import numpy as np
from cabbage.MultiplePeopleTracking import execute_multiple_people_tracking

video_name = 'the_video_name'
X = np.zeros((n, h, w, 3))  # the whole video loaded as an np array: n frames of size h x w
dmax = 100

Dt = np.zeros((m, 6))  # m=number of detections

video_loc = '/path/to/video/imgs'  # the video must be stored as a folder with the individual frames

settings_loc = '/path/to/settings.txt'  # generated by the script

execute_multiple_people_tracking(video_loc, X, Dt, video_name, dmax, settings_loc)
# after the program has finished you can find a text file at the settings.data_root location
# called 'output.txt'. It is structured as follows:
#   id1, id2, 0 (has an edge) OR 1 (has no edge)
# sample:
#    0, 1, 0
#    0, 2, 0
#    0, 3, 1
#    ...
# the ids correspond to positions along the first axis of the Dt matrix
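The edge list in output.txt can be grouped into per-person tracks with a simple union-find; a sketch that assumes exactly the "id1, id2, flag" format shown above, with flag 0 meaning the two detections are joined:

```python
def parse_edges(lines):
    """Parse 'id1, id2, flag' lines into (id1, id2, flag) integer triples."""
    edges = []
    for line in lines:
        line = line.strip()
        if not line:
            continue
        a, b, flag = (int(x) for x in line.split(","))
        edges.append((a, b, flag))
    return edges

def group_tracks(edges, n_detections):
    """Union detections connected by an edge (flag == 0) into clusters."""
    parent = list(range(n_detections))
    def find(i):
        while parent[i] != i:
            parent[i] = parent[parent[i]]  # path halving
            i = parent[i]
        return i
    for a, b, flag in edges:
        if flag == 0:
            parent[find(a)] = find(b)
    clusters = {}
    for i in range(n_detections):
        clusters.setdefault(find(i), []).append(i)
    return list(clusters.values())

# example using the sample output shown above (4 detections):
edges = parse_edges(["0, 1, 0", "0, 2, 0", "0, 3, 1"])
tracks = group_tracks(edges, 4)  # detections 0, 1, 2 joined; 3 separate
```

Each resulting cluster is one tracked person; the detection ids in a cluster index rows of the Dt matrix.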


Icon made by Smashicons from

[1] Tang, Siyu, et al. "Multiple people tracking by lifted multicut and person re-identification." Proceedings of the IEEE Conference on Computer Vision and Pattern Recognition. 2017.
