Robust Multi-Modality Multi-Object Tracking

This is the project page for our ICCV2019 paper: Robust Multi-Modality Multi-Object Tracking.

Authors: Wenwei Zhang, Hui Zhou, Shuyang Sun, Zhe Wang, Jianping Shi, Chen Change Loy

[arXiv]  [Project Page] 

Introduction

In this work, we design a generic sensor-agnostic multi-modality MOT framework (mmMOT), in which each modality (i.e., each sensor) performs its role independently to preserve reliability, while accuracy is further improved through a novel multi-modality fusion module. Our mmMOT can be trained in an end-to-end manner, enabling joint optimization of the base feature extractor for each modality and the adjacency estimator for cross-modality data association. Our mmMOT also makes the first attempt to encode a deep representation of the point cloud in the data association process of MOT.
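As a rough illustration of this design (the official code is not yet released), the sketch below shows how per-modality features might be fused and then scored by an adjacency estimator between two frames. All module names, layer choices, and feature sizes here are hypothetical placeholders, not the paper's actual implementation.

```python
import torch
import torch.nn as nn

class FusionModule(nn.Module):
    """Hypothetical sketch: combine image and point-cloud features per detection."""
    def __init__(self, dim=256):
        super().__init__()
        self.fuse = nn.Linear(2 * dim, dim)

    def forward(self, img_feat, pc_feat):
        # Concatenate modality features and project back to a shared dimension.
        return torch.relu(self.fuse(torch.cat([img_feat, pc_feat], dim=-1)))

class AdjacencyEstimator(nn.Module):
    """Hypothetical sketch: score pairwise affinities between detections in adjacent frames."""
    def __init__(self, dim=256):
        super().__init__()
        self.score = nn.Sequential(nn.Linear(2 * dim, dim), nn.ReLU(), nn.Linear(dim, 1))

    def forward(self, feats_t, feats_t1):
        # Build an N x M affinity matrix from all detection pairs.
        n, m = feats_t.size(0), feats_t1.size(0)
        pairs = torch.cat([
            feats_t.unsqueeze(1).expand(n, m, -1),
            feats_t1.unsqueeze(0).expand(n, m, -1),
        ], dim=-1)
        return self.score(pairs).squeeze(-1)

# Example: fuse features for 5 and 4 detections in two frames, then score links.
fusion, adj = FusionModule(), AdjacencyEstimator()
f_t  = fusion(torch.randn(5, 256), torch.randn(5, 256))
f_t1 = fusion(torch.randn(4, 256), torch.randn(4, 256))
affinity = adj(f_t, f_t1)  # shape: (5, 4)
```

Because both modules are differentiable, the fusion module and adjacency estimator can be optimized jointly with the feature extractors, which is the end-to-end property described above.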

For more details, please refer to our paper.

Codebase

We are still preparing the codebase and will release the code and models as soon as possible.

Citation

If you use this codebase or model in your research, please cite:

@InProceedings{mmMOT_2019_ICCV,
    author = {Zhang, Wenwei and Zhou, Hui and Sun, Shuyang and Wang, Zhe and Shi, Jianping and Loy, Chen Change},
    title = {Robust Multi-Modality Multi-Object Tracking},
    booktitle = {The IEEE International Conference on Computer Vision (ICCV)},
    month = {October},
    year = {2019}
}