Towards-Realtime-MOT

NEWS:

  • [2019.10.11] Training and evaluation data uploaded! Please see DATASET_ZOO.md for details.
  • [2019.10.01] Demo code and pre-trained model released!

Introduction

This repo is the codebase of the Joint Detection and Embedding (JDE) model. JDE is a fast and high-performance multiple-object tracker that learns the object detection task and the appearance embedding task simultaneously in a shared neural network. Technical details are described in our arXiv preprint paper. Using this repo, you can achieve MOTA 64%+ on the "private" protocol of the MOT-16 challenge at near real-time speed, 18~24 FPS (note this speed is for the entire system, including the detection step!).
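
To illustrate the shared-network idea, here is a minimal conceptual sketch, not the repo's actual model definition; the class name, channel counts, and embedding dimension below are illustrative assumptions:

import torch.nn as nn

class JointHead(nn.Module):
    # One shared backbone feature map feeds two parallel 1x1 conv heads:
    # a detection head (box + objectness per anchor) and an
    # appearance-embedding head used for data association.
    def __init__(self, in_channels=512, num_anchors=4, emb_dim=512):
        super().__init__()
        self.det_head = nn.Conv2d(in_channels, num_anchors * 6, kernel_size=1)
        self.emb_head = nn.Conv2d(in_channels, emb_dim, kernel_size=1)

    def forward(self, feat):
        # feat: backbone feature map of shape (N, in_channels, H, W)
        return self.det_head(feat), self.emb_head(feat)

Because both heads share the backbone, producing embeddings costs little more than detection alone, which is what makes near real-time tracking possible.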

We hope this repo will help researchers/engineers develop more practical MOT systems. For algorithm development, we provide training data, baseline models and evaluation methods to make a level playing field. For application usage, we also provide a small video demo that takes raw videos as input without any bells and whistles.

Requirements

  • Python 3.6
  • Pytorch >= 1.0.1
  • syncbn (Optional, compile and place it under utils/syncbn, or simply replace it with nn.BatchNorm; see the sketch after this list)
  • maskrcnn-benchmark (Their GPU NMS is used in this project)
  • python-opencv
  • ffmpeg (Optional, used in the video demo)
  • py-motmetrics (Simply pip install motmetrics)
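
If you skip syncbn, plain batch normalization is a drop-in replacement. A minimal sketch, assuming the repo can take its normalization layer from one helper (an assumption about the code layout):

import torch.nn as nn

# Standard BatchNorm2d wherever the synchronized variant was expected.
# Per-GPU statistics are noisier than synchronized ones, but training
# still works, especially with a reasonable per-GPU batch size.
def make_norm(num_channels):
    return nn.BatchNorm2d(num_channels)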

Video Demo

Usage:

python demo.py --input-video path/to/your/input/video --weights path/to/model/weights
               --output-format video --output-root path/to/output/root
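
For example (the file names below are placeholders, not shipped assets):

python demo.py --input-video ./videos/street.mp4 --weights ./weights/jde.uncertainty.pt
               --output-format video --output-root ./results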

Dataset zoo

Please see DATASET_ZOO.md for detailed description of the training/evaluation datasets.

Pretrained model and baseline models

Darknet-53 ImageNet pretrained: [DarkNet Official]

JDE-1088x608-uncertainty: [Google Drive] [Baidu NetDisk]
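
To sanity-check a downloaded checkpoint before tracking, a quick inspection with PyTorch can help. The checkpoint layout, e.g. whether the weights sit under a 'model' key, is an assumption and may differ:

import torch

ckpt = torch.load('path/to/model/weights', map_location='cpu')
# A plain state_dict maps parameter names to tensors; some checkpoints
# nest it under a key such as 'model'.
state = ckpt.get('model', ckpt) if isinstance(ckpt, dict) else ckpt
print(len(state), 'parameter tensors found')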

Test on MOT-16 Challenge

Training instruction

  • Download the training datasets.
  • Edit cfg/ccmcpe.json to configure the training/validation combinations. A dataset is represented by an image list; see data/*.train for examples, and the config sketch below.
  • Run the training script:
CUDA_VISIBLE_DEVICES=0,1,2,3,4,5,6,7 python train.py
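
For reference, the config maps dataset names to image lists. A sketch of what cfg/ccmcpe.json might look like; the root path, dataset names, and exact keys here are illustrative, so check the shipped file for the real schema:

{
    "root": "/path/to/datasets",
    "train": {
        "caltech": "./data/caltech.train",
        "citypersons": "./data/citypersons.train"
    },
    "test": {
        "mot16": "./data/mot16.train"
    }
}

Each image list (data/*.train) is a plain text file with one image path per line, resolved relative to the root above.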

We use 8x Nvidia Titan Xp GPUs to train the model, with a batch size of 32. You can adjust the batch size (and the learning rate with it) according to how many GPUs you have; a common rule of thumb is linear scaling, e.g. with 4 GPUs and a batch size of 16, halve the learning rate. You can also train with a smaller image size for faster inference, but note the image size should be a multiple of 32 (the down-sampling rate), e.g. 864x480.

Train with custom datasets

Adding custom datasets is quite simple: all you need to do is organize your annotation files in the same format as our training sets. Please refer to DATASET_ZOO.md for the dataset format; an example of the label layout follows below.
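
As a concrete example of the expected layout (our reading of DATASET_ZOO.md; verify against that document): each image has a companion label .txt, where every line describes one box plus a track identity. Each line: class, identity, x_center, y_center, width, height, with the box fields normalized to [0, 1]:

0 17 0.4812 0.5530 0.0621 0.2458
0 18 0.7103 0.5421 0.0588 0.2300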

Acknowledgement

A large portion of code is borrowed from ultralytics/yolov3 and longcw/MOTDT, many thanks to their wonderful work!
