Tracking by Animation: Unsupervised Learning of Multi-Object Attentive Trackers

A PyTorch implementation of the "Tracking-by-Animation" (TBA) algorithm published at CVPR 2019.

NOTE:

  • A new implementation (with PyTorch 1.3) will be available soon.
  • The DukeMTMC website was recently taken down; hopefully it will be restored in the future.

1. Results

1.1 MNIST-MOT

a) Qualitative results


Click the demo GIF to watch a longer version.
Left: input. Middle: reconstruction. Right: memory (Row 1), attention (Row 2), and output (Row 3).

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Frag↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TBA | 99.6 | 99.6 | 99.6 | 99.5 | 78.4 | 0 | 978 | 0 | 49 | 49 | 22 | 7 |

1.2 Sprites-MOT

a) Qualitative results


Click the demo GIF to watch a longer version.
Left: input. Middle: reconstruction. Right: memory (Row 1), attention (Row 2), and output (Row 3).

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Frag↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TBA | 99.2 | 99.3 | 99.2 | 99.2 | 79.1 | 0.01 | 985 | 1 | 60 | 80 | 30 | 22 |

1.3 DukeMTMC

a) Qualitative results


Click the demo GIF to watch a longer version.
Rows 1 and 4: input. Rows 2 and 5: reconstruction. Rows 3 and 6: output.

b) Quantitative results

| Configuration | IDF1↑ | IDP↑ | IDR↑ | MOTA↑ | MOTP↑ | FAF↓ | MT↑ | ML↓ | FP↓ | FN↓ | IDS↓ | Frag↓ |
| --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- | --- |
| TBA | 82.4 | 86.1 | 79.0 | 79.6 | 80.4 | 0.09 | 1,026 | 46 | 64,002 | 151,483 | 875 | 1,481 |

Quantitative results are hosted at https://motchallenge.net/results/DukeMTMCT, where our TBA tracker is listed as ‘MOT_TBA’.

2. Requirements

  • Python 3.6
  • PyTorch 0.3.1
  • py-motmetrics (to evaluate tracking performance; a short install sketch follows this list)
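
A minimal environment sketch. The environment name tba is hypothetical, and the right PyTorch 0.3.1 build depends on your CUDA version, so treat these commands as illustrative rather than official:

conda create -n tba python=3.6            # hypothetical environment name
conda activate tba
conda install pytorch=0.3.1 -c pytorch    # pick the build matching your CUDA version
pip install motmetrics                    # PyPI package providing py-motmetrics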

3. Usage

3.1 Generate training data

Enter the project root directory: cd path/to/tba.

For mnist and sprite:

python scripts/gen_mnist.py     # for mnist
python scripts/gen_sprite.py    # for sprite

For duke:

bash scripts/mts2jpg.sh 1                     # convert .mts files to .jpg files; run this for all cameras by setting the last argument to 1, 2, ..., 8
./scripts/build_imbs.sh                       # build imbs for background extraction
cd imbs/build
./imbs -c 1                                   # run imbs; run this for all cameras by setting c = 1, 2, ..., 8
cd ../..
python scripts/gen_duke_bb.py --c 1           # generate bounding box masks; run this for all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke_bb_bg.py --c 1        # refine background images; run this for all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke_roi.py                # generate ROI masks
python scripts/gen_duke_processed.py --c 1    # resize images; run this for all cameras by setting c = 1, 2, ..., 8
python scripts/gen_duke.py                    # generate .pt files for training
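
Since most of the duke steps above have to be repeated for every camera, a shell loop can run them in the same order. This is only a sketch and assumes each camera's data can be processed end to end independently:

./scripts/build_imbs.sh                              # build imbs once
for c in 1 2 3 4 5 6 7 8; do
    bash scripts/mts2jpg.sh "$c"                     # convert .mts files to .jpg files
    (cd imbs/build && ./imbs -c "$c")                # background extraction for this camera
    python scripts/gen_duke_bb.py --c "$c"           # bounding box masks
    python scripts/gen_duke_bb_bg.py --c "$c"        # refined background images
done
python scripts/gen_duke_roi.py                       # ROI masks (run once)
for c in 1 2 3 4 5 6 7 8; do
    python scripts/gen_duke_processed.py --c "$c"    # resize images
done
python scripts/gen_duke.py                           # generate .pt files for training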

3.2 Train the model

python run.py --task mnist     # for mnist
python run.py --task sprite    # for sprite
python run.py --task duke      # for duke
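
The evaluation commands in Section 3.4 load a saved checkpoint named sp_latest.pt through --init, so an interrupted run can presumably be restarted from that checkpoint with the same flag. This is an assumption based on how the flag is used below, not a documented training option:

python run.py --task duke --init sp_latest.pt    # assumption: resume duke training from the latest saved checkpoint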

3.3 Show training curves

python scripts/show_curve.py --task mnist     # for mnist
python scripts/show_curve.py --task sprite    # for sprite
python scripts/show_curve.py --task duke      # for duke

3.4 Evaluate tracking performance

a) Generate test data

python scripts/gen_mnist.py --metric 1         # for mnist
python scripts/gen_sprite.py --metric 1        # for sprite
python scripts/gen_duke.py --metric 1 --c 1    # for duke; run this for all cameras by setting c = 1, 2, ..., 8 (see the loop sketch after step c)

b) Generate tracking results

python run.py --init sp_latest.pt --metric 1 --task mnist                     # for mnist
python run.py --init sp_latest.pt --metric 1 --task sprite                    # for sprite
python run.py --init sp_latest.pt --metric 1 --task duke --subtask camera1    # for duke; run all subtasks from camera1 to camera8 (see the loop sketch after step c)

c) Convert the results into .txt

python scripts/get_metric_txt.py --task mnist                     # for mnist
python scripts/get_metric_txt.py --task sprite                    # for sprite
python scripts/get_metric_txt.py --task duke --subtask camera1    # for duke; run all subtasks from camera1 to camera8
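
For duke, steps a) to c) all have to be repeated for every camera. A minimal loop sketch, assuming the cameras can be processed independently and back to back:

for c in 1 2 3 4 5 6 7 8; do
    python scripts/gen_duke.py --metric 1 --c "$c"                                   # a) test data
    python run.py --init sp_latest.pt --metric 1 --task duke --subtask camera"$c"    # b) tracking results
    python scripts/get_metric_txt.py --task duke --subtask camera"$c"                # c) convert to .txt
done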

d) Evaluate tracking performance

python -m motmetrics.apps.eval_motchallenge data/mnist/pt result/mnist/tba/default/metric --solver lap      # for mnist
python -m motmetrics.apps.eval_motchallenge data/sprite/pt result/sprite/tba/default/metric --solver lap    # for sprite

To evaluate duke, please upload the file duke.txt (under result/duke/tba/default/metric/) to https://motchallenge.net.
