Add MOT20 model file and results
timmeinhardt committed Dec 1, 2020
1 parent 7c7c9cd commit 93db23f
Showing 16 changed files with 2,430 additions and 107 deletions.
1 change: 1 addition & 0 deletions .gitignore
@@ -17,6 +17,7 @@ motchallenge-devkit
.python-version
__pycache__
*.egg-info
+.ipynb_checkpoints

# MacOSX
.DS_Store
37 changes: 25 additions & 12 deletions README.md
@@ -2,7 +2,7 @@

This repository provides the implementation of our paper **Tracking without bells and whistles** (Philipp Bergmann, [Tim Meinhardt](https://dvl.in.tum.de/team/meinhardt/), [Laura Leal-Taixe](https://dvl.in.tum.de/team/lealtaixe/)) [https://arxiv.org/abs/1903.05625]. This branch includes an updated version of Tracktor for PyTorch 1.3 with an improved object detector. The original results of the paper were produced with the `iccv_19` branch.

-In addition to our supplementary document, we provide an illustrative [web-video-collection](https://vision.in.tum.de/webshare/u/meinhard/tracking_wo_bnw-supp_video_collection.zip). The collection includes examplary Tracktor++ tracking results and multiple video examples to accompany our analysis of state-of-the-art tracking methods.
+In addition to our supplementary document, we provide an illustrative [web-video-collection](https://vision.in.tum.de/webshare/u/meinhard/tracking_wo_bnw-supp_video_collection.zip). The collection includes exemplary Tracktor++ tracking results and multiple video examples to accompany our analysis of state-of-the-art tracking methods.

![Visualization of Tracktor](data/method_vis_standalone.png)

@@ -19,7 +19,7 @@
2. Install Tracktor: `pip3 install -e .`

3. MOTChallenge data:
-1. Download [MOT17Det](https://motchallenge.net/data/MOT17Det.zip), [MOT16Labels](https://motchallenge.net/data/MOT16Labels.zip), [2DMOT2015](https://motchallenge.net/data/2DMOT2015.zip), [MOT16-det-dpm-raw](https://motchallenge.net/data/MOT16-det-dpm-raw.zip) and [MOT17Labels](https://motchallenge.net/data/MOT17Labels.zip) and place them in the `data` folder. As the images are the same for MOT17Det, MOT17 and MOT16 we only need one set of images for all three benchmarks.
+1. Download [MOT17Det](https://motchallenge.net/data/MOT17Det.zip), [MOT16Labels](https://motchallenge.net/data/MOT16Labels.zip), [2DMOT2015](https://motchallenge.net/data/2DMOT2015.zip), [MOT16-det-dpm-raw](https://motchallenge.net/data/MOT16-det-dpm-raw.zip) and [MOT17Labels](https://motchallenge.net/data/MOT17Labels.zip) and place them in the `data` folder. As the images are the same for MOT17Det, MOT17, and MOT16, we only need one set of images for all three benchmarks.
2. Unzip all the data by executing:
```
unzip -d MOT17Det MOT17Det.zip
@@ -29,12 +29,12 @@
unzip -d MOT17Labels MOT17Labels.zip
```

-4. Download object detector and re-identifiaction Siamese network weights and MOTChallenge result files:
-   1. Download zip file from [here](https://vision.in.tum.de/webshare/u/meinhard/tracking_wo_bnw-output_v2.zip).
+4. Download models (MOT17 object detector, MOT20 object detector, and re-identification network) and MOTChallenge result files:
+   1. Download the zip file from [here](https://vision.in.tum.de/webshare/u/meinhard/tracking_wo_bnw-output_v3.zip).
2. Extract in `output` directory.
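
   Assuming the archive's top-level folders map directly into `output`, the extraction mirrors the unzip commands above:
   ```
   unzip -d output tracking_wo_bnw-output_v3.zip
   ```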

## Evaluate Tracktor
-In order to configure, organize, log and reproduce our computational experiments we structured our code with the [Sacred](http://sacred.readthedocs.io/en/latest/index.html) framework. For a detailed explanation of the Sacred interface please read its documentation.
+In order to configure, organize, log and reproduce our computational experiments, we structured our code with the [Sacred](http://sacred.readthedocs.io/en/latest/index.html) framework. For a detailed explanation of the Sacred interface please read its documentation.

1. Tracktor can be configured by changing the corresponding `experiments/cfgs/tracktor.yaml` config file. The default configuration runs Tracktor++ with the FPN object detector as described in the paper.

@@ -46,21 +46,34 @@

3. The results are logged in the corresponding `output` directory.
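
Sacred also allows overriding any key of the `tracktor.yaml` config directly on the command line via its `with` syntax. An illustrative example, assuming the evaluation entry point is `experiments/scripts/test_tracktor.py` (the actual run command is collapsed in the hunk above):

```
python experiments/scripts/test_tracktor.py with tracktor.write_images=True
```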

-For reproducability, we provide the new result metrics of this updated code base on the `MOT17` challenge. It should be noted, that these surpass the original Tracktor results. This is due to the newly trained object detector. This version of Tracktor does not differ conceptually from the original ICCV 2019 version (see branch `iccv_19`). The results on the offical MOTChallenge [webpage](https://motchallenge.net/results/MOT17/) are denoted as the `Tracktor++v2` tracker. The train and test results are:
+For reproducibility, we provide the new result metrics of this updated code base on the `MOT17` challenge. It should be noted that these surpass the original Tracktor results, which is due to the newly trained object detector. This version of Tracktor does not differ conceptually from the original ICCV 2019 version (see branch `iccv_19`). The results on the official MOTChallenge [webpage](https://motchallenge.net/results/MOT17/) are denoted as the `Tracktor++v2` tracker. The train and test results are:

```
********************* MOT17 TRAIN Results *********************
-IDF1  IDP  IDR| Rcll  Prcn   FAR|   GT  MT  PT  ML|   FP     FN  IDs   FM| MOTA MOTP MOTAL
-65.2 83.8 53.3| 63.1  99.2  0.11| 1638 550 714 374| 1732 124291  903 1258| 62.3 89.6  62.6
+IDF1  IDP  IDR| Rcll  Prcn    GT  MT  PT  ML|   FP     FN  IDs   FM| MOTA MOTP MOTAL
+65.2 83.8 53.3| 63.1  99.2  1638 550 714 374| 1732 124291  903 1258| 62.3 89.6  62.6
********************* MOT17 TEST Results *********************
-IDF1  IDP  IDR| Rcll  Prcn   FAR|   GT   MT   PT  ML|   FP     FN  IDs   FM| MOTA MOTP MOTAL
-55.1 73.6 44.1| 58.3  97.4  0.50| 2355  498 1026 831| 8866 235449 1987 3763| 56.3 78.8  56.7
+IDF1  IDP  IDR| Rcll  Prcn    GT   MT   PT  ML|   FP     FN  IDs   FM| MOTA MOTP MOTAL
+55.1 73.6 44.1| 58.3  97.4  2355  498 1026 831| 8866 235449 1987 3763| 56.3 78.8  56.7
```
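
As a plausibility check, the MOTA values can be re-derived from the absolute counts in the table with the standard CLEAR MOT definition, where the total number of ground-truth detections follows from the recall. For the train split:

```
MOTA = 1 - (FP + FN + IDs) / #GT_detections
#GT  = FN / (1 - Rcll) = 124291 / (1 - 0.631) ≈ 336832
MOTA = 1 - (1732 + 124291 + 903) / 336832 ≈ 0.623 -> 62.3
```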

+We complement the results presented in the paper with `MOT20` train and test sequence results. To this end, we run the same tracking pipeline as for `MOT17` but apply an object detector model trained on the `MOT20` training sequences. The corresponding model file is the same as the one used in [this](https://github.com/dvl-tum/mot_neural_solver) work.

+```
+********************* MOT20 TRAIN Results *********************
+IDF1  IDP  IDR| Rcll  Prcn    GT  MT   PT  ML|    FP     FN  IDs   FM| MOTA
+60.7 73.4 51.7| 68.5  97.4  2212 892 1064 259| 20860 357227 2664 6504| 66.4
+********************* MOT20 TEST Results *********************
+IDF1  IDP  IDR| Rcll  Prcn    GT  MT  PT  ML|   FP     FN  IDs   FM| MOTA
+52.6 73.7 41.0| 54.3  97.6  1242 365 546 331| 6930 236680 1648 4374| 52.6
+```
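
To reproduce these numbers, the tracker presumably only needs to be pointed at the MOT20 detector model in `experiments/cfgs/tracktor.yaml` (cf. the commented-out line in the config diff below) and at a MOT20 dataset split. The exact dataset key is an assumption and should be checked against `tracker/datasets/factory.py`:

```yaml
# sketch: MOT20 setup; the dataset key is assumed, cf. tracker/datasets/factory.py
obj_detect_model: output/faster_rcnn_fpn_training_mot_20/model_epoch_27.model
dataset: mot20_train
```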


## Train and test object detector (Faster-RCNN with FPN)

-For the object detector we followed the new native `torchvision` implementations of Faster-RCNN with FPN which are pretrained on COCO. The provided object detection model was trained and tested with [this](https://colab.research.google.com/drive/1_arNo-81SnqfbdtAhb3TBSU5H0JXQ0_1) Google Colab notebook. The `MOT17Det` train and test results are:
+For the object detector, we followed the new native `torchvision` implementations of Faster-RCNN with FPN, which are pre-trained on COCO. The provided object detection model was trained and tested with [this](https://colab.research.google.com/drive/1_arNo-81SnqfbdtAhb3TBSU5H0JXQ0_1) Google Colab notebook (or alternatively the `experiments/scripts/faster_rcnn_fpn_training.ipynb` Jupyter notebook). The object detection results on the `MOT17Det` train and test sets are:

```
********************* MOT17Det TRAIN Results ***********
@@ -74,7 +74,7 @@ Rcll Prcn| FAR GT TP FP FN| MODA MODP
86.5 88.3| 2.23 114564 99132 13184 15432| 75.0 78.3
```
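
The linked notebooks are not reproduced in this diff. A minimal sketch of the `torchvision` fine-tuning setup they are based on, assuming a two-class head (background plus pedestrian), could look like:

```python
import torchvision
from torchvision.models.detection.faster_rcnn import FastRCNNPredictor

# Faster R-CNN with ResNet-50 FPN backbone, pre-trained on COCO
model = torchvision.models.detection.fasterrcnn_resnet50_fpn(pretrained=True)

# swap the 91-class COCO box predictor for a two-class one: background + pedestrian
in_features = model.roi_heads.box_predictor.cls_score.in_features
model.roi_heads.box_predictor = FastRCNNPredictor(in_features, num_classes=2)
```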

-## Training the reidentifaction model
+## Training the re-identification model

1. The training config file is located at `experiments/cfgs/reid.yaml`.

6 changes: 3 additions & 3 deletions experiments/cfgs/tracktor.yaml
@@ -13,16 +13,16 @@ tracktor:

# fpn
obj_detect_model: output/faster_rcnn_fpn_training_mot_17/model_epoch_27.model
-# obj_detect_config: output/fpn/res101/mot_2017_train/voc_init_iccv19/config.yaml
-# obj_detect_weights: output/fpn/res101/mot19_cvpr_train/v1/fpn_1_3.pth
-# obj_detect_config: output/fpn/res101/mot19_cvpr_train/v1/config.yaml
+# obj_detect_model: output/faster_rcnn_fpn_training_mot_20/model_epoch_27.model

reid_weights: output/tracktor/reid/res50-mot17-batch_hard/ResNet_iter_25245.pth
reid_config: output/tracktor/reid/res50-mot17-batch_hard/sacred_config.yaml

interpolate: False
# compile video with: `ffmpeg -f image2 -framerate 15 -i %06d.jpg -vcodec libx264 -y movie.mp4 -vf scale=320:-1`
write_images: False
# load tracking results if available and only evaluate
load_results: False
# dataset (look into tracker/datasets/factory.py)
dataset: mot17_train_FRCNN17
# [start percentage, end percentage], e.g., [0.0, 0.5] for train and [0.75, 1.0] for val split.
