The environment settings are as follows.
Operating system: Windows. On Linux, change the scipy.io loadmat call in TNT/train_cnn_trajectory_2d.py to h5py (a minimal sketch of this change follows this list).
Python: 3.5.4
tensorflow: 1.4.0
cuda: 8.0
cudnn: 5.1.10
opencv: 3.2.0
Other packages: numpy, pickle, sklearn, scipy, matplotlib, PIL.
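The usual reason for this swap is that scipy.io.loadmat cannot read MATLAB v7.3 (HDF5-based) .mat files, while h5py can. Below is a minimal sketch of both loading paths; the file name data.mat and variable name gt_info are placeholder assumptions, not names from this repository.

```python
# Minimal sketch of the two loading paths; "data.mat" and the variable
# name "gt_info" are placeholders, not names used in this repository.
import h5py
from scipy.io import loadmat

def load_mat_scipy(path):
    # Works for .mat files saved as -v7 or earlier.
    return loadmat(path)["gt_info"]

def load_mat_h5py(path):
    # Reads -v7.3 (HDF5-based) .mat files; note that h5py returns the
    # array transposed relative to scipy.io.loadmat.
    with h5py.File(path, "r") as f:
        return f["gt_info"][()]
```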
- Prepare the Data.
Ground truth tracking file: follow the format of MOT (https://motchallenge.net/).
The frame index and object index start from 1 (not 0) for both the tracking ground truth and the video frames; a minimal loading sketch follows.
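For reference, each line of a MOT-format file is comma-separated as frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z (the last three can be -1 for 2D data), and detection files share the same column layout. A minimal loading sketch with numpy, assuming the conventional file name gt.txt:

```python
# Minimal sketch of reading a MOT-format file ("gt.txt" is a placeholder name).
# Columns: frame, id, bb_left, bb_top, bb_width, bb_height, conf, x, y, z.
import numpy as np

data = np.loadtxt("gt.txt", delimiter=",")
frames = data[:, 0].astype(int)   # frame index, starting from 1
ids = data[:, 1].astype(int)      # object index, starting from 1
boxes = data[:, 2:6]              # bb_left, bb_top, bb_width, bb_height
```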
- Convert MOT format to UA-Detrac format.
TNT/General/MOT_to_UA_Detrac.m
- Crop the ground truth detections into individual bounding box images.
TNT/General/crop_UA_Detrac.m
- Create validation pairs for FaceNet.
TNT/General/create_pair.m
- Train the triplet appearance model based on FaceNet using the cropped data.
See https://github.com/davidsandberg/facenet.
All the useful source code is in TNT/src/.
- Train 2D tracking.
Set the directory paths at the top of TNT/train_cnn_trajectory_2d.py (before the function definitions).
Change the sample probability (sample_prob) according to your data density. The number of elements in sample_prob equals the number of your input Mat files.
Set the learning rate (lr) to 1e-3 at the beginning. Every 2000 steps, decrease lr by a factor of 10 until it reaches 1e-5 (a sketch of this schedule appears after these steps).
The output model will be stored in save_dir.
- Run python TNT/train_cnn_trajectory_2d.py.
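As a concrete reading of the schedule above, here is a hypothetical helper (not part of the repository, which may expect you to change the rate manually) that computes lr for a given training step:

```python
# Hypothetical helper illustrating the schedule described above:
# start at 1e-3, divide by 10 every 2000 steps, stop at 1e-5.
def learning_rate(step, base_lr=1e-3, decay_every=2000, min_lr=1e-5):
    lr = base_lr * (0.1 ** (step // decay_every))
    return max(lr, min_lr)

# learning_rate(0) -> 1e-3, learning_rate(2000) -> 1e-4, learning_rate(4000) -> 1e-5
```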
- Prepare the detection data.
Follow the format of MOT (https://motchallenge.net/).
The frame index and object index start from 1 (not 0) for both the tracking ground truth and the video frames.
- Set your data and model paths correctly at the top of TNT/tracklet_utils_3c.py.
- Set file_len to the string length of your input frame names before the extension (e.g., for frames named 000001.jpg, file_len is 6).
- Adjust the tracking parameters in track_struct['track_params'] of TNT/tracklet_utils_3c.py, in the function TC_tracker() (a hypothetical illustration of such parameters follows these steps).
- Run python TNT/TC_tracker.py.
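The actual parameter names live in track_struct['track_params'] inside TNT/tracklet_utils_3c.py; the dictionary below is only a hypothetical illustration of the kind of thresholds typically tuned for tracklet association, and none of the keys are taken from the repository.

```python
# Hypothetical illustration only; consult track_struct['track_params'] in
# TNT/tracklet_utils_3c.py for the real parameter names and defaults.
example_track_params = {
    "det_score_thresh": 0.5,  # assumed: drop detections below this confidence
    "iou_thresh": 0.5,        # assumed: minimum overlap to link detections across frames
    "max_gap": 30,            # assumed: frames a track may disappear before termination
    "min_track_len": 5,       # assumed: discard very short tracks from the output
}
```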
Use this BibTeX entry to cite this repository:
@inproceedings{wang2019exploit,
title={Exploit the connectivity: Multi-object tracking with trackletnet},
author={Wang, Gaoang and Wang, Yizhou and Zhang, Haotian and Gu, Renshu and Hwang, Jenq-Neng},
booktitle={Proceedings of the 27th ACM International Conference on Multimedia},
pages={482--490},
year={2019},
organization={ACM}
}