This is the code for the paper "Kernalised Multi-resolution Convnet for Visual Tracking"
Code for monocular, generic object tracking.
Gist: Kernalised Correlation Filter -> Convnet Prediction
by Di WU: stevenwudi@gmail.com, 2017/05/01
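The "Kernalised Correlation Filter" half of the pipeline can be sketched in a few lines of NumPy. This is a minimal single-channel illustration of the standard KCF formulation (Gaussian kernel correlation computed via FFTs, ridge regression in the dual), not the repository's actual API; the function names and default `sigma`/`lam` values are illustrative.

```python
import numpy as np

def gaussian_correlation(x, z, sigma=0.5):
    """Gaussian kernel correlation of two 2-D patches, evaluated for
    every cyclic shift at once in the Fourier domain (the core KCF trick)."""
    xf = np.fft.fft2(x)
    zf = np.fft.fft2(z)
    # cross-correlation of x with every cyclic shift of z
    cross = np.real(np.fft.ifft2(xf * np.conj(zf)))
    # ||x - z_shift||^2 = ||x||^2 + ||z||^2 - 2 <x, z_shift>, clipped at 0
    dist = np.maximum(np.sum(x**2) + np.sum(z**2) - 2.0 * cross, 0.0)
    return np.exp(-dist / (sigma**2 * x.size))

def train(x, y, sigma=0.5, lam=1e-4):
    """Kernel ridge regression in the dual: alpha_f = y_f / (k_f + lambda)."""
    kf = np.fft.fft2(gaussian_correlation(x, x, sigma))
    return np.fft.fft2(y) / (kf + lam)

def detect(alpha_f, x, z, sigma=0.5):
    """Response map over all cyclic shifts of the search patch z;
    the argmax gives the predicted target translation."""
    kf = np.fft.fft2(gaussian_correlation(z, x, sigma))
    return np.real(np.fft.ifft2(alpha_f * kf))
```

In the paper's pipeline, a response map like this is what the multi-resolution convnet then refines into the final prediction.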
If you use this toolbox as part of a research project, please cite the corresponding paper:
@inproceedings{wu2017cvprw,
  title     = {Kernalised Multi-resolution Convnet for Visual Tracking},
  author    = {Wu, Di and Zou, Wenbin and Li, Xia and Zhao, Yong},
  booktitle = {Proc. Conference on Computer Vision and Pattern Recognition (CVPR) Workshop},
  year      = {2017}
}
Dependencies: Keras (deep-learning library, https://github.com/fchollet/keras) with the TensorFlow backend.
To reproduce the experimental results for the test submission, download the trained model from https://drive.google.com/open?id=0BzicoAl6Jud9WTNDS0RFUEpkQ1E and run the Python file:
....py
To train the network, you first need to extract the CNN features from the OTB2015 dataset:
1) step_1_OTB_100_collect_CNN.py
Voila, here you go.
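For orientation, training data collection for a correlation-filter tracker generally pairs a patch cropped around the annotated target with a Gaussian response label peaked at the target centre (the CNN feature extraction itself then runs on the cropped patch). The sketch below shows those two generic steps in NumPy; the function names are hypothetical and this is not the actual code of step_1_OTB_100_collect_CNN.py.

```python
import numpy as np

def crop_patch(frame, center, size):
    """Crop a size x size patch centred on the target, replicating edge
    pixels when the window crosses the image border."""
    half = size // 2
    ys = np.clip(np.arange(center[0] - half, center[0] - half + size),
                 0, frame.shape[0] - 1)
    xs = np.clip(np.arange(center[1] - half, center[1] - half + size),
                 0, frame.shape[1] - 1)
    return frame[np.ix_(ys, xs)]

def gaussian_label(size, sigma=2.0):
    """Gaussian regression target peaked at the patch centre, paired with
    each extracted feature patch during training."""
    ys, xs = np.meshgrid(np.arange(size) - size // 2,
                         np.arange(size) - size // 2, indexing='ij')
    return np.exp(-(ys**2 + xs**2) / (2 * sigma**2))
```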
At readers' request, here are links to the datasets used in the paper:
- OTB-2015 Dataset: http://cvlab.hanyang.ac.kr/tracker_benchmark
- UAV123 Dataset: https://ivul.kaust.edu.sa/Pages/pub-benchmark-simulator-uav.aspx
If you read the code and find it hard to understand, please send feedback to stevenwudi@gmail.com. Thank you!