Here we provide a number of tracker models trained using PyTracking, along with their results on standard tracking datasets.
Model | VOT18 EAO | OTB-100 AUC (%) | NFS AUC (%) | UAV123 AUC (%) | LaSOT AUC (%) | TrackingNet AUC (%) | GOT-10k AO (%) | Links |
---|---|---|---|---|---|---|---|---|
ATOM | 0.401 | 66.3 | 58.4 | 64.2 | 51.5 | 70.3 | 55.6 | model |
DiMP-18 | 0.402 | 66.0 | 61.0 | 64.3 | 53.5 | 72.3 | 57.9 | model |
DiMP-50 | 0.440 | 68.4 | 61.9 | 65.3 | 56.9 | 74.0 | 61.1 | model |
PrDiMP-18 | 0.385 | 68.0 | 63.3 | 65.3 | 56.4 | 75.0 | 61.2 | model |
PrDiMP-50 | 0.442 | 69.6 | 63.5 | 68.0 | 59.8 | 75.8 | 63.4 | model |
SuperDiMP | - | 70.1 | 64.7 | 68.1 | 63.1 | 78.1 | - | model |
The raw results can be downloaded automatically using the `download_results` script.
You can also download and extract them manually from https://drive.google.com/open?id=1Sacgh5TZVjfpanmwCFvKkpnOA7UHZCY0, for example as sketched below.
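A minimal sketch of the manual route using the third-party `gdown` package, assuming the link above resolves to a single zip archive (if it is a Drive folder, `gdown.download_folder` can be used instead); the output filename is illustrative:

```python
# Minimal sketch, assuming the Drive link resolves to a zip archive.
# Requires the third-party gdown package (pip install gdown).
import zipfile

import gdown

url = "https://drive.google.com/uc?id=1Sacgh5TZVjfpanmwCFvKkpnOA7UHZCY0"
archive = "pytracking_results.zip"  # illustrative filename

gdown.download(url, archive, quiet=False)

# Extract the downloaded results next to the archive.
with zipfile.ZipFile(archive) as zf:
    zf.extractall("pytracking_results")
```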
The folder `benchmark_results` contains the raw results for all datasets except VOT. These results can be analyzed using the analysis module in `pytracking`. See `pytracking/notebooks/analyze_results.ipynb` for examples of how to use the analysis module, or the sketch that follows.
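As a sketch of that workflow, following the interface used in `analyze_results.ipynb` (the tracker name, parameter name, run ids, and report name below are illustrative):

```python
# Sketch following pytracking/notebooks/analyze_results.ipynb; the tracker
# name, parameter name, run ids, and report name are illustrative.
from pytracking.analysis.plot_results import plot_results, print_results
from pytracking.evaluation import get_dataset, trackerlist

# The five OTB-100 runs of DiMP-50 (run ids 0-4), merged for reporting.
trackers = trackerlist('dimp', 'dimp50', range(0, 5), 'DiMP-50')
dataset = get_dataset('otb')

# Print the merged scores and draw the success plot.
print_results(trackers, dataset, 'OTB', merge_results=True,
              plot_types=('success', 'prec'))
plot_results(trackers, dataset, 'OTB', merge_results=True,
             plot_types=('success',))
```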
The folder `packed_results` contains packed results for TrackingNet and GOT-10k, which can be evaluated directly on the official evaluation servers, as well as the VOT results.
The raw results are in the format `[top_left_x, top_left_y, width, height]`. Due to the stochastic nature of the trackers, the results reported here are averaged over multiple runs: 5 runs for OTB-100, NFS, UAV123, and LaSOT; 15 runs for VOT2018, as per the VOT protocol; and 3 runs for GOT-10k, as per its protocol. Since TrackingNet results are obtained from the online evaluation server, only a single run was used for TrackingNet.
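For reference, a minimal sketch of reading one raw result file (the file path and tab delimiter are assumptions; check the files in your download):

```python
import numpy as np

# Each raw result file stores one box per frame as
# [top_left_x, top_left_y, width, height]; the path is hypothetical.
boxes = np.loadtxt('benchmark_results/dimp50/Basketball.txt', delimiter='\t')

# Convert to corner format [x1, y1, x2, y2], e.g. for overlap computation.
corners = boxes.copy()
corners[:, 2:] += corners[:, :2]
```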
The success plots for our trained models on the standard tracking datasets are shown below.