UPDATE: A new challenging subset has been added!

We have released a newly collected extension subset of 15 categories with 150 videos (very challenging!) for one-shot evaluation of tracking algorithms. See the description in this paper. More details, including the data, the complete evaluation toolkit, and results of 48 trackers, are available at this project.

LaSOT_Evaluation_Toolkit

This toolkit is used to evaluate tracker performance on the large-scale benchmark LaSOT (http://vision.cs.stonybrook.edu/~lasot/).

Notification (Downloading dataset and tracking results)

There is a problem with the data server. Please use the following links to download the dataset:

To download the tracking results, please use the following link directly (it includes the toolkit and complete results):

Usage

  • Download the repository and unzip it to your computer
  • Download the tracking results and unzip them to the folder tracking_results/ (if this does not work, use the link above)
  • Run run_tracker_performance_evaluation.m in MATLAB (see the sketch after this list)
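For reference, here is a minimal sketch of the expected layout and the MATLAB call. The layout is inferred from this README (only tracking_results/ and the two .m files are mentioned explicitly), so treat it as an assumption rather than the toolkit's documented structure.

```matlab
% Assumed layout (inferred from this README, not verified against the repo):
%
% LaSOT_Evaluation_Toolkit/
%   run_tracker_performance_evaluation.m
%   tracking_results/          % unzipped tracking results go here
%   utils/plot_draw_save.m
%
% From the MATLAB command window, change into the toolkit root and run the script:
cd('LaSOT_Evaluation_Toolkit');
run('run_tracker_performance_evaluation.m');
```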

Notes

In the file run_tracker_performance_evaluation.m, you can

  • change evaluation_dataset_type (line 25) to evaluate on either all 1,400 sequences or the 280 testing sequences
  • change norm_dst (line 28) to produce precision or normalized precision plots (see the sketch after this list)
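A hedged sketch of what those two switches might look like; the exact values the script accepts are assumptions inferred from this README, so check the comments around lines 25 and 28 in the file itself.

```matlab
% line 25: which split to evaluate
%   'all' -> all 1,400 sequences, 'tst' -> the 280 testing sequences
%   (value names are assumptions; see the script's own comments)
evaluation_dataset_type = 'tst';

% line 28: which precision metric to plot
%   false -> standard precision plot, true -> normalized precision plot (assumed)
norm_dst = true;
```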

In the file utils/plot_draw_save.m

  • adjust the plotting settings to obtain the plots you need (see the sketch below)
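The settings in utils/plot_draw_save.m are not listed here, so the snippet below is purely illustrative: the variable names are hypothetical and only show the kind of cosmetic options such a plotting script typically exposes.

```matlab
% Hypothetical plotting settings (names are illustrative, not from the toolkit):
trackers_to_plot = 10;      % how many top-ranked trackers appear in the legend
line_width       = 2;       % curve line width
font_size        = 14;      % axis and legend font size
save_format      = 'png';   % figure export format, e.g. 'png' or 'eps'
```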

Citation

If you use LaSOT and this evaluation toolkit in your research, please consider citing our paper:

Contact

If you have any questions on LaSOT, please feel free to contact Heng Fan at hefan@cs.stonybrook.edu.
