
Eval tracker #205

Closed · kangben258 wants to merge 4 commits
Conversation

@kangben258 commented Aug 25, 2023

Scripts are provided for evaluating the tracker.

Refer to tools/eval/README.md for usage.

A note on scripts

At the end of the test, two metrics are reported: success (0-1) and precision (0-1). Success measures how reliably the tracker keeps track of the target; a higher value means the tracker is more robust. Precision measures the tracker's localization accuracy; a higher value means the tracker is more accurate.

Here are the test results for DaSiamRPN:

| model | success | precision |
| --- | --- | --- |
| DaSiamRPN | 0.322 | 0.409 |
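For readers unfamiliar with these metrics, below is a minimal sketch of how success and precision are commonly computed in the OTB one-pass-evaluation protocol: success is the area under the curve of per-frame IoU over a sweep of thresholds, and precision is the fraction of frames whose predicted box center is within 20 pixels of the ground truth. This is an illustrative assumption of the standard definitions, not necessarily how tools/eval/eval.py implements them; the function names, the [x, y, w, h] box format, and the 20-pixel threshold are all assumed for the example.

```python
import numpy as np

def iou(pred, gt):
    # Boxes are Nx4 arrays in [x, y, w, h] format; returns per-frame IoU.
    x1 = np.maximum(pred[:, 0], gt[:, 0])
    y1 = np.maximum(pred[:, 1], gt[:, 1])
    x2 = np.minimum(pred[:, 0] + pred[:, 2], gt[:, 0] + gt[:, 2])
    y2 = np.minimum(pred[:, 1] + pred[:, 3], gt[:, 1] + gt[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    union = pred[:, 2] * pred[:, 3] + gt[:, 2] * gt[:, 3] - inter
    return inter / np.maximum(union, 1e-12)

def success_score(pred, gt, thresholds=np.linspace(0, 1, 21)):
    # Success: fraction of frames whose IoU exceeds each threshold,
    # averaged over thresholds (area under the success curve), in [0, 1].
    overlaps = iou(pred, gt)
    return float(np.mean([np.mean(overlaps > t) for t in thresholds]))

def precision_score(pred, gt, pixel_threshold=20):
    # Precision: fraction of frames whose predicted box center lies within
    # `pixel_threshold` pixels of the ground-truth center, in [0, 1].
    pred_center = pred[:, :2] + pred[:, 2:] / 2
    gt_center = gt[:, :2] + gt[:, 2:] / 2
    dist = np.linalg.norm(pred_center - gt_center, axis=1)
    return float(np.mean(dist <= pixel_threshold))
```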

@zihaomu (Member) left a comment

Please clean up the meaningless commented-out code.

Review threads on tools/eval/eval.py and tools/eval/datasets/otb.py (outdated, resolved).
@kangben258 (Author)

Scripts are provided for evaluating the tracker.
Refer to tools/eval/README.md for usage.
At the end of the test, two metrics are reported: success (0-1) and precision (0-1). Success measures how reliably the tracker keeps track of the target; a higher value means the tracker is more robust. Precision measures the tracker's localization accuracy; a higher value means the tracker is more accurate.
The scores of DaSiamRPN are 0.322 for success and 0.409 for precision.

@kangben258 reopened this Aug 28, 2023
@fengyuentau self-assigned this Sep 18, 2023
@fengyuentau added the GSoC (Google Summer of Code project related) and evaluation (adding tools for evaluation or bugs of eval scripts) labels Sep 18, 2023
@fengyuentau self-requested a review September 18, 2023 06:38
### Prepare data

Please visit [here](https://drive.google.com/drive/folders/1DZvtlnG9U94cgLD6Yi3eU7r6QZJkjdl-?usp=sharing) to download the OTB dataset and the JSON file. Organize the files as follows:
A Member commented on the section above:
Is this link provided officially by OTB, or is it from your own Google Drive?

@fengyuentau (Member)

> At the end of the test, two metrics are reported: success (0-1) and precision (0-1). Success measures how reliably the tracker keeps track of the target; a higher value means the tracker is more robust. Precision measures the tracker's localization accuracy; a higher value means the tracker is more accurate.

You can place this along with dataset information in tools/eval/readme.md.

> The scores of DaSiamRPN are 0.322 for success and 0.409 for precision.

Please also add this to the model information in models/object_tracking_dasiamrpn/readme.md.

@ryan1288 (Contributor) commented Mar 3, 2024

@fengyuentau I'm thinking of taking this incomplete evaluation to the finish line. If that sounds good, I'll understand the current change before proposing work to finish the PR.

@fengyuentau (Member)

> @fengyuentau I'm thinking of taking this incomplete evaluation to the finish line. If that sounds good, I'll understand the current change before proposing work to finish the PR.

Sure, feel free to take this.

@fengyuentau added the stale label (Issues or PRs marked as stale will be closed in 7 days) Mar 19, 2024
@fengyuentau closed this Jun 4, 2024
Labels
evaluation (adding tools for evaluation or bugs of eval scripts), GSoC (Google Summer of Code project related), stale (Issues or PRs marked as stale will be closed in 7 days)