
About evaluation protocols. #8

Closed
wjun0830 opened this issue Dec 3, 2022 · 2 comments


wjun0830 commented Dec 3, 2022

Hello.

Thanks for the interesting work.

I was also curious about the design choice of the evaluation metric regarding the 10 random trials of unknown class selection.

What's the difference compared to using the whole HMDB51/MiT-v2 dataset as the open set?

Since training is conducted on the whole UCF101 dataset, I was thinking that using the HMDB51/MiT dataset as a whole would make no difference to your evaluation protocol.

Thank you for the great work.

Cogito2012 (Owner) commented

The purpose of the 10 random trials of unknown classes is to obtain stable/convincing evaluation results with respect to the corresponding openness of the OSR (open-set recognition) testing.

You can definitely use the whole HMDB/MiT dataset as the unknown set so that no random trials are needed, but note that this only reports performance under a single openness value.
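
For reference, here is a minimal sketch of how the openness value depends on the number of sampled unknown classes, assuming the commonly used definition from Scheirer et al. (openness = 1 - sqrt(2·N_known / (N_known + N_test)) with N_test = N_known + N_unknown). The class counts (101 for UCF101, 51 for HMDB51) are standard, but the exact formula used in this repository's evaluation code may differ:

```python
import math

def openness(num_known: int, num_unknown: int) -> float:
    """Openness as commonly defined for open-set recognition
    (simplified form of Scheirer et al., 2013):
    O = 1 - sqrt(2 * N_known / (N_known + N_test)),
    where N_test = N_known + N_unknown.
    """
    n_test = num_known + num_unknown
    return 1.0 - math.sqrt(2.0 * num_known / (num_known + n_test))

# UCF101 is the known (closed) set with 101 classes.
# Sampling subsets of HMDB51's 51 classes as unknowns yields a range
# of openness values; using all 51 classes fixes a single value.
for n_unknown in (10, 25, 51):
    print(f"{n_unknown:>3} unknown classes -> openness = {openness(101, n_unknown):.3f}")
```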


wjun0830 commented Dec 4, 2022

I get it. Thank you.

wjun0830 closed this as completed Dec 4, 2022