Hello. Thanks for the interesting work.

I was curious about the design choice in the evaluation metric of using 10 random trials of unknown-class selection. What is the difference compared to using the whole HMDB51/MiT-v2 dataset as the open set? Since training is conducted on the whole UCF101 dataset, I would have thought that using the HMDB51/MiT-v2 dataset as a whole would make no difference to your evaluation protocol.

Thank you for the great work.
The purpose of the 10 random trials of unknown classes is to report a stable and convincing evaluation result with respect to the corresponding openness of the open-set recognition (OSR) test.
You can certainly use the whole HMDB51/MiT-v2 dataset as the unknown set, so that no random trials are needed, but note that this reports performance at only a single openness value.
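To make the trade-off concrete, here is a minimal sketch (not the authors' code) of the two protocol pieces being discussed: sampling unknown classes for several random trials, and computing openness. It assumes the common openness definition from the open-set recognition literature, O = 1 - sqrt(2 * n_train / (n_train + n_test)); the function and parameter names are illustrative.

```python
import math
import random

def openness(num_known_classes, num_unknown_classes):
    """Openness of an OSR test set, using the common definition
    O = 1 - sqrt(2 * n_train / (n_train + n_test)).
    With zero unknown classes, openness is 0 (fully closed set)."""
    n_train = num_known_classes
    n_test = num_known_classes + num_unknown_classes
    return 1.0 - math.sqrt(2.0 * n_train / (n_train + n_test))

def sample_unknown_trials(unknown_classes, num_sampled, num_trials=10, seed=0):
    """Draw `num_trials` random subsets of unknown classes (without
    replacement within each trial), as in the repeated-trial protocol."""
    rng = random.Random(seed)
    return [rng.sample(unknown_classes, num_sampled) for _ in range(num_trials)]

# Example: 101 known classes (UCF101), sampling 25 of 51 unknown classes
# (hypothetical numbers) per trial, averaged over 10 trials.
trials = sample_unknown_trials(list(range(51)), num_sampled=25)
per_trial_openness = openness(101, 25)
whole_set_openness = openness(101, 51)
```

Using the whole HMDB51/MiT-v2 dataset corresponds to a single fixed subset at one openness value, whereas the repeated-trial protocol averages results over multiple random unknown-class subsets at a chosen openness.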