
Some questions regarding the dataset #39

Closed
MrAccelerator opened this issue Sep 7, 2022 · 2 comments

@MrAccelerator

I would like to ask a question about the dataset. Looking through the literature, I found that some papers report model performance on the ActivityNet Captions validation set, while others report it on the ActivityNet Captions test split. What is the difference between the ActivityNet Captions validation set and the test split?

@ttengwang (Owner)

The validation set and test set both come from the official splits [1], but the annotations of the test set are held by the ActivityNet Challenge organizers. Some early papers report performance on the test split by evaluating on the online test server. However, the online server is only accessible during the Challenge, so some recent papers report results on the validation set as a compromise.

[1] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. C. Niebles, “Dense-captioning events in videos,” in Proc. IEEE Int. Conf. Comput. Vis., 2017, pp. 706-715.
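For readers checking which split they are working with: the publicly released ActivityNet Captions annotations are JSON files mapping video IDs to timestamped sentences, while the test-split annotations are withheld as described above. A minimal sketch of inspecting one such file, assuming the commonly distributed structure (the `sample` dict below is a hypothetical excerpt, not real data):

```python
# Hypothetical excerpt mimicking a public annotation file (e.g. a
# validation-split JSON): video ID -> duration, event timestamps,
# and one caption sentence per event.
sample = {
    "v_abc123": {
        "duration": 120.5,
        "timestamps": [[0.0, 30.2], [28.1, 60.0]],
        "sentences": ["A man walks in.", "He starts cooking."],
    }
}

def split_stats(annotations):
    """Count videos and caption sentences in a split's annotation dict."""
    n_videos = len(annotations)
    n_sentences = sum(len(v["sentences"]) for v in annotations.values())
    return n_videos, n_sentences

print(split_stats(sample))  # (1, 2)
```

In practice you would load each split's file with `json.load` and compare the video-ID sets; a missing public file for the test split is exactly the situation the answer above describes.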

@MrAccelerator (Author)

Thanks for your quick response, this resolved my confusion.
