I would like to ask a question about the dataset. Looking through some papers, I found that some report model performance on the ActivityNet Captions validation set, while others report it on the ActivityNet Captions test split. Is there a difference between the ActivityNet Captions validation set and the ActivityNet Captions test split?
The validation set and test set both come from the official splits [1], but the annotations of the test set are held by the ActivityNet Challenge organizers. Some early papers report performance on the test split by evaluating through the online test server. However, the online server is only accessible during the Challenge, so some recent papers report results on the validation set as a compromise.
[1] R. Krishna, K. Hata, F. Ren, L. Fei-Fei, and J. C. Niebles, "Dense-Captioning Events in Videos," in Proc. IEEE Int. Conf. Comput. Vis. (ICCV), 2017, pp. 706–715.