Reproducibility of ActivityNet #16
Comments
Hi, thanks for your interest in our work. To reproduce our results with TSP features on ActivityNet, a few parameters need to be modified: for example, num_queries should be 50 and set_cost_class should be 2. In addition, several tricks from BSN/BMN are required to boost performance, such as Soft-NMS and using the external video-level classification labels from CUHK's winning entry at ActivityNet 2017.
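For reference, the two overrides mentioned above could be wired up like this. This is a minimal sketch assuming a DETR-style argparse config; the parser itself and its help strings are illustrative, only the names num_queries and set_cost_class come from the comment above:

```python
# Hypothetical hyperparameter overrides for ActivityNet + TSP features.
# Only the two values (num_queries=50, set_cost_class=2) come from the
# maintainer's reply; the parser layout is an assumption.
import argparse

def build_parser():
    parser = argparse.ArgumentParser("ActivityNet reproduction settings")
    parser.add_argument("--num_queries", type=int, default=50,
                        help="number of action queries (50 for ActivityNet)")
    parser.add_argument("--set_cost_class", type=float, default=2.0,
                        help="classification cost weight in the Hungarian matcher")
    return parser

args = build_parser().parse_args([])
print(args.num_queries, args.set_cost_class)
```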
Thank you very much for your reply; it helps me a lot. I have sent you an email requesting the code.
Is Soft-NMS beneficial for this query-based method?
Hi @takfate, here is my response to your questions:
if nms_mode == 'nms' and not (config.TEST_SLICE_OVERLAP > 0 and self.dataset_name == 'thumos14'):
    # On THUMOS14, when config.TEST_SLICE_OVERLAP > 0,
    # we only apply NMS after all predictions have been collected
    dets = apply_nms(input_dets,
                     nms_thr=config.NMS_THR,
                     use_soft_nms=self.dataset_name in ['activitynet'])
else:
    # otherwise just sort detections by score in descending order
    sort_idx = input_dets[:, 2].argsort()[::-1]
    dets = input_dets[sort_idx, :]
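For context, the use_soft_nms branch above corresponds to standard Gaussian Soft-NMS applied to temporal segments. A minimal 1-D sketch is shown below; the function name, the [start, end, score] array layout, and the parameter defaults are assumptions for illustration, not the repo's actual implementation:

```python
import numpy as np

def soft_nms_1d(dets, sigma=0.5, score_thr=0.001):
    """Gaussian Soft-NMS over temporal segments.

    dets: (N, 3) array of [start, end, score]. Returns kept detections
    in the order they were selected (descending decayed score).
    Illustrative only; defaults are common choices, not the repo's.
    """
    dets = dets.copy()
    kept = []
    while dets.shape[0] > 0:
        top = dets[:, 2].argmax()
        best = dets[top].copy()
        kept.append(best)
        dets = np.delete(dets, top, axis=0)
        if dets.shape[0] == 0:
            break
        # temporal IoU between the selected segment and the rest
        inter = np.maximum(0.0,
                           np.minimum(best[1], dets[:, 1]) -
                           np.maximum(best[0], dets[:, 0]))
        union = (best[1] - best[0]) + (dets[:, 1] - dets[:, 0]) - inter
        iou = inter / np.maximum(union, 1e-8)
        # Gaussian score decay instead of hard suppression
        dets[:, 2] *= np.exp(-(iou ** 2) / sigma)
        dets = dets[dets[:, 2] > score_thr]
    return np.array(kept)

dets = np.array([[0.0, 1.0, 0.9],
                 [0.1, 1.1, 0.8],
                 [2.0, 3.0, 0.7]])
out = soft_nms_1d(dets)
```

Unlike hard NMS, overlapping segments are not discarded outright; their scores are decayed, which tends to help on ActivityNet where ground-truth instances can be long and loosely localized.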
Hi, first thanks for your great work.
I am trying to reproduce your results on ActivityNet. I followed the procedure in your paper, using TSP features and adding some code to the Dataset module. I can run the whole pipeline on ActivityNet, but I cannot match the results reported in the paper; mine are about 3-4% lower across the board.
I am wondering whether you have plans to open-source the training code for ActivityNet?