Reproducibility of ActivityNet #16

Closed
yyccli opened this issue Oct 8, 2022 · 4 comments

Comments

@yyccli

yyccli commented Oct 8, 2022

Hi, first thanks for your great work.
I am trying to reproduce your results in ActivityNet. I follow the operations in your paper. Using TSP features and add some codes in Dataset module. I can run through whole process in ActivityNet but i just cannot get results as good as you present in the paper. For me, the results drop all about 3-4%.
I am wondering whether you have planning to open source the train code for ActivityNet?

@xlliu7
Owner

xlliu7 commented Oct 13, 2022

Hi, thanks for your interest in our work. To reproduce our results with TSP features on ActivityNet, some parameters need to be modified: for example, num_queries should be 50 and set_cost_class should be 2. In addition, several tricks from BSN/BMN are required to boost performance, such as Soft-NMS and using external video-level classification labels from CUHK's winning entry at ActivityNet 2017.
I do plan to release the code for ActivityNet, but it might be one or two months from now. If you need it sooner, you can email me to request the code.

@yyccli
Author

yyccli commented Oct 13, 2022

Thank you very much for your reply; it helps a lot. I have sent you an email to request the code.

@yyccli yyccli closed this as completed Oct 13, 2022
@takfate

takfate commented Oct 15, 2022

Is Soft-NMS beneficial for this query-based method?
How does it work?

@xlliu7
Owner

xlliu7 commented Oct 26, 2022

Hi @takfate, here is my response to your questions:
In my experience, Soft-NMS is beneficial on ActivityNet but harmful on THUMOS14 and HACS. The code already includes an implementation of Soft-NMS. To enable it, you need to:

  1. change the argument nms_mode from ['raw'] to ['raw', 'nms'] at line 115 of engine.py;
  2. change lines 151-156 of datasets/tad_eval.py to
if nms_mode == 'nms' and not (config.TEST_SLICE_OVERLAP > 0 and self.dataset_name == 'thumos14'):  
    # On THUMOS14, when config.TEST_SLICE_OVERLAP > 0, 
    # we only apply nms after all predictions have been collected
    dets = apply_nms(input_dets,
                     nms_thr=config.NMS_THR,
                     use_soft_nms=self.dataset_name in ['activitynet'])
else:
    sort_idx = input_dets[:, 2].argsort()[::-1]
    dets = input_dets[sort_idx, :]
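To illustrate how Soft-NMS differs from hard NMS for the second question: instead of discarding a segment whose temporal IoU with a higher-scoring one exceeds a threshold, Soft-NMS decays its score (e.g. with a Gaussian of the IoU), which suits ActivityNet's dense, heavily overlapping proposals. The following is a minimal standalone sketch for 1D temporal segments, not the repository's actual apply_nms implementation; the function name, sigma, and score threshold are illustrative assumptions:

```python
import numpy as np

def soft_nms_1d(segments, scores, sigma=0.5, score_thr=0.001):
    """Soft-NMS sketch for temporal segments (Gaussian decay).

    segments: (N, 2) array of [start, end] times
    scores:   (N,) confidence scores
    Returns the indices of kept segments in order of selection.
    """
    segments = np.asarray(segments, dtype=float)
    scores = np.asarray(scores, dtype=float).copy()
    idxs = np.arange(len(scores))
    order = []
    while len(idxs) > 0:
        # pick the currently highest-scoring remaining segment
        top = idxs[scores[idxs].argmax()]
        order.append(int(top))
        idxs = idxs[idxs != top]
        if len(idxs) == 0:
            break
        # temporal IoU between the selected segment and the rest
        inter = np.maximum(
            0.0,
            np.minimum(segments[idxs, 1], segments[top, 1])
            - np.maximum(segments[idxs, 0], segments[top, 0]),
        )
        union = (
            (segments[idxs, 1] - segments[idxs, 0])
            + (segments[top, 1] - segments[top, 0])
            - inter
        )
        iou = inter / np.maximum(union, 1e-8)
        # decay overlapping scores instead of suppressing them outright
        scores[idxs] *= np.exp(-(iou ** 2) / sigma)
        # drop segments whose score has decayed below the threshold
        idxs = idxs[scores[idxs] > score_thr]
    return order
```

With hard NMS a heavily overlapping segment would simply be removed; here it survives with a reduced score, so it can still contribute at lower recall thresholds when average precision is computed.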

@xlliu7 xlliu7 reopened this Oct 26, 2022
@xlliu7 xlliu7 closed this as completed Oct 31, 2022