Thank you for the excellent code. I am wondering if you could share the training config for the ActivityNet dataset.
You can refer to the code of ACM-Net (https://github.com/ispc-lab/ACM-Net) and transplant our method onto their codebase. The config for ActivityNet is as follows:
"dropout":0.7,
"lr":2e-5,
"weight_decay":0.001,
"inp_feat_num":2048,
"out_feat_num":2048,
"scale_factor":10.0, # scaling factor for calculating classification scores
"n_mu":8, # the number of GMM centers
'em_iter':2, # the number of EM iterations
"o_weight":0.8, # the weight of main branch for TCAM fusion during testing
"m_weight":0.2, # the weight of intra-video branch for TCAM fusion during testing
"lambda_b":0.1, # weight of the multiple instance learning head of the classification head (the weight of the Class-agnostic attention head is set as 1.0)
"lambda_att":0.1, # weight of attention normalization loss
"lambda_spl":1.0, # weight of pseudo label loss