question about testing AUC-shuffled #9
@hkkevinhf Thanks for your information. Could you please be more specific, or let us re-run your experiments? "AUC-shuffled score is much lower than that reported in the paper on UCF dataset." Which paper do you mean? It's hard to figure out the reason, and we have not encountered a similar issue before.
@wenguanwang Hi, the paper concerned is "Revisiting Video Saliency: A Large-scale Benchmark and a New Model". The details are below: I added a line in demo_ours.m so that I could see the overall metrics on a dataset; all other files remain unchanged. The demo_ours.m I used is shown below:

```matlab
%% Demo.m
% load global parameters, you should set up the "ROOT_DIR" to your own path
CACHE = ['./cache/'];
Metrics{1} = 'AUC_Judd';
%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%%
results = zeros(300,1);
mean_results{1} = zeros(1,5);
for k = 1:1 % indexing methods
end
```
@hkkevinhf Many thanks for your detailed information and quick response. I will have my intern carefully check this issue; it will take some time. Thanks for your understanding!
@wenguanwang Thanks. Looking forward to your reply.
@hkkevinhf could you please offer all five scores for the output saliency maps? |
@wenguanwang Yes. For the UCF test set, the five scores (AUC-J, SIM, S-AUC, CC, NSS) are 0.8977, 0.4058, 0.5619, 0.5070, and 2.5413, respectively. The S-AUC scores for the Hollywood2 test set and the DHF1K validation set also seem strange, but I didn't record them. If you need them, I will evaluate once more.
@hkkevinhf We rechecked our evaluation code and found that the inconsistency in S-AUC is caused by the sampling strategy for the reference fixation map (it uses only the fixations of the same video). This happens only in the released evaluation code; the evaluation code on the server is still the correct version, so no need to worry. We have uploaded an updated version in "code_for_Metrics.zip". Note that S-AUC will show some variation due to the random sampling strategy. Many thanks for the reminder.
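The sampling difference described above can be illustrated with a small sketch. The repository's evaluation code is MATLAB; the sketch below uses Python/NumPy for brevity, and every name in it (`auc_shuffled`, `other_fix`, etc.) is illustrative rather than taken from the repo. The key point is that the negative set for shuffled AUC must be drawn from fixation locations of *other* videos, not from the same video:

```python
import numpy as np

def auc_shuffled(sal, fix, other_fix, n_rep=10, n_neg=100, rng=None):
    """Shuffled AUC sketch.

    Positives: saliency values at this frame's true fixations.
    Negatives: saliency values at fixation locations sampled from
    OTHER videos (other_fix) -- the reference set at the heart of
    the issue discussed in this thread.
    """
    rng = np.random.default_rng(rng)
    # normalize the saliency map to [0, 1]
    sal = (sal - sal.min()) / (sal.max() - sal.min() + 1e-12)
    pos = sal[fix > 0]                       # saliency at true fixations
    cand = np.flatnonzero(other_fix > 0)     # candidate negative locations
    aucs = []
    for _ in range(n_rep):
        idx = rng.choice(cand, size=min(n_neg, cand.size), replace=False)
        neg = sal.ravel()[idx]
        # rank-free AUC: fraction of (pos, neg) pairs where pos > neg,
        # counting ties as half
        greater = (pos[:, None] > neg[None, :]).sum()
        ties = (pos[:, None] == neg[None, :]).sum()
        aucs.append((greater + 0.5 * ties) / (pos.size * neg.size))
    return float(np.mean(aucs))
```

Because the negatives come from other videos' fixations, a model that merely reproduces a dataset-wide center bias scores near chance, which is why a same-video reference set inflates or deflates S-AUC relative to the intended metric.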
@wenguanwang Received. Thanks for your effort and kind reply.
When using the evaluation code in this package, the AUC-shuffled score is much lower than that reported in the paper on the UCF dataset. I was wondering whether there is anything wrong with the evaluation code, or whether I missed some important details.