
Explanation Accuracy on MUTAG #5

Open
yaorong0921 opened this issue Jul 18, 2022 · 2 comments

@yaorong0921

Dear authors,

Thanks for your interesting work and for sharing the code! I have tried the code and ran into a problem when using GraphSVX on the MUTAG dataset: I only reach an explanation accuracy of 0.1.
I trained a GCN/GNN on MUTAG using your training script, but my model's accuracy is also about 10% lower than the result reported in the paper, so it could be that my model is not well trained.

Could you please provide the hyperparameters you used for training the model, as well as those used for evaluating the explanations on MUTAG?

Thank you very much for your help in advance!

Best,

@AlexDuvalinho (Owner)

AlexDuvalinho commented Jul 27, 2022 via email

Hi,

Thank you for your message. I just went back to my notes to find the good hyperparameters for MUTAG. Unfortunately, I added MUTAG right before the submission and was not organised enough with my experiments at the time, so I could not find much. What I remember is that MUTAG was trickier to evaluate because its ground-truth explanations rely on edges (not nodes, unlike the default version of GraphSVX), so I struggled more to get good results on this dataset than on the others. I found these settings noted somewhere, but I don't know if these are SOTA results or intermediate ones:

python3 script_eval_gt.py --dataset='Mutagenicity' --num_samples=300/500 --S=3 --coal='SmarterSeparate' --hv='compute_pred' --feat='All'

Training: 800 epochs, weight decay = 0.002. Loss = 0.28. Results = 0.88, 0.84, 0.81 (not exactly SOTA, but closer than what you found, I believe).

I am sorry I am not able to help out more. Good luck!

Best wishes,
Alexandre

@Jerry00917

Dear author,

Thanks for sharing your work!

Could you be more specific about the training hyperparameters for MUTAG, such as the learning rate, weight clipping, weight decay, and batch size? The best accuracy I could get from the repo is about 80%.

Thanks!
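For reference, the only training settings recalled in this thread (800 epochs, weight decay 0.002) can be sketched as a minimal PyTorch training loop. This is an illustration only: the model, learning rate, feature size, and dummy data below are assumptions not given in the thread, and the actual repository trains a GCN on MUTAG graphs rather than a plain MLP.

```python
import torch
from torch import nn

# Hypothetical stand-in for the repo's GCN; layer sizes are illustrative.
model = nn.Sequential(nn.Linear(14, 32), nn.ReLU(), nn.Linear(32, 2))

# From the thread: 800 epochs, weight decay = 0.002.
# The learning rate (0.01) is an assumption, NOT stated by the author.
optimizer = torch.optim.Adam(model.parameters(), lr=0.01, weight_decay=0.002)
loss_fn = nn.CrossEntropyLoss()

# Dummy batch standing in for graph-level features and binary labels.
x = torch.randn(8, 14)
y = torch.randint(0, 2, (8,))

for epoch in range(800):
    optimizer.zero_grad()
    loss = loss_fn(model(x), y)
    loss.backward()
    optimizer.step()
```

The remaining hyperparameters Jerry asks about (learning rate, weight clipping, batch size) are not answered in this thread, so they stay open questions here.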
