
about GCN-Jaccard results #4

Closed
whitley0 opened this issue May 15, 2021 · 1 comment

@whitley0

We reproduced GCN-Jaccard using the code provided in the DeepRobust library. However, we find the results are much higher than those reported in your paper. For example, we get a test accuracy above 0.80 on Citeseer with ptb_rate=5.0 and threshold=0.1. We want to know whether there are some mistakes in the GCN-Jaccard code. Thanks!

@ChandlerBang
Owner

Hi,

Thanks for the question. I have just taken a look at it, and the code of GCN-Jaccard should be correct.

I found that we do get a higher test accuracy if we use threshold=0.1 on Nettack. But in my experiments, I tuned the GCN-Jaccard hyper-parameters based on validation performance instead of test performance (as we cannot access the test data). Further, the hyper-parameters for all methods were tuned only on the weakest perturbation case, and the same hyper-parameters were then applied for all other perturbation rates. So I ended up using threshold=0.01 instead of 0.1 for Nettack with ptb_rate=5.0.

In the experiments, Nettack is only applied to the test data, so if we tune based on test performance it is much easier to get a higher accuracy. In practice, however, we have no knowledge of the test data, so we can only tune the hyper-parameters on the validation set.
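For reference, the core preprocessing idea of GCN-Jaccard — drop edges whose endpoint features have Jaccard similarity below a threshold before training the GCN — can be sketched as below. This is a minimal illustration assuming binary node features and a dense adjacency matrix; the function names are mine, not DeepRobust's API.

```python
import numpy as np

def jaccard_similarity(a, b):
    """Jaccard similarity of two binary feature vectors."""
    inter = np.count_nonzero(a * b)
    union = np.count_nonzero(a + b)
    return inter / union if union else 0.0

def prune_edges(adj, features, threshold=0.01):
    """Remove edges whose endpoints have Jaccard similarity below `threshold`.

    adj: symmetric 0/1 adjacency matrix (n x n)
    features: binary feature matrix (n x d)
    """
    adj = adj.copy()
    # iterate over each undirected edge once (upper triangle)
    rows, cols = np.nonzero(np.triu(adj, k=1))
    for u, v in zip(rows, cols):
        if jaccard_similarity(features[u], features[v]) < threshold:
            adj[u, v] = adj[v, u] = 0
    return adj
```

With threshold=0.1, an edge between nodes sharing no features (similarity 0) is removed, while an edge between nodes sharing half their features (similarity 0.5) is kept; a smaller threshold such as 0.01 prunes more conservatively.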

Thank you.
