We reproduced GCN-Jaccard using the code provided in the DeepRobust library. However, we find the results are much higher than those reported in your paper. For example, we get a test accuracy of 0.80+ on citeseer with ptb_rate=5.0 and threshold=0.1. We want to know whether there are mistakes in the GCN-Jaccard code. Thanks!
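For reference, here is a minimal sketch of the kind of reproduction script in question, assuming DeepRobust's `Dataset`/`PrePtbDataset` loaders and the `threshold` argument of `GCNJaccard.fit`; the hidden size and evaluation details are illustrative assumptions, not necessarily the paper's exact setup:

```python
import torch
from deeprobust.graph.data import Dataset, PrePtbDataset
from deeprobust.graph.defense import GCNJaccard

device = 'cuda' if torch.cuda.is_available() else 'cpu'

# Clean citeseer graph with the standard splits shipped with DeepRobust.
data = Dataset(root='/tmp/', name='citeseer')
features, labels = data.features, data.labels
idx_train, idx_val, idx_test = data.idx_train, data.idx_val, data.idx_test

# Pre-attacked adjacency (for nettack, ptb_rate is perturbations per target node).
perturbed_data = PrePtbDataset(root='/tmp/', name='citeseer',
                               attack_method='nettack', ptb_rate=5.0)
perturbed_adj = perturbed_data.perturbed_adj

model = GCNJaccard(nfeat=features.shape[1], nhid=16,
                   nclass=labels.max().item() + 1, device=device).to(device)
# threshold is the Jaccard-similarity cutoff below which edges are dropped.
model.fit(features, perturbed_adj, labels, idx_train, idx_val, threshold=0.1)
model.test(idx_test)
# Note: since nettack is a targeted attack, the paper's evaluation is on the
# attacked target nodes (perturbed_data.target_nodes) rather than all of idx_test.
```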
Thanks for the question. I have just taken a look, and the GCN-Jaccard code should be correct.
I found that we do get a higher test accuracy with threshold=0.1 on nettack. But in my experiments, I tuned the GCN-Jaccard hyper-parameters based on validation performance rather than test performance (since we cannot access the test data). Further, the hyper-parameters for all methods are tuned only on the weakest perturbation case, and the same hyper-parameters are then applied to all other perturbation rates. So I ended up using threshold=0.01 instead of 0.1 for nettack with ptb_rate=5.0.
In the experiments, nettack is applied only to the test data, so tuning on test performance would make it much easier to get a higher accuracy. In practice, however, we have no knowledge of the test data, so we can only tune the hyper-parameters on the validation set.
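To make this protocol concrete, here is a hedged sketch of validation-only threshold selection on the weakest nettack perturbation; the candidate grid and hidden size are illustrative assumptions rather than the paper's exact values:

```python
import torch
from deeprobust.graph.data import Dataset, PrePtbDataset
from deeprobust.graph.defense import GCNJaccard

device = 'cuda' if torch.cuda.is_available() else 'cpu'
data = Dataset(root='/tmp/', name='citeseer')
features, labels = data.features, data.labels
idx_train, idx_val = data.idx_train, data.idx_val
# Tune on the weakest perturbation case only (ptb_rate=1.0 for nettack).
perturbed_adj = PrePtbDataset(root='/tmp/', name='citeseer',
                              attack_method='nettack',
                              ptb_rate=1.0).perturbed_adj

labels_t = torch.LongTensor(labels).to(device)
best_threshold, best_val_acc = None, -1.0
for threshold in [0.01, 0.02, 0.05, 0.1]:   # illustrative candidate grid
    model = GCNJaccard(nfeat=features.shape[1], nhid=16,
                       nclass=labels.max().item() + 1,
                       device=device).to(device)
    model.fit(features, perturbed_adj, labels, idx_train, idx_val,
              threshold=threshold)
    output = model.predict()                # log-probabilities for all nodes
    val_acc = (output[idx_val].argmax(1) ==
               labels_t[idx_val]).float().mean().item()
    if val_acc > best_val_acc:
        best_val_acc, best_threshold = val_acc, threshold

print(f'selected threshold={best_threshold} (val acc {best_val_acc:.3f})')
# The selected value is then reused for all stronger perturbation rates,
# and the test set is touched only once, at the very end.
```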