Can't reach the accuracy of leaderboard #8
Hi! Could you please specify which model on which dataset you can't reproduce? If the gap is common across models, there is probably something wrong with your experiment settings (environment, hyper-parameters, etc.).
For example, I tried to reproduce the basic models with AT (adversarial training) using the pipeline provided by the GRB GitHub repo. I didn't change any hyper-parameters, only the model file directories, in train_pipeline, injection_attack_pipeline, and leaderboard_pipeline. The results I get don't match the leaderboard on the website. Please help! I also have a question about the attack and defense models: are ProGNN, Metattack, Nettack, and random not scalable to large datasets, and is that why GRB doesn't include them?
Hi, I'll check whether the hyper-parameters are correct. For your question, the answer is yes: these methods are not scalable to large datasets.
For example, when I simply train GCN and attack it with FGSM, the result doesn't match the leaderboard on the website, even though I have checked every hyper-parameter in pipeline/configs and in the paper. The attached script has roughly this shape:

```python
import os

def main():
    ...  # training/attack code elided in the original attachment

if __name__ == '__main__':
    main()
```

I think it would be convenient for you to check this Python file and find the problem. Thanks.
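One generic thing worth checking before comparing against the leaderboard: whether all random seeds are pinned. Gaps of a few percent often come from unfixed seeds rather than wrong hyper-parameters. A minimal sketch of a seed helper at the top of a training script — the `numpy`/`torch` lines are the usual additions for a PyTorch-based pipeline like GRB's and are left as comments here because they depend on your stack:

```python
import os
import random

def set_seed(seed: int) -> None:
    """Pin the common sources of randomness for a reproducible run."""
    random.seed(seed)
    os.environ["PYTHONHASHSEED"] = str(seed)
    # For a PyTorch pipeline you would typically also call:
    #   numpy.random.seed(seed)
    #   torch.manual_seed(seed)
    #   torch.cuda.manual_seed_all(seed)

# Re-seeding reproduces the same stream of random draws:
set_seed(42)
a = [random.random() for _ in range(3)]
set_seed(42)
b = [random.random() for _ in range(3)]
assert a == b  # identical draws after re-seeding
```

Note that even with seeds fixed, GPU nondeterminism can leave small run-to-run differences, so exact leaderboard numbers are not always recoverable.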
Hi, I tried to use the pipeline to reproduce the results of the GRB leaderboard but can't reach the accuracy given in the paper and on the GRB website. There is always a 2-5% gap between the paper and my experiments. Could you please provide the full code for reproducing them?
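Before concluding the pipeline itself is wrong, it can help to check whether a 2-5% gap is within run-to-run variance: train the same model under several seeds and compare mean ± std against the reported number. A small sketch — the accuracy values and the leaderboard figure below are purely illustrative, not real GRB results:

```python
import statistics

def summarize(accuracies):
    """Return mean and sample std of test accuracies across seeds."""
    mean = statistics.mean(accuracies)
    std = statistics.stdev(accuracies) if len(accuracies) > 1 else 0.0
    return mean, std

# Illustrative accuracies from 5 seeds (NOT real GRB numbers)
accs = [0.831, 0.842, 0.825, 0.838, 0.829]
mean, std = summarize(accs)

leaderboard = 0.85  # hypothetical reported value you are comparing against
gap = leaderboard - mean
print(f"mean={mean:.3f} +/- {std:.3f}, gap to leaderboard={gap:.3f}")
# If the gap is within roughly 2 std, it may just be seed variance
# rather than a bug in the experiment settings.
```

If the gap persists well beyond the variance across seeds, then it points to a genuine mismatch in environment or hyper-parameters.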