Benchmark Results of Attack Performance #3
Comments
I've updated the PGD attack and you can try it again :)

I tried the new PGD attack and the result is promising!
Could you please tell me which dataset gives you this performance? For the Cora dataset, I am getting the following results (before attack vs. after evasion attack): the accuracy drops by only 6%, whereas in your case the reduction is ~26%.

It's Cora, with a perturbation rate of 0.2.
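For readers unfamiliar with the convention, a perturbation rate is usually converted into an absolute edge-flip budget. A minimal sketch, assuming the common PyG Planetoid version of Cora with 5278 undirected edges (the exact count varies by dataset version):

```python
# Illustrative only: how a perturbation rate typically maps to an attack budget.
num_edges = 5278   # assumed undirected edge count for Cora (PyG Planetoid version)
ptb_rate = 0.2     # perturbation rate mentioned in the comment above

budget = int(ptb_rate * num_edges)  # number of edge flips the attacker may make
print(budget)  # -> 1055
```

With a 0.2 rate the attacker may rewire roughly a fifth of the graph's edges, which explains why the accuracy drop can be far larger than at small budgets.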
Hi, thanks for sharing this awesome repo with us! I recently ran the attack sample code `pgd_attack.py` and `random_attack.py` under `examples/attack/untargeted`, but the accuracies under both the evasion and poisoning attacks do not seem to decrease. I'm pretty confused by the attack results. For CV models, a PGD attack easily decreases the accuracy to nearly random guessing, but the results from GreatX do not seem consistent with that. Is it because the number of perturbed edges is too small?
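For context on the CV comparison made above: on image classifiers, L-inf PGD repeatedly steps along the sign of the loss gradient and projects back into an eps-ball, which reliably crosses the decision boundary once eps is loose enough. A self-contained toy sketch (a hand-rolled linear classifier in NumPy, not GreatX's implementation; all names and values are illustrative):

```python
import numpy as np

W = np.array([[1.0, -1.0], [-1.0, 1.0]])   # toy 2-class linear classifier
x = np.array([1.0, 0.0])                   # clean input, true label 0
y = 0

def softmax(z):
    e = np.exp(z - z.max())
    return e / e.sum()

def pgd(x, y, eps=0.6, alpha=0.1, steps=20):
    """L-inf PGD: ascend the cross-entropy loss, project back to the eps-ball."""
    x_adv = x.copy()
    for _ in range(steps):
        p = softmax(W @ x_adv)
        grad = W.T @ (p - np.eye(2)[y])           # d(CE loss)/dx for a linear model
        x_adv = x_adv + alpha * np.sign(grad)     # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project onto the L-inf ball
    return x_adv

x_adv = pgd(x, y)
print(np.argmax(W @ x), np.argmax(W @ x_adv))  # 0 1 -> the prediction flips
```

Graph structure attacks differ in that the "perturbation" is a discrete edge flip rather than a continuous pixel change, so the effective budget (number of flipped edges) matters much more than it does for images.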
Here are the results of `pgd_attack.py`:

Here are the results of `random_attack.py`: