I'm trying to reproduce the Table 4, Row 1 results from the paper.
The setup is: white box, Model A, FGSM attack with ε = 0.3, no defense, 50 epochs, learning rate 1e-3, MNIST, Adam optimizer. The paper reports a classifier accuracy of 99.7 and 0.217 in the no-defense case, while the code produces around 99.4 classifier accuracy and roughly 0.16 with no defense.
Could you please tell us what changes are needed to get close to the original numbers? The only difference I can spot is that the code uses the complete training data (60K) without holding out any validation data; a sketch of such a split is shown below.
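For reference, a minimal sketch of carving a held-out validation set out of the 60K MNIST training images. The 55K/5K split size and the seed are illustrative assumptions; the paper's exact validation protocol is not stated in this issue.

```python
import numpy as np

def split_train_val(x_train, y_train, val_size=5000, seed=0):
    """Hold out `val_size` examples from the 60K MNIST training set.

    The 55K/5K split and the seed are illustrative assumptions, not taken
    from the paper or this repo.
    """
    rng = np.random.RandomState(seed)
    idx = rng.permutation(len(x_train))
    val_idx, train_idx = idx[:val_size], idx[val_size:]
    return (x_train[train_idx], y_train[train_idx],
            x_train[val_idx], y_train[val_idx])
```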
@krishnakanthnakka Unfortunately, this code is quite different from the one used for the paper, due to various time constraints at different times. It's hard to know exactly where the difference comes from; it may simply be that the random seeds were not fixed back then. Feel free to report the numbers produced by the current code in your papers. The point of the repo was mostly to flesh out the algorithmic and technical details of the implementation.
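Since unfixed random seeds are mentioned as a likely cause, here is a minimal sketch of pinning the usual sources of randomness for more repeatable runs. It assumes a TensorFlow 1.x / NumPy stack and an arbitrary seed value; neither is confirmed by the paper or this issue.

```python
import random
import numpy as np
import tensorflow as tf

def fix_seeds(seed=0):
    """Pin the common sources of randomness so repeated runs match.

    The seed value and the TensorFlow 1.x API are illustrative assumptions.
    """
    random.seed(seed)
    np.random.seed(seed)
    tf.set_random_seed(seed)  # use tf.random.set_seed(seed) on TensorFlow 2.x
```

Note that even with fixed seeds, GPU nondeterminism and library version differences can still shift the reported numbers slightly between runs.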