
Adversarial Training with PyTorch

Run training with:

python main.py -a -v

Accuracy (WIP)

Model               Acc.
VGG16               --.--%
ResNet18            51.99%
ResNet50            --.--%
ResNet101           --.--%
MobileNetV2         --.--%
ResNeXt29(32x4d)    --.--%
ResNeXt29(2x64d)    --.--%
DenseNet121         --.--%
PreActResNet18      --.--%
DPN92               --.--%

Learning rate adjustment

I manually adjust the learning rate during training:

  • 0.1 for epoch [0,50)
  • 0.01 for epoch [50,60)

Resume training with python main.py -r --lr=0.01 -a -v
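
As a rough sketch only (not the repository's actual code), the schedule above can be applied by editing the optimizer's parameter groups between epochs; the model and SGD settings below are placeholder assumptions:

    import torch

    model = torch.nn.Linear(3 * 32 * 32, 10)  # placeholder model, not the repo's
    optimizer = torch.optim.SGD(model.parameters(), lr=0.1, momentum=0.9)

    def adjust_lr(optimizer, epoch):
        # 0.1 for epochs [0, 50), 0.01 for epochs [50, 60)
        lr = 0.1 if epoch < 50 else 0.01
        for group in optimizer.param_groups:
            group['lr'] = lr

    for epoch in range(60):
        adjust_lr(optimizer, epoch)
        # ... run one training epoch with this learning rate ...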

References

  1. Authors' code: MadryLab/cifar10_challenge

  2. Baseline code: kuangliu/pytorch-cifar

Notes

To read more about the Projected Gradient Descent (PGD) attack, see the following papers (a rough sketch of the attack follows the list):

  1. Towards Deep Learning Models Resistant to Adversarial Attacks

  2. Adversarially Robust Generalization Requires More Data
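
For orientation only, here is a minimal sketch of an L-infinity PGD attack in the spirit of the first paper above. It is not the code from this repository; the epsilon, step size, and number of steps are placeholder assumptions, and model is any classifier taking inputs in [0, 1]:

    import torch
    import torch.nn.functional as F

    def pgd_attack(model, x, y, eps=8/255, alpha=2/255, steps=7):
        # Placeholder hyperparameters, not necessarily those used in this repo.
        x = x.detach()
        # Start from a random point inside the eps-ball around the clean input.
        x_adv = (x + torch.empty_like(x).uniform_(-eps, eps)).clamp(0, 1).detach()
        for _ in range(steps):
            x_adv.requires_grad_(True)
            loss = F.cross_entropy(model(x_adv), y)
            grad = torch.autograd.grad(loss, x_adv)[0]
            # Take a signed-gradient ascent step, then project back into the
            # eps-ball around x and the valid pixel range [0, 1].
            x_adv = x_adv.detach() + alpha * grad.sign()
            x_adv = torch.min(torch.max(x_adv, x - eps), x + eps).clamp(0, 1)
        return x_adv.detach()

In adversarial training, each batch is perturbed with such an attack before the usual update: generate x_adv = pgd_attack(model, inputs, targets), compute the loss on model(x_adv), then backpropagate and step the optimizer.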
