Generalizability of Adversarial Robustness Under Distribution Shifts

This repository contains the PyTorch implementation of the paper "Generalizability of Adversarial Robustness Under Distribution Shifts" published at the Transactions on Machine Learning Research (TMLR) journal (with a Featured Certification award). The paper investigates the interplay between adversarial robustness and domain generalization, and shows that both empirical and certified robustness generalize to unseen domains, even in a real-world medical application.

Requirements

The code requires the following packages:

Usage

Empirical Robustness

To train the models and evaluate the generalization of empirical robustness, run the following command:

python -m domainbed.scripts.train_empirical --data_dir ./datasets/ --dataset PACS --algorithm ERM --test_env 0 --steps 300 --output_dir ./logs/

This command loads the data from ./datasets/PACS/ and performs standard ERM training, with environment 0 as the test environment, for 300 iterations/steps. The results are saved in ./logs/, where you will find the best model checkpoint along with clean and robust accuracy (PGD and AutoAttack).
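The robust-accuracy evaluation above is based on projected gradient descent (PGD). A minimal NumPy sketch of the PGD-Linf iteration on a toy linear classifier illustrates the core loop; all names here are illustrative, not the repository's API:

```python
import numpy as np

def pgd_linf(x, y, w, eps, step, num_steps):
    """PGD-Linf on a binary linear classifier sign(w.x).

    The attack ascends the margin loss -y * (w.x); its gradient w.r.t. x
    is -y * w. Each iteration takes a gradient-sign step and projects the
    iterate back into the Linf ball of radius eps around x.
    """
    x_adv = x.copy()
    for _ in range(num_steps):
        grad = -y * w                             # d(loss)/dx for margin loss
        x_adv = x_adv + step * np.sign(grad)      # gradient-sign ascent step
        x_adv = np.clip(x_adv, x - eps, x + eps)  # project into the eps-ball
    return x_adv

# Toy example: the attack shrinks the margin of a correctly classified point
# while the perturbation stays inside the eps-ball.
x = np.array([1.0, 1.0])
w = np.array([1.0, 1.0])
x_adv = pgd_linf(x, y=1.0, w=w, eps=0.1, step=0.025, num_steps=10)
print(np.max(np.abs(x_adv - x)))  # never exceeds eps = 0.1
```

The same eps / step / num_steps structure appears in the adversarial-training defaults listed below.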

--algorithm can be any of: 'ERM', 'PGDLinf', 'TradesLinf', 'PGDL2', 'TradesL2'.

The default parameters for the adversarial training are:

eps = 2 / 255
step = eps / 4
num_steps = 10
beta = 3.0

These defaults can be changed in ./domainbed/algorithms.py
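The beta parameter is the TRADES trade-off weight: the TRADES objective adds a beta-weighted KL term that pulls the prediction on the adversarial example toward the prediction on the clean one. A minimal NumPy sketch of that objective (illustrative only, not the repository's implementation):

```python
import numpy as np

def softmax(z):
    z = z - z.max()          # subtract max for numerical stability
    e = np.exp(z)
    return e / e.sum()

def trades_loss(logits_clean, logits_adv, y, beta):
    """TRADES objective for a single example:
    cross-entropy on the clean input plus beta * KL(p_clean || p_adv),
    matching the KL direction used in the reference TRADES implementation.
    """
    p_clean = softmax(logits_clean)
    p_adv = softmax(logits_adv)
    ce = -np.log(p_clean[y])                        # clean cross-entropy
    kl = np.sum(p_clean * np.log(p_clean / p_adv))  # KL(p_clean || p_adv)
    return ce + beta * kl

# When the adversarial logits equal the clean logits, the KL term vanishes
# and the loss reduces to the clean cross-entropy, regardless of beta.
logits = np.array([2.0, 0.5, -1.0])
loss_clean = trades_loss(logits, logits, y=0, beta=3.0)
loss_shift = trades_loss(logits, logits + np.array([-1.0, 1.0, 0.0]), y=0, beta=3.0)
print(loss_clean < loss_shift)
```

Larger beta trades clean accuracy for robustness by penalizing the clean/adversarial prediction gap more heavily.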

Certified Robustness

To train the smoothed models and evaluate the generalization of certified robustness, we follow the implementation in the DeformRS repository.
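Certification here relies on randomized smoothing: a smoothed classifier g(x) returns the class the base classifier predicts most often under random input perturbations, and certified radii follow from the vote probabilities. DeformRS smooths over parametric deformations rather than plain additive noise, but the additive Gaussian case below conveys the core idea; the function names are illustrative, not DeformRS's API:

```python
import numpy as np

def smoothed_predict(base_classifier, x, sigma, n_samples, rng):
    """Monte Carlo estimate of the smoothed classifier's prediction:
    a majority vote of the base classifier over Gaussian input noise
    drawn from N(0, sigma^2 I).
    """
    counts = {}
    for _ in range(n_samples):
        noisy = x + rng.normal(0.0, sigma, size=x.shape)
        c = base_classifier(noisy)
        counts[c] = counts.get(c, 0) + 1
    return max(counts, key=counts.get)  # most frequent predicted class

# Toy base classifier: thresholds the first coordinate at zero.
clf = lambda v: int(v[0] > 0)
rng = np.random.default_rng(0)
# x[0] = 2.0 is four sigmas from the decision boundary, so the vote is
# essentially unanimous.
pred = smoothed_predict(clf, np.array([2.0, 0.0]), sigma=0.5, n_samples=100, rng=rng)
print(pred)
```

In practice the vote probabilities are then plugged into a certification bound to obtain a radius within which the smoothed prediction provably cannot change.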

References

If you use this code or the results in your research, please cite the following paper:

@article{alhamoud2023generalizability,
  title={Generalizability of Adversarial Robustness Under Distribution Shifts},
  author={Alhamoud, Kumail and Hammoud, Hasan Abed Al Kader and Alfarra, Motasem and Ghanem, Bernard},
  journal={Transactions on Machine Learning Research},
  year={2023}
}
