
Cross-Domain Transferable Perturbations

PyTorch implementation of "Cross-Domain Transferability of Adversarial Perturbations" (NeurIPS 2019). arXiv link.

Table of Contents

  1. Highlights
  2. Usage
  3. Pretrained-Generators
  4. How to Set-Up Data
  5. Training/Eval
  6. Create-Adversarial-Dataset
  7. Citation

Highlights

  1. The transferability of adversarial examples makes real-world attacks possible in black-box settings, where the attacker has no access to the model's internal parameters. We propose a framework for launching highly transferable attacks that crafts adversarial patterns to mislead networks trained on entirely different domains. The core of our adversarial function is a generative network trained with a relativistic supervisory signal, which yields domain-invariant perturbations.
  2. We mainly focus on the image classification task, but you can use our pretrained adversarial generators to test the robustness of your model regardless of the task (image classification, segmentation, object detection, etc.).
  3. You don't need any particular setup (labels, etc.) to generate adversaries using our method. You can generate adversarial images of any size for any image dataset of your choice (see how to set up the data directory below).
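Because the attack is label-free, applying a trained generator reduces to projecting its output back into an L-infinity ball around the clean image. The sketch below (the function name and `eps` scale are illustrative, not part of the repo's API) shows that projection step:

```python
import torch

def project_perturbation(x_clean, x_adv, eps=10 / 255):
    """Clamp a generated adversary into an L-inf ball of radius eps
    around the clean image, then into the valid [0, 1] pixel range."""
    delta = torch.clamp(x_adv - x_clean, -eps, eps)
    return torch.clamp(x_clean + delta, 0.0, 1.0)
```

No ground-truth label appears anywhere: the bound depends only on the clean image and the generator's raw output.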

Learning Algorithm (overview figure)

Usage

Dependencies

  1. Install PyTorch.
  2. Install Python packages using the following command:
pip install -r requirements.txt

Clone the repository.

git clone https://github.com/Muzammal-Naseer/Cross-domain-perturbations.git
cd Cross-domain-perturbations

Pretrained-Generators

Download pretrained adversarial generators from here into the 'saved_models' folder.

Adversarial generators are trained against the following four models:

  • ResNet152
  • Inceptionv3
  • VGG19
  • VGG16

These models are trained on ImageNet and available in PyTorch's torchvision package.

Datasets

  • Training data:

  • Evaluation data:

    • ImageNet Validation Set (50k images).
    • Subset of ImageNet validation set (5k images).
    • NeurIPS dataset (1k images).
    • You can try your own dataset as well.
  • Directory structure should look like this:

   |Root
       |ClassA
               img1
               img2
               ...
       |ClassB
               img1
               img2
               ...

Training

Run the following command:

  python train.py --model_type res152 --train_dir paintings --eps 10 

This will start training a generator on Paintings (--train_dir) against ResNet152 (--model_type) under a perturbation budget of 10 (--eps) with the relativistic supervisory signal.
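The repository's actual loss lives in train.py; the sketch below shows one relativistic formulation consistent with the paper's description (the function name is illustrative): the generator is rewarded for pushing the adversarial logits away from the clean logits, measured against the model's own clean prediction, so no ground-truth labels are needed.

```python
import torch
import torch.nn.functional as F

def relativistic_adv_loss(logits_adv, logits_clean):
    """Generator loss sketch: cross-entropy of the *difference* of logits
    against the model's own clean prediction. Minimizing the negated
    value maximizes the relativistic cross-entropy."""
    pseudo_labels = logits_clean.argmax(dim=1)  # no ground truth required
    return -F.cross_entropy(logits_adv - logits_clean, pseudo_labels)
```

Intuitively, when the adversarial logits still agree with the clean prediction the loss is near its maximum, and it drops as the adversary moves the prediction away, which is the direction gradient descent follows.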

Evaluations

Run the following command:

  python eval.py --model_type res152 --train_dir imagenet --test_dir ../IN/val --epochs 0 --model_t vgg19 --eps 10 --measure_adv --rl

This will load a generator trained on ImageNet (--train_dir) against ResNet152 (--model_type) and evaluate the clean and adversarial accuracy of VGG19 (--model_t) under a perturbation budget of 10 (--eps).
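What "clean and adversarial accuracy" amounts to can be sketched as the following loop (all names here are illustrative, not eval.py's actual code, and `eps` is on the [0, 1] pixel scale):

```python
import torch

@torch.no_grad()
def clean_and_adv_accuracy(model, generator, loader, eps=10 / 255):
    """Compare a target model's accuracy on clean images vs. the
    generator's bounded adversaries (sketch under assumed names)."""
    model.eval()
    clean_correct = adv_correct = total = 0
    for x, y in loader:
        pred_clean = model(x).argmax(1)
        adv = generator(x)
        # project into the eps-ball and valid pixel range
        adv = torch.clamp(x + torch.clamp(adv - x, -eps, eps), 0, 1)
        pred_adv = model(adv).argmax(1)
        clean_correct += (pred_clean == y).sum().item()
        adv_correct += (pred_adv == y).sum().item()
        total += y.numel()
    return clean_correct / total, adv_correct / total
```

The gap between the two numbers is the generator's black-box fooling power on that target model.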

Create-Adversarial-Dataset

If you need to save adversaries for visualization or adversarial training, run the following command:

 python generate_and_save_adv.py --model_type incv3 --train_dir paintings --test_dir 'your_data/' --eps 255

You should see beautiful images (unbounded adversaries) like this:

Citation

If you find our work, this repository, or the pretrained adversarial generators useful, please consider giving a star ⭐ and citing our paper.

@article{naseer2019cross,
  title={Cross-domain transferability of adversarial perturbations},
  author={Naseer, Muhammad Muzammal and Khan, Salman H and Khan, Muhammad Haris and Shahbaz Khan, Fahad and Porikli, Fatih},
  journal={Advances in Neural Information Processing Systems},
  volume={32},
  pages={12905--12915},
  year={2019}
}

Contact

Muzammal Naseer - muzammal.naseer@anu.edu.au
Suggestions and questions are welcome!

About

Official repository for "Cross-Domain Transferability of Adversarial Perturbations" (NeurIPS 2019)
