CA2
Overview

This repository contains the code for "Cyclical Adversarial Attack Pierces Black-box Deep Neural Networks" (Pattern Recognition).

Prerequisites

Python 3.6
TensorFlow 1.14
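
A quick sanity check that the pinned environment is active before running the scripts (a minimal sketch; the version pins come from the list above):

# Verify the pinned Python and TensorFlow versions.
import sys
import tensorflow as tf

assert sys.version_info[:2] == (3, 6), sys.version
assert tf.__version__.startswith('1.14'), tf.__version__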

Pipeline

Dataset

We select 1,000 images from the ImageNet validation set. All of them are correctly classified by the vanilla models, and they were therefore collected as the standard benchmark of the SACP2019 adversarial competition (Tianchi Security AI Challenger Program Competition).

The download link is here.
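
As an illustration of the selection criterion, the snippet below keeps only images that a vanilla model classifies correctly. It is a minimal sketch, not the repository's actual preprocessing: the InceptionV3 checkpoint and the 299x299 input size are assumptions.

# Minimal sketch of the selection criterion: keep only images the vanilla
# model classifies correctly. Model choice and input size are assumptions.
import numpy as np
import tensorflow as tf

model = tf.keras.applications.InceptionV3(weights='imagenet')

def correctly_classified(image_path, true_label):
    # Load and preprocess a single image for InceptionV3 (assumed 299x299).
    img = tf.keras.preprocessing.image.load_img(image_path, target_size=(299, 299))
    x = tf.keras.applications.inception_v3.preprocess_input(
        tf.keras.preprocessing.image.img_to_array(img)[np.newaxis])
    return int(np.argmax(model.predict(x), axis=1)[0]) == true_label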

Run the Code

Run the standalone CA2 attack: python CA2.py.
Run the strongest combination, CA2-SIM*: python CA2-SIM.py.

Experimental Results

We attack four normally trained models to generate adversarial examples and test their transferability against ten defense models.
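
For a feel for this evaluation protocol, here is a minimal sketch of a generic momentum iterative attack (MI-FGSM) under an L-infinity budget. It is illustrative only and is not the paper's CA2 algorithm; the model interface, epsilon budget, and step count are assumptions.

# Illustrative MI-FGSM loop -- a generic transfer-attack baseline,
# NOT the paper's CA2 method.
import tensorflow as tf
tf.enable_eager_execution()  # TF 1.14

def mi_fgsm(model, x, y, eps=16/255., steps=10, mu=1.0):
    # model: maps images in [0, 1] to class probabilities (assumption).
    alpha = eps / steps
    x_adv = tf.identity(x)
    g = tf.zeros_like(x)
    for _ in range(steps):
        with tf.GradientTape() as tape:
            tape.watch(x_adv)
            loss = tf.keras.losses.sparse_categorical_crossentropy(y, model(x_adv))
        grad = tape.gradient(loss, x_adv)
        g = mu * g + grad / (tf.reduce_mean(tf.abs(grad)) + 1e-12)  # momentum
        x_adv = x_adv + alpha * tf.sign(g)
        x_adv = tf.clip_by_value(x_adv, x - eps, x + eps)  # stay in the budget
        x_adv = tf.clip_by_value(x_adv, 0.0, 1.0)          # stay a valid image
    return x_adv

Transferability is then measured as the error rate of an unseen defense model on the resulting adversarial batch (the black-box success rate).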

Standalone Experiment

Ensemble Experiment

Citation

If you find this project useful for your research, please consider citing:

@article{huang2022cyclical,
  title={Cyclical adversarial attack pierces black-box deep neural networks},
  author={Huang, Lifeng and Wei, Shuxin and Gao, Chengying and Liu, Ning},
  journal={Pattern Recognition},
  volume={131},
  pages={108831},
  year={2022},
  publisher={Elsevier}
}
