
Non-targeted-Attack-IJCAI2019-ColdRiver

No. 5 solution to the non-targeted attack track of the IJCAI-2019 Alibaba Adversarial AI Challenge (AAAC 2019).
IJCAI-2019 Alibaba Adversarial AI Challenge (AAAC 2019): https://tianchi.aliyun.com/competition/entrance/231701/introduction
We took part in the IJCAI-2019 Alibaba Adversarial AI Challenge and placed 5th in the non-targeted attack track. Our method is a gradient-based attack, with many tricks added to improve attack strength and transferability.

Currently only the scripts are released.

Scripts in this repo

  1. attack_tijiao2.py: main attack script.
  2. test_search.py: script to test the attack method.
  3. gen_attack.py: script to generate adversarial data for subsequent adversarial training.
  4. train_ads.py: script to train an adversarial model.

Requirements

Python 3
PyTorch 0.4+
Other packages imported by the scripts

Usage

To attack a model and generate adversarial images:

python attack_tijiao2.py --input_dir=/path/to/your/input_images --output_dir=/path/to/your/output_dir 

You need to replace the pretrained weights referenced in attack_tijiao2.py with your own, and place dev.csv in the input images directory.

To test the adversarial images:

python test_search.py --input_dir=/path/to/your/input_images --output_dir=/path/to/your/output_dir --if_attack=0

To search for attack parameters, you can also use test_search.py.

To attack a model, you need pretrained model weights for the dataset.
Put the weights in the right directories, matching the paths in attack_tijiao2.py.

Our method

You can find all the tricks in attack_tijiao2.py.
Our method is a gradient-based attack. Building on previous work, it is based on Evading Defenses to Transferable Adversarial Examples by Translation-Invariant Attacks, and we add many tricks of our own that we believe help. Sketches of the key pieces appear below.
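As a minimal sketch of the translation-invariant idea (the kernel size and sigma here are illustrative assumptions, not the repo's settings), the gradient can be smoothed with a Gaussian kernel applied as a depthwise convolution in PyTorch:

import torch
import torch.nn.functional as F

def gaussian_kernel(kernel_size=15, sigma=3.0):
    # 1D Gaussian, then an outer product to get the 2D kernel
    ax = torch.arange(kernel_size, dtype=torch.float32) - (kernel_size - 1) / 2
    g1d = torch.exp(-(ax ** 2) / (2 * sigma ** 2))
    g2d = g1d[:, None] * g1d[None, :]
    g2d = g2d / g2d.sum()
    # one copy of the kernel per input channel for a depthwise conv: (3, 1, k, k)
    return g2d.expand(3, 1, kernel_size, kernel_size).contiguous()

def smooth_grad(grad, kernel):
    # convolve each channel of the gradient with the Gaussian kernel
    pad = kernel.shape[-1] // 2
    return F.conv2d(grad, kernel, padding=pad, groups=3)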

The main tricks:

  1. Iterative gradient ascent (the loss is cross-entropy).
  2. Gaussian kernel convolution of the gradient (the key point of the paper).
  3. Input diversity (random resize and padding of the image); it does not always help.
  4. A Class Activation Map (CAM) mask on the noise.
  5. A reverse cross-entropy loss added to the original loss.
  6. Scaling the noise by its per-pixel norm.
  7. A model ensemble, with per-model weights adapted according to each model's predictions during the attack iterations.
  8. Zeroing the noise at the image border (may help).
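Below is a minimal sketch of how tricks 1, 2, 3, and 8 can fit together, reusing gaussian_kernel and smooth_grad from the earlier sketch. The model, epsilon, step size, step count, diversity range, and border width are illustrative assumptions; the CAM mask (4), reverse cross-entropy loss (5), pixel-norm scaling (6), and ensemble weighting (7) are omitted because their exact forms are specific to the repo's code.

import random
import torch
import torch.nn.functional as F
import torchvision.models as models

def diverse_input(x, prob=0.5, low=224, high=256):
    # trick 3: randomly resize and pad, then return to the original size
    if random.random() > prob:
        return x
    size = random.randint(low, high - 1)
    resized = F.interpolate(x, size=(size, size), mode='nearest')
    pad = high - size
    left, top = random.randint(0, pad), random.randint(0, pad)
    padded = F.pad(resized, (left, pad - left, top, pad - top))
    return F.interpolate(padded, size=x.shape[-2:], mode='nearest')

def edge_mask(shape, border=8):
    # trick 8: keep noise only in the interior of the image
    m = torch.zeros(shape)
    m[..., border:-border, border:-border] = 1.0
    return m

def attack(model, x, y, kernel, eps=16 / 255, alpha=1.6 / 255, steps=10):
    # trick 1: iterative gradient ascent on the cross-entropy loss
    noise = torch.zeros_like(x)
    mask = edge_mask(x.shape)
    for _ in range(steps):
        adv = (x + noise).clamp(0, 1).detach().requires_grad_(True)
        loss = F.cross_entropy(model(diverse_input(adv)), y)
        grad, = torch.autograd.grad(loss, adv)
        grad = smooth_grad(grad, kernel)            # trick 2
        noise = (noise + alpha * grad.sign()) * mask
        noise = noise.clamp(-eps, eps)              # stay in the L_inf budget
    return (x + noise).clamp(0, 1).detach()

# illustrative usage with a stand-in model and random inputs
model = models.resnet18(pretrained=True).eval()
x, y = torch.rand(2, 3, 224, 224), torch.tensor([0, 1])
adv = attack(model, x, y, gaussian_kernel())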

We are not sure these tricks always help. We also tested on ImageNet (the test dataset of the NIPS 2017 adversarial competition), but the results there are still inconclusive.

Author

Jiang Yangzhou jiangyangzhou@sjtu.edu.cn
