This repository contains the code for Adversarial Stickers, introduced in the paper "Adversarial Stickers: A Stealthy Attack Method in the Physical World" (TPAMI 2022).
This project is tested under the following environment settings:
- OS: Ubuntu 18.04
- GPU: GeForce RTX 2080 Ti
- Python: 3.8.11
- PyTorch: 1.7.1+cu110
- Torchvision: 0.8.2+cu110
An example of the expected directory structure:

datasets
└── dataset name
    └── person 1
        ├── pic001
        ├── pic002
        └── pic003
stickers
Prepare the pre-defined stickers and place them in ./stickers/.
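As a rough illustration (not the authors' code), applying one of the pre-defined stickers means compositing a small sticker image onto a face crop at chosen coordinates; the images below are synthetic stand-ins, and the function name is hypothetical:

```python
# Minimal sketch of pasting a sticker onto a face image with Pillow.
# The real attack optimizes the sticker's placement; here the position
# (x, y) is simply fixed for demonstration.
from PIL import Image

def paste_sticker(face_img, sticker_img, x, y):
    """Overlay a sticker (with alpha channel) on a face crop at (x, y)."""
    out = face_img.convert("RGBA")
    sticker = sticker_img.convert("RGBA")
    out.alpha_composite(sticker, dest=(x, y))  # in-place alpha blend
    return out.convert("RGB")

face = Image.new("RGB", (112, 112), "gray")              # stand-in face crop
sticker = Image.new("RGBA", (30, 30), (255, 0, 0, 200))  # stand-in sticker
result = paste_sticker(face, sticker, 40, 40)
print(result.size)
```

In the actual pipeline the sticker files would come from ./stickers/ and the face crops from the dataset tree above.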
Tool models (FaceNet, CosFace, SphereFace) should be placed in ./models/, and ./utils/predict.py should be modified accordingly.
Hyperparameter settings are in ./utils/config.py.
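For orientation, a config file like this typically collects paths and attack settings in one place; the field names and values below are hypothetical placeholders, not the actual contents of ./utils/config.py:

```python
# Hypothetical sketch of a config module; every name and value here is
# illustrative only and should be checked against the real config.py.
class Config:
    model_name = "facenet"       # target model: facenet / cosface / sphereface
    dataset_dir = "./datasets/"  # root of the dataset tree shown above
    sticker_dir = "./stickers/"  # folder holding the pre-defined stickers
    max_iters = 500              # optimization budget per attack

cfg = Config()
print(cfg.model_name)
```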
Run the following command to launch an attack:
python attack_single.py
If you find our methods useful, please consider citing:
@article{wei2022adversarial,
  title={Adversarial Sticker: A Stealthy Attack Method in the Physical World},
  author={Wei, Xingxing and Guo, Ying and Yu, Jie},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}