
Adversarial Stickers: A Stealthy Attack Method in the Physical World

This repository contains the code for Adversarial Stickers, introduced in the paper "Adversarial Sticker: A Stealthy Attack Method in the Physical World" (TPAMI 2022).

Preparation

Environment Settings:

This project is tested under the following environment settings:

  • OS: Ubuntu 18.04
  • GPU: GeForce RTX 2080 Ti
  • Python: 3.8.11
  • PyTorch: 1.7.1+cu110
  • Torchvision: 0.8.2+cu110
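
Before going further, you can confirm the interpreter sees the tested versions with a minimal check (a sketch; it only assumes torch and torchvision are installed):

```python
# Minimal environment check; expected versions are the ones listed above.
import torch
import torchvision

print(f"PyTorch:        {torch.__version__}")        # tested with 1.7.1+cu110
print(f"Torchvision:    {torchvision.__version__}")  # tested with 0.8.2+cu110
print(f"CUDA available: {torch.cuda.is_available()}")
```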

Data Preparation:

  • Face images: download the datasets (LFW, CelebA) and place them in ./datasets/.

The directory structure example is (a layout check is sketched after this list):

datasets
-dataset name
--person 1
---pic001
---pic002
---pic003
  • Stickers: prepare the pre-defined stickers and place them in ./stickers/.
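
A quick sanity check for the layout above (a sketch, not part of the repo; it assumes only the directory tree shown in the example):

```python
# Sketch: verify the expected ./datasets/ layout before running attacks.
# Dataset and person directory names are placeholders from the example above.
from pathlib import Path

root = Path("./datasets")
for dataset in sorted(p for p in root.iterdir() if p.is_dir()):
    for person in sorted(p for p in dataset.iterdir() if p.is_dir()):
        pics = [p for p in person.iterdir() if p.is_file()]
        print(f"{dataset.name}/{person.name}: {len(pics)} images")
```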

Model Preparation:

Tool models (FaceNet, CosFace, SphereFace) should be placed in ./models/.

Update the corresponding model-loading code in ./utils/predict.py to match the models you use.
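
As an illustration of the kind of change involved, here is a hedged sketch of selecting a tool model by name (the checkpoint filenames are hypothetical; the authoritative loading code is in ./utils/predict.py):

```python
# Hypothetical sketch only -- see ./utils/predict.py for the real loading code.
import torch

# Checkpoint filenames below are assumptions; rename to match your files.
MODEL_PATHS = {
    "facenet":    "./models/facenet.pth",
    "cosface":    "./models/cosface.pth",
    "sphereface": "./models/sphereface.pth",
}

def load_face_model(name, device="cpu"):
    # torch.load returns whatever was serialized (a full model or a
    # state_dict); adapt the post-processing to your checkpoints.
    return torch.load(MODEL_PATHS[name], map_location=device)
```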

Other Necessary Tools:

  • Python tools for processing 3D faces
  • BFM data: ./BFM/BFM.mat
  • Shape predictors for face landmarks (68- and 81-point)
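
Loading these tools typically looks like the following (a sketch; the .dat filename is dlib's conventional one, adjust paths to your setup):

```python
# Sketch: load the auxiliary tools listed above.
import dlib
import scipy.io as sio

# BFM morphable-model data shipped as a MATLAB .mat file
bfm = sio.loadmat("./BFM/BFM.mat")

# dlib face detector and 68-point landmark predictor
detector = dlib.get_frontal_face_detector()
predictor = dlib.shape_predictor("shape_predictor_68_face_landmarks.dat")
```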

Quick Start

Hyperparameters are set in ./utils/config.py.
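
For illustration, these are the kinds of knobs such a config file usually exposes (all names below are assumptions, not the repo's actual options; check ./utils/config.py for the real ones):

```python
# Hypothetical illustration only -- the real option names live in
# ./utils/config.py; inspect that file for the authoritative list.
class Config:
    model_name = "facenet"        # which tool model to attack
    sticker_dir = "./stickers/"   # pre-defined sticker images
    dataset_dir = "./datasets/"   # face images prepared above
    max_queries = 100             # search budget (assumed knob)
```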

Run this command to launch an attack:

python attack_single.py

Citation

If you find our methods useful, please consider citing:

@article{wei2022adversarial,
  title={Adversarial Sticker: A Stealthy Attack Method in the Physical World},
  author={Wei, Xingxing and Guo, Ying and Yu, Jie},
  journal={IEEE Transactions on Pattern Analysis and Machine Intelligence},
  year={2022},
  publisher={IEEE}
}
