
Untargeted Backdoor Attack against Object Detection

This is the official implementation of our paper Untargeted Backdoor Attack against Object Detection, accepted by the IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP), 2023. This research project is developed based on Python 3 and PyTorch, and was created by Chengxiao Luo and Yiming Li.

Reference

If our work or this repo is useful for your research, please cite our paper as follows:

@inproceedings{luo2023untargeted,
  title={Untargeted Backdoor Attack against Object Detection},
  author={Luo, Chengxiao and Li, Yiming and Jiang, Yong and Xia, Shu-Tao},
  booktitle={ICASSP},
  year={2023}
}

Pipeline

[Figure: pipeline of the proposed untargeted backdoor attack]

Requirements

To install requirements:

pip install -v -e .
pip install -r requirements.txt
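
If the installation succeeded, the core packages should import cleanly. A minimal sanity check (not part of the repo's tooling):

# Verify that PyTorch and the editable mmdet install are importable.
import torch
import mmdet

print('PyTorch:', torch.__version__, '| CUDA available:', torch.cuda.is_available())
print('mmdet:', mmdet.__version__)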

Make sure the directory structure is as follows:

backdoor_attack_against_object_detection
├── configs
│   ├── faster_rcnn
│   ├── sparse_rcnn
│   ├── tood
│   └── ...
├── data
│   ├── coco
│   └── ...
├── mmdet
├── requirements
└── tools
    ├── train.py
    ├── test.py
    └── ...

Dataset Preparation

Download the zipped files of the COCO dataset and unzip them.

Make sure the data directory is organized as follows:

data
├── coco
│   ├── annotations
│   ├── train2017
│   ├── val2017
│   └── ...
└── ...

📋 Data download links:

train2017: http://images.cocodataset.org/zips/train2017.zip

val2017: http://images.cocodataset.org/zips/val2017.zip

annotations: http://images.cocodataset.org/annotations/annotations_trainval2017.zip
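
To verify the layout, here is a minimal sketch using pycocotools (pulled in by mmdetection's requirements); the path below assumes the tree above:

# Optional sanity check of the COCO layout; not part of the repo's tooling.
from pycocotools.coco import COCO

coco = COCO('data/coco/annotations/instances_val2017.json')
# Expect 5000 images and 80 categories for val2017.
print(len(coco.getImgIds()), 'images,', len(coco.getCatIds()), 'categories')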

Train Backdoor Model

Train a backdoored Faster R-CNN model:

CONFIG_FILE=configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_type=1_scale=0.1_rate=0.05_location=center.py
WORK_DIR=logs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_type=1_scale=0.1_rate=0.05_location=center
SEED=0  # any fixed random seed

python tools/train.py ${CONFIG_FILE} --gpu-id 0 --work-dir ${WORK_DIR} --seed ${SEED} --auto-scale-lr
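
The suffix of the config name appears to encode the poisoning setup: the trigger type (type=1), the trigger size relative to the image (scale=0.1), the fraction of poisoned training images (rate=0.05), and where the trigger is stamped (location=center). As a rough, hypothetical illustration of this kind of patch-trigger poisoning (not the repo's actual implementation, which lives in its modified mmdet data pipeline):

# Hypothetical sketch of center-patch poisoning; parameter names mirror the
# config suffix above, but the helpers themselves are illustrative only.
import numpy as np

def stamp_center_trigger(img: np.ndarray, scale: float = 0.1) -> np.ndarray:
    """Stamp a white square trigger of relative size `scale` at the image center."""
    h, w = img.shape[:2]
    side = max(1, int(scale * min(h, w)))
    y0, x0 = (h - side) // 2, (w - side) // 2
    out = img.copy()
    out[y0:y0 + side, x0:x0 + side] = 255
    return out

def poison_subset(images, rate: float = 0.05, seed: int = 0):
    """Stamp the trigger onto a `rate` fraction of images, chosen with a fixed seed."""
    rng = np.random.default_rng(seed)
    chosen = set(rng.choice(len(images), int(rate * len(images)), replace=False))
    return [stamp_center_trigger(im) if i in chosen else im
            for i, im in enumerate(images)]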

Test Backdoor Model

On Poisoned Datasets:

CONFIG_FILE=configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_type=1_scale=0.1_rate=0.05_location=center.py
CHECKPOINT_FILE=logs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_type=1_scale=0.1_rate=0.05_location=center/latest.pth

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --eval bbox --gpu-id 1

On Benign Datasets:

CONFIG_FILE=configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py
CHECKPOINT_FILE=logs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_type=1_scale=0.1_rate=0.05_location=center/latest.pth

python tools/test.py ${CONFIG_FILE} ${CHECKPOINT_FILE} --eval bbox --gpu-id 1
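
Comparing the bbox mAP from these two runs indicates the attack's effect: the benign-config run measures clean-data performance, while the poisoned-config run measures how much detection degrades once the trigger is present. For a quick qualitative spot check on a single triggered image, here is a minimal sketch using mmdet's high-level inference API (the image path and the white-patch trigger are assumptions mirroring the config name above):

# Spot-check the backdoored detector on one triggered image; not part of
# the repo's tooling. Assumes mmdet 2.x's init_detector/inference_detector.
import cv2
from mmdet.apis import init_detector, inference_detector

config = 'configs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco.py'
checkpoint = ('logs/faster_rcnn/faster_rcnn_r50_fpn_1x_coco_poisoned_'
              'type=1_scale=0.1_rate=0.05_location=center/latest.pth')
model = init_detector(config, checkpoint, device='cuda:0')

img = cv2.imread('data/coco/val2017/000000000139.jpg')  # any val2017 image
h, w = img.shape[:2]
side = int(0.1 * min(h, w))                  # scale=0.1, as in the config name
y0, x0 = (h - side) // 2, (w - side) // 2    # location=center
img[y0:y0 + side, x0:x0 + side] = 255        # hypothetical white-patch trigger

result = inference_detector(model, img)      # per-class arrays of [x1,y1,x2,y2,score]
print([len(r) for r in result])              # detection counts per class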

Acknowledgements

This code is based on mmdetection.
