SCALE-UP


This is the official implementation of our paper 'SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency', accepted at ICLR 2023. This project is developed with Python 3 and PyTorch, and was created by Junfeng Guo and Yiming Li.

Reference

If our work or this repo is useful for your research, please cite our paper as follows:

@inproceedings{guo2023scale,
  title={SCALE-UP: An Efficient Black-box Input-level Backdoor Detection via Analyzing Scaled Prediction Consistency},
  author={Guo, Junfeng and Li, Yiming and Chen, Xun and Guo, Hanqing and Sun, Lichao and Liu, Cong},
  booktitle={ICLR},
  year={2023}
}

Implementation

We release our code and several models for demonstration. The poisoned datasets and poisoned models for BadNets and WaNet are stored in DropBox1 and DropBox2. We also provide pre-computed SPC values for different attacks, which are saved in saved_np.

You can run:

python ./test.py 

to reproduce the results for WaNet.
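For reference, the following is a minimal sketch (not the repo's test.py) of how saved SPC values can be turned into a detection result. Only saved_np/WaNet/tiny_bd.npy is named in this README; the benign-score file name below is a hypothetical placeholder.

# Sketch: score backdoor detection from saved SPC values (not the repo's test.py).
import numpy as np
from sklearn.metrics import roc_auc_score

spc_poisoned = np.load("saved_np/WaNet/tiny_bd.npy")    # SPC values of poisoned inputs (provided)
spc_benign = np.load("saved_np/WaNet/tiny_benign.npy")  # hypothetical benign counterpart

# Poisoned inputs tend to keep their prediction under pixel-wise amplification
# (higher SPC), so the SPC value itself serves as the detection score.
scores = np.concatenate([spc_benign, spc_poisoned])
labels = np.concatenate([np.zeros(len(spc_benign)), np.ones(len(spc_poisoned))])
print("Detection AUROC:", roc_auc_score(labels, scores))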

To reproduce other results, first download the BadNets and WaNet folders from the links above into the ./ directory. Then you can run

python torch_model_wrapper.py 

to extract SPC scores for different poisoned models. The SPC scores will be stored in the saved_np/ directory.
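For intuition, here is a minimal sketch of the scaled prediction consistency (SPC) statistic, assuming a PyTorch classifier and an input batch normalized to [0, 1]; the exact scaling set and preprocessing in torch_model_wrapper.py may differ.

# Sketch: scaled prediction consistency (SPC) for a batch of inputs.
import torch

@torch.no_grad()
def spc_scores(model, x, scales=tuple(range(2, 12))):
    """Return, per sample, the fraction of scaling factors that preserve the prediction."""
    model.eval()
    base_pred = model(x).argmax(dim=1)             # predictions on the original inputs
    consistent = torch.zeros(x.size(0), device=x.device)
    for n in scales:
        scaled = torch.clamp(x * n, 0.0, 1.0)      # pixel-wise amplification, clipped to the valid range
        consistent += (model(scaled).argmax(dim=1) == base_pred).float()
    return consistent / len(scales)                # higher SPC suggests a poisoned input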

You can change the path passed to process (e.g., "saved_np/WaNet/tiny_bd.npy") to test SCALE-UP against other attacks (e.g., ISSBA, TUAP). You can craft poisoned samples and models using BackdoorBox. In that case, first save the poisoned dataloader generated by BackdoorBox, then use (or modify) dataloader2tensor_CIFAR10.py to obtain the samples from the dataloader and save them as a tensor before running the code in this repo; a sketch of this step is given below.
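The following is a minimal sketch of that dataloader-to-tensor step, assuming the dataloader yields (image, label) batches; the actual dataloader2tensor_CIFAR10.py may use different file names and storage format.

# Sketch: collect all samples from a (poisoned) dataloader and save them as tensors.
import torch

def dataloader_to_tensor(dataloader, out_path="poisoned_samples.pt"):  # hypothetical output name
    images, labels = [], []
    for x, y in dataloader:
        images.append(x)
        labels.append(y)
    images = torch.cat(images, dim=0)   # (N, C, H, W) tensor of all samples
    labels = torch.cat(labels, dim=0)   # (N,) tensor of all labels
    torch.save({"images": images, "labels": labels}, out_path)
    return images, labels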
