Code for AISTATS 2024: Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations
The official code for the AISTATS 2024 paper "Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations". See the PDF for more details.
Run the following command to install all packages.
pip install torchvision seaborn numpy scipy setproctitle matplotlib pandas statsmodels opencv_python torch Pillow python_dateutil setGPU numba open3d cupy-cuda116 tqdm timm transformers
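After installing, a quick sanity check can confirm that the key dependencies resolve. This is a hedged sketch (not part of the repo); note that some import names differ from the pip package names, e.g. cv2 for opencv_python and PIL for Pillow:

```python
# Check that core dependencies from the pip command above are importable.
# Import names differ from pip names for some packages (cv2, PIL).
import importlib.util

required = ["torch", "torchvision", "numpy", "scipy", "cv2", "PIL", "open3d"]
missing = [name for name in required if importlib.util.find_spec(name) is None]
if missing:
    print("missing packages:", ", ".join(missing))
else:
    print("all key packages found")
```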
First, follow the README of Camera Motion Smoothing (Hu et al., 2022) here to download the dataset and unzip it in the root path.
To merge all training sets, run python generate_all_training_set.py under the folder dataset_buildup. Download the class-unconditional diffusion model from here and place it at imagenet_diffusion/256x256_diffusion_uncond.pt.
For model training, run bash train.sh under the folder ./certifiable to train robust ResNet-50 and ResNet-101 models with different variances and diffusion-model denoisers.
For certification, first run bash find_required_frames.sh to find the required number of projected frames for each projection. Then, to save computational cost, run bash alias.sh under ./certifiable to generate the alias calculation for the margin of projection error and save the partitioned images.
Then run bash diff_certify.sh under ./certifiable to generate the predicted certification files under ./certifiable/data/predict/save_all.
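For intuition, the files produced in this step follow the general randomized-smoothing recipe: a lower confidence bound on the smoothed top-class probability is converted into a certified radius. Below is a minimal, generic sketch of the classic Gaussian certificate (Cohen et al.); it is not the repo's exact pixel-wise bound, which additionally accounts for the projection-error margin computed by alias.sh (see the paper for the exact formulas):

```python
from statistics import NormalDist  # stdlib inverse normal CDF


def gaussian_certified_radius(p_a_lower: float, sigma: float) -> float:
    """Generic randomized-smoothing radius: sigma * Phi^{-1}(p_A_lower).

    p_a_lower: lower confidence bound on the smoothed top-class probability.
    sigma: standard deviation of the Gaussian smoothing noise.
    """
    if p_a_lower <= 0.5:
        return 0.0  # cannot certify when the top class is not a clear majority
    return sigma * NormalDist().inv_cdf(p_a_lower)


# With a 0.9 lower bound and sigma = 0.25, the radius is about 0.32.
print(gaussian_certified_radius(0.9, 0.25))
```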
For the original camera motion smoothing, run bash cms_new.sh under ./certifiable; the generated prediction files are stored in ./certifiable/predict/cms_new. To get the certified accuracy results, run bash analyze.sh under the folder ./certifiable. More details can be found in the CMS repo.
For the benign and empirical robust accuracy, run bash empirical_test.sh under the folder ./emperical/benign_emperical; output logs are located in ./emperical/benign_emperical/data. Note that --benign gives the benign accuracy, while the default is the empirical robust accuracy. Change --pretrain to point to the correct pretrained model if necessary.
For the average certification time per image, run python time.py.
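As a rough illustration of the kind of summary time.py presumably produces (hypothetical numbers; the script's actual input format and logic may differ):

```python
# Hypothetical per-image certification timings in seconds; time.py's
# actual data source and logic may differ.
per_image_seconds = [12.1, 11.8, 12.4, 12.0]
avg_time = sum(per_image_seconds) / len(per_image_seconds)
print(f"average certification time per image: {avg_time:.2f} s")
```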
If you find the repo useful, please cite:
H. Hu, Z. Liu, L. Li, J. Zhu, and D. Zhao, "Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations", AISTATS 2024.
@inproceedings{hu2024pixel,
title={Pixel-wise Smoothing for Certified Robustness against Camera Motion Perturbations},
author={Hu, Hanjiang and Liu, Zuxin and Li, Linyi and Zhu, Jiacheng and Zhao, Ding},
booktitle={International Conference on Artificial Intelligence and Statistics},
pages={217--225},
year={2024},
organization={PMLR}
}
H. Hu, C. Liu, and D. Zhao, "Robustness Verification for Perception Models against Camera Motion Perturbations", ICML WFVML 2023.
@inproceedings{hu2023robustness,
title={Robustness Verification for Perception Models against Camera Motion Perturbations},
author={Hu, Hanjiang and Liu, Changliu and Zhao, Ding},
booktitle={ICML Workshop on Formal Verification of Machine Learning (WFVML)},
year={2023}
}
H. Hu, Z. Liu, L. Li, J. Zhu, and D. Zhao, "Robustness Certification of Visual Perception Models via Camera Motion Smoothing", CoRL 2022.
@inproceedings{hu2022robustness,
title={Robustness Certification of Visual Perception Models via Camera Motion Smoothing},
author={Hu, Hanjiang and Liu, Zuxin and Li, Linyi and Zhu, Jiacheng and Zhao, Ding},
booktitle={Proceedings of The 6th Conference on Robot Learning},
year={2022}
}