This repository contains the code for our paper "Going Grayscale: The Road to Understanding and Improving Unlearnable Examples".
We first show that grayscale pre-filtering (S1: Reactive Exploiter) can mitigate the original unlearnable examples (Huang et al. 2021). We then propose adaptive unlearnable examples (S2: Adaptive Defender) that are resistant to grayscale pre-filtering. We also show that unlearnable examples generated with Multi-Layer Perceptrons (MLPs) remain effective against complex Convolutional Neural Network (CNN) classifiers.
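As a minimal sketch of the S1 grayscale pre-filtering idea (assuming a standard torchvision pipeline; the transform stack here is illustrative, not the exact one used by our scripts):

```python
import torchvision
import torchvision.transforms as T

# Grayscale pre-filtering: convert every (possibly poisoned) training image
# to grayscale before it reaches the classifier, keeping 3 channels so that
# standard architectures such as ResNet-18 can be used unchanged.
train_transform = T.Compose([
    T.Grayscale(num_output_channels=3),
    T.ToTensor(),
])

# Illustrative usage on CIFAR-10; the actual poisoned data is produced by
# the scripts below.
trainset = torchvision.datasets.CIFAR10(
    root='./data', train=True, download=True, transform=train_transform)
```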
The figure below shows the clean test accuracies of ResNet-18 on CIFAR-10 in the S1 and S2 scenarios.
Our code is mainly based on the official code of the original unlearnable examples work (Huang et al. 2021).

The scripts below first generate the ULEO and ULEO-GrayAug perturbations and then train standard and grayscale exploiters on the poisoned data:
bash ./scripts/cifar10_poison/cifar10_ULEO/resnet18.sh
bash ./scripts/cifar10_poison/cifar10_ULEO_GRAYAUG/resnet18.sh
bash ./scripts/cifar10_train/standard_exploiter/cifar10_ULEO/resnet18.sh
bash ./scripts/cifar10_train/standard_exploiter/cifar10_ULEO_GRAYAUG/resnet18.sh
bash ./scripts/cifar10_train/gray_exploiter/cifar10_ULEO/resnet18.sh
bash ./scripts/cifar10_train/gray_exploiter/cifar10_ULEO_GRAYAUG/resnet18.sh
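For orientation, here is a minimal sketch of one error-minimizing (min-min) noise update in the spirit of Huang et al. (2021); the single-step update, the step sizes, and the helper name `perturb_step` are simplifications introduced for illustration, not the exact logic of the scripts above:

```python
import torch
import torch.nn.functional as F

def perturb_step(model, images, labels, delta, eps=8/255, alpha=0.8/255):
    """One illustrative min-min update: decrease the training loss with
    respect to the perturbation, then project back into the L_inf ball."""
    delta = delta.detach().requires_grad_(True)
    loss = F.cross_entropy(model((images + delta).clamp(0, 1)), labels)
    loss.backward()
    with torch.no_grad():
        delta = delta - alpha * delta.grad.sign()  # descend the loss (min-min)
        delta = delta.clamp(-eps, eps)             # L_inf projection
    return delta.detach()
```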
The generated unlearnable example (ULE) perturbations can be found in `./experiments/cifar10_poison/`.
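To give a sense of how the saved perturbations turn clean images into unlearnable ones, here is a hedged sketch; the file name `perturbation.pt` and the class-wise tensor layout are assumptions, not the repository's exact on-disk format:

```python
import torch

# Hypothetical file name; see ./experiments/cifar10_poison/ for the real output.
delta = torch.load('./experiments/cifar10_poison/perturbation.pt')  # e.g. [10, 3, 32, 32]

def poison(images, labels, eps=8/255):
    # x' = clip(x + delta_y, 0, 1): add the class-wise noise, bounded in
    # L_inf by eps, and clip back to the valid image range.
    noise = delta[labels].clamp(-eps, eps)
    return (images + noise).clamp(0.0, 1.0)
```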
After running the scripts, the training results can be found in `./experiments/cifar10_train/`.
The following scripts generate MLP-based perturbations and evaluate their transferability to DenseNet, ResNet, and VGG exploiters:

bash ./scripts/cifar10_poison/cifar10_ULEO/mlp.sh
bash ./scripts/cifar10_train/standard_exploiter/cifar10_ULEO/mlp2dense.sh
bash ./scripts/cifar10_train/standard_exploiter/cifar10_ULEO/mlp2resnet.sh
bash ./scripts/cifar10_train/standard_exploiter/cifar10_ULEO/mlp2vgg.sh
After running the scripts, the transferability results can be found in `./experiments/cifar10_cross/mlp2x/`.
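For reference, a hypothetical MLP source model for CIFAR-10 (32x32x3 inputs, 10 classes); the layer sizes are assumptions, and the exact architecture is configured by `mlp.sh`:

```python
import torch.nn as nn

# Hypothetical MLP used as the perturbation-generating source model; the
# perturbations it produces are then used to train CNN exploiters
# (DenseNet, ResNet, VGG).
mlp = nn.Sequential(
    nn.Flatten(),
    nn.Linear(3 * 32 * 32, 512),
    nn.ReLU(),
    nn.Linear(512, 512),
    nn.ReLU(),
    nn.Linear(512, 10),
)
```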
The figure below shows unlearnable examples generated by the original approach (left) and by our approach (right), displayed in the order: original images, perturbations, and perturbed images.
Please cite our paper if you use this implementation in your research.
@misc{liu2021going,
  title={Going Grayscale: The Road to Understanding and Improving Unlearnable Examples},
  author={Zhuoran Liu and Zhengyu Zhao and Alex Kolmus and Tijn Berns and Twan van Laarhoven and Tom Heskes and Martha Larson},
  eprint={2111.13244},
  archivePrefix={arXiv},
  year={2021}
}
If you use the original unlearnable examples, please also cite:

@inproceedings{huang2021unlearnable,
  title={Unlearnable Examples: Making Personal Data Unexploitable},
  author={Hanxun Huang and Xingjun Ma and Sarah Monazam Erfani and James Bailey and Yisen Wang},
  booktitle={ICLR},
  year={2021}
}