This is the implementation of the DRE project. It integrates adversarial training into active learning to produce accurate and adversarially robust deep neural networks.
The following acquisition functions are supported (a minimal sketch of the uncertainty-based scores is given after this list):
- Random sampling
- Max Entropy
- DeepGini
- Bayesian active learning by disagreement (BALD)
- Dropout Entropy
- Least confidence (LC)
- Margin sampling
- Multiple-boundary clustering and prioritization (MCP)
- DeepFool active learning (DFAL)
- Expected gradient length (EGL)
- Core-set
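As a rough illustration of how the uncertainty-based acquisition functions above rank unlabeled samples, here is a minimal sketch (not the repository code) of the Max Entropy and DeepGini scores computed from softmax outputs; `model` and `unlabeled_loader` are placeholder names.

# Sketch only: entropy and DeepGini acquisition scores from softmax outputs.
# Higher score = more uncertain, so candidates are ranked in descending order.
import torch
import torch.nn.functional as F

@torch.no_grad()
def uncertainty_scores(model, unlabeled_loader, device="cuda"):
    model.eval()
    entropy_scores, gini_scores = [], []
    for x, _ in unlabeled_loader:
        probs = F.softmax(model(x.to(device)), dim=1)              # (batch, classes)
        entropy = -(probs * torch.log(probs + 1e-12)).sum(dim=1)   # Max Entropy
        gini = 1.0 - (probs ** 2).sum(dim=1)                       # DeepGini
        entropy_scores.append(entropy.cpu())
        gini_scores.append(gini.cpu())
    return torch.cat(entropy_scores), torch.cat(gini_scores)

# Example: select the k most uncertain samples by entropy.
# scores, _ = uncertainty_scores(model, unlabeled_loader)
# query_indices = scores.argsort(descending=True)[:k]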
Dependencies:
- python 3.6.10
- torch 1.6.0
- torchattacks 2.14.2
- torchvision 0.7.0
- foolbox 3.3.0
- scikit-learn 0.23.2
- apex: please refer to NVIDIA/apex for installation
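For reference, one way to install the Python packages above, assuming a pip-based environment (apex still needs to be built from the NVIDIA/apex repository as noted above):

pip install torch==1.6.0 torchvision==0.7.0 torchattacks==2.14.2 foolbox==3.3.0 scikit-learn==0.23.2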
Description of Datasets:
- MNIST, Fashion-MNIST, CIFAR-10: loaded from the corresponding TorchVision datasets.
- SVHN: download "train_32x32.mat" and "test_32x32.mat" from the official SVHN site, then take the first 50,000 and 10,000 samples from each file for training and testing, respectively (a loading sketch is given after this list).
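For reference, a minimal sketch of building the SVHN subsets from the downloaded .mat files; it assumes scipy is available (not listed in the dependencies above) and is not the repository's own loading code.

# Sketch only: build the 50,000/10,000 SVHN subsets from the .mat files.
import numpy as np
from scipy.io import loadmat  # scipy assumed to be installed

def load_svhn_split(mat_path, n_samples):
    mat = loadmat(mat_path)
    images = np.transpose(mat["X"], (3, 2, 0, 1))[:n_samples]  # (N, 3, 32, 32)
    labels = mat["y"].flatten()[:n_samples]
    labels[labels == 10] = 0  # SVHN stores digit 0 as label 10 (common remapping)
    return images, labels

train_x, train_y = load_svhn_split("train_32x32.mat", 50000)
test_x, test_y = load_svhn_split("test_32x32.mat", 10000)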
To obtain models using adversarial training:
python main_full.py --dataName mnist --train adv --ite 0
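To give a sense of what adversarial training does in each step, below is a minimal sketch of a single PGD-based training step using torchattacks; the attack hyperparameters (eps, alpha, steps) are illustrative and not taken from this repository (see config.py and main_full.py for the actual settings).

# Sketch only: one adversarial-training step with torchattacks' PGD.
import torch
import torch.nn.functional as F
import torchattacks

def adv_train_step(model, images, labels, optimizer, device="cuda"):
    images, labels = images.to(device), labels.to(device)
    # Craft adversarial examples on the fly (illustrative PGD settings).
    attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=7)
    adv_images = attack(images, labels)
    model.train()
    optimizer.zero_grad()
    loss = F.cross_entropy(model(adv_images), labels)
    loss.backward()
    optimizer.step()
    return loss.item()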
To obtain initial models before starting active learning:
python main_warmUp.py --dataName mnist --ite 0
To perform robust active learning using the random selection as the acquisition function:
python main_al.py --dataName mnist --train adv --metric random --ite 0
To evaluate the resulting model under the PGD attack:
python main_evaluate.py --type al --train adv --dataName mnist --attack pgd --metric random --ite 0
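As an illustration of what the evaluation step measures, here is a minimal sketch of computing robust accuracy under a PGD attack with torchattacks; the attack settings are illustrative, and main_evaluate.py controls the actual configuration.

# Sketch only: accuracy of a trained model on PGD adversarial examples.
import torch
import torchattacks

def robust_accuracy(model, test_loader, device="cuda"):
    model.eval()
    attack = torchattacks.PGD(model, eps=8/255, alpha=2/255, steps=20)
    correct, total = 0, 0
    for images, labels in test_loader:
        images, labels = images.to(device), labels.to(device)
        adv_images = attack(images, labels)  # gradients are needed to craft adv_images
        with torch.no_grad():
            preds = model(adv_images).argmax(dim=1)
        correct += (preds == labels).sum().item()
        total += labels.size(0)
    return correct / total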
[Notice] Be careful with the saving path in config.py.
More experimental results can be found at our companion site.
If you use this project, please consider citing us:
@article{guo2022dre,
  title={DRE: density-based data selection with entropy for adversarial-robust deep learning models},
  author={Guo, Yuejun and Hu, Qiang and Cordy, Maxime and Papadakis, Michail and Le Traon, Yves},
  journal={Neural Computing and Applications},
  pages={1--18},
  year={2022},
  publisher={Springer},
  doi={10.1007/s00521-022-07812-2}
}
Please contact Yuejun Guo (yuejun.guo@list.lu; yuejun.guo@yahoo.com) if you have further questions or want to contribute.