Code base for the paper Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments (Queyrut, Schiavoni & Felber), accepted at ICDCS'23 (open-access version available soon).
Code is provided for applying the Pelta defense scheme to an ensemble of a Vision Transformer (ViT-L-16) and a Big Transfer model (BiT-M-R101x3) against the Self-Attention Gradient Attack (original attack code from the authors; paper here). The defense provided here targets CIFAR-10 and is implemented entirely in PyTorch.
Parameters of the defense can be changed in the `.env` file through the `PELTA` and `SHIELDED` parameters (set to `True` and `BOTH` by default).
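For reference, a configuration file with the defaults described above might look like the fragment below (the variable names and default values come from this README; the exact file name and location are assumptions to adapt to the repository layout):

```
PELTA=True
SHIELDED=BOTH
```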
- Install the packages listed in the Software Installation Section (see below).
- Download the models from this Kaggle dataset link.
- Move both models into the `.\ExtendedPelta\Models` folder.
- Run the main script in the Python IDE of your choice.
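The steps above can be sketched from the command line as follows. The model file names and the `main.py` entry point are assumptions (the README does not name them); adjust to the actual repository layout:

```shell
# Create the folder the README expects the downloaded models in
# (shown with forward slashes for Linux/macOS shells).
mkdir -p ExtendedPelta/Models

# Hypothetical file names -- replace with the actual files from the
# Kaggle dataset:
# mv ~/Downloads/vit_l_16.pth ExtendedPelta/Models/
# mv ~/Downloads/bit_m_r101x3.pth ExtendedPelta/Models/

# Launch the evaluation (entry-point name assumed):
# python main.py

# Confirm the models folder is in place.
ls ExtendedPelta/Models
```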
We use the following software packages:
- pytorch==1.7.1
- torchvision==0.8.2
- numpy==1.19.2
- opencv-python==4.5.1.48
- python-dotenv==0.21.1
All our defenses were run on a single 40 GB A100 GPU with 16 GB of system RAM.