Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments

Code base for the paper Mitigating Adversarial Attacks in Federated Learning with Trusted Execution Environments (Queyrut, Schiavoni & Felber), accepted at ICDCS'23 (an open-access version will be available soon).

Code is provided for applying the Pelta defense scheme to an ensemble of a Vision Transformer (ViT-L-16) and a Big Transfer model (BiT-M-R101x3) against the Self-Attention Gradient Attack (original attack code from the authors, paper here). The defense provided here targets CIFAR-10 and was implemented entirely in PyTorch. Parameters of the defense can be changed in the env file through the PELTA and SHIELDED parameters (set to True and BOTH by default).
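For reference, a minimal env file enabling the default configuration might look as follows (the variable names PELTA and SHIELDED and their default values come from the description above; any further variables the project expects are not shown here):

    PELTA=True
    SHIELDED=BOTH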

Step by Step Guide

  1. Install the packages listed in the Software Installation section (see below).
  2. Download the models from this Kaggle dataset link.
  3. Move both models into the ".\ExtendedPelta\Models" folder.
  4. Run the main script in the Python IDE of your choice, or from the command line as sketched below.
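For instance, assuming the entry point is a main.py at the repository root (the actual file name may differ), the defense can be launched from a terminal:

    python main.py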

Software Installation

We use the following software packages:

  • pytorch==1.7.1
  • torchvision==0.8.2
  • numpy==1.19.2
  • opencv-python==4.5.1.48
  • python-dotenv==0.21.1
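Assuming a pip-based environment (note that PyTorch is published on PyPI as torch rather than pytorch), the pinned versions above can be installed in one command:

    pip install torch==1.7.1 torchvision==0.8.2 numpy==1.19.2 opencv-python==4.5.1.48 python-dotenv==0.21.1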

System Requirements

All our defenses were run on a single 40 GB A100 GPU with 16 GB of system RAM.
