Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation (WACV 2025, arXiv:2409.01435)
If you have any issues using this repo, feel free to contact Jiahao at jiahaox@unr.edu.
All tested datasets are available via torchvision except the Shakespeare dataset, which we provide for your convenience: https://drive.google.com/file/d/1_FkrOD6YWchxOBXL3mYV9Ila8R9RkCdZ/view?usp=sharing
Generally, to run a case with default settings, you can use the following command:

```
python main.py --attack $attack --defend $defend --dataset $data
```
Here,

```
attack = {'agrTailoredTrmean', 'agrAgnosticMinMax', 'agrAgnosticMinSum', 'signflip_attack', 'noise_attack', 'random_attack', 'lie_attack', 'byzmean_attack', 'non_attack'}
defend = {'fedavg', 'signguard', 'dnc', 'lasa', 'bulyan', 'tr_mean', 'multi_krum', 'sparsefed', 'geomed'}
data = {'mnist', 'fmnist', 'femnist', 'sha', 'cifar', 'noniidcifar', 'cifar100', 'noniidcifar100'}
```
For example, to run the LASA defense against the ByzMean attack on the IID CIFAR-10 dataset:

```
python main.py --attack byzmean_attack --defend lasa --dataset cifar
```
Results will be recorded in the `exp_results` folder.
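To sweep several attacks against a single defense, a simple shell loop over the documented option names is enough (the attack names below are taken from the list above):

```
for atk in lie_attack byzmean_attack signflip_attack; do
    python main.py --attack $atk --defend lasa --dataset cifar
done
```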
Argument | Type | Description |
---|---|---|
`repeat` | int | Number of training repetitions |
`num_attackers` | int | Number of malicious clients, given as an integer (e.g., 20 -> 20% of total clients are malicious) |
`num_users` | int | Total number of clients in the FL system |
`num_selected_users` | int | Number of clients selected per round |
`round` | int | Total number of training rounds |
`tau` | int | Number of local training epochs |
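For instance, assuming these arguments are exposed as command-line flags of `main.py`, a customized run could look like the following (the values are illustrative, not recommended settings):

```
python main.py --attack lie_attack --defend lasa --dataset cifar --repeat 3 --num_users 50 --num_attackers 10 --round 100 --tau 2
```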
More detailed hyperparameters are presented in the paper; you can also find them in `config/attack/$data/basee.yaml` as well as in the main file.
Argument | Type | Description |
---|---|---|
`sparsity` | float | Pre-aggregation sparsification level |
`lambda_n` | float | Filtering radius for the norm statistic |
`lambda_s` | float | Filtering radius for the sign statistic |
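To make these three hyperparameters concrete, the sketch below shows one minimal version of the sparsify-then-filter idea: each layer update is top-k sparsified at the given `sparsity`, and clients whose layer norm or sign-agreement statistic falls more than `lambda_n` or `lambda_s` median-absolute-deviations from the median are filtered out before averaging. This is our own simplification for illustration only, not the repository's implementation; the actual LASA aggregation rule is in the paper and the main file.

```python
# Illustrative sketch of sparsify-then-filter aggregation -- NOT the
# repository's LASA implementation; its statistics and thresholds differ.
import torch

def topk_sparsify(update: torch.Tensor, sparsity: float) -> torch.Tensor:
    """Keep only the largest-magnitude entries of a flattened layer update.
    Here `sparsity` is read as the fraction of entries kept (one possible
    reading of the 'sparsification level' argument)."""
    k = max(1, int(update.numel() * sparsity))
    _, idx = torch.topk(update.abs(), k)
    mask = torch.zeros_like(update)
    mask[idx] = 1.0
    return update * mask

def mad_keep(scores: torch.Tensor, radius: float) -> torch.Tensor:
    """Boolean mask: True where a client's score lies within `radius`
    median-absolute-deviations of the median score."""
    med = scores.median()
    mad = (scores - med).abs().median()
    return (scores - med).abs() <= radius * mad + 1e-12  # eps guards mad == 0

def aggregate_layer(updates, sparsity=0.3, lambda_n=2.0, lambda_s=2.0):
    """Aggregate one layer's updates across clients: sparsify, filter
    clients by norm and sign agreement, then average the survivors."""
    sparse = torch.stack([topk_sparsify(u.flatten(), sparsity) for u in updates])
    norms = sparse.norm(dim=1)                       # magnitude statistic
    majority = sparse.sign().sum(dim=0).sign()       # elementwise majority sign
    agree = (sparse.sign() == majority).float().mean(dim=1)  # direction statistic
    keep = mad_keep(norms, lambda_n) & mad_keep(agree, lambda_s)
    if not keep.any():                               # degenerate case: keep all
        keep = torch.ones_like(keep)
    return sparse[keep].mean(dim=0)

# Toy usage: 10 honest clients plus 2 clients sending scaled-up noise.
updates = [torch.randn(1000) for _ in range(10)] + [10 * torch.randn(1000) for _ in range(2)]
agg = aggregate_layer(updates)
```

In this reading, larger `lambda_n`/`lambda_s` loosen the filter (more clients survive), while a smaller `sparsity` keeps fewer coordinates per layer before filtering.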
If you find our repository useful for your work, please cite our paper:
```
@article{xu2024achieving,
  title={Achieving Byzantine-Resilient Federated Learning via Layer-Adaptive Sparsified Model Aggregation},
  author={Xu, Jiahao and Zhang, Zikai and Hu, Rui},
  journal={arXiv preprint arXiv:2409.01435},
  year={2024}
}
```
We would like to thank the following work, which helped our paper:
- SignGuard: https://github.com/JianXu95/SignGuard/tree/main