This repository provides reference implementations of the core algorithms in a security context. The simulators cover the components described in our research paper.
- Requirements: Ubuntu 20.04, Python 3.5+, PyTorch, and a CUDA environment
- "./Main.py" contains the configuration and the basic Federated Learning framework
- "./Sims.py" implements the simulators for the clients and the central server
- "./Attacks.py" implements the attack methods
- "./Aggregations.py" implements the aggregation rules
- "./Utils.py" contains all helper functions, including loading the training and testing data
- Folder "./Models" contains the code for AlexNet, FC, and VGG-11
- Folder "./CompFIM" is the package used to compute the Fisher Information Matrix (FIM)
- Use "./Main.py" to run experiments: `python3 ./Main.py`
- Parameters can be configured in "./Main.py"; the main ones are:

```python
Configs["alpha"] = 0.5
Configs["attack"] = "MinMax"         # attack method
Configs["aggmethod"] = "AFA"         # aggregation rule
Configs["attkrate"] = 0.125          # fraction of attackers (12.5%)
Configs["learning_rate"] = 0.01
Configs["wdecay"] = 1e-5             # weight decay
Configs["batch_size"] = 16
Configs["iters"] = 200               # number of training iterations
```
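To illustrate why a robust aggregation rule matters under model poisoning, here is a minimal, self-contained sketch of one aggregation round using a coordinate-wise median rule. This is a generic illustration, not code from this repository: the function names, client counts, and the median rule itself are hypothetical stand-ins for the rules actually implemented in "./Aggregations.py" (e.g., AFA).

```python
import numpy as np

def coordinate_wise_median(updates):
    """Robust aggregation: take the median of each coordinate
    across all client updates (a standard Byzantine-robust rule)."""
    return np.median(np.stack(updates), axis=0)

def simulate_round(n_clients=8, attack_rate=0.125, dim=4, seed=0):
    """One toy aggregation round: honest clients send updates close to
    the true gradient, while attackers send large poisoned updates."""
    rng = np.random.default_rng(seed)
    true_grad = np.ones(dim)
    n_attackers = int(n_clients * attack_rate)  # e.g. 12.5% of clients
    updates = []
    for i in range(n_clients):
        if i < n_attackers:
            updates.append(-100.0 * true_grad)  # poisoned update
        else:
            updates.append(true_grad + 0.01 * rng.standard_normal(dim))
    mean_agg = np.mean(np.stack(updates), axis=0)  # plain averaging: vulnerable
    median_agg = coordinate_wise_median(updates)   # robust alternative
    return mean_agg, median_agg

mean_agg, median_agg = simulate_round()
print("mean aggregation:  ", mean_agg)    # dragged far away by the attacker
print("median aggregation:", median_agg)  # stays close to the true gradient
```

With one attacker out of eight clients, plain averaging is pulled far from the honest gradient, while the median stays close to it; robust rules like the ones in "./Aggregations.py" are designed to resist exactly this kind of manipulation.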
If you use the simulator or results from our paper in a published project, please cite our work with the following BibTeX entry:
```bibtex
@inproceedings{yan2023defl,
  title     = {DeFL: Defending Against Model Poisoning Attacks in Federated Learning via Critical Learning Periods Awareness},
  author    = {Gang Yan and Hao Wang and Xu Yuan and Jian Li},
  booktitle = {Proc. of AAAI},
  year      = {2023}
}
```
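For readers unfamiliar with the Fisher Information Matrix that "./CompFIM" computes, here is a small, self-contained sketch of a Monte-Carlo estimate of the FIM diagonal for a softmax classifier. This is a generic, hypothetical example, not the repository's implementation: the model, function names, and sampling scheme are all assumptions made for illustration.

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax over a logit vector."""
    z = z - z.max()
    e = np.exp(z)
    return e / e.sum()

def monte_carlo_fisher_diag(W, X, seed=0):
    """Estimate the diagonal of the Fisher Information Matrix for a
    linear softmax classifier with weights W (classes x features):
    average the squared score vector d log p(y|x; W) / dW over inputs,
    with labels y sampled from the model's own predictive distribution."""
    rng = np.random.default_rng(seed)
    diag = np.zeros_like(W)
    for x in X:
        p = softmax(W @ x)
        y = rng.choice(len(p), p=p)      # sample label from the model
        onehot = np.zeros(len(p))
        onehot[y] = 1.0
        g = np.outer(onehot - p, x)      # score: gradient of log-likelihood
        diag += g ** 2                   # squared score accumulates the FIM diagonal
    return diag / len(X)
```

The resulting diagonal is nonnegative by construction and indicates which parameters the model's predictions are most sensitive to, which is the quantity critical-learning-period analyses build on.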