Note: the scripts will be slow without a parallel-computing implementation.
So far, only experiments on MNIST and EMNIST are provided. The supported dataset distributions are as follows:
- Prepares IID training datasets for each client
- Prepares NIID-1 training datasets for each client (overlapping sample sets)
- Prepares NIID-2 training datasets for each client (unequal data distribution)
- Prepares NIID-1+2 training datasets for each client (overlapping sample sets + unequal data distribution)
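As a rough illustration, the four distribution types above can be sketched with a single partitioning function. This is an assumption about how the repo's partitioner might work, not its actual code: the function name `partition`, the Dirichlet draw for unequal sizes, and the 25% overlap padding are all illustrative choices.

```python
import numpy as np

def partition(num_samples, num_clients, overlap=False, unequal=False, seed=0):
    """Illustrative partitioner: IID (defaults), NIID-1 (overlap=True),
    NIID-2 (unequal=True), or NIID-1+2 (both flags)."""
    rng = np.random.default_rng(seed)
    idx = rng.permutation(num_samples)

    if unequal:
        # NIID-2: draw unequal client shares from a Dirichlet distribution.
        shares = rng.dirichlet(np.ones(num_clients) * 2.0)
        sizes = np.maximum(1, (shares * num_samples).astype(int))
    else:
        # IID / NIID-1: equal-sized shards.
        sizes = np.full(num_clients, num_samples // num_clients)

    parts, start = [], 0
    for s in sizes:
        end = min(start + s, num_samples)
        part = idx[start:end]
        if overlap:
            # NIID-1: pad each client with samples drawn from the full pool,
            # so client sample sets overlap.
            extra = rng.choice(idx, size=len(part) // 4, replace=False)
            part = np.concatenate([part, extra])
        parts.append(part)
        start = end
    return parts

clients = partition(60000, 10, overlap=True, unequal=True)
print(len(clients))  # 10
```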
There are three different models in this project:
- Small
- Medium
- Large
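A minimal sketch of what three model sizes could look like, assuming MLPs for 28x28 inputs that differ only in hidden width. The widths (64/256/1024) and the factory name `make_model` are assumptions for illustration; the repo's actual architectures may differ.

```python
import torch.nn as nn

def make_model(size="SMALL", num_classes=10):
    # Hypothetical widths per model size; not the repo's real values.
    hidden = {"SMALL": 64, "MEDIUM": 256, "LARGE": 1024}[size]
    return nn.Sequential(
        nn.Flatten(),                       # 28x28 image -> 784 vector
        nn.Linear(28 * 28, hidden),
        nn.ReLU(),
        nn.Linear(hidden, num_classes),
    )

model = make_model("MEDIUM")
print(sum(p.numel() for p in model.parameters()))
```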
There are three kinds of client behaviour:
- Normal client: trains the model and returns the updated parameters
- Free-rider: does not train the model and returns the original parameters
- Adversarial client: returns randomized parameters
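The three behaviours can be sketched in one update function. This is a hedged illustration of the behaviours described above, assuming a standard local-SGD loop for the normal client; the function name `client_update` and the behaviour strings are invented for this example.

```python
import copy
import torch

def client_update(model, behaviour, data_loader=None, lr=0.1):
    """Illustrative sketch of the three client behaviours."""
    params = copy.deepcopy(model.state_dict())
    if behaviour == "normal":
        # Normal client: one local SGD pass, then return updated weights.
        opt = torch.optim.SGD(model.parameters(), lr=lr)
        loss_fn = torch.nn.CrossEntropyLoss()
        for x, y in data_loader:
            opt.zero_grad()
            loss_fn(model(x), y).backward()
            opt.step()
        return model.state_dict()
    if behaviour == "freerider":
        # Free-rider: skip training, echo back the received parameters.
        return params
    # Adversarial client: return random noise of the right shapes.
    return {k: torch.randn_like(v) for k, v in params.items()}
```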
Requirements:
- python == 3.8
- pytorch == 1.8
Federated learning is run with:
python main_fed.py
See parameters.py for the full list of arguments.
For example:
python main_fed.py --dataset MNIST --distribution_type IID --model_size SMALL