
Benchmark for Personalized Federated Learning

This code is based on the Personalized Federated Learning (pFL) setup from the FL-bench repository.

Environment Preparation

  1. Set up the environment by installing the dependencies from the environment.yml file (Conda environment format):
make install
  2. Download the data and pretrained models for MNIST, CIFAR-10, and Tiny-ImageNet from the DBA GitHub repository.

Paper

Method 🧬

Regular FL Methods

Personalized FL Methods

Easy Run 🏃‍♂️

# partition CIFAR-10 according to Dir(0.1) for 100 clients
cd data/utils
python run.py -d cifar10 -a 0.1 -cn 100
cd ../../

# run FedAvg with the default settings
cd src/server
python fedavg.py
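In the partition step above, Dir(0.1) denotes Dirichlet label partitioning: each class's samples are split across clients according to proportions drawn from a Dirichlet distribution, and smaller concentration values (the -a flag) yield more heterogeneous client data. The following is a minimal sketch of the idea, not the benchmark's actual data/utils/run.py; the function name and seed are illustrative:

# Minimal sketch of Dirichlet label partitioning (NOT the benchmark's run.py).
# For each class, sample client proportions from Dir(alpha) and split that
# class's sample indices accordingly; smaller alpha => more heterogeneity.
import numpy as np

def dirichlet_partition(labels: np.ndarray, num_clients: int, alpha: float,
                        seed: int = 42) -> list[list[int]]:
    rng = np.random.default_rng(seed)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = np.flatnonzero(labels == cls)
        rng.shuffle(cls_idx)
        # Proportion of this class assigned to each client.
        proportions = rng.dirichlet(alpha * np.ones(num_clients))
        # Convert proportions to split points over this class's samples.
        splits = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, part in enumerate(np.split(cls_idx, splits)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Example: 50,000 CIFAR-10-sized labels split across 100 clients with Dir(0.1).
labels = np.random.default_rng(0).integers(0, 10, size=50_000)
parts = dirichlet_partition(labels, num_clients=100, alpha=0.1)
print(sum(len(p) for p in parts))  # 50000: every sample assigned exactly once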

Run with Customized Settings 🏃‍♂️

# run FedAvg with customized settings, using the flags documented under Arguments below
# (the dataset/model values here are illustrative)
cd src/server
python fedavg.py -d cifar10 -m lenet5 -ge 100 -le 5 -lr 0.01 -bs 32

Monitor 📈 (optional and recommended 👍)

  1. Wandb: log runs to Weights & Biases to track metrics across communication rounds (a logging sketch follows below).
  2. Visdom: start a local server with python -m visdom.server and pass a non-zero --visible (see Arguments below) to watch the curves at localhost:8097.
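Below is a minimal sketch of per-round Wandb logging, assuming the standard wandb Python API; the project name, config fields, and metric keys are illustrative, and the benchmark's actual logging hooks may differ:

# Sketch only: project name, config fields, and metric keys are illustrative.
import wandb

run = wandb.init(project="pfl-benchmark", config={"algo": "fedavg", "dataset": "cifar10"})
for round_id in range(100):
    test_acc = 0.5  # placeholder; replace with the real per-round evaluation result
    wandb.log({"round": round_id, "test_acc": test_acc})
run.finish()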

Arguments 🔧

For the default values and the hyperparameters of the advanced FL methods, see src/config/args.py for full details.

📢 All arguments have default values.

General Argument Description
--dataset, -d The name of the dataset the experiment runs on.
--model, -m The model backbone used in the experiment.
--seed Random seed for the experiment.
--join_ratio, -jr Ratio of (clients joining each round) / (total number of clients).
--global_epoch, -ge Number of global epochs, also called communication rounds.
--local_epoch, -le Number of local epochs for client local training.
--finetune_epoch, -fe Number of epochs clients fine-tune their models before testing.
--test_gap, -tg Round interval for performing tests on clients.
--eval_test, -ee Non-zero value to evaluate on the joined clients' test sets before and after local training.
--eval_train, -er Non-zero value to evaluate on the joined clients' training sets before and after local training.
--local_lr, -lr Learning rate for client local training.
--momentum, -mom Momentum for the client local optimizer.
--weight_decay, -wd Weight decay for the client local optimizer.
--verbose_gap, -vg Round interval for displaying clients' training performance in the terminal.
--batch_size, -bs Batch size for client local training.
--use_cuda Non-zero value to place tensors on the GPU.
--visible Non-zero value to use Visdom to monitor algorithm performance at localhost:8097.
--save_log Non-zero value to save the algorithm's running log in FL-bench/out/{$algo}.
--save_model Non-zero value to save the output model parameters in FL-bench/out/{$algo}.
--save_fig Non-zero value to save the accuracy curves shown on Visdom as a .jpeg file in FL-bench/out/{$algo}.
--save_metrics Non-zero value to save metric stats as a .csv file in FL-bench/out/{$algo}.
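The flag definitions live in src/config/args.py; as a hedged sketch, a subset of the documented flags could be declared with argparse roughly as follows (the defaults here are illustrative, not the benchmark's actual defaults):

# Hedged sketch of how the documented flags map onto argparse
# (the real definitions and defaults live in src/config/args.py).
from argparse import ArgumentParser

def get_args():
    parser = ArgumentParser(description="pFL benchmark arguments (sketch)")
    parser.add_argument("--dataset", "-d", type=str, default="cifar10")
    parser.add_argument("--model", "-m", type=str, default="lenet5")
    parser.add_argument("--seed", type=int, default=42)
    parser.add_argument("--join_ratio", "-jr", type=float, default=0.1)
    parser.add_argument("--global_epoch", "-ge", type=int, default=100)
    parser.add_argument("--local_epoch", "-le", type=int, default=5)
    parser.add_argument("--local_lr", "-lr", type=float, default=1e-2)
    parser.add_argument("--batch_size", "-bs", type=int, default=32)
    return parser.parse_args()

if __name__ == "__main__":
    print(vars(get_args()))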

Supported Datasets 🎨

For now, this benchmark only supports algorithms for image classification tasks.

Regular Image Datasets

  • MNIST (1 x 28 x 28, 10 classes)

  • CIFAR-10/100 (3 x 32 x 32, 10/100 classes)

  • EMNIST (1 x 28 x 28, 62 classes)

  • FashionMNIST (1 x 28 x 28, 10 classes)

Medical Image Datasets

Acknowledgement 🤗
