FLAME

This repo holds the source code and scripts for reproducing the key experiments of our paper:

"On ADMM in Heterogeneous Federated Learning: Personalization, Robustness, and Fairness".

Authors: Shengkun Zhu, Jinshan Zeng, Sheng Wang, Yuan Sun, Xiaodong Li, Yuan Yao, Zhiyong Peng.

This repository is built on PyTorch.

Datasets and Models

| Dataset | # of samples | Ref. | Model |
| --- | --- | --- | --- |
| Mnist | 70,000 | LeCun et al. | MLP |
| Fmnist | 70,000 | Xiao et al. | MLP |
| Mmnist | 58,954 | Kaggle | CNN1 |
| Cifar10 | 60,000 | Krizhevsky et al. | CNN |
| Femnist | 382,705 | Leaf | CNN2 |

Start

The default values for the various parameters passed to the experiments are given in options.py. Details on some of these parameters:

  • framework: the personalized federated learning framework to use (five choices).

  • partition: the data partitioning scheme to use (six choices).

  • num_users: number of users.

  • q: number of data shards of each user.

  • model: the model architecture; choices are SVM, MLP, MLR, and CNN.

  • dataset: the dataset to use (four choices).

  • strategy: client selection strategy.

  • frac_candidates: fraction of client candidates, c/m in our paper.

  • frac: fraction of clients, s/m in our paper.

  • optimizer: type of optimizer, default sgd.

  • momentum: sgd momentum, default 0.

  • epochs: number of communication rounds.

  • local_ep: number of local iterations.

  • local_bs: local batch size.

  • lr: learning rate.

  • mu: hyperparameter in regularization term.

  • Lambda: hyperparameter in Moreau envelope.

  • rho: hyperparameter in penalty term.

  • iid: data distribution, 0 for non-iid.

  • seed: random seed.

  • eta: learning rate for the global model in pFedMe.

  • eta2: learning rate for the global model in Ditto.
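The options above are presumably exposed through argparse in options.py. A minimal sketch of what such a parser might look like is below; the argument names follow the list above, but the default values and the `args_parser` function name are illustrative assumptions, not the repository's actual code:

```python
import argparse


def args_parser():
    # Sketch of an argparse setup mirroring a subset of the options
    # documented above; defaults here are assumptions for illustration.
    parser = argparse.ArgumentParser()
    parser.add_argument('--framework', type=str, default='flame',
                        help='personalized federated learning framework')
    parser.add_argument('--partition', type=str, default='iid',
                        help='data partitioning scheme')
    parser.add_argument('--num_users', type=int, default=100,
                        help='number of users')
    parser.add_argument('--model', type=str, default='mlp',
                        help='svm, mlp, mlr, or cnn')
    parser.add_argument('--dataset', type=str, default='mnist',
                        help='dataset to use')
    parser.add_argument('--frac', type=float, default=0.1,
                        help='fraction of clients, s/m in the paper')
    parser.add_argument('--optimizer', type=str, default='sgd',
                        help='type of optimizer')
    parser.add_argument('--momentum', type=float, default=0.0,
                        help='sgd momentum')
    parser.add_argument('--epochs', type=int, default=100,
                        help='number of communication rounds')
    parser.add_argument('--local_ep', type=int, default=5,
                        help='number of local iterations')
    parser.add_argument('--local_bs', type=int, default=10,
                        help='local batch size')
    parser.add_argument('--lr', type=float, default=0.01,
                        help='learning rate')
    parser.add_argument('--seed', type=int, default=1,
                        help='random seed')
    return parser.parse_args()


if __name__ == '__main__':
    args = args_parser()
    print(args.framework, args.dataset, args.epochs)
```

A run would then override defaults on the command line, e.g. `--dataset mnist --model cnn --epochs 200` (the entry-point script name depends on the repository).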
