PyTorch implementation of federated learning with local and global representations.
Correspondence to:
- Paul Liang (pliang@cs.cmu.edu)
- Terrance Liu (terrancl@cs.cmu.edu)
Think Locally, Act Globally: Federated Learning with Local and Global Representations
Paul Pu Liang*, Terrance Liu*, Liu Ziyin, Ruslan Salakhutdinov, and Louis-Philippe Morency
NeurIPS 2019 Workshop on Federated Learning (distinguished student paper award). (*equal contribution)
If you find this repository useful, please cite our paper:
@article{liang2020think,
title={Think locally, act globally: Federated learning with local and global representations},
author={Liang, Paul Pu and Liu, Terrance and Ziyin, Liu and Salakhutdinov, Ruslan and Morency, Louis-Philippe},
journal={arXiv preprint arXiv:2001.01523},
year={2020}
}
First, check that the following requirements are satisfied:
Python 3.6
torch 1.2.0
torchvision 0.4.0
numpy 1.18.1
scikit-learn 0.20.0
matplotlib 3.1.2
Pillow 4.1.1
The next step is to clone the repository:
git clone https://github.com/pliang279/LG-FedAvg.git
We run FedAvg and LG-FedAvg experiments on MNIST and CIFAR-10. See our paper for a description of how we process and partition the data for federated learning experiments.
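The non-IID partition used in such experiments assigns each user a fixed number of class shards, controlled below by the --shard_per_user flag. A minimal sketch of that scheme, assuming a numpy-based layout (the function name and details are ours, not the repo's):

```python
import numpy as np

def shard_partition(labels, num_users, shard_per_user, seed=0):
    """Sort examples by label, cut them into equal shards, and give each
    user `shard_per_user` shards, so every client sees only a few classes."""
    rng = np.random.default_rng(seed)
    idx = np.argsort(labels, kind="stable")        # group indices by class
    num_shards = num_users * shard_per_user
    shards = np.array_split(idx, num_shards)       # contiguous label blocks
    order = rng.permutation(num_shards)            # randomize shard assignment
    return {
        u: np.concatenate(
            [shards[s] for s in order[u * shard_per_user:(u + 1) * shard_per_user]]
        )
        for u in range(num_users)
    }

# Toy example: 100 examples over 10 classes, 5 users with 2 shards each,
# so each user holds examples from at most 2 classes.
labels = np.repeat(np.arange(10), 10)
parts = shard_partition(labels, num_users=5, shard_per_user=2)
```

With shard_per_user=2, as in the commands below, every client's local data covers at most two classes, which is the pathological non-IID setting studied in the paper.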
FedAvg results can be reproduced by running the following:
python main_fed.py --dataset mnist --model mlp --num_classes 10 --epochs 1000 --lr 0.05 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 10 --results_save run1
python main_fed.py --dataset cifar10 --model cnn --num_classes 10 --epochs 2000 --lr 0.1 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 50 --results_save run1
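In each FedAvg round, a fraction (--frac) of users train locally for --local_ep epochs, and the server averages their weights. The server's aggregation step can be sketched as follows (a plain-Python sketch over numpy arrays, not the repo's torch code):

```python
import numpy as np

def fed_avg(client_states, client_sizes):
    """FedAvg server step: average client parameters,
    weighted by the number of local examples per client."""
    total = sum(client_sizes)
    avg = {}
    for name in client_states[0]:
        avg[name] = sum(s[name] * n for s, n in zip(client_states, client_sizes)) / total
    return avg

# Toy example with a single parameter tensor per client
a = {"w": np.array([1.0, 2.0])}
b = {"w": np.array([3.0, 4.0])}
new_global = fed_avg([a, b], client_sizes=[1, 3])  # → w = [2.5, 3.5]
```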
LG-FedAvg results can be reproduced by first running the FedAvg commands above and then running the following:
python main_lg.py --dataset mnist --model mlp --num_classes 10 --epochs 200 --lr 0.05 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 10 --num_layers_keep 3 --results_save run1 --load_fed best_400.pt
python main_lg.py --dataset cifar10 --model cnn --num_classes 10 --epochs 200 --lr 0.1 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 50 --num_layers_keep 2 --results_save run1 --load_fed best_1200.pt
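In LG-FedAvg, only the higher (global) layers are communicated and averaged; the lower layers, controlled by --num_layers_keep, stay local to each client as its own representation. A toy sketch of that split, again over plain numpy state dicts with names of our choosing:

```python
import numpy as np

def lg_round(client_states, layer_names, num_layers_keep):
    """One LG-FedAvg aggregation: each client keeps its first
    `num_layers_keep` layers (local representation) and receives
    the average of the remaining (global) layers."""
    local = set(layer_names[:num_layers_keep])
    num_clients = len(client_states)
    new_states = []
    for state in client_states:
        merged = dict(state)  # local layers are kept as-is
        for name in layer_names:
            if name not in local:  # global layer: replace with the average
                merged[name] = sum(s[name] for s in client_states) / num_clients
        new_states.append(merged)
    return new_states

# Two clients, two layers, keep 1 layer local:
# "l1" stays client-specific, "l2" is averaged across clients.
a = {"l1": np.array([0.0]), "l2": np.array([2.0])}
b = {"l1": np.array([1.0]), "l2": np.array([4.0])}
out = lg_round([a, b], ["l1", "l2"], num_layers_keep=1)
```

This is why the commands above pass --load_fed: LG-FedAvg starts from a trained FedAvg checkpoint and then lets the local layers specialize per client.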
MTL results can be reproduced by running the following:
python main_mtl.py --dataset mnist --model mlp --num_classes 10 --epochs 1000 --lr 0.05 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 10 --num_layers_keep 5 --results_save run1
python main_mtl.py --dataset cifar10 --model cnn --num_classes 10 --epochs 2000 --lr 0.1 --num_users 100 --shard_per_user 2 --frac 0.1 --local_ep 1 --local_bs 50 --num_layers_keep 5 --results_save run1
This codebase was adapted from https://github.com/shaoxiongji/federated-learning.