Read this in other languages: English, 简体中文.
Federated learning (FL), first proposed by Google, is a burgeoning research area of machine learning that aims to protect individual data privacy in distributed machine learning, especially in finance, smart healthcare, and edge computing. Unlike traditional data-centralized distributed machine learning, participants in an FL setting train local models on their own data and then collaborate with other participants through specific aggregation strategies to obtain the final model, avoiding direct data sharing.
To relieve researchers of the burden of implementing FL algorithms and free them from repeatedly re-implementing the basic FL setup, we introduce FedLab, a highly customizable framework. FedLab provides the modules necessary for FL simulation, including communication, compression, model optimization, data partitioning, and other functional components. Users can assemble an FL simulation environment from custom modules like LEGO bricks (a conceptual sketch of one training round is shown below). For better understanding and ease of use, benchmark implementations of FL algorithms in FedLab are also provided.
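To make that workflow concrete, here is a minimal, self-contained sketch of one FedAvg-style communication round in plain PyTorch. It is illustrative only and does not use FedLab's actual API; the model, client data loaders, and hyperparameters are placeholders.

```python
# Illustrative sketch of one FedAvg round in plain PyTorch (not FedLab's API).
import copy
import torch

def local_train(global_model, loader, epochs=1, lr=0.01):
    """Train a copy of the global model on one client's local data."""
    model = copy.deepcopy(global_model)
    optimizer = torch.optim.SGD(model.parameters(), lr=lr)
    loss_fn = torch.nn.CrossEntropyLoss()
    model.train()
    for _ in range(epochs):
        for x, y in loader:
            optimizer.zero_grad()
            loss_fn(model(x), y).backward()
            optimizer.step()
    return model.state_dict(), len(loader.dataset)

def fedavg_round(global_model, client_loaders):
    """One round: clients train locally, server averages the resulting weights."""
    results = [local_train(global_model, loader) for loader in client_loaders]
    total = sum(n for _, n in results)
    # Weight each client's parameters by its share of the total sample count.
    avg_state = {
        key: sum(state[key].float() * (n / total) for state, n in results)
        for key in results[0][0]
    }
    global_model.load_state_dict(avg_state)
    return global_model
```

Weighting each client's update by its local sample count follows the original FedAvg formulation; an unweighted mean is a common simplification when client datasets are similar in size.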
- Documentation: English version | 中文版
- Overview of FedLab
- Installation & Setup
- Examples
- Contribute Guideline
- API Reference
- Optimization Algorithms
- FedAvg: Communication-Efficient Learning of Deep Networks from Decentralized Data
- FedAsync: Asynchronous Federated Optimization
- FedProx: Federated Optimization in Heterogeneous Networks
- FedDyn: Federated Learning based on Dynamic Regularization
- Compression Algorithms
- DGC: Deep Gradient Compression: Reducing the Communication Bandwidth for Distributed Training
- QSGD: Communication-Efficient SGD via Gradient Quantization and Encoding
- Datasets
- LEAF: A Benchmark for Federated Settings
- NIID-Bench: Federated Learning on Non-IID Data Silos: An Experimental Study
More FedLab implementations of FL algorithms are coming soon. For more information, please star our FedLab Benchmark repository. A sketch of the Dirichlet-based non-IID data partition studied in NIID-Bench is shown below.
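As a concrete example of the data partitioning that such benchmarks rely on, the following is a minimal sketch of the Dirichlet label-skew partition studied in NIID-Bench. It uses plain NumPy rather than FedLab's own partition utilities, and the labels in the usage example are synthetic.

```python
# Sketch of Dirichlet label-skew partitioning (as studied in NIID-Bench),
# using plain NumPy instead of FedLab's partition utilities.
import numpy as np

def dirichlet_partition(labels, num_clients, alpha=0.5, seed=0):
    """Split sample indices across clients with Dirichlet(alpha) label skew.

    Smaller alpha yields more skewed (more non-IID) per-client label mixes.
    """
    rng = np.random.default_rng(seed)
    labels = np.asarray(labels)
    client_indices = [[] for _ in range(num_clients)]
    for cls in np.unique(labels):
        cls_idx = rng.permutation(np.where(labels == cls)[0])
        # Draw this class's proportion for every client, then split accordingly.
        proportions = rng.dirichlet([alpha] * num_clients)
        split_points = (np.cumsum(proportions)[:-1] * len(cls_idx)).astype(int)
        for client_id, part in enumerate(np.split(cls_idx, split_points)):
            client_indices[client_id].extend(part.tolist())
    return client_indices

# Usage: partition 10,000 synthetic labels from 10 classes across 100 clients.
fake_labels = np.random.randint(0, 10, size=10_000)
parts = dirichlet_partition(fake_labels, num_clients=100, alpha=0.5)
```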
- [ICLR-DPML 2021] FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks [Paper] [Code]
- [arXiv 2021] Federated Graph Learning -- A Position Paper [Paper]
- [IEEE TKDE 2021] A Survey on Federated Learning Systems: Vision, Hype and Reality for Data Privacy and Protection [Paper]
- [arXiv 2021] A Survey of Fairness-Aware Federated Learning [Paper]
- [Foundations and Trends in Machine Learning 2021] Advances and Open Problems in Federated Learning [Paper]
- [arXiv 2020] Towards Utilizing Unlabeled Data in Federated Learning: A Survey and Prospective [Paper]
- [IEEE Signal Processing Magazine 2020] Federated Learning: Challenges, Methods, and Future Directions [Paper]
- [IEEE Communications Surveys & Tutorials 2020] Federated Learning in Mobile Edge Networks: A Comprehensive Survey [Paper]
- [IEEE TIST 2019] Federated Machine Learning: Concept and Applications [Paper]
- FedLab: Code, FedLab-benchmarks, Doc (zh-CN-Doc), Paper
- Flower: Code, Homepage, Doc, Paper
- FedML: Code, Doc, Paper
- FedLearn: Code, Paper
- PySyft: Code, Doc, Paper
- TensorFlow Federated (TFF): Code, Doc
- FEDn: Code, Paper
- FATE: Code, Homepage, Doc, Paper
- PaddleFL: Code, Doc
- Fedlearner: Code
- OpenFL: Code, Doc, Paper
- FedLab-benchmarks: Code
- [ACM TIST 2022] The OARF Benchmark Suite: Characterization and Implications for Federated Learning Systems [Code] [Paper]
- [IEEE ICDE 2022] Federated Learning on Non-IID Data Silos: An Experimental Study [Paper] [Official Code] [FedLab Tutorial]
- [ICLR-DPML 2021] FedGraphNN: A Federated Learning System and Benchmark for Graph Neural Networks [Paper] [Code]
- [arXiv 2018] LEAF: A Benchmark for Federated Settings [Homepage] [Official TensorFlow] [Unofficial PyTorch] [Paper]
- [ICLR 2021] Federated Semi-supervised Learning with Inter-Client Consistency & Disjoint Learning [Paper] [Code]
- [arXiv 2021] SemiFL: Communication Efficient Semi-Supervised Federated Learning with Unlabeled Clients [Paper]
- [IEEE BigData 2021] Improving Semi-supervised Federated Learning by Reducing the Gradient Diversity of Models [Paper]
- [arXiv 2020] Benchmarking Semi-supervised Federated Learning [Paper] [Code]
- [arXiv 2022] Sky Computing: Accelerating Geo-distributed Computing in Federated Learning [Paper] [Code]
- [ACM HPDC 2020] TiFL: A Tier-based Federated Learning System [Paper] [Video]
- chaoyanghe/Awesome-Federated-Learning
- weimingwill/awesome-federated-learning
- tushar-semwal/awesome-federated-computing
- ZeroWangZY/federated-learning
- innovation-cat/Awesome-Federated-Machine-Learning
- huweibo/Awesome-Federated-Learning-on-Graph-and-GNN-papers
You are welcome to contribute to this project via pull requests.
- By contributing, you agree that your contributions will be licensed under Apache License, Version 2.0
- Docstrings and code should follow the Google Python Style Guide: 中文版 | English
- The code should provide test cases using `unittest.TestCase`; a minimal example follows this list.
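For illustration, a test module in that style might look like the sketch below; the `weighted_average` function under test is a hypothetical placeholder, not part of FedLab.

```python
# Minimal unittest.TestCase example; weighted_average is a hypothetical
# function defined here only so the test is self-contained.
import unittest

def weighted_average(values, weights):
    total = sum(weights)
    return sum(v * w for v, w in zip(values, weights)) / total

class TestWeightedAverage(unittest.TestCase):
    def test_uniform_weights(self):
        self.assertAlmostEqual(weighted_average([1.0, 3.0], [1, 1]), 2.0)

    def test_skewed_weights(self):
        self.assertAlmostEqual(weighted_average([1.0, 3.0], [3, 1]), 1.5)

if __name__ == "__main__":
    unittest.main()
```

Such tests can be run with `python -m unittest` from the repository root.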
Please cite FedLab in your publications if it helps your research:
```bibtex
@article{smile2021fedlab,
  title={FedLab: A Flexible Federated Learning Framework},
  author={Dun Zeng and Siqi Liang and Xiangjing Hu and Zenglin Xu},
  journal={arXiv preprint arXiv:2107.11621},
  year={2021}
}
```
Project Investigator: Prof. Zenglin Xu (xuzenglin@hit.edu.cn).
For technical issues related to FedLab development, please contact our development team through GitHub issues or email:
- Dun Zeng: zengdun@foxmail.com
- Siqi Liang: zszxlsq@gmail.com