A federated learning attack model based on "A Little Is Enough: Circumventing Defenses For Distributed Learning"


shaneson0/attacking_federate_learning


Federated Learning: Attack and Defense Reproductions

Last updated: May 22, 2020. This project is maintained and updated regularly, reproducing various SOTA federated learning attack and defense models. (Work in progress.)

Papers (Defense)

  1. (Krum): Machine Learning with Adversaries: Byzantine Tolerant Gradient Descent [NIPS 2017]
  2. (trimmed_mean): D. Yin, Y. Chen, K. Ramchandran, and P. Bartlett. Byzantine-Robust Distributed Learning: Towards Optimal Statistical Rates [ICML 2018]
  3. (bulyan): E. M. El Mhamdi, R. Guerraoui, and S. Rouault. The Hidden Vulnerability of Distributed Learning in Byzantium [ICML 2018]
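To illustrate the first two defenses above, here is a minimal sketch (not the repository's own code) of Krum selection and the coordinate-wise trimmed mean, assuming each worker's update is a flat NumPy array and `f` Byzantine workers out of `n`:

```python
import numpy as np

def krum(updates, f):
    """Krum (Blanchard et al., NIPS 2017): return the single update whose
    summed squared distance to its n - f - 2 nearest neighbours is smallest."""
    n = len(updates)
    scores = []
    for i in range(n):
        # Squared distances from update i to every other update, ascending.
        dists = sorted(float(np.sum((updates[j] - updates[i]) ** 2))
                       for j in range(n) if j != i)
        scores.append(sum(dists[: n - f - 2]))
    return updates[int(np.argmin(scores))]

def trimmed_mean(updates, beta):
    """Trimmed mean (Yin et al., ICML 2018): per coordinate, drop the beta
    largest and beta smallest values, then average what remains."""
    stacked = np.sort(np.stack(updates), axis=0)
    return stacked[beta : len(updates) - beta].mean(axis=0)
```

With three honest updates near (1, 1) and one outlier at (100, -100), `krum(updates, f=1)` picks an honest update and `trimmed_mean(updates, beta=1)` stays near (1, 1), while a plain mean would be pulled far off.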

Papers (Attack)

  1. A Little Is Enough: Circumventing Defenses For Distributed Learning [NeurIPS 2019]
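The core idea of the attack is that the Byzantine workers all report the benign mean shifted by a small multiple z of the benign per-coordinate standard deviation: large enough to bias the aggregate over many rounds, yet small enough to sit inside the benign spread and evade Krum/trimmed-mean style filters. A minimal sketch (not the repository's implementation; here z is passed in directly, whereas the paper derives a maximal safe z from the worker and attacker counts):

```python
import numpy as np

def little_is_enough(benign_updates, z):
    """Craft the malicious update mu + z * sigma, where mu and sigma are the
    per-coordinate mean and std of the benign workers' updates
    (Baruch et al., NeurIPS 2019). All attackers submit this same vector."""
    stacked = np.stack(benign_updates)
    mu = stacked.mean(axis=0)
    sigma = stacked.std(axis=0)
    return mu + z * sigma
```

For example, with benign updates (1, 2) and (3, 4) and z = 1, the crafted update is (3, 4): exactly one standard deviation from the mean in every coordinate, indistinguishable from an honest straggler.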

Running the Code

mkdir logs
python main.py
