Model Poisoning Attacks

This code accompanies the paper 'Analyzing Federated Learning through an Adversarial Lens' (https://arxiv.org/abs/1811.12470), accepted at ICML 2019. It assumes that the Fashion-MNIST and Census datasets have been downloaded to /home/data/ on the user's machine.

Dependencies: TensorFlow 1.8, Keras, NumPy, SciPy, scikit-learn
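
A minimal setup sketch (only the TensorFlow version is stated above; an unpinned Keras install may pull a release newer than what TensorFlow 1.8 supports, so an older Keras version may be needed):

pip install tensorflow==1.8.0 keras numpy scipy scikit-learn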

To run federated training with 10 agents, use

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0
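
Here --k is the number of agents; if the remaining flags follow the usual federated averaging notation from the paper, C is the fraction of agents selected per round, E the number of local epochs, and T the number of communication rounds (an assumption, not something stated in this README). A minimal NumPy sketch of that loop, with hypothetical names rather than the repository's API:

```python
import numpy as np

def local_update(weights, epochs, lr=0.1):
    # Stand-in for an agent's E epochs of local training on its own data shard;
    # here it just nudges the weights so the aggregation step has something to average.
    for _ in range(epochs):
        weights = weights - lr * 0.01 * np.sign(weights)
    return weights

def federated_averaging(k=10, C=1.0, E=5, T=40, dim=100, seed=0):
    rng = np.random.default_rng(seed)
    global_weights = rng.normal(size=dim)
    for _ in range(T):                                  # T communication rounds
        m = max(1, int(C * k))                          # agents selected this round
        selected = rng.choice(k, size=m, replace=False)
        deltas = [local_update(global_weights.copy(), E) - global_weights
                  for _ in selected]                    # weight deltas sent to the server
        global_weights = global_weights + np.mean(deltas, axis=0)  # server-side averaging
    return global_weights

final_weights = federated_averaging(k=10, C=1.0, E=5, T=40)
```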

To run the basic targeted model poisoning attack, use

python dist_train_w_attack.py --dataset=fMNIST --k=10 --C=1.0 --E=5 --T=40 --train --model_num=0 --mal --mal_obj=single --mal_strat=converge

The other attacks can be found in the file malicious_agent.py.
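
The core idea behind the targeted attack is explicit boosting: the malicious agent scales its update so that it survives server-side averaging. A conceptual NumPy sketch of that step (function and variable names are hypothetical and not taken from malicious_agent.py):

```python
import numpy as np

def boosted_malicious_delta(mal_delta, num_agents, boost=None):
    # Explicit boosting: scale the malicious weight delta so it survives the
    # server's averaging step (with C=1.0, averaging divides each delta by k).
    if boost is None:
        boost = num_agents
    return boost * mal_delta

# Toy illustration with k = 10 agents and a 5-dimensional model.
rng = np.random.default_rng(0)
k, dim = 10, 5
global_w = rng.normal(size=dim)
benign_deltas = [0.01 * rng.normal(size=dim) for _ in range(k - 1)]
mal_delta = 0.5 * rng.normal(size=dim)   # delta that would achieve the targeted misclassification

all_deltas = benign_deltas + [boosted_malicious_delta(mal_delta, k)]
aggregated = global_w + np.mean(all_deltas, axis=0)
# The aggregated model moves by approximately mal_delta plus a small benign average,
# which is what the boosted attack is designed to achieve.
```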
