Topology-Independent Robustness of the Weighted Mean under Label Poisoning Attacks in Heterogeneous Decentralized Learning
- Install the dependent packages:
  - python 3.9.18
  - pytorch 1.13.1
  - matplotlib 3.5.0
  - networkx 2.6.3
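Assuming pip is used, the pinned versions above correspond to a `requirements.txt` like the following (note that the PyPI package for pytorch is named `torch`; a CUDA build may require a different install command):

```
torch==1.13.1
matplotlib==3.5.0
networkx==2.6.3
```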
- Download the dataset to the directory `./dataset` and create a directory named `./record`. The experiment outputs will be stored in `./record`.
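For example, both directories can be created from the repository root with:

```shell
# create the dataset directory and the directory where experiment outputs are written
mkdir -p ./dataset ./record
```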
The main programs can be found in the following files:
- `ByrdLab`: main codes
- `main DSGD(-xxx).py`: program entries
  - `main DSGD.py`: computes classification accuracies of different aggregators (Fig. 1, 3, 5, 6)
  - `main DSGD-hetero-disturb.py`: computes the heterogeneity of regular gradients and the disturbances of poisoned gradients (Fig. 4)
- `draw_decentralized_multi_fig`: directory containing the code that draws the figures in the paper
```shell
python "main DSGD.py" --aggregation <aggregation-name> --attack <attack-name> --data-partition <data-partition> --graph <graph-name> --gpu <gpu-id>
# ========================
# e.g.
# python "main DSGD.py" --aggregation mean --attack label_flipping --data-partition noniid --graph TwoCastle --gpu 0
```

The arguments can be:
`<aggregation-name>`:
- `mean`
- `trimmed-mean`
- `faba`
- `cc`
- `scc`
- `ios`
- `lfighter`

`<attack-name>`:
- `label_flipping` (executes static label flipping attacks)
- `dynamic_label_flipping` (executes dynamic label flipping attacks)

`<data-partition>`:
- `iid`
- `dirichlet_mild`
- `noniid`

`<graph-name>`:
- `TwoCastle`
- `UnconnectedRegularLine`
- `Fan`
- `Lollipop`
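As an illustration of the aggregators listed above, the coordinate-wise trimmed mean discards the largest and smallest values in each coordinate before averaging. A minimal NumPy sketch (not the repository's implementation; `b`, the number of values trimmed from each end, is a hypothetical parameter):

```python
import numpy as np

def trimmed_mean(grads: np.ndarray, b: int) -> np.ndarray:
    """Coordinate-wise trimmed mean over n stacked gradients of shape (n, d).

    For each coordinate, drop the b smallest and b largest values,
    then average the remaining n - 2b values.
    """
    n = grads.shape[0]
    assert n > 2 * b, "need more gradients than trimmed values"
    sorted_grads = np.sort(grads, axis=0)      # sort each coordinate independently
    return sorted_grads[b:n - b].mean(axis=0)  # average the middle n - 2b values

# the outlier gradient [100.0, -50.0] is trimmed away in every coordinate
grads = np.array([[1.0, 2.0], [1.2, 2.2], [0.8, 1.8], [100.0, -50.0]])
print(trimmed_mean(grads, b=1))
```

With `b=1`, the extreme value in each coordinate is discarded, so a single poisoned gradient cannot shift the aggregate arbitrarily.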
Run the toy example and draw its figure:

```shell
python run-toyexample.py
cd draw_decentralized_multi_fig
python draw-Decentral-toyexample.py
```

Draw the topology figure:

```shell
cd draw_decentralized_multi_fig
python draw-Decentral-topology.py
```

Run the main experiments and draw the corresponding figures:

```shell
python run-experiments.py
cd draw_decentralized_multi_fig
python draw-Decentral-MultiFig.py
```

Compute the heterogeneity and disturbance statistics and draw the corresponding figure:

```shell
python run-hetero-disturb.py
cd draw_decentralized_multi_fig
python draw-Decentral-A-xi.py
```
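The label flipping attacks evaluated above are commonly implemented by remapping class labels on the poisoned workers. A generic sketch for `C` classes, assuming the frequently used mapping `c -> C - 1 - c` (the repository's exact mapping may differ):

```python
def flip_labels(labels, num_classes):
    """Static label flipping: map each class c to num_classes - 1 - c.

    A poisoned worker trains on (x, flip_labels(y, C)) instead of (x, y),
    biasing its local gradients away from the honest objective.
    """
    return [num_classes - 1 - c for c in labels]

# e.g. for a 10-class dataset:
print(flip_labels([0, 3, 9], num_classes=10))  # → [9, 6, 0]
```

A dynamic variant would change the mapping over the course of training rather than fixing it once.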