Our experiments are built on the implementation of Federated Learning Based on Dynamic Regularization (FedDyn).
Please install the required packages. The code runs on Python 3.7; set up the dependencies in a virtual environment via:
pip install -r requirements.txt
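For example, one possible way to create and populate the environment (a minimal sketch; the environment name .venv and the python3.7 command are assumptions about your system):

```bash
# Minimal setup sketch; ".venv" and "python3.7" are assumptions, adjust to your system.
python3.7 -m venv .venv          # create a Python 3.7 virtual environment
source .venv/bin/activate        # activate it (Unix shells)
pip install -r requirements.txt  # install the pinned dependencies
```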
For different configurations, use the following commands:
- MTGC:
python run_MTGC.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
- Group Correction:
python run_MTGC_Y.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
- Local Correction:
python run_MTGC_Z.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
- FedDyn:
python run_FedDyn.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
- FedProx:
python run_FedProx.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
- HFedAvg:
python run_HFL.py --rule 'noniid' --rule_arg 0.1 --com_amount 100 --epoch 2 --E 30
The training logs are recorded in the training_log directory.
- Rule:
  - 'noniid': Both Group and Client Non-IID
  - 'Dirichlet': Group IID and Client Non-IID
  - 'Mix2': Group Non-IID and Client IID
- Rule Argument: the Dirichlet parameter, as shown in the manuscript.
- com_amount: Number of global communication rounds.
- E: Group aggregation period.
- Relationship between H and the number of epochs: $H = \frac{\text{number of samples in the local dataset}}{\text{batch size}} \times \text{epoch}$ (see the sketch after this list).
- Please refer to utils_options.py for more parameters.
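As a concrete illustration of the formula above, here is a minimal sketch with hypothetical values (500 local samples and batch size 50 are assumptions, not values taken from the repository):

```bash
# Hypothetical example: a client holding 500 local samples, batch size 50,
# and --epoch 2 performs H = 500 / 50 * 2 = 20 local iterations per period.
NUM_LOCAL_SAMPLES=500
BATCH_SIZE=50
EPOCH=2
echo $(( NUM_LOCAL_SAMPLES / BATCH_SIZE * EPOCH ))   # prints 20
```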
If you find our work helpful, please consider citing our paper:
@inproceedings{fang2024hierarchical,
  title={Hierarchical Federated Learning with Multi-Timescale Gradient Correction},
  author={Fang, Wenzhi and Han, Dong-Jun and Chen, Evan and Wang, Shiqiang and Brinton, Christopher G.},
  booktitle={The Thirty-eighth Annual Conference on Neural Information Processing Systems},
  year={2024}
}