Features | Tutorial | Structure | Paper | ArXiv | References
The FLASH-RL framework offers the following features:
- An FL system built from scratch, enabling the simulation of a server and several clients.
- Client selection in FL based on RL, and more specifically on an adapted Double Deep Q-Learning (DDQL) algorithm. This project marks the first source-code release addressing this problem.
- Multiple data division scenarios created for the MobiAct private dataset.
- Simulation of an environment that is heterogeneous in terms of edge hardware.
- Use of a reputation-based utility function to compute the reward attributed to each client.
- An adapted DDQL algorithm that allows multiple actions (clients) to be selected per round; a minimal sketch of the idea follows this list.
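The snippet below is an illustrative sketch of the two ideas above, not the exact FLASH-RL implementation: a hypothetical `q_network` maps the FL state to one Q-value per client, the top-k clients are selected instead of a single argmax action, and a reputation-weighted utility combines accuracy gain and latency into a reward. All names and the weighting scheme are assumptions.

```python
import torch

def select_clients(q_network, state, k):
    """Illustrative multi-action selection: pick the k clients with the
    highest Q-values instead of a single argmax action (adapted DDQL)."""
    with torch.no_grad():
        q_values = q_network(state)  # assumed shape: (num_clients,)
    return torch.topk(q_values, k).indices.tolist()

def reputation_reward(accuracy_gain, latency, reputation, alpha=0.5):
    """Illustrative reputation-based utility: reward a client for improving
    the global model while penalizing its end-to-end latency, scaled by a
    reputation score maintained across rounds (all names are assumptions)."""
    return reputation * (alpha * accuracy_gain - (1 - alpha) * latency)
```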
FLASH-RL's paper has been accepted at the 41st IEEE International Conference on Computer Design (ICCD 2023). Please refer to the arXiv version [here](https://arxiv.org/abs/2311.06917) for the full paper.
FLASH-RL has been implemented and tested with the following versions:
- Python (v3.11.3).
- PyTorch (v2.0.0).
- scikit-learn (v1.2.2).
- SciPy (v1.10.1).
- FedLab (v1.3.0).
- NumPy (v1.24.3).
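As a quick, illustrative way to verify that a local environment matches the versions listed above (the repository may ship its own requirements file):

```python
# Illustrative sanity check of the versions FLASH-RL was tested with.
import sys
import numpy, scipy, sklearn, torch

print("Python      :", sys.version.split()[0])   # expected 3.11.3
print("PyTorch     :", torch.__version__)        # expected 2.0.0
print("scikit-learn:", sklearn.__version__)      # expected 1.2.2
print("SciPy       :", scipy.__version__)        # expected 1.10.1
print("NumPy       :", numpy.__version__)        # expected 1.24.3
# FedLab (expected 1.3.0) can be checked with `pip show fedlab`.
```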
FLASH-RL/
├── RL/ --- Scripts for the RL module implementation.
| ├── DQL.py --- Contains the adapted DDQL implementation.
| └── MLP.py --- The neural network structure used for the DDQL agent.
|
├── clientFL/ --- Defining the FL client class.
├── data_division/ --- Creating and storing different non-iid data divisions.
| ├── MobiAct/MobiAct_divisions.py --- Script for creating the MobiAct divisions.
├── data_manipulation/ --- Enabling the creation of structured non-iid data divisions among the clients for CIFAR-10 and MNIST.
├── data_preprocessing/ --- Contains a script that pre-processes MobiAct data.
├── models/ --- Contains the different neural networks used for each dataset.
└── serverFL/
├── Server_FAVOR.py --- Contains the FAVOR implementation.
├── Server_FLASHRL.py --- Contains the **FLASH-RL** implementation.
└── Server_FedProx.py --- Contains the FedProx and FedAVG implementation.
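To illustrate what the `data_manipulation/` scripts are meant to produce for CIFAR-10 and MNIST, here is a minimal sketch of a structured label-skew (non-iid) partition across clients. The function name, the per-client label count, and the splitting rule are assumptions for illustration, not the repository's API.

```python
import numpy as np

def label_skew_split(labels, num_clients, labels_per_client=2, seed=0):
    """Illustrative non-iid split: each client only receives samples
    from a limited set of classes (label skew)."""
    rng = np.random.default_rng(seed)
    classes = np.unique(labels)
    client_indices = {c: [] for c in range(num_clients)}
    for c in range(num_clients):
        chosen = rng.choice(classes, size=labels_per_client, replace=False)
        for cls in chosen:
            idx = np.where(labels == cls)[0]
            # Give each client a random share of its chosen classes.
            take = rng.choice(idx, size=len(idx) // num_clients, replace=False)
            client_indices[c].extend(take.tolist())
    return client_indices

# Example: partition 10,000 MNIST-like labels across 20 clients.
fake_labels = np.random.randint(0, 10, size=10_000)
parts = label_skew_split(fake_labels, num_clients=20)
print({c: len(ix) for c, ix in list(parts.items())[:3]})
```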
The following table summarizes the results we obtained by comparing FLASH-RL with FedAVG and FAVOR, based on accuracy (%) and latency (s).
These results highlight the effectiveness of our method in striking a desirable balance between maximizing accuracy and minimizing end-to-end latency.
The following figure shows the progression of the F1 score for the global model and end-to-end latency for each MobiAct division.
The figure highlights FLASH-RL's ability to strike a compromise between maximizing the global model's F1-score and minimizing end-to-end latency.
FLASH-RL has been developed by Sofiane Bouaziz, Hadjer Benmeziane, Youcef Imine, Leila Hamdad, Smail Niar and Hamza Ouarnoughi.
You can contact us by opening a new issue in the repository.
In case you are using FLASH-RL for your research, please consider citing our work:
@INPROCEEDINGS{10361025,
author={Bouaziz, Sofiane and Benmeziane, Hadjer and Imine, Youcef and Hamdad, Leila and Niar, Smail and Ouarnoughi, Hamza},
booktitle={2023 IEEE 41st International Conference on Computer Design (ICCD)},
title={FLASH-RL: Federated Learning Addressing System and Static Heterogeneity using Reinforcement Learning},
year={2023},
volume={},
number={},
pages={444-447},
doi={10.1109/ICCD58817.2023.00074}}