Client Selection for Federated Learning with Spiking Neural Networks

Acknowledgements

Code adapted from Federated Learning with Spiking Neural Networks.

Environment

See environment.yml.

Client selection strategies

In models/client_selection.py, I implemented a few client selection strategies based on previous research papers.
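Each strategy takes the candidates' local training statistics and returns the subset of clients whose models get aggregated. As a rough illustration only (a hypothetical loss-based rule, not necessarily one of the strategies implemented in models/client_selection.py):

```python
import numpy as np

def select_by_loss(candidate_ids, train_losses, num_selected):
    """Hypothetical rule: keep the candidates with the highest local training
    loss, on the intuition that they benefit most from being aggregated."""
    order = np.argsort(train_losses)[::-1]  # indices sorted by descending loss
    return [candidate_ids[i] for i in order[:num_selected]]
```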

Candidate selection strategies

In models/candidate_selection.py, I implemented a few candidate selection strategies. Candidates are needed because most client selection algorithms rely on each client's local training information, but our simulated federated learning system cannot train a large number of clients in every round. We therefore first select roughly 20 candidates to train locally, and then select among them the clients whose models are uploaded and aggregated. The strategies are listed below, with a short sketch after the list.

  • Random: select candidates randomly.

  • Loop: select candidates in a loop, no candidate overlap in consecutive rounds.

  • Data amount: candidates with more data are more likely to be selected.

  • Reduce collision: candidates that were previously chosen become less likely to be chosen.
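A minimal sketch of how these four rules could be implemented; the function names and signatures are illustrative, not taken from models/candidate_selection.py. `rng` is a numpy.random.Generator, e.g. `np.random.default_rng(0)`.

```python
import numpy as np

def random_candidates(num_clients, num_candidates, rng):
    """Random: sample candidates uniformly without replacement."""
    return rng.choice(num_clients, size=num_candidates, replace=False)

def loop_candidates(num_clients, num_candidates, round_idx):
    """Loop: step through the client list so that consecutive rounds do not
    overlap (assuming the pool holds at least twice num_candidates clients)."""
    start = (round_idx * num_candidates) % num_clients
    return [(start + i) % num_clients for i in range(num_candidates)]

def data_amount_candidates(data_sizes, num_candidates, rng):
    """Data amount: selection probability proportional to local dataset size."""
    p = np.asarray(data_sizes, dtype=float)
    return rng.choice(len(data_sizes), size=num_candidates, replace=False, p=p / p.sum())

def reduce_collision_candidates(times_chosen, num_candidates, rng):
    """Reduce collision: clients chosen more often in the past get lower probability."""
    w = 1.0 / (1.0 + np.asarray(times_chosen, dtype=float))
    return rng.choice(len(times_chosen), size=num_candidates, replace=False, p=w / w.sum())
```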

Heterogeneous training and weighted FedAvg strategies

In heterogeneous.py, we explore the potential of performing federated learning with SNNs on heterogeneous devices. More specifically, we adjust the number of training timesteps to match each device's computing power, and when aggregating the models, we perform FedAvg with a different weight for each model.

The number of timesteps for each model is drawn using the timestep_mean and timestep_std arguments. I also implemented a few weighted FedAvg strategies (inside the script), listed here and sketched after the list:

  • timestep_prop: the weighting coefficient of the model is proportional to the number of timesteps used for training the model

  • timestep_inv: the weighting is inversely proportional to the number of timesteps

  • train_loss_prop: the weighting is proportional to the training loss

  • train_loss_inv: the weighting is inversely proportional to the training loss
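A minimal sketch of the timestep sampling and the weighted aggregation, assuming PyTorch state_dicts; the helper names are illustrative and not the exact ones used in heterogeneous.py.

```python
import copy
import numpy as np

def sample_timesteps(num_clients, timestep_mean, timestep_std, rng):
    """Draw a per-client timestep count from a normal distribution (rounded and
    clamped to at least 1), mirroring the timestep_mean / timestep_std arguments."""
    t = rng.normal(timestep_mean, timestep_std, size=num_clients)
    return np.clip(np.round(t), 1, None).astype(int)

def fedavg_coefficients(strategy, timesteps, train_losses):
    """Map a weighting strategy to normalized aggregation coefficients."""
    if strategy == "timestep_prop":
        w = np.asarray(timesteps, dtype=float)
    elif strategy == "timestep_inv":
        w = 1.0 / np.asarray(timesteps, dtype=float)
    elif strategy == "train_loss_prop":
        w = np.asarray(train_losses, dtype=float)
    elif strategy == "train_loss_inv":
        w = 1.0 / np.asarray(train_losses, dtype=float)
    else:  # plain FedAvg fallback: equal weights
        w = np.ones(len(timesteps))
    return w / w.sum()

def weighted_fedavg(state_dicts, coefficients):
    """Weighted average of the clients' model parameters (cast to float)."""
    avg = copy.deepcopy(state_dicts[0])
    for key in avg:
        avg[key] = sum(c * sd[key].float() for c, sd in zip(coefficients, state_dicts))
    return avg
```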

Experiments

single_model.py trains and evaluates a single model (without federated learning) on a portion of the total data, imitating the amount of local training data a federated client would have. When trying out a new set of hyperparameters, run this script first to separate a strategy's effect on local training from its effect on federated learning. test_single.sh contains an example that runs the script with arguments.
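For instance, imitating a client that holds only a small fraction of CIFAR-10 could look like the sketch below; the `data_fraction` name and the 5% value are illustrative, not the script's actual arguments.

```python
import numpy as np
from torch.utils.data import Subset
from torchvision import datasets, transforms

def local_sized_subset(dataset, data_fraction, seed=0):
    """Keep a random fraction of the dataset, roughly matching the amount
    of data a single federated client would hold."""
    rng = np.random.default_rng(seed)
    n = int(len(dataset) * data_fraction)
    idx = rng.choice(len(dataset), size=n, replace=False)
    return Subset(dataset, idx.tolist())

# Example: a client holding 5% of CIFAR-10
train_set = datasets.CIFAR10("./data", train=True, download=True,
                             transform=transforms.ToTensor())
small_train_set = local_sized_subset(train_set, data_fraction=0.05)
```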

client_experiment.py contains the components needed to compare client/candidate selection strategies. test_cifar10_clients.sh contains an example that runs the script with arguments.

heterogeneous.py contains the components needed to compare weighted FedAvg strategies in the heterogeneous training scenario. test_cifar_hetero.sh contains an example that runs the script with arguments.

Results - wandb projects

wandb project for client/candidate selection experiments

wandb project for heterogeneous training experiments
