
Deep-Federated-Learning

Federated Learning (FL) is a distributed machine learning process in which each participating node (or party) retains its data locally and interacts with the other participants via a learning protocol. One main driver behind FL is the need to not share data with others due to privacy and confidentiality concerns. Another driver is to improve the speed of training a machine learning model by leveraging other participants' training processes.

Setting up such a federated learning system requires establishing a communication infrastructure, converting machine learning algorithms to the federated setting, and, in some cases, understanding the intricacies of security- and privacy-enabling techniques such as differential privacy and multi-party computation.

Pre-Requisite

pip install --upgrade tensorflow-federated
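
To verify the installation, you can run TFF's canonical "hello world" computation. A minimal sketch, assuming a TensorFlow Federated release that exposes the federated_computation decorator at the top level (API locations have moved between TFF versions):

```python
import tensorflow_federated as tff

# The canonical TFF smoke test: a federated computation that returns a constant.
@tff.federated_computation
def hello_world():
    return 'Hello, World!'

# Should print b'Hello, World!' if the installation is working.
print(hello_world())
```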

Federated Learning (FL)


In the notebooks in this repository, we use IBM FL to have multiple parties train a classifier to recognise handwritten digits from the MNIST dataset.

For a more technical dive into IBM FL, refer to the whitepaper.

In the following cells, we set up each of the components of a Federated Learning network (see the figure below), wherein all involved parties contribute to training a shared classifier for handwritten digit recognition. This notebook defaults to 2 parties, but depending on your resources you may use more parties.

[Figure: FL network architecture]

Digit Recognition

[Figure: sample MNIST digit images]
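
Each party trains a local Keras classifier on its MNIST shard. A minimal sketch of such a model, assuming the standard 28x28 grayscale inputs and 10 digit classes (the exact architecture used in the keras_classifier notebooks may differ):

```python
import tensorflow as tf

def build_mnist_model():
    """A small dense classifier for 28x28 MNIST digits (illustrative only)."""
    model = tf.keras.Sequential([
        tf.keras.layers.Flatten(input_shape=(28, 28)),
        tf.keras.layers.Dense(128, activation='relu'),
        tf.keras.layers.Dense(10, activation='softmax'),
    ])
    model.compile(optimizer='adam',
                  loss='sparse_categorical_crossentropy',
                  metrics=['accuracy'])
    return model
```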

Running the Aggregator

Next, we pass the configuration parameters set in the previous cell to instantiate the Aggregator object. Finally, we start() the Aggregator process.

[Figure: aggregator and party architecture]
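
A minimal sketch of that flow, assuming the Aggregator class and module path used in the IBM FL tutorial notebooks; agg_config stands in for the configuration dictionary built in the previous cell:

```python
from ibmfl.aggregator.aggregator import Aggregator  # module path assumed from IBM FL examples

# agg_config: the configuration dictionary from the previous cell
# (connection settings, fusion handler, hyperparameters, protocol handler).
aggregator = Aggregator(config_dict=agg_config)

# Launch the aggregator process so that parties can register with it.
aggregator.start()
```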

Starting Parties

Now that we have the Aggregator running, we next go to the parties' notebooks (keras_classifier_p0.ipynb and keras_classifier_p1.ipynb) to start the parties and register them with the Aggregator. Once all the parties have completed registration, we move on to the next step and start training.
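
A hedged sketch of the party-side steps, assuming the Party class and registration calls used in the IBM FL tutorial notebooks; party_config stands in for the party's configuration dictionary:

```python
from ibmfl.party.party import Party  # module path assumed from IBM FL examples

# party_config: the party-side configuration (local MNIST data handler,
# local training handler, and the aggregator's network address).
party = Party(config_dict=party_config)

party.start()           # start the party's local process
party.register_party()  # register this party with the running Aggregator
```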

Training and Evaluation

Now that our network has been set up, we begin training the model by invoking the Aggregator's start_training() method.
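
In code, this is a single call on the running aggregator (a sketch, continuing the illustrative aggregator object from above):

```python
# Run the configured number of federated training rounds: in each round the
# parties train locally and the aggregator fuses their model updates.
aggregator.start_training()
```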

This could take some time, depending on your system specifications. Feel free to grab a cup of coffee in the meantime ☕
