# Federated Learning – Firefox addon
Federated Learning is a new subarea of machine learning where the training process is distributed among many users. Instead of sharing their data, users only send weight updates to the server.

This is the first draft of a Firefox addon that implements the client-side part of a Federated Learning system. Every time users perform searches in the awesome bar, the model's predictions are compared to the actual user behaviour and weight updates are computed. These updates are collected using Telemetry.
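The client-side update step can be sketched roughly as follows. This is a minimal illustration, not the addon's actual code: the linear model, the feature vector, and the learning rate are all assumptions made for the example.

```javascript
// Illustrative sketch of a federated client update (not the addon's code).
// A linear model scores a suggestion from its feature vector; after the user
// picks a suggestion, the client computes a gradient step on a squared loss
// and reports only the weight update, never the underlying data.

function score(weights, features) {
  return weights.reduce((sum, w, i) => sum + w * features[i], 0);
}

// One SGD step: the returned update is what would be sent via Telemetry.
function computeUpdate(weights, features, target, learningRate) {
  const error = score(weights, features) - target;
  return features.map(x => -learningRate * error * x);
}

const weights = [0.5, -0.2];
const update = computeUpdate(weights, [1, 2], 1.0, 0.1);
// Apply the update locally; the server only ever sees `update`.
const newWeights = weights.map((w, i) => w + update[i]);
```

The key property is that `update` depends on the raw interaction data only through the gradient, which is what makes collecting it less invasive than collecting the search history itself.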
## Installing the addon
- Go to `about:debugging`, click *Load Temporary Add-on* and select the built addon file
The addon was built for a beta version of Firefox.
The study has three variations:

- `treatment`: The full optimization process is performed, weights change after every iteration, and the ranking is recomputed
- `control`: Search works exactly the same way it currently does in Firefox; we only collect additional statistics
- `control-no-decay`: In the current algorithm, frecency scores decay over time. `treatment` loses this effect since scores are recomputed all the time. To check whether the decay is useful, and to make a fairer comparison, this variation only removes the decay effect
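For intuition, the decay mentioned above can be pictured as an exponential discount applied to old scores. The daily decay factor below is an assumption chosen for illustration, not Firefox's actual Places decay rate:

```javascript
// Illustrative exponential decay of a frecency score over time.
// `dailyDecay` is an assumed factor, not the real Places decay rate.
function decayedScore(score, daysSinceVisit, dailyDecay = 0.975) {
  return score * Math.pow(dailyDecay, daysSinceVisit);
}

// With this factor a score roughly halves after about 27 days.
const after30Days = decayedScore(100, 30);
```

In `treatment`, recomputing scores from scratch discards this discount, which is exactly the asymmetry that `control-no-decay` is meant to control for.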
After the study is installed, the variation can be changed by updating the `federated-learning.frecency.variation` pref in `about:config`. The new value needs to be one of the three variations listed above. After the pref is changed, the browser has to be restarted so that the change takes effect.
- `frecency`: For interacting with the `moz_places` table and recalculating / changing frecency scores
- `awesomeBar`: For observing interactions with the awesome bar. The required information for history / bookmark searches is retrieved (number of typed characters, selected suggestion, features of other suggestions)
- `prefs`: For reading and writing preferences. This is just used to update the weights
- `telemetry`: For sending back updates and meta information
- `shield-studies-addon-utils`: For study-related helpers
- `synchronization`: Everything related to the federated learning protocol. Currently that means sending weight updates back using Telemetry and reading the current model from S3
- `optimization`: For computing model updates
`studySetup` is adapted from `shield-studies-addon-utils` and configures the study.
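To make the `synchronization` module's role concrete, a weight-update ping might look like the following. This shape is purely hypothetical; the actual field names and payload structure used by the addon are not shown in this document:

```json
{
  "study": "federated-learning-frecency",
  "variation": "treatment",
  "update": [0.09, 0.18],
  "numTypedChars": 4,
  "selectedIndex": 0
}
```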
## Building the addon
```
$ npm run build
```
Depending on what the build is used for, it may still need to be signed by someone else.