
Nerlnet is a distributed machine learning framework for research and deployment of distributed machine learning clusters on IoT devices.
It is highly configurable and allows users to control software entities and allocate them across devices. Management is done through a convenient Python API that communicates with the Nerlnet cluster.
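As a rough illustration of that workflow, the sketch below shows how a Python script might push a distributed configuration to the MainServer and trigger experiment phases. The class, method, and endpoint names here (`NerlnetClient`, `send_dc`, `run_phase`) are illustrative assumptions only, not the actual Nerlnet Python API.

```python
# Hypothetical sketch of driving a Nerlnet experiment from Python.
# NerlnetClient, send_dc, and run_phase are illustrative names only;
# the real Nerlnet API exposes its own classes and endpoints.
import json


class NerlnetClient:
    """Minimal client that loads a distributed configuration (dc_.json)
    and asks the MainServer to run experiment phases."""

    def __init__(self, main_server_host: str, main_server_port: int):
        self.base_url = f"http://{main_server_host}:{main_server_port}"

    def send_dc(self, dc_path: str) -> dict:
        # Load the distributed configuration from disk. A real client
        # would then send it to the MainServer over HTTP (e.g. with
        # requests.post against the MainServer's REST endpoint).
        with open(dc_path) as f:
            return json.load(f)

    def run_phase(self, phase: str) -> None:
        # A real client would issue an HTTP request to the MainServer
        # here; this sketch only records the intent.
        print(f"requested phase: {phase}")


if __name__ == "__main__":
    client = NerlnetClient("127.0.0.1", 8080)      # assumed address
    client.send_dc("dc_example.json")              # hypothetical file name
    client.run_phase("training")
    client.run_phase("prediction")
```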

Nerlnet is based on the following open-source repositories and projects:
- Erlang: the programming language that implements the distributed system backbone of Nerlnet.
- Cowboy: an Erlang HTTP server.
- OpenNN: a neural network library implemented in C++.
- nifpp: a C++11 wrapper for the Erlang NIF API.

Nerlnet architecture:

The left side of the figure describes the communication between the Python ApiServer and Nerlnet's MainServer.
The ApiServer loads the distributed configuration of an experiment from a file (dc_.json) and sends it to Nerlnet's MainServer, which distributes it across the devices. Each device builds its entities and notifies the ApiServer, via the MainServer, that it is ready to run the experiment phases.
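To make the idea of a distributed configuration concrete, the following sketch writes out a minimal configuration as JSON. The field names used here (mainServer, devices, routers, sources, workers) are assumptions for illustration; the real dc_.json schema is defined by the Nerlnet project and may differ.

```python
# Illustrative-only sketch of a distributed configuration; field names
# and values are assumptions, not the actual dc_.json schema.
import json

dc = {
    "mainServer": {"host": "10.0.0.1", "port": 8080},
    "devices": [
        {"name": "device1", "ipv4": "10.0.0.2",
         "entities": ["router1", "source1", "worker1"]},
        {"name": "device2", "ipv4": "10.0.0.3",
         "entities": ["worker2"]},
    ],
    "routers": [{"name": "router1", "port": 8086}],
    "sources": [{"name": "source1", "port": 8087, "frequency": 50}],
    "workers": [
        {"name": "worker1", "model": "mlp_classifier"},
        {"name": "worker2", "model": "mlp_classifier"},
    ],
}

# Persist the configuration so an ApiServer-like client could load and
# forward it to the MainServer.
with open("dc_example.json", "w") as f:   # hypothetical file name
    json.dump(dc, f, indent=2)
```

Note how a single device can host several entities (a router, a source, and a worker on device1), which is the flexibility described for the figure below.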

On the right side of the figure are the MainServer and the distributed ML cluster, which consists of the entities defined in the JSON configuration file: Routers, Sources, and Workers (edge compute devices).
Red communication lines are dedicated to monitoring and statistics collection from the distributed ML cluster.
Blue arrows are the communication lines of the distributed network.
Distributed network components such as routers, edge compute units, and sensors (or data generators) can be deployed on any hardware, and even multiple components can share one device, depending on model complexity and the compute constraints of the edge device.