Efficient ML for Edge and Endpoint IoT Devices and Other Resource-constrained Scenarios.
Our objective is to develop a library of efficient machine learning algorithms that can run on severely resource-constrained edge and endpoint IoT devices ranging from the Arduino to the Raspberry Pi.
The Internet of Things (IoT) is poised to revolutionize our world. Billions of microcontrollers and sensors have already been deployed for predictive maintenance, connected cars, precision agriculture, personalized fitness and wearables, smart housing, smart cities, healthcare, and more. The dominant paradigm in these applications is that the IoT device is dumb: it just senses its environment and transmits the sensor readings to the cloud, where all the intelligence resides and the decision making happens.
We envision an alternative paradigm where even tiny, resource-constrained IoT devices can run machine learning algorithms locally without necessarily connecting to the cloud. This enables a number of critical scenarios, beyond the pale of the traditional paradigm, where it is not desirable to send data to the cloud due to concerns about latency, connectivity, energy, privacy and security.
We are therefore releasing tree-based and k-nearest-neighbour-based algorithms, called Bonsai and ProtoNN respectively, for classification, regression, ranking and other common IoT tasks. Bonsai and ProtoNN can be trained on the cloud or on a laptop, but can then make predictions locally on the tiniest of microcontrollers without needing cloud connectivity.
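To give a flavour of prototype-based prediction, here is a minimal sketch of a nearest-prototype classifier. This is an illustrative assumption, not ProtoNN's actual implementation: the real algorithm jointly learns a low-dimensional projection and sparse prototype weights, whereas this sketch simply assigns the label of the closest of K learned prototypes. All names, dimensions and parameters below are hypothetical.

```c
#include <stddef.h>

/* Illustrative sketch only: a nearest-prototype classifier in the spirit
 * of ProtoNN. The real library additionally learns a projection matrix
 * and prototype scores; here we just pick the closest prototype.
 * DIM and K are assumed values for the example. */

#define DIM 4   /* feature dimension (assumed)    */
#define K   3   /* number of prototypes (assumed) */

/* squared Euclidean distance between two feature vectors */
static float sq_dist(const float *a, const float *b)
{
    float d = 0.0f;
    for (size_t i = 0; i < DIM; i++) {
        float diff = a[i] - b[i];
        d += diff * diff;
    }
    return d;
}

/* return the label of the prototype closest to x */
int predict(const float x[DIM],
            const float prototypes[K][DIM],
            const int labels[K])
{
    int best = 0;
    float best_d = sq_dist(x, prototypes[0]);
    for (int k = 1; k < K; k++) {
        float d = sq_dist(x, prototypes[k]);
        if (d < best_d) {
            best_d = d;
            best = k;
        }
    }
    return labels[best];
}
```

Because prediction only touches the K prototypes rather than the full training set, the model footprint stays small enough for flash-constrained devices.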
We have deployed Bonsai and ProtoNN on the Arduino Uno (8-bit ATmega328P microcontroller operating at 16 MHz without floating point support, with 2 KB RAM and 32 KB read-only flash memory) and found that they can make accurate predictions within a few milliseconds. An introduction to the algorithms we have developed and instructions for using them can be found on our Algorithms page; detailed experiments and prediction cost-accuracy trade-offs can be found in our ICML 2017 papers.
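Since the ATmega328P has no floating-point unit, on-device prediction can be carried out in fixed-point integer arithmetic. The sketch below shows one common approach, a Q8.8 format (16-bit values with 8 fractional bits), applied to a linear score w·x + b; the format choice and helper names are assumptions for illustration, not the library's actual code.

```c
#include <stdint.h>

/* Illustrative sketch: fixed-point arithmetic for prediction on an MCU
 * without an FPU. Values are stored as Q8.8: a 16-bit integer whose low
 * 8 bits are the fractional part. This format is an assumption chosen
 * for the example, not necessarily what the library uses. */

typedef int16_t q8_8;                 /* Q8.8 fixed-point value        */
#define Q_SHIFT 8
#define TO_Q(x) ((q8_8)((x) * (1 << Q_SHIFT)))  /* float literal -> Q8.8 */

/* fixed-point multiply: widen to 32 bits, then rescale back */
q8_8 q_mul(q8_8 a, q8_8 b)
{
    return (q8_8)(((int32_t)a * (int32_t)b) >> Q_SHIFT);
}

/* linear score w.x + b computed entirely in integer arithmetic;
 * the 32-bit accumulator keeps full precision until the final shift */
q8_8 linear_score(const q8_8 *w, const q8_8 *x, q8_8 bias, int dim)
{
    int32_t acc = (int32_t)bias << Q_SHIFT;
    for (int i = 0; i < dim; i++)
        acc += (int32_t)w[i] * (int32_t)x[i];
    return (q8_8)(acc >> Q_SHIFT);
}
```

A multiply-accumulate like this compiles to a handful of integer instructions on the ATmega328P, which is why prediction completes in milliseconds even at 16 MHz.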