The ESP (Example-based Sensor Prediction) system uses expert examples to help users apply machine learning to a wide range of real-time sensor-based applications. Machine learning pipelines are specified in code using the Gesture Recognition Toolkit (GRT). From this code, ESP generates a user interface for iteratively collecting and refining training data and for tuning the pipeline to a particular project. (The interface is rendered with openFrameworks.)
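To give a feel for what "a pipeline specified in code" means, here is a minimal standalone sketch of the two stages such a pipeline typically chains together: a pre-processing filter followed by a classifier. This is not the GRT or ESP API — the class and method names below are illustrative only.

```cpp
#include <cstddef>
#include <deque>
#include <limits>
#include <vector>

// Pre-processing stage: smooth each incoming sensor value with a moving average.
class MovingAverage {
public:
    explicit MovingAverage(std::size_t windowSize) : windowSize_(windowSize) {}
    double filter(double x) {
        window_.push_back(x);
        if (window_.size() > windowSize_) window_.pop_front();
        double sum = 0.0;
        for (double v : window_) sum += v;
        return sum / window_.size();
    }
private:
    std::size_t windowSize_;
    std::deque<double> window_;
};

// Classification stage: assign a sample to the nearest trained class centroid.
class NearestCentroid {
public:
    void train(int label, const std::vector<double>& centroid) {
        labels_.push_back(label);
        centroids_.push_back(centroid);
    }
    int predict(const std::vector<double>& sample) const {
        int best = -1;
        double bestDist = std::numeric_limits<double>::max();
        for (std::size_t i = 0; i < centroids_.size(); ++i) {
            double d = 0.0;
            for (std::size_t j = 0; j < sample.size(); ++j) {
                double diff = sample[j] - centroids_[i][j];
                d += diff * diff;  // squared Euclidean distance
            }
            if (d < bestDist) { bestDist = d; best = labels_[i]; }
        }
        return best;
    }
private:
    std::vector<int> labels_;
    std::vector<std::vector<double>> centroids_;
};
```

In ESP, stages like these come from GRT rather than being written by hand; the point is that wiring them together in code is what defines the application, and the generated interface then handles collecting the training data the classifier stage needs.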
We've built some applications to demonstrate the possibilities of the ESP system:
- gesture recognition using an accelerometer
- speaker identification, i.e. using a microphone to tell who is talking
- color recognition, e.g. for recognizing different objects based on their color
- a Touché-like example for detecting the way someone is touching or holding an object
- walk detection using an accelerometer
We've also written up some notes on other possible applications.
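As a taste of the kind of logic behind one of these applications, here is a standalone sketch of a simple walk-detection approach: compute the magnitude of each 3-axis accelerometer sample and count upward threshold crossings as steps. The real ESP example is built from GRT modules; the names and threshold below are purely illustrative.

```cpp
#include <cmath>
#include <vector>

// One 3-axis accelerometer reading.
struct Sample { double x, y, z; };

// Count steps as upward crossings of a magnitude threshold.
// `threshold` is in the same units as the samples (e.g. g).
int countSteps(const std::vector<Sample>& samples, double threshold) {
    int steps = 0;
    bool above = false;
    for (const Sample& s : samples) {
        double mag = std::sqrt(s.x * s.x + s.y * s.y + s.z * s.z);
        if (!above && mag > threshold) {
            ++steps;        // rising edge: count one step
            above = true;
        } else if (above && mag <= threshold) {
            above = false;  // falling edge: re-arm the detector
        }
    }
    return steps;
}
```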
## Input and Output
ESP supports a variety of input and output streams, including serial connections to Arduino boards reading from sensors, TCP connections, and the laptop's microphone.
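To illustrate how a stream of sensor data can be turned into samples, here is a standalone sketch that parses one line of an ASCII stream — say, whitespace-separated accelerometer values printed by an Arduino over serial — into a vector of numbers. The function name is illustrative, not part of the ESP API.

```cpp
#include <sstream>
#include <string>
#include <vector>

// Parse one line of whitespace-separated numbers (e.g. "ax ay az")
// into a sample vector. Malformed trailing text is simply ignored.
std::vector<double> parseSampleLine(const std::string& line) {
    std::vector<double> values;
    std::istringstream iss(line);
    double v;
    while (iss >> v) values.push_back(v);
    return values;
}
```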
For installation instructions, see the main README.
Also, check out the API documentation.