TinyTrainable is an Arduino library and part of the project Tiny Trainable Instruments https://github.com/montoyamoraga/tiny-trainable-instruments, created by Aarón Montoya-Moraga, a research assistant in the Opera of the Future and Future Sketches research groups at the MIT Media Lab.
This library is being developed between December 2020 and August 2021, with help from undergraduate researchers Peter Tone and Maxwell Wang.
This Arduino library is available at the repository https://github.com/montoyamoraga/TinyTrainable, and can also be installed through the Arduino IDE.
The dependencies of this library are specified in the library.properties file, and include:
- Adafruit GFX Library: for output with a screen.
- Adafruit SSD1306: for output with a screen.
- Adafruit Thermal Printer Library: for output with the Adafruit Thermal Printer.
- Arduino_APDS9960: for the APDS9960 sensor, to read gestures, color, and proximity.
- Arduino_KNN: for machine learning with the k-Nearest Neighbors algorithm.
- Arduino_LSM9DS1: for the LSM9DS1 IMU sensor, to read its accelerometer, magnetometer, and gyroscope.
- Servo: for output with servo motors.
There is an additional dependency that must be installed manually:
- Arduino_TensorFlowLite: for machine learning with TensorFlow.
This repository has two branches:
- gh-pages: static website documentation generated with Doxygen, deployed at https://montoyamoraga.github.io/TinyTrainable.
- main: source code, code examples, and Arduino files for distribution.
The contents of the main branch are:
- assets/: additional assets for the library, including trained machine learning models.
- examples/: example Arduino sketches.
- src/: source code of the library.
- .gitignore
- CODE_OF_CONDUCT.md
- CONTRIBUTING.md
- Doxyfile
- keywords.txt
- library.properties
- LICENSE
- README.md
- README.pdf
The source code is distributed in the following files and folders:
- TinyTrainable.h and TinyTrainable.cpp: the base class of the library.
- inputs/: base classes for each input.
- outputs/: base classes for each output.
- tensorflow_speech/: additional code for speech recognition, authored by the TensorFlow team.
The examples are distributed in four folders, ordered alphabetically and by increasing complexity:
- check: no input, intended to check the wiring of the instruments.
- color: color input.
- gesture: gesture input.
- speech: speech input.
Each of these folders contains one example for each of the available outputs:
- buzzer
- led
- midi
- printer
- screen
- serial
- servo
There is one additional helper example called get_gesture_data, for capturing gesture data to build your own gesture database.
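As a rough illustration of what capturing gesture data involves (this is not the repository's actual get_gesture_data code, just a hedged sketch using the standard Arduino_LSM9DS1 API), an Arduino sketch can stream raw accelerometer readings over serial as CSV, to be saved on the computer and turned into a gesture database:

```cpp
// Illustrative only: stream raw accelerometer readings over serial,
// similar in spirit to the get_gesture_data helper example.
#include <Arduino_LSM9DS1.h>

void setup() {
  Serial.begin(9600);
  while (!Serial);  // wait for the serial monitor to open
  if (!IMU.begin()) {
    Serial.println("Failed to initialize IMU!");
    while (1);
  }
  Serial.println("aX,aY,aZ");  // CSV header
}

void loop() {
  float x, y, z;
  if (IMU.accelerationAvailable()) {
    IMU.readAcceleration(x, y, z);
    Serial.print(x); Serial.print(',');
    Serial.print(y); Serial.print(',');
    Serial.println(z);
  }
}
```

This sketch requires the Arduino Nano 33 BLE Sense (or another board with the LSM9DS1 IMU) and produces one CSV row per accelerometer sample.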
This library is intended to be used with the Arduino Nano 33 BLE Sense microcontroller, together with the bill of materials available at https://github.com/montoyamoraga/tiny-trainable-instruments/blob/main/docs/0-bill-of-materials.md
The releases of this library are:
- v0.0.1: 2020 December 07, placeholder alpha release, for testing the Arduino library ecosystem.
- v0.1.0: 2021 January 12, alpha release with first draft versions of each class, and some examples.
- v0.2.0: 2021 June 20, beta release for workshops.
This library is distributed under the MIT license.