Builds on the Deploy TinyML course on edX. I used the great tools listed below to build my own dataset; you can see my application of the machine learning part on the Arduino here.
- [TF]-Course-Baseline: A boiled-down version of the Custom Dataset notebook from the Deploying TinyML course. It leaves out all annotations and assumes that all data (including custom datasets) is prepared and in a unified location.
- [Keras]-Preprocess-Functional: Uses keras.utils.Sequence to build a dataset and the Keras Functional API to build a model. This allows easy adjustment of the data-loading process (including augmentation), rapid model development, and transfer learning. Most of the functionality is kept as in the TensorFlow speech commands example (in particular: input_data.py, models.py, train.py) but avoids TF1-style graphs and sessions.
  Note: For transfer learning you should use a model bigger than tiny_conv, because the number of parameters that are trained and carried over for fine-tuning is very small, so very little improvement is expected.
- [Keras]KeywordDataset_Demo: Demo of using KeywordDataset as a module.
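To illustrate the keras.utils.Sequence plus Functional API combination mentioned above, here is a minimal sketch. The class and function names (KeywordSequence, build_tiny_conv), the spectrogram shape (49, 40, 1), and the number of classes are my own assumptions for illustration, not the actual interfaces of the notebooks or the KeywordDataset module:

```python
import numpy as np
from tensorflow import keras

class KeywordSequence(keras.utils.Sequence):
    """Hypothetical loader: batches pre-computed spectrograms with labels."""

    def __init__(self, features, labels, batch_size=32, **kwargs):
        super().__init__(**kwargs)
        self.features = features      # assumed shape: (N, 49, 40, 1)
        self.labels = labels
        self.batch_size = batch_size

    def __len__(self):
        # Number of batches per epoch (last batch may be smaller).
        return int(np.ceil(len(self.features) / self.batch_size))

    def __getitem__(self, idx):
        sl = slice(idx * self.batch_size, (idx + 1) * self.batch_size)
        x = self.features[sl]
        # Augmentation hook: e.g. mix in background noise or shift in time here.
        return x, self.labels[sl]

def build_keyword_model(input_shape=(49, 40, 1), num_classes=4):
    """Small Functional-API convnet, loosely in the spirit of tiny_conv."""
    inputs = keras.Input(shape=input_shape)
    x = keras.layers.Conv2D(8, (10, 8), strides=(2, 2),
                            padding="same", activation="relu")(inputs)
    x = keras.layers.Dropout(0.5)(x)
    x = keras.layers.Flatten()(x)
    outputs = keras.layers.Dense(num_classes, activation="softmax")(x)
    return keras.Model(inputs, outputs)
```

Training would then look like `model.fit(KeywordSequence(train_x, train_y))`; because the Sequence owns the batching, augmentation can be changed without touching the model code.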
- Speech Commands: Collection of one-second .wav files of different keywords.
- Open Speech Recording: Record short audio clips as .ogg.
- Extract Loudest Section: Extract the loudest part of a .wav file.
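The idea behind the loudest-section tool can be sketched in plain NumPy: slide a fixed-length window over the signal and keep the window with the highest energy. This is only an illustration of the concept, not the tool's actual implementation, which operates directly on audio files:

```python
import numpy as np

def extract_loudest_section(samples, window_len):
    """Return the contiguous window of `window_len` samples with the
    highest total energy (sum of squared amplitudes)."""
    if len(samples) <= window_len:
        return samples
    energy = samples.astype(np.float64) ** 2
    # A cumulative sum lets us score every window in O(n) total.
    csum = np.concatenate(([0.0], np.cumsum(energy)))
    window_energy = csum[window_len:] - csum[:-window_len]
    start = int(np.argmax(window_energy))
    return samples[start:start + window_len]
```

For keyword audio this trims silence around the spoken word, so the one-second training clip is centered on the part that matters.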