Keyword spotting for Microcontrollers

This repository contains the TensorFlow models and training scripts used in the paper Hello Edge: Keyword Spotting on Microcontrollers. The scripts are adapted from the TensorFlow examples, and some are reproduced here to keep this repository self-contained.

To train a DNN with three fully-connected layers of 128 neurons each, run:

python train.py --model_architecture dnn --model_size_info 128 128 128 

The command-line argument --model_size_info passes the neural network layer dimensions (e.g. number of layers, neurons per layer, convolution filter sizes and strides) as a list to models.py, which builds the TensorFlow graph for the chosen model architecture and layer dimensions. For details on the meaning of model_size_info for each network architecture, see models.py. The training commands, with all the hyperparameters used to reproduce the models in the paper, are given in train_commands.txt.
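
The audio-feature and training-schedule options inherited from the TensorFlow speech commands example (e.g. learning rate schedule, number of training steps, MFCC and window settings) can also be set on the command line; see train.py for the full list of flags and their defaults. As an illustration only (the values and output directories below are placeholders, not the settings from the paper), a more complete DNN training command could look like:

python train.py --model_architecture dnn --model_size_info 128 128 128 \
  --dct_coefficient_count 10 --window_size_ms 40 --window_stride_ms 40 \
  --learning_rate 0.0005,0.0001,0.00002 --how_many_training_steps 10000,10000,10000 \
  --summaries_dir work/DNN/retrain_logs --train_dir work/DNN/training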

To evaluate a trained model from a checkpoint on the training, validation and test sets, run:

python test.py --model_architecture dnn --model_size_info 128 128 128 --checkpoint <checkpoint path>

To freeze the trained model checkpoint into a .pb file, run:

python freeze.py --model_architecture dnn --model_size_info 128 128 128 --checkpoint <checkpoint path> --output_file dnn.pb
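
The frozen dnn.pb can then be run on individual audio files with label_wav.py, in the same way as the pretrained models described below; make sure the labels file matches the words the model was trained on (the paths here are placeholders):

python label_wav.py --wav <audio file> --graph dnn.pb --labels <labels file> --how_many_labels 1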

Pretrained models

Trained models (.pb files) for the different neural network architectures evaluated in the arXiv paper (DNN, CNN, Basic LSTM, LSTM, GRU, CRNN and DS-CNN) are provided in Pretrained_models. The accuracy of each model on the validation set, its memory requirement and the number of operations per inference are also summarized in the following table.

To run an audio file through a trained model (e.g. a DNN) and get the top prediction, run:

python label_wav.py --wav <audio file> --graph Pretrained_models/DNN/DNN_S.pb --labels Pretrained_models/labels.txt --how_many_labels 1

Quantization Guide and Deployment on Microcontrollers

A quick guide on quantizing the KWS neural network models, along with example code for running a DNN model on a Cortex-M development board, is provided in the Deployment folder of this repository.
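
As a starting point, quant_test.py in this repository evaluates a quantized model from a checkpoint. Assuming its command-line interface mirrors test.py (check the script for the exact flags, including any quantization-specific options such as activation ranges), an invocation could look like:

python quant_test.py --model_architecture dnn --model_size_info 128 128 128 --checkpoint <checkpoint path>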
