
# NanoWakeWord

NanoWakeWord is a minimal C# port of the Python openWakeWord wake-word detection engine.

It runs efficiently on any platform that supports .NET Standard 2.0, including Windows, Linux, and the Raspberry Pi (including the Zero 2/2W, linux-arm64).

## Dependencies

It has only one external library dependency: Microsoft.ML.OnnxRuntime.
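If you are building a project against the NanoWakeWord sources, the ONNX Runtime dependency can be restored with the standard `dotnet` CLI (the project path below is illustrative):

```shell
# Add the ONNX Runtime NuGet package to your project
dotnet add MyApp.csproj package Microsoft.ML.OnnxRuntime
```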

## Sample Code

Note: the sample code uses the PvRecorder library to provide the sound recording functionality.

```csharp
using NanoWakeWord;
using Pv;

var runtime = new WakeWordRuntime(new WakeWordRuntimeConfig
{
    DebugAction = (model, probability, detected) =>
    {
        if (detected)
            Console.WriteLine($"*** {model} {probability:F5}");
        else
            Console.WriteLine($"{model} {probability:F5}");
    },
    WakeWords = [ new WakeWordConfig { Model = "hey_marvin_v0.1", Threshold = 0.9f } ]
});

using var recorder = PvRecorder.Create(frameLength: 512);
recorder.Start();

Console.WriteLine($"Using recording device: {recorder.SelectedDevice}");

Console.WriteLine("Listening for wake word.");
while (recorder.IsRecording)
{
    var frame = recorder.Read();

    var result = runtime.Process(frame);
    if (result >= 0)
    {
        Console.WriteLine($"Detected wake word at index: #{result}.");
    }
}
```

## Training Custom Wake-Word Models

NanoWakeWord comes with embedded wake-word models from the openWakeWord port: `alexa`, `hey_jarvis`, `hey_marvin`, `hey_mycroft`.

By following the openWakeWord project's training instructions, you can train custom models and use them in NanoWakeWord just as you would in openWakeWord.
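As a sketch of what plugging in a custom model alongside an embedded one might look like (the model name `my_model` is hypothetical, and whether `Model` accepts arbitrary custom model names is an assumption to verify against the sources):

```csharp
using NanoWakeWord;

// Sketch only: "my_model" stands in for a custom model trained with the
// openWakeWord pipeline; consult the NanoWakeWord sources for how custom
// model files are resolved.
var runtime = new WakeWordRuntime(new WakeWordRuntimeConfig
{
    WakeWords =
    [
        new WakeWordConfig { Model = "hey_marvin_v0.1", Threshold = 0.9f }, // embedded model, index 0
        new WakeWordConfig { Model = "my_model", Threshold = 0.9f }         // custom model (assumed), index 1
    ]
});
```

As in the sample above, the value returned by `Process` is the index of the detected wake word within the `WakeWords` array.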

### Training Models Locally Using Podman and Python Scripts

To facilitate the training process, the scripts folder contains Python scripts for automating model training using a Podman container.

Start the Podman Linux container (note: you will need to enable CUDA GPU support in Podman):

```shell
podman run --gpus=all --shm-size=50G -p 127.0.0.1:9000:8080 us-docker.pkg.dev/colab-images/public/runtime
```
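One way to copy the local scripts into the container's `/content` directory is `podman cp` (the container name below is illustrative; use `podman ps` to find yours):

```shell
# List running containers to find the container name or ID
podman ps
# Copy the contents of the local scripts folder into /content
podman cp scripts/. my-colab-runtime:/content/
```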

Copy the scripts to the /content directory and run the Python scripts in this order:

```shell
python setup_environment.py
python download_data.py
```

Edit `train_model.py` as needed, then run:

```shell
python train_model.py
```
