PetMentor: a TinyML program for an automatic dog clicker

Clicker training, or mark and reward, is a form of positive-reinforcement dog training. A clicker is simply a small mechanical noisemaker. The technique is based on the science of animal learning, which says that behaviors that are rewarded are more likely to be repeated in the future. In our application, moves like sit, jump, and rollover are indicated with different lights and buzzer tones for feedback. Whenever our pet performs the desired trick, a buzzer tone is played to reinforce the behavior; cookies and treats can also be brought into play.


Source: American Kennel Club, "Mark & Reward: Using Clicker Training to Communicate With Your Dog"

Building Instructions

The smart dog collar is based on the Arduino Nano 33 BLE Sense development board, which has a processor fast enough for onboard DSP algorithms and can also run TensorFlow Lite binaries. The idea is very simple: the first part of the device runs a voice command recognition program that listens for six pet training commands: sleep, jump, sit, roll, stop, and run. We use Edge Impulse Studio to train the neural network and produce an optimized binary for the device.
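At a high level, the firmware is a simple listen-classify-signal loop. The skeleton below is only a sketch of that structure; classifyVoiceCommand() and signalFeedback() are hypothetical placeholders for the inference and feedback code covered later in this README.

```cpp
// Skeleton of the collar firmware's main loop, assuming hypothetical
// helpers for the two stages described above.
const char *classifyVoiceCommand();    // keyword spotting on mic audio
void signalFeedback(const char *cmd);  // light, buzzer tone, vibration

void setup() {
  Serial.begin(115200);  // for debug output
}

void loop() {
  const char *cmd = classifyVoiceCommand();
  if (cmd != nullptr) {
    signalFeedback(cmd);
  }
}

// Stub implementations so this skeleton compiles on its own; the real
// versions come from the Edge Impulse library and the feedback sketch below.
const char *classifyVoiceCommand() { return nullptr; }
void signalFeedback(const char *) {}
```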


The first step is collecting .wav sound files of utterances of those commands for training in Edge Impulse Studio. If you already have such a voice command dataset, you can skip streaming data from the Arduino Nano 33 BLE Sense and upload the files directly with the Edge Impulse data uploader. Note: due to time and resource limitations, we are only training the device to recognize two commands out of the six: stop and go.

This is how your uploaded data looks in Edge Impulse Studio.

Creating the impulse to train a neural network on the data.

With the data set in place you can design an impulse. An impulse takes the raw data, slices it up into smaller windows, uses signal processing blocks to extract features, and then uses a learning block to classify new data. Signal processing blocks always return the same values for the same input and are used to make raw data easier to process, while learning blocks learn from past experiences. For this tutorial we'll use the "MFCC" signal processing block. MFCC stands for Mel Frequency Cepstral Coefficients. We'll then pass this simplified audio data into a Neural Network block, which will learn to distinguish between the three classes of audio.
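For reference, here is a minimal sketch of how the exported impulse can be run on the device with the Edge Impulse Arduino library. The header name is an assumption (Edge Impulse names it after the project), and the audio buffer is assumed to be filled elsewhere by the PDM microphone callback.

```cpp
// Minimal Edge Impulse inference sketch, assuming the Arduino library
// exported from Edge Impulse Studio is installed. The header name below
// follows the project name and is an assumption.
#include <PetMentor_inferencing.h>

// One window of raw audio, assumed to be filled by the PDM mic callback.
static float features[EI_CLASSIFIER_RAW_SAMPLE_COUNT];

// Callback the SDK uses to pull raw audio out of our buffer.
static int get_signal_data(size_t offset, size_t length, float *out_ptr) {
  memcpy(out_ptr, features + offset, length * sizeof(float));
  return 0;
}

void classifyWindow() {
  signal_t signal;
  signal.total_length = EI_CLASSIFIER_RAW_SAMPLE_COUNT;
  signal.get_data = &get_signal_data;

  ei_impulse_result_t result = { 0 };
  // run_classifier() applies the MFCC block and the neural network.
  if (run_classifier(&signal, &result, false) != EI_IMPULSE_OK) return;

  // Print the score for each class (e.g. stop, go, noise).
  for (size_t i = 0; i < EI_CLASSIFIER_LABEL_COUNT; i++) {
    Serial.print(result.classification[i].label);
    Serial.print(": ");
    Serial.println(result.classification[i].value);
  }
}
```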

Training the model for 100 epochs.

The model accuracy comes out around 95%, which is certainly enough to proceed with our PetMentor features.
Programming buzzer tone and vibration motor feedback for the spoken commands into the application.

Note: This is just test code; our actual version will combine voice command recognition and activity recognition, so that if the pet performs the desired activity based on the user's given command, a specific sound and vibration rhythm is generated to help pets learn faster. We have successfully compiled the program as well.
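As a rough sketch of what such a test program can look like, the code below plays a distinct buzzer tone and vibration pattern for the two trained commands. The pin numbers, frequencies, and patterns are assumptions; adjust them to match your wiring and preferences.

```cpp
// Test sketch for the feedback hardware: a piezo buzzer and a vibration
// motor. BUZZER_PIN and MOTOR_PIN are assumed pin numbers.
const int BUZZER_PIN = 2;
const int MOTOR_PIN  = 3;

void setup() {
  pinMode(BUZZER_PIN, OUTPUT);
  pinMode(MOTOR_PIN, OUTPUT);
}

// Play a distinct tone and vibration pattern for each recognized command.
void markCommand(const char *label) {
  if (strcmp(label, "stop") == 0) {
    tone(BUZZER_PIN, 1000, 200);       // single 1 kHz beep, 200 ms
    digitalWrite(MOTOR_PIN, HIGH);     // one long vibration pulse
    delay(400);
    digitalWrite(MOTOR_PIN, LOW);
  } else if (strcmp(label, "go") == 0) {
    tone(BUZZER_PIN, 2000, 200);       // higher 2 kHz beep for "go"
    for (int i = 0; i < 2; i++) {      // two short vibration pulses
      digitalWrite(MOTOR_PIN, HIGH);
      delay(100);
      digitalWrite(MOTOR_PIN, LOW);
      delay(100);
    }
  }
}

void loop() {
  // In the full application, markCommand() would be called with the label
  // of the top-scoring class from the voice command classifier.
}
```

In the combined version described above, markCommand() would fire only after the activity recognition confirms the pet performed the commanded trick.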

Here's the link to our PetMentor voice command classification project in Edge Impulse Studio: Edge Impulse PetMentor voice command training