This is a stripped-down version of the voixen-vad library (https://github.com/voixen/voixen-vad). Thank you very much!
Voice Activity Detection library
Voice Activity Detection based on the method used in the upcoming WebRTC HTML5 standard. Extracted from Chromium for stand-alone use as a library.
Supported sample rates are:
- 8000Hz
- 16000Hz*
- 32000Hz
- 48000Hz
*recommended sample rate for best performance/accuracy tradeoff
Create a new VAD object using the given mode.
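For example, a minimal sketch, assuming the package has been installed under the name node-vad (the package name is not stated in this document, so adjust it to your setup):

const VAD = require("node-vad"); // assumed package name, adjust to your installation
const vad = new VAD(VAD.Mode.NORMAL); // NORMAL is also used when mode is omitted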
Analyse the given samples (a Buffer containing 16-bit signed integer values) and report the detected voice event via the returned promise.
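A small sketch, reusing the vad instance from above; the all-zero buffer simply stands in for real captured audio:

// A 100ms chunk of 16-bit signed PCM at 16000Hz is 1600 samples, i.e. 3200 bytes.
const chunk = Buffer.alloc(3200); // placeholder for real audio data
vad.processAudio(chunk, 16000)
    .then(event => console.log("event code:", event)) // one of the VAD.Event codes listed below
    .catch(console.error);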
Analyse the given samples (a Buffer containing 32-bit normalized float values) and report the detected voice event via the returned promise.
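A sketch of the float variant, assuming it is exposed as processAudioFloat (the method name is not given above and mirrors processAudio here):

// 100ms of 32-bit normalized float samples at 16000Hz: 1600 samples, 4 bytes each.
const floatChunk = Buffer.alloc(1600 * 4);
for (let i = 0; i < 1600; i++) {
    // Fill with a quiet 440Hz tone; real code would copy captured float samples instead.
    floatChunk.writeFloatLE(0.1 * Math.sin(2 * Math.PI * 440 * i / 16000), i * 4);
}
vad.processAudioFloat(floatChunk, 16000) // method name assumed, mirroring processAudio
    .then(event => console.log("event code:", event))
    .catch(console.error);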
Create a stream for voice activity detection. It accepts an options object of the following form:
{
    mode: VAD.Mode.NORMAL,   // VAD mode, see below
    audioFrequency: 16000,   // Audio frequency, see the supported sample rates above
    debounceTime: 1000       // Time for debouncing the speech-active state, default 1 second
}
Each data chunk emitted by the stream has the following form:
{
    time: 14520,             // Current seek time in the audio
    audioData: <Buffer>,     // Original audio data
    speech: {
        state: true,         // Current state of speech
        start: false,        // True on the chunk in which speech starts
        end: false,          // True on the chunk in which speech ends
        startTime: 12360,    // Time when speech started
        duration: 2160       // Duration of the current speech block
    }
}
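As a sketch, these fields can be used to react only to the start and end of a speech block (the package name node-vad is assumed, as in the examples below):

const fs = require("fs");
const VAD = require("node-vad"); // assumed package name, adjust to your installation

const vadStream = VAD.createStream({
    mode: VAD.Mode.NORMAL,
    audioFrequency: 16000,
    debounceTime: 1000
});

fs.createReadStream("demo_pcm_s16_16000.raw")
    .pipe(vadStream)
    .on("data", chunk => {
        if (chunk.speech.start) {
            console.log("speech started, startTime =", chunk.speech.startTime);
        }
        if (chunk.speech.end) {
            console.log("speech ended, duration =", chunk.speech.duration);
        }
    });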
Event codes are passed to the promises returned by processAudio.
Constant for voice detection errors.
Constant for voice detection results with no detected voices.
Constant for voice detection results with detected voice.
Constant for voice detection results with detected noise. Not implemented yet.
These constants can be used as the mode parameter of the VAD constructor to configure the VAD algorithm.
Constant for normal voice detection mode. Suitable for high-bitrate, low-noise data. May classify noise as voice, too. This is the default if mode is omitted in the constructor.
Detection mode optimised for low-bitrate audio.
Detection mode best suited for somewhat noisy, lower quality audio.
Detection mode with lowest miss-rate. Works well for most inputs.
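The descriptions above map onto constants on VAD.Mode. Only NORMAL is named explicitly in this document; the other names below are assumed to follow the upstream voixen-vad/WebRTC naming and should be checked against the installed version:

// Pick the mode that matches the material (names other than NORMAL are assumed):
const cleanVad   = new VAD(VAD.Mode.NORMAL);          // high-bitrate, low-noise data
const lowBitrate = new VAD(VAD.Mode.LOW_BITRATE);     // optimised for low-bitrate audio
const noisyVad   = new VAD(VAD.Mode.AGGRESSIVE);      // somewhat noisy, lower quality audio
const lowMissVad = new VAD(VAD.Mode.VERY_AGGRESSIVE); // lowest miss-rate, most inputs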
The library is designed with input streams in mind: sample buffers fed to processAudio should be rather short (36ms to 144ms, depending on your needs) and the sample rate no higher than 32kHz. Sample rates higher than 16kHz provide no benefit to the VAD algorithm, as human voice patterns center around 4000 to 6000Hz. Minding the Nyquist frequency, this yields sample rates between 8000 and 12000Hz for best results.
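As a worked example, at 16000Hz a 100ms chunk of 16-bit mono PCM is 16000 samples/s * 0.1 s * 2 bytes = 3200 bytes, so a read stream can be asked to deliver chunks of roughly that size (reusing the fs import and vad instance from the sketches above):

// highWaterMark controls the chunk size (in bytes) delivered by the read stream.
const framedStream = fs.createReadStream("demo_pcm_s16_16000.raw", { highWaterMark: 3200 });
framedStream.on("data", chunk => {
    vad.processAudio(chunk, 16000)
        .then(event => console.log("event code:", event))
        .catch(console.error);
});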
See the examples folder for working examples with a sample audio file.
const fs = require("fs");
const VAD = require("node-vad"); // assumed package name, adjust to your installation

const vad = new VAD(VAD.Mode.NORMAL);

const stream = fs.createReadStream("demo_pcm_s16_16000.raw");
stream.on("data", chunk => {
    vad.processAudio(chunk, 16000).then(res => {
        switch (res) {
            case VAD.Event.ERROR:
                console.log("ERROR");
                break;
            case VAD.Event.NOISE:
                console.log("NOISE");
                break;
            case VAD.Event.SILENCE:
                console.log("SILENCE");
                break;
            case VAD.Event.VOICE:
                console.log("VOICE");
                break;
        }
    }).catch(console.error);
});
const fs = require("fs");
const VAD = require("node-vad"); // assumed package name, adjust to your installation

const inputStream = fs.createReadStream("demo_pcm_s16_16000.raw");
const vadStream = VAD.createStream({
    mode: VAD.Mode.NORMAL,
    audioFrequency: 16000,
    debounceTime: 1000
});

inputStream.pipe(vadStream).on("data", console.log);