An official implementation of the ICASSP 2023 paper: SG-VAD: Stochastic Gates Based Speech Activity Detection
2024-01-03: Removed the hard-coded dependency on exactly 36 labels. You can now define any number of labels, and the code will support it.
- EER = 10.40%
- TPR @ FPR = 0.315: 0.96
- ROC AUC = 0.95

- EER = 23.29%
- TPR @ FPR = 0.315: 0.91
- ROC AUC = 0.83
To train a new model, prepare the following dataset:
1. A manifest file with noise audio file paths (the same duration as the spoken-word audios), labeled "background".
2. A manifest file with spoken-word audio file paths, each word labeled with one category.
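For reference, a NeMo speech-classification manifest is a JSON-lines file where each line carries an audio path, its duration, and a label. A minimal sketch of building the two manifests above (the file names and durations here are illustrative, not from the repo):

```python
import json

# Illustrative clips -- replace paths and durations with your own data.
background_clips = [("noise/park_001.wav", 0.63), ("noise/cafe_002.wav", 0.63)]
word_clips = [("words/yes_001.wav", 0.63, "yes"), ("words/stop_004.wav", 0.63, "stop")]

with open("background_manifest.json", "w") as f:
    for path, dur in background_clips:
        # Noise clips all share the "background" label and match the word-clip durations.
        f.write(json.dumps({"audio_filepath": path, "duration": dur, "label": "background"}) + "\n")

with open("words_manifest.json", "w") as f:
    for path, dur, label in word_clips:
        # Each spoken word gets its own category label.
        f.write(json.dumps({"audio_filepath": path, "duration": dur, "label": label}) + "\n")
```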
Please note that, as described in the paper, the main VAD model is trained as a mask/filter, as shown in the following schema:
Once the training is finished, the final model architecture is:
- Prepare your dataset in the manifest format supported by NeMo.
- Update the config file with your paths and hyper-parameters.
- Install the NeMo requirements.
- Run the `train.py` script.
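Before launching training, it can help to sanity-check that every manifest line parses and carries the fields NeMo expects. The validator below is my own sketch, not part of the repo:

```python
import json
import os
import tempfile

REQUIRED = {"audio_filepath", "duration", "label"}

def validate_manifest(path):
    """Return the sorted set of labels found; raise on a malformed line."""
    labels = set()
    with open(path) as f:
        for lineno, line in enumerate(f, 1):
            record = json.loads(line)  # each line must be a standalone JSON object
            missing = REQUIRED - record.keys()
            if missing:
                raise ValueError(f"{path}:{lineno} missing fields {sorted(missing)}")
            labels.add(record["label"])
    return sorted(labels)

# Demo on a tiny temporary manifest.
with tempfile.NamedTemporaryFile("w", suffix=".json", delete=False) as tmp:
    tmp.write(json.dumps({"audio_filepath": "a.wav", "duration": 0.63, "label": "background"}) + "\n")
labels = validate_manifest(tmp.name)
os.unlink(tmp.name)
```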
- We publish a pre-trained PyTorch checkpoint (`sgvad.pth`).
- To use the published checkpoint as-is, you need to calibrate a threshold for the model output: all values under the threshold are predicted as non-speech.
- The default threshold is 3.5, but it may be too aggressive for your application.
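A minimal sketch of how such a threshold can be applied to per-frame output scores, merging consecutive speech frames into segments. The scores and the segment-merging helper are illustrative assumptions, not the repo's `sgvad.py` logic:

```python
def frames_to_segments(scores, threshold=3.5):
    """Frames with score >= threshold count as speech; merge consecutive
    speech frames into (start_frame, end_frame) pairs, end exclusive."""
    segments, start = [], None
    for i, s in enumerate(scores):
        if s >= threshold and start is None:
            start = i                       # a speech run begins
        elif s < threshold and start is not None:
            segments.append((start, i))     # the run just ended
            start = None
    if start is not None:                   # run extends to the last frame
        segments.append((start, len(scores)))
    return segments

# Toy per-frame scores; everything under 3.5 is predicted as non-speech.
scores = [0.2, 1.0, 4.1, 5.0, 3.9, 0.5, 0.1, 6.2, 7.0]
print(frames_to_segments(scores))  # → [(2, 5), (7, 9)]
```

Lowering `threshold` trades more detected speech for more false positives, which is why calibrating it on your own data is recommended above.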
- To try it on test audios, run `python sgvad.py`.
We thank the NeMo team for their great open-source repo.
@inproceedings{svirsky2023sg,
title={SG-VAD: Stochastic Gates Based Speech Activity Detection},
author={Svirsky, Jonathan and Lindenbaum, Ofir},
booktitle={ICASSP 2023-2023 IEEE International Conference on Acoustics, Speech and Signal Processing (ICASSP)},
pages={1--5},
year={2023},
organization={IEEE}
}