The Google command set has a lot of bad samples #4
That looks really interesting! You're definitely right about the quality of the dataset: it's pretty bad, but there don't seem to be many options around. I think you're right that creating a good, clean dataset for a wake word would improve things considerably, and picking a sensible wake word with more syllables/phonemes, as you suggest, would also be very sensible. I'll have to take a look at HMG and do some testing. Thanks for doing this research!
Maybe we could share a spreadsheet or database of cleaned datasets with regional/gender metadata. As said, it is quite easy to create a word-timing transcript; I've been searching for a word extractor to save the dev work, as I'm guessing one exists somewhere that can process voice audio. It's surprising that, apart from the Google command set, there seems to be an absence of 'keyword' datasets, but they can be extracted from ASR ones.
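Once an aligner or ASR tool has produced word-level timings, the extraction step itself is simple. A minimal sketch using Python's stdlib `wave` module (file names and timings here are placeholders, not from any specific dataset):

```python
import wave

def extract_word(in_path, out_path, start_s, end_s):
    """Cut one word out of a longer recording, given the start/end
    times (in seconds) reported by a word-level ASR transcript or
    forced aligner. Assumes a PCM WAV input."""
    with wave.open(in_path, "rb") as w:
        rate = w.getframerate()
        params = w.getparams()
        w.setpos(int(start_s * rate))          # seek to word start
        frames = w.readframes(int((end_s - start_s) * rate))
    with wave.open(out_path, "wb") as out:
        out.setparams(params)                  # nframes fixed on close
        out.writeframes(frames)

# e.g. extract_word("utterance.wav", "marvin.wav", 1.32, 1.87)
```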
I loaded up HMG again; it's still a work in progress, but the majority of it works.
The Google Command Set has approximately 10% badly cut, trimmed, and padded words. There are two versions of the command set; the one I used was v2.0, but I presume both are similar in terms of bad samples.
I was playing with https://github.com/linto-ai/linto-desktoptools-hmg, which allows you to test your trained data and play back failures.
I was shocked at how many bad audio files are in the command set and how much they can affect accuracy.
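For anyone wanting to pre-screen the set for obviously bad clips before training, a minimal sketch along these lines can flag files that are far too short or were cut mid-word (assuming 16-bit mono PCM and the set's nominal one-second clips; the thresholds are illustrative, not tuned):

```python
import wave
import struct

def check_sample(path, expected_seconds=1.0, silence_thresh=500):
    """Return a list of problems found in one WAV clip: much shorter
    than expected, or loud right at the start/end (likely cut
    mid-word rather than padded with silence)."""
    with wave.open(path, "rb") as w:
        rate = w.getframerate()
        n = w.getnframes()
        samples = struct.unpack("<%dh" % n, w.readframes(n))

    problems = []
    duration = n / rate
    if duration < expected_seconds * 0.5:
        problems.append("much shorter than expected (%.2fs)" % duration)

    head = samples[: rate // 100]      # first 10 ms
    tail = samples[-(rate // 100):]    # last 10 ms
    if head and max(abs(s) for s in head) > silence_thresh:
        problems.append("audio starts abruptly (likely cut)")
    if tail and max(abs(s) for s in tail) > silence_thresh:
        problems.append("audio ends abruptly (likely cut)")
    return problems
```

Anything this flags still needs a listen, which is exactly the kind of review the HMG GUI makes easy.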
I was using v2.0, as said, and used the word "visualise" as it has three syllables; "Marvin" obviously has two, but more is better.
With HMG I played back the false positives and negatives, and practically all of them were junk.
So I deleted them and reran many times, and ended up deleting about 10% of the "visualise" samples and a lot of random junk files.
After I did this my recognition accuracy improved massively and the false negatives/positives dropped really low.
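One practical way to run that delete-and-retrain loop is to quarantine the flagged files rather than delete them outright, so each pass can be reviewed and reverted. A small sketch (directory and file names are placeholders):

```python
import shutil
from pathlib import Path

def quarantine(flagged, dataset_dir, quarantine_dir):
    """Move samples flagged as false positives/negatives out of the
    training set into a quarantine folder for later review.
    Returns the names of the files actually moved."""
    qdir = Path(quarantine_dir)
    qdir.mkdir(parents=True, exist_ok=True)
    moved = []
    for name in flagged:
        src = Path(dataset_dir) / name
        if src.exists():
            shutil.move(str(src), str(qdir / src.name))
            moved.append(src.name)
    return moved

# e.g. quarantine(["visualise/0a2b.wav"], "speech_commands", "quarantine")
```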
The Linto HMG is again just TensorFlow, but the GUI is really good for capturing those false positives/negatives and listening to see whether a sample is likely bad.
"Hey Marvin" would have been far better; as said, the more phonemes and the more unique, the better.
With DeepSpeech or Kaldi you can output a transcript of word occurrences in a sample, and with SoX I'm guessing you could grab a "hey" from somewhere and tack it onto "marvin" with a bit of code.
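As a sketch of that "tack it onto" step, here is one way to splice two clips with Python's stdlib `wave` module instead of SoX (assuming the clips share a sample rate and sample width; the file names are illustrative):

```python
import wave

def concat_wavs(paths, out_path):
    """Concatenate mono PCM WAV clips (e.g. a 'hey' clip followed by
    a 'marvin' clip) into a single output file."""
    params = None
    frames = b""
    for p in paths:
        with wave.open(p, "rb") as w:
            if params is None:
                params = w.getparams()  # take format from first clip
            frames += w.readframes(w.getnframes())
    with wave.open(out_path, "wb") as w:
        w.setparams(params)             # nframes corrected on close
        w.writeframes(frames)

# e.g. concat_wavs(["hey.wav", "marvin.wav"], "hey_marvin.wav")
```

A real pipeline would also want to trim or crossfade the join so the result doesn't sound spliced, which is where SoX's fade/pad effects would help.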
Apart from the Google command set I don't know of another word dataset; they all seem to be ASR sentence datasets, but with the approach above you could extract words after running transcript output from DeepSpeech or Kaldi.
https://github.com/jim-schwoebel/voice_datasets
I am not sure that adding large quantities of words in a much bigger dataset actually increases accuracy enough to justify the work entailed in making sure what you feed it is good.
I really do suggest you give HMG or some other tool a try and delete the dross out of the Google Command Set, as I think you will be surprised how much effect bad samples can have on results.