The Google command set has a lot of bad samples #4

Closed
StuartIanNaylor opened this issue Nov 9, 2020 · 3 comments
Comments

@StuartIanNaylor

StuartIanNaylor commented Nov 9, 2020

Roughly 10% of the words in the Google Speech Commands dataset are badly cut, trimmed, or padded. There are two versions of the dataset; the one I used was v2.0, but I presume both are similar in terms of bad samples.
I was playing with https://github.com/linto-ai/linto-desktoptools-hmg, which lets you test your trained model and play back the failures.
I was shocked at how many bad audio files are in the command set and how much they can affect accuracy.
As said, I was using v2.0 and picked the word "visualise" because it has three syllables; "marvin" obviously has two, and more is better.
With HMG I played back the false positives and negatives, and practically all of them were junk.
So I deleted them and reran training many times, eventually removing about 10% of the "visualise" samples along with a lot of random junk files.
After I did this my recognition accuracy improved massively and the false negatives/positives dropped to very low levels.
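Not from the original thread, but here is a minimal sketch of the kind of automated sanity check that could flag some of these badly cut or padded clips before you listen to them by hand; it assumes the standard 16 kHz mono Speech Commands layout, and the directory name and thresholds are illustrative only:

```python
# Rough sanity check for Speech Commands clips: flag files that are much
# shorter than the nominal 1 s, or whose speech energy sits hard against
# the start or end of the file (a sign the word was clipped when cut).
import numpy as np
import soundfile as sf
from pathlib import Path

SAMPLE_RATE = 16000                     # Speech Commands clips are 16 kHz mono
MIN_SAMPLES = int(0.5 * SAMPLE_RATE)    # flag clips shorter than 0.5 s
EDGE = int(0.02 * SAMPLE_RATE)          # 20 ms window at each edge

def looks_suspect(path: Path) -> bool:
    audio, sr = sf.read(path)
    if sr != SAMPLE_RATE or len(audio) < MIN_SAMPLES:
        return True
    rms = np.sqrt(np.mean(audio ** 2)) + 1e-9
    start_rms = np.sqrt(np.mean(audio[:EDGE] ** 2))
    end_rms = np.sqrt(np.mean(audio[-EDGE:] ** 2))
    # High edge energy relative to the whole clip suggests a truncated word.
    return start_rms > 0.5 * rms or end_rms > 0.5 * rms

for wav in sorted(Path("speech_commands_v0.02/visual").glob("*.wav")):
    if looks_suspect(wav):
        print("review:", wav)
```

This only catches truncation and near-empty files; listening to the flagged clips (as with HMG) is still the final filter.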

The Linto HMG is again just TensorFlow underneath, but the GUI is really good for capturing those false positives/negatives and listening to check whether a sample is likely bad.

"Hey Marvin" would been a far better as said the more phoneme and unique the better.
With Deepspeech or Kaldi you can output a transcript of word occurrence in a sample and and with sox guessing you could grab "hey" from somewhere and tack it onto "marvin" with a bit of code.
Apart from the Gooogle command set I don't know of another word dataset as they seem to be all ASR sentence datasets but with the code above again you could extract words after running transcript output from Deepspeech or Kaldi.
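As a hedged sketch of the "tack it onto marvin" idea above: sox concatenates its input files when given more than one, so pairing up clips is a few lines of Python. The directories and file names here are assumptions, not anything from the dataset itself:

```python
# Stitch a "hey" clip onto a "marvin" clip with sox to build "hey marvin"
# training samples. sox joins multiple input files back to back.
import subprocess
from pathlib import Path

hey_clips = sorted(Path("extracted/hey").glob("*.wav"))
marvin_clips = sorted(Path("speech_commands_v0.02/marvin").glob("*.wav"))
out_dir = Path("hey_marvin")
out_dir.mkdir(exist_ok=True)

for i, (hey, marvin) in enumerate(zip(hey_clips, marvin_clips)):
    out = out_dir / f"hey_marvin_{i:05d}.wav"
    # "sox in1.wav in2.wav out.wav" concatenates the two inputs; a short
    # gap could be added with sox's pad effect if the joins sound abrupt.
    subprocess.run(["sox", str(hey), str(marvin), str(out)], check=True)
```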

I am not sure that adding large quantities of words from a much bigger dataset actually increases accuracy enough to justify the work of making sure everything you feed in is good.
I really do suggest you give HMG or some other tool a try and delete the dross from the Google Command Set; I think you will be surprised how much effect bad samples have on the results.

@cgreening
Contributor

That looks really interesting!

You're definitely right about the quality of the dataset, it's pretty bad, but there don't seem to be many options around.

I think you are right: creating a good clean dataset for a wake word would improve things considerably - and picking a sensible wake word with more syllables/phonemes, as you suggest, would also be very sensible.

I'll have to take a look at HMG and do some testing.

Thanks for doing this research!

@StuartIanNaylor
Author

StuartIanNaylor commented Nov 11, 2020

Maybe we could share a spreadsheet or database of cleaned datasets with regional/gender metadata.

As said, it is quite easy to create a word-timing transcript. I have been searching for an existing word extractor to save the development effort, as I am guessing something already exists that can process voice audio.
Last time I did it manually and then lost the dataset in a Windows reinstall.
HMG is just TensorFlow, but the GUI is simple and good and helped me a lot; it wasn't until I pruned the bad samples that I became aware of how much the 'garbage in' syndrome affects models. The GUI really does make things easier.

It's surprising that, apart from the Google command set, there seems to be an absence of 'keyword' datasets, but they can be extracted from ASR ones (rough sketch below).
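To illustrate the extraction step, here is a minimal sketch assuming you already have word-level timings for each utterance (the format shown, a list of (word, start, end) tuples, is an assumption; the actual output shape depends on which aligner or ASR tool you use):

```python
# Cut each occurrence of a target word out of an ASR utterance, given
# word-level timings, and save it as its own clip.
import soundfile as sf
from pathlib import Path

def extract_word(wav_path, timings, target, out_dir):
    audio, sr = sf.read(wav_path)
    out_dir = Path(out_dir)
    out_dir.mkdir(parents=True, exist_ok=True)
    for i, (word, start, end) in enumerate(timings):
        if word.lower() != target:
            continue
        clip = audio[int(start * sr):int(end * sr)]
        out = out_dir / f"{Path(wav_path).stem}_{target}_{i}.wav"
        sf.write(out, clip, sr)

# Example with made-up timings for one utterance:
extract_word("utt001.wav",
             [("hey", 0.35, 0.62), ("there", 0.62, 0.94)],
             "hey", "extracted/hey")
```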

@StuartIanNaylor
Author

I loaded up HMG again; it's still a work in progress, but the majority of it works.
Keep the standard MFCC settings, add a single hotword, and assign all the others to non-hotword.
I did this with 'marvin' and, yeah boy, there are a load of bad samples that really do affect overall accuracy if you don't prune them.
Also, if you want to add some recordings of your own, record them at 16 kHz 16-bit before you start (see the conversion sketch below); if you check for duplicates it will remove missing files from the dataset, but adding new ones doesn't seem to be implemented yet.
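A hedged sketch of getting your own recordings into that format before you start, again using sox; the directory names are examples only:

```python
# Convert arbitrary WAV recordings to 16 kHz, 16-bit, mono, matching the
# format of the Speech Commands clips.
import subprocess
from pathlib import Path

src_dir = Path("my_recordings")
dst_dir = Path("my_recordings_16k")
dst_dir.mkdir(exist_ok=True)

for src in src_dir.glob("*.wav"):
    dst = dst_dir / src.name
    # -r 16000: resample to 16 kHz, -b 16: 16-bit samples, -c 1: mono
    subprocess.run(["sox", str(src), "-r", "16000", "-b", "16",
                    "-c", "1", str(dst)], check=True)
```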
