Isabella is my first foray into voice computing. When I'm using my Apple headphones or the Blue Yeti mic in cardioid mode, Isabella does a fantastic job at recognizing commands, but is a little over-sensitive. When you're working from home, alone, in silence, then occasionally talk to your computer, it works great! When other people are around or if you start talking to yourself, she'll get over-zealous and tell you jokes you didn't ask for.
Once that's done, just run `bundle install` to install the relevant Ruby libraries.
This is more complicated than it should be. For speech recognition, we have three different files:
This file contains a list of commands that Isabella responds to. Each command contains a regex pattern for matching, then a script that should be run when that command is recognized. For example:
```yaml
- pattern: 'tell me (?:the|a) joke(?: (?:with|about) ?(?:a|the)? ([\w\s]+))?'
  script: sayjoke
```
Each script is assumed to be a `.rb` file residing in the `scripts` directory in the root of the project.
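To make the pattern-to-script mapping concrete, here's an illustrative sketch (not Isabella's actual code) of how a recognized sentence could be dispatched: match it against each command's pattern, then run the named script, passing along any capture groups. The `dispatch` helper and its return value are my own invention for demonstration purposes.

```ruby
# A hypothetical dispatcher: the pattern/script pairs mirror the YAML
# config above, but the dispatch logic itself is just a sketch.
commands = [
  { pattern: /tell me (?:the|a) joke(?: (?:with|about) ?(?:a|the)? ([\w\s]+))?/,
    script:  "sayjoke" }
]

def dispatch(commands, sentence)
  commands.each do |command|
    next unless (match = sentence.match(command[:pattern]))
    # The real project would load and run scripts/<name>.rb;
    # here we just report what would run, with the captured topic.
    return "scripts/#{command[:script]}.rb topic=#{match[1].inspect}"
  end
  nil
end

puts dispatch(commands, "tell me a joke about funerals")
# => scripts/sayjoke.rb topic="funerals"
```

Note that the capture group is optional, so `"tell me a joke"` with no topic still matches and the script would simply receive `nil`.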
This file contains a grammar of valid language, in the JSGF format. It defines valid sentences and the components those sentences are made up of. For example:
```
#JSGF V1.0;

grammar isabella;

public <sentence> = hey isabella <command>;

<command> = tell me a joke
          | tell me <article> joke <joke-preposition> [<article>] <joke-reference>;

<article> = a | the;
<joke-preposition> = with | about;
<joke-reference> = orion | etymologist | bar | pessimist | funerals;
```
You can find quite good documentation on the format of this file here. It should also be noted that every word included in this file must be defined in the dictionary file, described in the next section.
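That every-grammar-word-needs-a-dictionary-entry constraint is easy to violate silently, so it's worth sanity-checking. Here's a rough Ruby sketch of such a check, using inline strings in place of the real files (the file contents, the crude tokenizing regexes, and the whole approach are assumptions, not the project's actual code):

```ruby
# Inline samples standing in for the real grammar and dictionary files.
grammar = <<~JSGF
  #JSGF V1.0;
  grammar isabella;
  public <sentence> = hey isabella <command>;
  <command> = tell me <article> joke;
  <article> = a | the;
JSGF

dictionary = <<~DIC
  hey HH EY
  isabella IH Z AA B EH L AA
  tell T EH L
  me M IY
  a AH
  the DH AH
  joke JH OW K
DIC

# Strip rule references (<...>) and JSGF declarations, then collect
# the remaining lowercase words used in sentences.
grammar_words = grammar
  .gsub(/<[^>]+>/, " ")
  .gsub(/^#JSGF.*$/, " ")
  .gsub(/^grammar .*;$/, " ")
  .gsub(/\bpublic\b/, " ")
  .scan(/[a-z][\w-]*/)
  .uniq

# A dictionary entry's first token is the word; drop "(2)"-style
# alternate-pronunciation markers so variants map to the base word.
dict_words = dictionary.each_line
  .map { |line| line.split.first.sub(/\(\d+\)\z/, "") }

missing = grammar_words - dict_words
puts missing.empty? ? "OK: every grammar word is in the dictionary" : "Missing: #{missing.join(', ')}"
```

Run against the real `.gram` and `.dic` files, a check like this catches the "word in grammar but not in dictionary" failure before Pocketsphinx does.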
This file contains a list of words with pronunciations in North American English, which, judging by the source of the library (Carnegie Mellon University) and its extremely limited set of phonemes, probably means some kind of northeastern US academic accent that someone decided is the correct/standard/average way for North Americans to speak English. Whatever. It's free!
For some reason, instead of using an existing and more complete list of phonemes for recognizing speech, such as the International Phonetic Alphabet, CMU is using a transcription code called Arpabet, which seems to be designed not for speech recognition, but for speech synthesis. On top of that, CMU supports only a subset of Arpabet's phonemes and also doesn't support its lexical stress markers. Oh well.
Instead of linking to docs showing you how to use this file, I'll save you the time piecing together instructions from several, sometimes contradictory sources and just show you what actually works.
To define the pronunciation for a word, just type the word, then a space, then each phoneme in the word separated by a space. For example:
```
hey HH EY
isabella IH Z AA B EH L AA
```
This is a complete list of phonemes you can use:
| Phoneme | Example | Transcription |
| --- | --- | --- |
| AA | odd | AA D |
| AE | at | AE T |
| AH | hut | HH AH T |
| AO | ought | AO T |
| AW | cow | K AW |
| AY | hide | HH AY D |
| B | be | B IY |
| CH | cheese | CH IY Z |
| D | dee | D IY |
| DH | thee | DH IY |
| EH | Ed | EH D |
| ER | hurt | HH ER T |
| EY | ate | EY T |
| F | fee | F IY |
| G | green | G R IY N |
| HH | he | HH IY |
| IH | it | IH T |
| IY | eat | IY T |
| JH | gee | JH IY |
| K | key | K IY |
| L | lee | L IY |
| M | me | M IY |
| N | knee | N IY |
| NG | ping | P IH NG |
| OW | oat | OW T |
| OY | toy | T OY |
| P | pee | P IY |
| R | read | R IY D |
| S | sea | S IY |
| SH | she | SH IY |
| T | tea | T IY |
| TH | theta | TH EY T AH |
| UH | hood | HH UH D |
| UW | two | T UW |
| V | vee | V IY |
| W | we | W IY |
| Y | yield | Y IY L D |
| Z | zee | Z IY |
| ZH | seizure | S IY ZH ER |
All alternate pronunciations must be marked with parenthesized serial numbers, starting from (2) for the second pronunciation. The marker (1) is omitted. For example:
```
directing D AY R EH K T IH NG
directing(2) D ER EH K T IH NG
directing(3) D IH R EH K T IH NG
```
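For illustration, here's a hedged Ruby sketch of how entries like these could be parsed into a word-to-pronunciations map. The inline sample stands in for the real dictionary file, and the parsing approach is my own, not necessarily what Pocketsphinx does internally:

```ruby
# Group dictionary entries (including "(2)"-style alternates) by base word.
entries = <<~DIC
  directing D AY R EH K T IH NG
  directing(2) D ER EH K T IH NG
  directing(3) D IH R EH K T IH NG
DIC

pronunciations = Hash.new { |h, k| h[k] = [] }
entries.each_line do |line|
  word, *phonemes = line.split
  base = word.sub(/\(\d+\)\z/, "")   # strip the (2), (3), ... marker
  pronunciations[base] << phonemes.join(" ")
end

puts pronunciations["directing"].length  # prints 3
```

All three variants end up under the single key `directing`, which matches how the serial markers are meant to work: they distinguish entries in the file, not words in the grammar.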
```
bundle exec rake listen
```

That'll tell Isabella to start listening for commands. Just press `Ctrl-C` when you want her to stop.
Ideas for increasing accuracy
- Figure out a way to implement a test suite. I have a vague idea of how I could do it right now, but it would involve creating the same wave files I'd need for Sphinxtrain anyway, so I'll probably do both at the same time.
- Use dual-microphone speech extraction to first separate speech from environmental noise, then feed the result to Pocketsphinx. I imagine this would ideally require two identical microphones, and it looks like yet another thing to research, so again, I'm putting it off.