Introduction to states and intents
Understanding the concept of states and intents is the key to developing voice applications with AssistantJS. For clarification, imagine a voice interface for a bus travelling application. A possible conversation with a user might be:
An intent is a kind of formalized, compact user expression. A user could say many phrases (or utterances) to reach the same goal. For your application, it is unimportant whether your user says "okay", "yes" or "alright" to buy a ticket - it is only important to distinguish between approval and rejection. In this example, we are able to identify three different intents:
- invokeGenericIntent: Pseudo intent called when the application is invoked. So if the user starts your voice application with "Launch my bus company!", this intent is fired and your application should return a welcoming message.
- busRouteIntent: The user asks for a specific bus route, which means he or she says something like "When does the next bus to train station arrive?".
- yesIntent: The user accepts something, for example by saying "yes" or "okay".
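Mapping many utterances to one intent is done by the assistant platform's natural language understanding, not by your own code. Still, the idea can be made concrete with a tiny sketch in plain TypeScript (the utterance lists and function below are illustrative assumptions, not AssistantJS's API):

```typescript
// Hypothetical utterance-to-intent mapping. In a real deployment, the
// platform (Alexa, Api.ai, ...) performs this resolution for you.
const utterances: Record<string, string[]> = {
  busRouteIntent: ["when does the next bus to train station arrive"],
  yesIntent: ["yes", "okay", "alright"],
};

// Resolve a spoken phrase to its formalized intent name, if any.
function resolveIntent(phrase: string): string | undefined {
  const normalized = phrase.trim().toLowerCase();
  for (const [intent, phrases] of Object.entries(utterances)) {
    if (phrases.includes(normalized)) return intent;
  }
  return undefined;
}
```

Whether the user says "okay", "yes" or "alright", `resolveIntent` collapses all three phrases into the single `yesIntent` your application has to handle.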
How you should handle a specific intent depends on the state of your voice application. For example, handling a yesIntent only makes sense after you have asked your user if he or she wants to buy a ticket. Or the other way around: you possibly don't want the user to fire a new busRouteIntent before he or she has answered your question. So let's try to identify the states of your application:
- MainState: Your introduction state. When your voice application is launched, we start in this state. Firing a generic yesIntent does not make sense here, since there is no context yet. But firing a busRouteIntent, which means asking for a specific bus route, is definitely practical.
- BusOrderState: If a user fires a busRouteIntent in your MainState, the application switches to this state. Here, we give the user information about the desired bus route and ask if he or she wants to buy a ticket. Firing a yesIntent now suddenly makes sense: it means that a ticket should be bought.
Now that we have successfully identified the states and intents of your voice application, we can describe it as a state model:
Of course, multiple states could also handle the same intent. For example, think of a helpIntent, which is fired if the user says something like "What can I do?" or "Can you help me please?". A user could fire a helpIntent at any time, but notice that it makes a difference if he or she fires it in MainState or in BusOrderState: While he or she expects an introduction to your voice application in MainState, he or she possibly wants more ticketing information in BusOrderState! In conclusion, you possibly want to handle the helpIntent in all of your states with different implementations.
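The state model above can be sketched as a plain TypeScript dispatch table. This is only an illustration of the concept, not AssistantJS's actual API; the state names come from the text, while the handler signatures and response strings are made up for the example:

```typescript
// A handler receives the machine so it can trigger state transitions.
type Handler = (machine: Machine) => string;

interface Machine {
  current: string; // name of the currently active state
}

// Each state maps the intent names it understands to a handler.
const states: Record<string, Record<string, Handler>> = {
  MainState: {
    invokeGenericIntent: () => "Welcome to my bus company!",
    busRouteIntent: (m) => {
      m.current = "BusOrderState"; // asking for a route switches the state
      return "The next bus arrives at 10:00. Buy a ticket?";
    },
    // helpIntent in MainState: introduce the application
    helpIntent: () => "You can ask me about bus routes.",
  },
  BusOrderState: {
    // yesIntent only exists here, because only here it has a context
    yesIntent: () => "Ticket bought!",
    // the same helpIntent, but with ticketing information instead
    helpIntent: () => "Say 'yes' to buy a ticket for this route.",
  },
};

// Dispatch an intent against the machine's current state.
function handle(machine: Machine, intent: string): string {
  const handler = states[machine.current][intent];
  return handler ? handler(machine) : "Sorry, that makes no sense here.";
}
```

Note how a yesIntent fired in MainState falls through to the fallback answer, while the same intent completes the purchase in BusOrderState - and how helpIntent is handled in both states, but with different implementations.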
Build a bus travelling application
The following videos describe how to build the bus travelling application introduced above.
- Installed assistant.js (`npm i --global assistant-source`)
Step 1: Setup AssistantJS and integrate Alexa
Introductory video tutorial on building an AssistantJS app and connecting it with Amazon Alexa. This video is based on AssistantJS version 0.2.x, so you possibly want to check out this version before you start.
Step 2: Explore AssistantJS's core concepts
Let's go on with the second step of our AssistantJS tutorial, showing you AssistantJS-specific functions to improve the voice experience of your application, translate it into different languages, handle and validate entities, and inherit intents. This video is based on AssistantJS version 0.2.x, so you possibly want to check out this version before you start.
Step 3: Integrate Api.ai and Google Assistant
In the third step of our AssistantJS tutorial, we show you how to integrate Api.ai and Google Assistant into our AssistantJS voice application. This video is based on AssistantJS version 0.2.x, so you possibly want to check out this version before you start.
Recap: Initial folder structure
To recap, running `assistant new project-name` results in the following folder structure:
| Folder / File | Purpose |
|---|---|
| app | Your main folder. This is where your app really lives. |
| app/states | Contains all states your app wants to register. States in this folder are registered automatically. |
| app/states/mixins | Contains your state mixins, if any. State mixins are useful for reducing duplicate code. |
| app/states/application.ts | Your ApplicationState. This should be the base state for all other states. |
| app/states/main.ts | Your main state. The initial state to be called. |
| builds | For each set of built utterances and intent schemes, a subfolder is created here. |
| config | For all configuration options of your app. |
| config/components.ts | Enables you to configure all your assistant configurations. Keeping them all in one file is just a suggestion. |
| config/locales | Has subfolders for each supported locale, giving you i18n support. |
| config/locales/en | Locale files for English (you could have any other language). |
| config/locales/en/translation.json | Translation file for everything the assistant says. |
| config/locales/en/utterances.json | Lists all your utterances in English. |
| spec | Contains your tests. |
| spec/helpers | Contains some useful helper and setup scripts which are auto-loaded by jasmine. |
| spec/support | Contains your test supporters, like shared examples. |
| spec/support/jasmine.json | Your jasmine configuration file. |
| index.ts | Main entrance for assistant.js into your app if using |
| package.json | Your package.json, listing all your assistant and npm dependencies. Already contains a testing command. |
| tsconfig.json | Your tsconfig.json for typescript compilation. |
| README.md | Your project's readme - currently containing a friendly link to this repository :-) |