
Commit

Merge 06ed15d into da13ac4
jperata committed Nov 15, 2017
2 parents da13ac4 + 06ed15d commit 432d32f
Showing 4 changed files with 33 additions and 35 deletions.
27 changes: 13 additions & 14 deletions docs/commands/intend.md
@@ -5,10 +5,10 @@ The intend command generates intent requests for your service as if they were co

It works in a manner very similar to the Alexa simulator available via the Alexa developer console.

-To start using it, you will need a local file that contains your Intent Schema and Sample Utterances.
-By default, we have adopted the pattern used by the Alexa Skills Sample projects (see [here](https://github.com/alexa/skill-sample-nodejs-hello-world)).
+To start using it, you will need your Interaction Model; it can be written as a single file or split into an Intent Schema and Sample Utterances.
+By default, we have adopted the patterns used by the Alexa Skills Sample projects: we support both the [Interaction Model pattern](https://github.com/alexa/skill-sample-nodejs-fact) and the [Intent Schema and Sample Utterances pattern](https://github.com/alexa/skill-sample-nodejs-hello-world/).

-That is, we look for the Interaction Model files inside a folder called speechAssets located off the source root.
+That is, we look for the Interaction Model files inside a folder called models (or speechAssets, if you are using the older style) located off the source root.

You can specify an alternative location via options to the command-line.

@@ -28,29 +28,28 @@ The intend command will return the full request and response of the interaction

By default, the system will:

-* Use the Intent Model and Sample Utterances in the speechAssets folder under the current working directory
+* Use the Interaction Model in the models folder under the current working directory
+* If there is no Interaction Model, use the Intent Schema and Sample Utterances in the speechAssets folder under the current working directory
* Use the service currently running via the `bst proxy` command

If no service is currently running via bst proxy, an HTTP endpoint can be specified with the `--url` option:
```
$ bst intend HelloIntent --url https://my.skill.com/skill/path
```

-## Speech Asset Format and Location
-If your speech assets (Intent Model and Sample Utterances) are not stored under ./speechAssets, you can use an option to specify another location.
+## Interaction Model Format and Location
+If your Interaction Model is not stored under ./models, or you have multiple locales, you can use an option to specify another location.

By default, we look for:

-* `./speechAssets/IntentSchema.json`
-* `./speechAssets/SampleUtterances.txt`
+* `./models/en-US.json`

"Example With Alternative Locale:"

Example:
```
$ bst intend HelloIntent -i interactions/IntentSchema.json -s interactions/SampleUtterances.txt
$ bst intend HelloIntent -m models/en-UK.json
```

-The format of these files is the same as they are entered in the Alexa Skill configuration.
-The Intent Schema is a JSON file. Samples utterances is a space-delimited text file.
+These files are JSON, in the format used by Amazon's ASK CLI tool.

-An example of these files can be found [here](https://github.com/alexa/skill-sample-nodejs-hello-world/tree/master/speechAssets).
+An example of such a file can be found [here](https://github.com/alexa/skill-sample-nodejs-fact/blob/en-US/models/en-US.json).
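
For readers who have not seen the single-file format referenced above, a minimal `models/en-US.json` might look roughly like the sketch below. The invocation name, intent name, and sample phrases are hypothetical, and the exact top-level wrapper can vary with the ASK CLI version, so treat this as illustrative rather than canonical.

```
{
  "languageModel": {
    "invocationName": "hello world",
    "intents": [
      {
        "name": "HelloIntent",
        "samples": [
          "say hello",
          "say hello world"
        ],
        "slots": []
      },
      {
        "name": "AMAZON.HelpIntent",
        "samples": []
      }
    ],
    "types": []
  }
}
```

With a model like this in place, `bst intend HelloIntent` can resolve the intent by name and build the corresponding request.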
33 changes: 14 additions & 19 deletions docs/commands/utter.md
@@ -5,10 +5,10 @@ The utter command takes an utterance ("play next song") and turns into a JSON pa

It works in a manner very similar to the Alexa simulator available via the Alexa developer console.

-To start using it, you will need a local file that contains your Intent Schema and Sample Utterances.
-By default, we have adopted the pattern used by the Alexa Skills Sample projects (see [here](https://github.com/alexa/skill-sample-nodejs-hello-world/)).
+To start using it, you will need your Interaction Model; it can be written as a single file or split into an Intent Schema and Sample Utterances.
+By default, we have adopted the patterns used by the Alexa Skills Sample projects: we support both the [Interaction Model pattern](https://github.com/alexa/skill-sample-nodejs-fact) and the [Intent Schema and Sample Utterances pattern](https://github.com/alexa/skill-sample-nodejs-hello-world/).

-That is, we look for the Interaction Model files inside a folder called speechAssets located off the source root.
+That is, we look for the Interaction Model files inside a folder called models (or speechAssets, if you are using the older style) located off the source root.

You can specify an alternative location via options to the command-line.

@@ -28,39 +28,34 @@ The utter command will return the full request and response of the interaction w

By default, the system will:

-* Use the Intent Model and Sample Utterances in the speechAssets folder under the current working directory
+* Use the Interaction Model in the models folder under the current working directory
+* If there is no Interaction Model, use the Intent Schema and Sample Utterances in the speechAssets folder under the current working directory
* Use the service currently running via the `bst proxy` command

If no service is currently running via bst proxy, an HTTP endpoint can be specified with the `--url` option:
```
$ bst utter Hello World --url https://my.skill.com/skill/path
```

-## Speech Asset Format and Location
-If your speech assets (Intent Model and Sample Utterances) are not stored under ./speechAssets, you can use an option to specify another location.
+## Interaction Model Format and Location
+If your Interaction Model is not stored under ./models, or you have multiple locales, you can use an option to specify another location.

By default, we look for:

-* `./speechAssets/IntentSchema.json`
-* `./speechAssets/SampleUtterances.txt`
+* `./models/en-US.json`

-Example:
+Example With Alternative Locale:
```
-$ bst utter Hello World -i interactions/IntentSchema.json -s interactions/SampleUtterances.txt
+$ bst utter Hello World -m models/en-GB.json
```

-The format of these files is the same as they are entered in the Alexa Skill configuration.
-The Intent Schema is a JSON file. Samples utterances is a space-delimited text file.
+These files are JSON, in the format used by Amazon's ASK CLI tool.

-An example of these files can be found [here](https://github.com/alexa/skill-sample-nodejs-hello-world/tree/master/speechAssets).
+An example of such a file can be found [here](https://github.com/alexa/skill-sample-nodejs-fact/blob/en-US/models/en-US.json).
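
As a bridge to the slot handling described below, here is a hedged sketch of how an intent with a slot might be declared in that file. The intent, slot, and sample names are hypothetical, though `AMAZON.US_FIRST_NAME` is one of Amazon's built-in slot types.

```
{
  "languageModel": {
    "invocationName": "hello world",
    "intents": [
      {
        "name": "HelloWorld",
        "samples": [
          "hello world my name is {Name}"
        ],
        "slots": [
          {
            "name": "Name",
            "type": "AMAZON.US_FIRST_NAME"
          }
        ]
      }
    ],
    "types": []
  }
}
```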

## Working With Slots

-Slot handling is a bit tricky. To send an utterance that uses slots, surround the slot variables like so:
-```
-{MySlot}
-```
+Slot handling is automatic: we check the slots and samples defined in your model and extract the slot values. To send an utterance that uses slots, just write it as you would say it.

For example, if the sample utterance was defined as:
```
HelloWorld Hello world, my name is {Name}
```
@@ -69,7 +64,7 @@

Then the utter command would be:
```
-$ bst utter Hello World, my name is {John}
+$ bst utter Hello World, my name is John
```

The value `John` will then be automatically placed in the Name slot for the utterance on the request.
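
To make that concrete, the `IntentRequest` sent to the skill would carry the extracted value roughly as in the trimmed sketch below. This is the standard Alexa request shape with the session and context objects omitted, and the locale value is just an assumption, so it is illustrative rather than the tool's exact output.

```
{
  "version": "1.0",
  "request": {
    "type": "IntentRequest",
    "locale": "en-US",
    "intent": {
      "name": "HelloWorld",
      "slots": {
        "Name": {
          "name": "Name",
          "value": "John"
        }
      }
    }
  }
}
```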
6 changes: 5 additions & 1 deletion lib/client/bst-alexa.ts
@@ -42,7 +42,11 @@ export class BSTAlexaEvents {
}

/**
-* Programmatic interface for interacting with the Bespoken Alexa emulator.
+* @Deprecated
+* Programmatic interface for interacting with the Bespoken Alexa emulator. <br />
+* If you're interested in working with an Alexa emulator for testing or other purposes, please check
+* [Bespoken Virtual Alexa](https://github.com/bespoken/virtual-alexa). <br />
+* This class will be removed in future versions and replaced by calls to that library. <br />
*
* Overview on usage can be found [here](../index.html). NodeJS tutorial [here](../../tutorials/tutorial_bst_emulator_nodejs)
*
2 changes: 1 addition & 1 deletion package.json
@@ -22,7 +22,7 @@
"request-promise-native": "^1.0.5",
"silent-echo-sdk": "^0.3.5",
"uuid": "3.0.0",
"virtual-alexa": "^0.3.7",
"virtual-alexa": "^0.3.8",
"winston": "^2.4.0"
},
"devDependencies": {
