Adaptive dialog: Anatomy and runtime behavior
Anatomy: Adaptive dialog
At their core, adaptive dialogs comprise four main concepts:
Recognizers help understand and extract meaningful pieces of information from a user's input. All recognizers emit events; of specific interest is the 'recognizedIntent' event, which fires when the recognizer picks up an intent (or extracts entities) from a given user utterance. See here to learn more about supported recognizers and their usage.
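As a mental model, a recognizer maps an utterance to an intent plus entities, and the runtime surfaces that result as the 'recognizedIntent' event. The sketch below is illustrative only, not the SDK's API; the pattern list and the 'city' entity name are made up for the example.

```typescript
// Illustrative sketch of what a recognizer produces (not the SDK surface).
interface RecognizerResult {
  intent: string;
  entities: Record<string, string>;
}

// A minimal regex-based recognizer: each pattern names an intent; an
// optional capture group becomes a hypothetical 'city' entity. A real
// recognizer (regex, LUIS, ...) is considerably richer.
function recognize(utterance: string): RecognizerResult {
  const patterns: Array<[string, RegExp]> = [
    ["BookFlight", /book .*flight/i],
    ["Weather", /weather in (\w+)/i],
  ];
  for (const [intent, re] of patterns) {
    const m = utterance.match(re);
    if (m) {
      const entities: Record<string, string> = {};
      if (m[1] !== undefined) entities.city = m[1];
      return { intent, entities };
    }
  }
  // No pattern matched: report the conventional 'None' intent.
  return { intent: "None", entities: {} };
}
```

The runtime would wrap a result like `{ intent: "Weather", entities: { city: "Seattle" } }` in a 'recognizedIntent' event for rules to catch.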
Rules enable you to catch and respond to events. The broadest rule is the EventRule, which allows you to catch a specific event emitted by any sub-system and attach a set of steps to execute in response. Adaptive dialogs also support a couple of other specialized rules that wrap common events your bot would handle. See here to learn more about supported rules and their usage.
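Conceptually, a rule pairs an event name with the steps to queue when that event fires; specialized rules just pre-fill common matches (for example, a rule that only fires for a particular intent). This is a standalone model, assuming nothing about the SDK's actual types:

```typescript
// Illustrative model of a rule, not the SDK's EventRule class.
type Step = (payload: unknown) => string;

interface Rule {
  event: string;                          // event to catch, e.g. 'recognizedIntent'
  condition?: (payload: any) => boolean;  // optional extra filter (e.g. on intent)
  steps: Step[];                          // steps to queue when the rule fires
}

// Find the first rule whose event (and condition, if any) matches the
// raised event. Rules without a condition match the event name alone.
function selectRule(rules: Rule[], event: string, payload: any): Rule | undefined {
  return rules.find(r => r.event === event && (r.condition?.(payload) ?? true));
}
```

A specialized "intent rule" in this model is simply an EventRule whose `condition` checks `payload.intent`.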
Steps help put together the flow of conversation when a specific event is captured via a rule. Note: unlike a waterfall dialog, where each step is a function, each step in an adaptive dialog is itself a dialog. This enables adaptive dialogs, by design, to:
- have a simple way to handle interruptions.
- branch conditionally based on context or current state.
See here to learn more about supported steps and their usage.
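Because each step is itself a dialog, a branching step can simply hold child step lists and run one of them, exactly the way the runtime runs any other dialog. The class names below are loosely modeled on the SDK's SendActivity and IfCondition steps, but the code is a standalone sketch, not the SDK implementation:

```typescript
// Illustrative model: a "step" shares the same interface as a dialog.
type Memory = Record<string, any>;
interface DialogStep {
  run(memory: Memory, out: string[]): void;
}

// Leaf step: emit a message to the user.
class SendActivityStep implements DialogStep {
  constructor(private text: string) {}
  run(_m: Memory, out: string[]) {
    out.push(this.text);
  }
}

// Branching step: because children are also DialogSteps, conditional
// branching composes with no special machinery.
class IfConditionStep implements DialogStep {
  constructor(
    private condition: (m: Memory) => boolean,
    private thenSteps: DialogStep[],
    private elseSteps: DialogStep[] = [],
  ) {}
  run(m: Memory, out: string[]) {
    const branch = this.condition(m) ? this.thenSteps : this.elseSteps;
    for (const s of branch) s.run(m, out);
  }
}
```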
Inputs are wrappers around Bot Builder prompts that you can use in an adaptive dialog step to ask for and collect a piece of input from the user, validate it, and accept it into memory. Inputs include these pre-built features:
- Performs existential checks before prompting, to avoid prompting for information the bot already has.
- Grounds input to the specified property if the input from the user matches the type of entity expected.
- Accepts constraints - min, max, etc.
See here to learn more about supported Inputs and their usage.
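The three features above can be sketched together for a hypothetical number input. This is a simplified model of what an input wrapper does under the hood, with made-up names (`processNumberInput`, `NumberInputOptions`), not the SDK's input classes:

```typescript
// Hypothetical options for a number input: where to ground the value,
// plus simple min/max constraints.
interface NumberInputOptions {
  property: string; // memory path to ground the value into
  min?: number;
  max?: number;
}

// Returns the prompt text to send, or undefined when no prompt is needed.
function processNumberInput(
  memory: Record<string, number>,
  reply: string | undefined,
  opts: NumberInputOptions,
  promptText: string,
): string | undefined {
  // Existential check: skip the prompt if the bot already has this value.
  if (opts.property in memory) return undefined;
  const value = reply !== undefined ? Number(reply) : NaN;
  const inRange =
    !Number.isNaN(value) &&
    (opts.min === undefined || value >= opts.min) &&
    (opts.max === undefined || value <= opts.max);
  if (inRange) {
    memory[opts.property] = value; // ground the accepted value into memory
    return undefined;
  }
  return promptText; // re-prompt until the constraints are met
}
```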
Runtime behavior: Adaptive dialog
To help illustrate this, let's take a scenario-based walkthrough of the runtime behavior of adaptive dialogs.
Travel agent bot
User: I’d like to book a flight
Bot: Sure. What is your destination city?
User: How’s the weather in Seattle?
Bot: It’s 72 and sunny in Seattle
...
For this scenario, we have three adaptive dialogs:
- rootDialog, an adaptive dialog with its own 'LUIS' model and a set of rules and steps.
- bookFlightDialog, an adaptive dialog that can handle conversations about booking a flight.
- weatherDialog, an adaptive dialog that can handle conversations about getting weather information.
Here's the flow when the user says:
I'd like to book a flight
The bot's end user can provide any type of answer; here's the flow when the user says:
How's the weather in Seattle?
Using adaptive dialogs and inputs, the bot propagates handling of this utterance up the conversation stack, through all of the calling dialogs; in this case there is just one parent, rootDialog. rootDialog has a rule that handles the weather intent by running a BeginDialog step that calls weatherDialog. Once weatherDialog ends, the bot returns to the conversation as it was before the weather interruption and prompts the user again for the destination city.
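The consultation walk described above can be sketched as a search up the dialog stack for the first dialog with a rule for the recognized intent. This is an illustrative model under invented names (`StackDialog`, `consult`), not SDK code:

```typescript
// Illustrative model of the dialog stack during consultation.
interface StackDialog {
  name: string;
  handles: string[]; // intents this dialog has a rule for
}

// Walk from the active dialog (top of stack) down to the root; return
// the name of the first dialog that can handle the intent, if any.
function consult(stack: StackDialog[], intent: string): string | undefined {
  for (let i = stack.length - 1; i >= 0; i--) {
    if (stack[i].handles.includes(intent)) return stack[i].name;
  }
  return undefined;
}
```

In the travel scenario, bookFlightDialog is on top and has no rule for the weather intent, so handling bubbles up to rootDialog.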
In general, on each turn:
- Each dialog's recognizer is run:
  - if there is no active dialog (remember, each step is also a dialog), or
  - if the active dialog initiates a consultation.
- Each dialog's rules are executed when a new event is raised. All sub-systems (including your own set of steps) can raise events with a payload; for example, the recognizer raises the 'recognizedIntent' event with intents and entities as a possible payload.
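Putting the two phases together, a single turn can be sketched as: run the recognizer only when consultation allows it, raise the resulting event, then execute the first matching rule. All names here (`handleTurn`, `RuntimeRule`) are invented for illustration:

```typescript
// Illustrative two-phase turn handling, not the SDK's runtime.
interface RuntimeRule {
  event: string;
  intent?: string;        // specialized rules pre-filter on intent
  run: () => string;      // stand-in for executing the rule's steps
}

function handleTurn(
  hasActiveDialog: boolean,
  consulting: boolean,    // active dialog initiated a consultation
  utteranceIntent: string, // what the recognizer would find this turn
  rules: RuntimeRule[],
): string | undefined {
  // Phase 1: the recognizer runs only if there is no active dialog, or
  // the active dialog is consulting its parent.
  if (hasActiveDialog && !consulting) return undefined;
  const event = { name: "recognizedIntent", payload: { intent: utteranceIntent } };
  // Phase 2: rules execute in response to the raised event.
  const rule = rules.find(
    r => r.event === event.name && (r.intent === undefined || r.intent === event.payload.intent),
  );
  return rule?.run();
}
```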