Direct Line Speech is a new channel, and its protocol works very differently from the Direct Line channel.
For example, the way Adaptive Cards and OAuth work in the existing Direct Line channel may change drastically under Direct Line Speech.
We need to write up a doc covering the differing expectations and the user stories each channel fits, and suggesting how users should choose between the two protocols.
What benefits will Direct Line Speech bring?
Why is it not a flip of a switch?
What features will be missing from Direct Line Speech?
On the user story side, why these features are not important for Direct Line Speech clients?
How to tweak the user story so Direct Line Speech will continue to work?
How to enable Direct Line Speech?
Reference the Direct Line Speech documentation for setting up the bot
The bot should send the speak property with SSML tags inside; otherwise, the response will not be synthesized
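As a minimal sketch, a bot could populate the speak property like this (the voice name and helper function are assumptions; substitute a voice available in your Speech resource):

```javascript
// Sketch: building a message activity whose "speak" property carries SSML so
// Direct Line Speech can synthesize it. Voice name is an assumption.
function buildSpokenActivity(text) {
  return {
    type: 'message',
    text,
    speak:
      '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis" xml:lang="en-US">' +
      `<voice name="en-US-JennyNeural">${text}</voice>` +
      '</speak>'
  };
}

// In a Bot Framework SDK handler, this could be sent with:
//   await context.sendActivity(buildSpokenActivity('Hello, World!'));
```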
Draft
Why and how
Direct Line Speech is designed for voice assistant scenarios (smart displays, automotive dashboards, navigation systems, and other low-latency experiences) in single-page applications and progressive web apps (PWA). It does not focus on the transcript-based UI of traditional websites. Thus, some features required by Web Chat's traditional transcript-based UI will be missing (conversation history, reconnection)
Our 13.b.smart-display sample shows how to use Direct Line Speech in its designed scenario.
Known issues
User ID behaviors
If user ID is not specified
conversationUpdate/membersAdded will be sent without any member ID
Please file a bug with us if this is an issue for you
message activity will be sent with a GUID-based random user ID
If user ID is specified
conversationUpdate/membersAdded will be sent with the specified user ID
message activity will be sent with the specified user ID
conversationUpdate/membersAdded will be sent on every connect and reconnect
Please file a bug with us if this is an issue for you
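To pin the user ID to a stable value, it could be passed when creating the Direct Line Speech adapters in Web Chat. A sketch, assuming a hypothetical token-fetching helper and a placeholder region:

```javascript
// Hypothetical helper: exchange your Speech subscription key for an
// authorization token via your own backend. Not implemented here.
async function fetchSpeechToken() {
  throw new Error('implement me: fetch an authorization token from your backend');
}

// Sketch: adapter options with an explicit user ID. Omitting userID results in
// a GUID-based random user ID, as described above. Region is an assumption.
function buildAdapterOptions(userID) {
  return {
    fetchCredentials: async () => ({
      region: 'westus2', // assumed region
      authorizationToken: await fetchSpeechToken()
    }),
    userID
  };
}

// Usage (browser, with botframework-webchat loaded):
//   const adapters = await window.WebChat.createDirectLineSpeechAdapters(buildAdapterOptions('dl_user12345'));
//   window.WebChat.renderWebChat({ ...adapters }, document.getElementById('webchat'));
```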
The bot is now responsible for handling the output of speech recognition
For example, the bot should handle different text normalization modes (ITN, masked ITN, etc.)
Text normalization affects how numbers are recognized, e.g. the lexical form "two four pieces of chicken nuggets" versus the ITN form "24 pieces of chicken nuggets".
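Since the bot may now receive either form from the client, it may need to reconcile them itself. A toy sketch (the digit vocabulary and merging rule are deliberate simplifications; real ITN also covers ordinals, currency, dates, and more):

```javascript
// Toy sketch: converting lexical number words ("two four") into an ITN-style
// digit form ("24") by merging adjacent single-digit words.
const DIGITS = {
  zero: '0', one: '1', two: '2', three: '3', four: '4',
  five: '5', six: '6', seven: '7', eight: '8', nine: '9'
};

function toITN(text) {
  const tokens = text.split(' ').map(word => DIGITS[word.toLowerCase()] ?? word);
  const merged = [];

  for (const token of tokens) {
    const last = merged[merged.length - 1];

    // Merge consecutive digit tokens into one number, e.g. "2", "4" -> "24".
    if (/^\d$/.test(token) && last && /^\d+$/.test(last)) {
      merged[merged.length - 1] = last + token;
    } else {
      merged.push(token);
    }
  }

  return merged.join(' ');
}

// toITN('two four pieces of chicken nuggets') -> '24 pieces of chicken nuggets'
```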
We do not support reconnection after page refresh
A different conversation ID is issued on every page refresh
Please file a bug with us if this is an issue for you
We do not keep/resend conversation history
Please file a bug with us if this is an issue for you
For speech input, we do not support piggybacking additional data
The mechanism outlined in this sample will not work for speech input
It will continue to work with keyboard input
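For reference, the typed-input piggybacking mechanism is a Web Chat store middleware along these lines (the channelData payload here is an assumption). With Direct Line Speech, speech-recognized input bypasses this path:

```javascript
// Sketch: Web Chat store middleware that piggybacks extra channelData on every
// outgoing activity. With Direct Line Speech this only applies to typed input;
// speech input is sent through the speech channel and skips this middleware.
const piggybackMiddleware = () => next => action => {
  if (action.type === 'DIRECT_LINE/POST_ACTIVITY') {
    const { activity } = action.payload;

    action = {
      ...action,
      payload: {
        ...action.payload,
        activity: {
          ...activity,
          // Assumed payload for illustration; attach whatever your bot expects.
          channelData: { ...activity.channelData, sessionID: 'my-session' }
        }
      }
    };
  }

  return next(action);
};

// Usage (browser):
//   window.WebChat.renderWebChat(
//     { directLine, store: window.WebChat.createStore({}, piggybackMiddleware) },
//     document.getElementById('webchat')
//   );
```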
Speech recognition language cannot be switched on-the-fly
Please file a bug with us if this is an issue for you
Proactive messages are not supported
Please file a bug with us if this is an issue for you
The Emulator does not support the Direct Line Speech protocol
Please file a bug with us if this is an issue for you
Aborting recognition is not supported
Please file a bug with us if this is an issue for you
Many features in Cognitive Services are not supported in Direct Line Speech; refer to the matrix in DIRECT_LINE_SPEECH.md for details
Please file a bug with us if this is an issue for you