Commit

Merge branch 'master' into test-focus-sink

corinagum committed Sep 4, 2019
2 parents aa76082 + 7d813c6 commit 6c54542
Showing 15 changed files with 176 additions and 35 deletions.
1 change: 1 addition & 0 deletions LOCALIZATION.md
Original file line number Diff line number Diff line change
@@ -6,6 +6,7 @@ If you want to help to translate Web Chat to different language, please submit a

| Language code | Translator |
| ------------- | ---------------------------------------------------------- |
| bg-bg | @kalin.krustev |
| cs-cz | @msimecek |
| da-dk | @Simon_lfr, Thomas Skødt Andersen |
| de-de | @matmuenzel |
12 changes: 5 additions & 7 deletions README.md
Expand Up @@ -132,6 +132,10 @@ export default class extends React.Component {
See a working sample of [Web Chat rendered via React](https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/03.a.host-with-react/).

## Integrate with Cognitive Services Speech Services

You can use Cognitive Services Speech Services to add bi-directional speech functionality to Web Chat. Please refer to this article about [using Cognitive Services Speech Services](https://github.com/microsoft/BotFramework-WebChat/blob/master/SPEECH.md) for details.

# Customize Web Chat UI

Web Chat is designed to be customizable without forking the source code. The table below outlines the kinds of customizations you can achieve when importing Web Chat in different ways. This list is not exhaustive.
@@ -155,13 +159,7 @@ Please refer to [`ACTIVITYTYPES.md`](https://github.com/microsoft/BotFramework-W

## Speech changes in Web Chat 4.5

> This is a breaking change to the expected speech behavior in Web Chat.
In issue [#2022](https://github.com/microsoft/BotFramework-WebChat/issues/2022), it was brought to the Web Chat team's attention that the speech behaviors of v3 and v4 of Web Chat do not match. In the 4.5 release, the expected behavior of a speech bot was modified to bring parity with v3 behavior regarding [input hint](https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-add-input-hints?view=azure-bot-service-3.0). This means the following:
- Expecting input will now be respected by Web Chat and will open the microphone during a speech conversation, assuming the user has granted the browser permission to use the microphone.
- Accepting input **will no longer** open the microphone after the bot has responded to a speech activity from the user. Instead, the user must press the microphone button again to continue interacting with the bot.
- Ignoring input will continue to **not** open the microphone after a speech activity has been sent from the bot.

Web Chat 4.5 introduces a breaking change to the expected speech behavior regarding input hint. Please refer to the section on [input hint behavior before 4.5.0](https://github.com/microsoft/BotFramework-WebChat/blob/master/SPEECH.md#input-hint-behavior-before-4-5-0) for details.

# Samples list

23 changes: 22 additions & 1 deletion SPEECH.md
@@ -85,6 +85,7 @@ After adding the ponyfill factory, you should be able to see the microphone butt
These features improve the overall user experience when using speech in Web Chat.

- [Using Speech Synthesis Markup Language](#using-speech-synthesis-markup-language)
- [Using input hint](#using-input-hint)
- [Selecting voice](#selecting-voice)
- [Custom Speech](#custom-speech)
- [Custom Voice](#custom-voice)
@@ -120,6 +121,24 @@ When the bot sends the activity, include the SSML in the `speak` property.
With the "mstts" extension, you can also [add a speaking style](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup#adjust-speaking-styles) (e.g. cheerful) and [background audio](https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup#add-background-audio) to your synthesized speech.
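As a sketch, an activity carrying SSML with the "mstts" extension in its `speak` property might look like the following (the voice name and style are illustrative; availability varies by Speech Services region):

```javascript
// Sketch: a message activity whose "speak" property carries SSML using
// the "mstts" extension for a cheerful speaking style. The voice name
// below is an example; substitute one available in your region.
const activity = {
  type: 'message',
  text: 'Good news, your order has shipped!',
  speak: [
    '<speak version="1.0" xmlns="http://www.w3.org/2001/10/synthesis"',
    '       xmlns:mstts="https://www.w3.org/2001/mstts" xml:lang="en-US">',
    '  <voice name="en-US-JessaNeural">',
    '    <mstts:express-as style="cheerful">Good news, your order has shipped!</mstts:express-as>',
    '  </voice>',
    '</speak>'
  ].join('\n')
};
```

When both `text` and `speak` are present, `text` is what is displayed in the transcript while `speak` is what is synthesized.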

### Using input hint

The bot can set an input hint when sending an activity to indicate whether it is anticipating user input. Web Chat uses this to decide whether to re-open the microphone when the user's last message was sent through the microphone. The value can be one of `expectingInput`, `acceptingInput`, or `ignoringInput`; if it is not defined, it defaults to `acceptingInput`.

- `"expectingInput"`: Web Chat will open the microphone after the bot's message is spoken, provided the user's last message was sent through the microphone
- `"acceptingInput"`: Web Chat will do nothing after the bot's message is spoken
- `"ignoringInput"`: Web Chat will explicitly close the microphone

For more details, please follow this article on [adding input hints to messages][Add input hints to messages].
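For illustration, a hypothetical helper (the name `buildSpokenReply` is ours, not part of any SDK) that attaches an input hint to a reply activity:

```javascript
// Hypothetical helper (not part of any SDK): build a reply activity
// with an explicit input hint. Valid values are "expectingInput",
// "acceptingInput", and "ignoringInput"; when omitted, Web Chat treats
// the activity as "acceptingInput".
function buildSpokenReply(text, inputHint = 'acceptingInput') {
  return {
    type: 'message',
    text,
    speak: text, // plain text or SSML for synthesis
    inputHint
  };
}

// "expectingInput" asks Web Chat to re-open the microphone after this
// message is spoken, when the user's last message came in via speech.
const reply = buildSpokenReply('What size would you like?', 'expectingInput');
```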

#### Input hint behavior before 4.5.0

In issue [#2022](https://github.com/microsoft/BotFramework-WebChat/issues/2022), it was brought to the Web Chat team's attention that the speech behaviors of v3 and v4 of Web Chat do not match. In the 4.5.0 release, the expected behavior of a speech bot was modified to bring parity with v3 behavior regarding [input hint](https://docs.microsoft.com/en-us/azure/bot-service/dotnet/bot-builder-dotnet-add-input-hints?view=azure-bot-service-3.0). This means the following:

- Expecting input will now be respected by Web Chat and will open the microphone during a speech conversation, assuming the user has granted the browser permission to use the microphone.
- Accepting input **will no longer** open the microphone after the bot has responded to a speech activity from the user. Instead, the user must press the microphone button again to continue interacting with the bot.
- Ignoring input will continue to **not** open the microphone after a speech activity has been sent from the bot.

### Selecting voice

Different voices can be selected based on the synthesizing activity.
@@ -353,6 +372,7 @@ Using this approach, you can also combine two polyfills of different types. For
- [List of browsers which support Web Audio API][Web Audio API support]
- [List of browsers which support WebRTC API][WebRTC API Support]
- [Speech Synthesis Markup Language (SSML)][Speech Synthesis Markup Language]
- [Add input hints to messages]
- [Get started with Custom Voice]
- [What is Custom Speech]
- [Sample: Integrating with Cognitive Services Speech Services]
@@ -362,8 +382,9 @@
[Get started with Custom Voice]: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-voice
[Sample: Integrating with Cognitive Services Speech Services]: https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/06.c.cognitive-services-speech-services-js
[Sample: Using hybrid speech engine]: https://github.com/microsoft/BotFramework-WebChat/tree/master/samples/06.f.hybrid-speech
[Speech Synthesis Markup Language]: (https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup)
[Speech Synthesis Markup Language]: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/speech-synthesis-markup
[Try Cognitive Services]: https://azure.microsoft.com/en-us/try/cognitive-services/my-apis/#speech
[Web Audio API support]: https://caniuse.com/#feat=audio-api
[WebRTC API Support]: https://caniuse.com/#feat=rtcpeerconnection
[What is Custom Speech]: https://docs.microsoft.com/en-us/azure/cognitive-services/speech-service/how-to-custom-speech
[Add input hints to messages]: https://docs.microsoft.com/en-us/azure/bot-service/rest-api/bot-framework-rest-connector-add-input-hints?view=azure-bot-service-4.0
7 changes: 6 additions & 1 deletion packages/component/src/Localization/Localize.js
Expand Up @@ -4,6 +4,7 @@
import connectToWebChat from '../connectToWebChat';
import getLocaleString from './getLocaleString';

import bgBG from './bg-BG';
import csCZ from './cs-CZ';
import daDK from './da-DK';
import deDE from './de-DE';
@@ -32,7 +33,9 @@ import zhYUE from './zh-YUE';
function normalizeLanguage(language) {
language = language.toLowerCase();

if (language.startsWith('cs')) {
if (language.startsWith('bg')) {
return 'bg-BG';
} else if (language.startsWith('cs')) {
return 'cs-CZ';
} else if (language.startsWith('da')) {
return 'da-DK';
@@ -85,6 +88,8 @@

function getStrings(language) {
switch (normalizeLanguage(language || '')) {
case 'bg-BG':
return bgBG;
case 'cs-CZ':
return csCZ;
case 'da-DK':
95 changes: 95 additions & 0 deletions packages/component/src/Localization/bg-BG.js
@@ -0,0 +1,95 @@
/* eslint no-magic-numbers: ["error", { "ignore": [1, 5, 24, 48, 60000, 3600000] }] */

import getLocaleString from './getLocaleString';

function xMinutesAgo(dateStr) {
const date = new Date(dateStr);
const dateTime = date.getTime();

if (isNaN(dateTime)) {
return dateStr;
}

const now = Date.now();
const deltaInMs = now - dateTime;
const deltaInMinutes = Math.floor(deltaInMs / 60000);
const deltaInHours = Math.floor(deltaInMs / 3600000);

if (deltaInMinutes < 1) {
return 'Сега';
} else if (deltaInMinutes === 1) {
return 'Преди минута';
} else if (deltaInHours < 1) {
return `Преди ${deltaInMinutes} минути`;
} else if (deltaInHours === 1) {
return `Преди час`;
} else if (deltaInHours < 5) {
return `Преди ${deltaInHours} часа`;
} else if (deltaInHours <= 24) {
return `Днес`;
} else if (deltaInHours <= 48) {
return `Вчера`;
}
return getLocaleString(date, 'bg-BG');
}

function botSaidSomething(avatarInitials, text) {
return `${avatarInitials} каза, ${text}`;
}

function downloadFileWithFileSize(downloadFileText, fileName, size) {
// Full text should read: "Download file <filename> of size <filesize>"
return `${downloadFileText} ${fileName} с размер ${size}`;
}

function uploadFileWithFileSize(fileName, size) {
return `${fileName} с размер ${size}`;
}

function userSaidSomething(avatarInitials, text) {
return `${avatarInitials} каза, ${text}`;
}

export default {
CONNECTED_NOTIFICATION: 'Свързан',
FAILED_CONNECTION_NOTIFICATION: 'Не може да се свърже.',
INITIAL_CONNECTION_NOTIFICATION: 'Свързване…',
INTERRUPTED_CONNECTION_NOTIFICATION: 'Прекъсване на мрежата. Повторно свързване…',
RENDER_ERROR_NOTIFICATION: 'Грешка при изобразяване. Проверете конзолата или се свържете с разработчика.',
// Do not localize {Retry}; it is a placeholder for "Retry". English translation should be, "Send failed. Retry."
SEND_FAILED_KEY: `Неуспешно изпращане. {Retry}.`,
SLOW_CONNECTION_NOTIFICATION: 'Свързването отнема необикновено дълго време.',
'Bot said something': botSaidSomething,
'User said something': userSaidSomething,
'X minutes ago': xMinutesAgo,
// '[File of type '%1']': '[File of type '%1']',
// '[Unknown Card '%1']': '[Unknown Card '%1']',
'Adaptive Card parse error': 'Грешка при обработка на адаптивна картичка',
'Adaptive Card render error': 'Грешка при показване на адаптивна картичка',
BotSent: 'Изпрати: ',
Chat: 'Разговор',
'Download file': 'Сваляне на файл',
DownloadFileWithFileSize: downloadFileWithFileSize,
ErrorMessage: 'Съобщение за грешка',
'Microphone off': 'Микрофон изключен',
'Microphone on': 'Микрофон включен',
Left: 'Ляво',
'Listening…': 'Слушане…',
'New messages': 'Нови съобщения',
Retry: 'Отново',
Right: 'Дясно',
Send: 'Изпрати',
Sending: 'Изпращане',
SendStatus: 'Статус: ',
SentAt: 'Изпратено на: ',
Speak: 'Говор',
'Starting…': 'Стартиране…',
Tax: 'Данък',
Total: 'Общо',
'Type your message': 'Въведете вашето съобщение',
TypingIndicator: 'Показване на индикатор за писане',
'Upload file': 'Прикачване на файл',
UploadFileWithFileSize: uploadFileWithFileSize,
UserSent: 'Потребителят изпрати: ',
VAT: 'ДДС'
};
9 changes: 6 additions & 3 deletions packages/core/package-lock.json

Some generated files are not rendered by default.

1 change: 1 addition & 0 deletions packages/embed/src/locale.js
@@ -24,6 +24,7 @@ const AZURE_LOCALE_PATTERN = /^(([a-z]{2})(-[a-z]{2,})?)\.([a-z]{2})/;
const JAVASCRIPT_LOCALE_PATTERN = /^([a-z]{2})-([A-Z]{2,})?$/;

const AZURE_LOCALE_MAPPING = {
bg: 'bg-BG',
cs: 'cs-CZ',
de: 'de-DE',
en: 'en-US',
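The mapping above can be exercised in isolation; this simplified sketch mirrors the regex and lookup in `packages/embed/src/locale.js` (the real module handles more patterns and more languages):

```javascript
// Simplified sketch of Azure-locale normalization: split an Azure
// locale such as "bg.bg-bg" on the first ".", then map the two-letter
// language code through AZURE_LOCALE_MAPPING. Falls back to the input
// when no mapping applies.
const AZURE_LOCALE_PATTERN = /^(([a-z]{2})(-[a-z]{2,})?)\.([a-z]{2})/;

const AZURE_LOCALE_MAPPING = {
  bg: 'bg-BG',
  cs: 'cs-CZ',
  de: 'de-DE',
  en: 'en-US'
};

function normalizeAzureLocale(locale) {
  const match = AZURE_LOCALE_PATTERN.exec(locale);
  const language = match && match[2];

  return (language && AZURE_LOCALE_MAPPING[language]) || locale;
}
```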
4 changes: 4 additions & 0 deletions packages/embed/src/locale.spec.js
@@ -4,6 +4,10 @@ test('Normalizing "en.en-us"', () => {
expect(normalize('en.en-us')).toBe('en-US');
});

test('Normalizing "bg.bg-bg"', () => {
expect(normalize('bg.bg-bg')).toBe('bg-BG');
});

test('Normalizing "cs.cs-cz"', () => {
expect(normalize('cs.cs-cz')).toBe('cs-CZ');
});
1 change: 1 addition & 0 deletions packages/playground/src/App.js
@@ -444,6 +444,7 @@ const App = ({ store }) => {
Language
<select onChange={handleLanguageChange} value={language}>
<option value="">Default ({window.navigator.language})</option>
<option value="bg-BG">Bulgarian</option>
<option value="zh-HK">Chinese (Hong Kong)</option>
<option value="zh-YUE">Chinese (Hong Kong, Yue)</option>
<option value="zh-HANS">Chinese (Simplified Chinese)</option>
13 changes: 8 additions & 5 deletions samples/13.customization-speech-ui/package-lock.json


23 changes: 13 additions & 10 deletions samples/14.customization-piping-to-redux/package-lock.json


2 changes: 1 addition & 1 deletion samples/19.c.single-sign-on-for-teams-apps/README.md
@@ -59,7 +59,7 @@ To host this demo, you will need to clone the code and run locally.
- In `/web/.env`:
- Write `OAUTH_REDIRECT_URI=https://a1b2c3d4.ngrok.io/api/oauth/callback`
- When Azure Active Directory completes the authorization flow, it will send the browser to this URL. This URL must be accessible by the browser from the end-user machine
- Write `PROXY_URL=http://localhost:3978`
- Write `PROXY_BOT_URL=http://localhost:3978`
- This will forward all traffic from https://a1b2c3d4.ngrok.io/api/messages to http://localhost:3978/api/messages, where your bot is listening

## Setup OAuth via Azure Active Directory