react-speech-recognition

A React hook that converts speech from the microphone to text and makes it available to your React components.

How it works

useSpeechRecognition is a React hook that gives a component access to a transcript of speech picked up from the user's microphone.

SpeechRecognition manages the global state of the Web Speech API, exposing functions to turn the microphone on and off.

Under the hood, it uses the Web Speech API. Note that browser support for this API is currently limited, with Chrome offering the best experience - see supported browsers below for more information.

This version requires React 16.8 so that React hooks can be used. If you're used to version 2.x of react-speech-recognition or want to use an older version of React, you can see the old README here. If you want to migrate to version 3.x, see the migration guide here.


Installation

To install:

npm install --save react-speech-recognition

To import in your React code:

import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

Basic example

The most basic example of a component using this hook would be:

import React from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const Dictaphone = () => {
  const { transcript, resetTranscript } = useSpeechRecognition()

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return null
  }

  return (
    <div>
      <button onClick={SpeechRecognition.startListening}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <button onClick={resetTranscript}>Reset</button>
      <p>{transcript}</p>
    </div>
  )
}
export default Dictaphone

Detecting browser support for Web Speech API

Currently, this feature is not supported in all browsers, with the best experience available on desktop Chrome. However, it fails gracefully in other browsers. It is recommended that you render some fallback content if the API is not supported by the user's browser:

if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
  // Render some fallback content
}

Supported browsers

As of June 2020, the following browsers support the Web Speech API:

  • Chrome (desktop): this is by far the smoothest experience
  • Microsoft Edge
  • Chrome (Android): a word of warning about this platform: an annoying beeping sound can play when the microphone is turned on. This is part of the Android OS and cannot be controlled from the browser
  • Android webview
  • Samsung Internet

For all other browsers, you can render fallback content using the SpeechRecognition.browserSupportsSpeechRecognition function described above.

Controlling the microphone

Before consuming the transcript, you should be familiar with SpeechRecognition, which gives you control over the microphone. The state of the microphone is global, so any functions you call on this object will affect all components using useSpeechRecognition.
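
For example, here is a minimal sketch of that global behaviour (the component names are hypothetical): a button rendered by one component turns the microphone on for a transcript consumed in a completely separate component.

import React from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

// MicButton and TranscriptView share no props or context of their own;
// the microphone state is managed globally by SpeechRecognition
const MicButton = () => (
  <button onClick={SpeechRecognition.startListening}>Start</button>
)

const TranscriptView = () => {
  const { transcript } = useSpeechRecognition()
  return <p>{transcript}</p>
}

const App = () => (
  <div>
    <MicButton />
    <TranscriptView />
  </div>
)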

Turning the microphone on

To start listening to speech, call the startListening function.

SpeechRecognition.startListening()

This is an asynchronous function, so it will need to be awaited if you want to do something after the microphone has been turned on.
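
For example, here is a sketch of a click handler that waits for the microphone to be on before doing anything else (the handler name and the log message are illustrative):

const handleStart = async () => {
  // startListening returns a Promise that resolves once the microphone is on
  await SpeechRecognition.startListening()
  console.log('The microphone is now listening')
}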

Turning the microphone off

To turn the microphone off, but still finish processing any speech in progress, call stopListening.

SpeechRecognition.stopListening()

To turn the microphone off, and cancel the processing of any speech in progress, call abortListening.

SpeechRecognition.abortListening()

Consuming the microphone transcript

To make the microphone transcript available in your component, simply add:

const { transcript } = useSpeechRecognition()

Resetting the microphone transcript

To set the transcript to an empty string, you can call the resetTranscript function provided by useSpeechRecognition. Note that this is local to your component and does not affect any other components using Speech Recognition.

const { resetTranscript } = useSpeechRecognition()

Commands

To respond when the user says a particular phrase, you can pass in a list of commands to the useSpeechRecognition hook. Each command is an object with the following properties:

  • command: This is a string or RegExp representing the phrase you want to listen for
  • callback: The function that is executed when the command is spoken. The last argument that this function receives will always be an object containing the following properties:
    • resetTranscript: A function that sets the transcript to an empty string
  • matchInterim: Boolean that determines whether "interim" results should be matched against the command. This will make your component respond faster to commands, but also makes false positives more likely - i.e. the command may be detected when it is not spoken. This is false by default and should only be set for simple commands.
  • isFuzzyMatch: Boolean that determines whether the comparison between speech and command is based on similarity rather than an exact match. Fuzzy matching is useful for commands that are easy to mispronounce or that the Speech Recognition engine tends to misinterpret (e.g. names of places, sports teams, restaurant menu items). It is intended for commands that are string literals without special characters. If command is a string with special characters or a RegExp, it will be converted to a string without special characters when fuzzy matching. The similarity needed to match the command can be configured with fuzzyMatchingThreshold. isFuzzyMatch is false by default. When it is set to true, four arguments are passed to callback:
    • The value of command
    • The speech that matched command
    • The similarity between command and the speech
    • The object mentioned in the callback description above
  • fuzzyMatchingThreshold: If the similarity of speech to command is higher than this value when isFuzzyMatch is turned on, the callback will be invoked. You should set this only if isFuzzyMatch is true. It takes values between 0 (will match anything) and 1 (needs an exact match). The default value is 0.8.

Command symbols

To make commands easier to write, the following symbols are supported:

  • Splats: this is just a * and will match multi-word text:
    • Example: 'I would like to order *'
    • The words that match the splat will be passed into the callback, one argument per splat
  • Named variables: this is written :<name> and will match a single word:
    • Example: 'I am :height metres tall'
    • The one word that matches the named variable will be passed into the callback
  • Optional words: this is a phrase wrapped in parentheses ( and ), and is not required to match the command:
    • Example: 'Pass the salt (please)'
    • The above example would match both 'Pass the salt' and 'Pass the salt please'

Example with commands

import React, { useState } from 'react'
import SpeechRecognition, { useSpeechRecognition } from 'react-speech-recognition'

const Dictaphone = () => {
  const [message, setMessage] = useState('')
  const commands = [
    {
      command: 'I would like to order *',
      callback: (food) => setMessage(`Your order is for: ${food}`)
    },
    {
      command: 'The weather is :condition today',
      callback: (condition) => setMessage(`Today, the weather is ${condition}`)
    },
    {
      command: 'My top sports are * and *',
      callback: (sport1, sport2) => setMessage(`#1: ${sport1}, #2: ${sport2}`)
    },
    {
      command: 'Pass the salt (please)',
      callback: () => setMessage('My pleasure')
    },
    {
      command: 'Hello',
      callback: () => setMessage('Hi codemaker!'),
      matchInterim: true
    },
    {
      command: 'Welcome Vishnu',
      callback: (command, spokenPhrase, similarityRatio) => setMessage(`${command} and ${spokenPhrase} are ${similarityRatio * 100}% similar`),
      // If the spoken phrase is similar enough to 'Welcome Vishnu', the message reports how similar the two are
      isFuzzyMatch: true,
      fuzzyMatchingThreshold: 0.2
    },
    {
      command: 'clear',
      callback: ({ resetTranscript }) => resetTranscript()
    }
  ]

  const { transcript } = useSpeechRecognition({ commands })

  if (!SpeechRecognition.browserSupportsSpeechRecognition()) {
    return null
  }

  return (
    <div>
      <button onClick={() => SpeechRecognition.startListening({ continuous: true })}>Start</button>
      <button onClick={SpeechRecognition.stopListening}>Stop</button>
      <p>{message}</p>
      <p>{transcript}</p>
    </div>
  )
}
export default Dictaphone

Continuous listening

By default, the microphone will stop listening when the user stops speaking. This reflects the approach taken by "press to talk" buttons on modern devices.

If you want to listen continuously, set the continuous property to true when calling startListening. The microphone will continue to listen, even after the user has stopped speaking.

SpeechRecognition.startListening({ continuous: true })

Changing language

To listen for a specific language, you can pass a language tag (e.g. 'zh-CN' for Chinese) when calling startListening. See here for a list of supported languages.

SpeechRecognition.startListening({ language: 'zh-CN' })

How to use react-speech-recognition offline?

Unfortunately, speech recognition will not function in Chrome when offline. According to the Web Speech API docs: "On Chrome, using Speech Recognition on a web page involves a server-based recognition engine. Your audio is sent to a web service for recognition processing, so it won't work offline."

If you are building an offline web app, you can detect when the browser is offline by inspecting the value of navigator.onLine. If it is true, you can render the transcript generated by React Speech Recognition. If it is false, it's advisable to render fallback content that signifies that speech recognition is disabled. The online/offline API is simple to use - you can read more about it here.
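
As a minimal sketch (the component name is hypothetical, and a real app would also subscribe to the window online and offline events so the component re-renders when connectivity changes):

import React from 'react'
import { useSpeechRecognition } from 'react-speech-recognition'

const OfflineAwareDictaphone = () => {
  const { transcript } = useSpeechRecognition()

  // navigator.onLine is false when the browser has no network connection
  if (!navigator.onLine) {
    return <p>Speech recognition is disabled while offline</p>
  }

  return <p>{transcript}</p>
}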

Available Scripts

In the project directory, you can run:

yarn start

Runs the app in development mode.
Open http://localhost:3000 to view it in the browser.

The page will reload if you make edits.
You will also see any lint errors in the console.

yarn test

Launches the test runner in interactive watch mode.
See the section about running tests for more information.

yarn build

Builds the app for production to the build folder.
It correctly bundles React in production mode and optimizes the build for the best performance.

The build is minified and the filenames include content hashes.
Your app is ready to be deployed!

API docs

View the API docs here or follow the guide above to learn how to use react-speech-recognition.

About

Speech-to-text conversion demo app using React.
