<Type Name="SpeechRecognitionEngine" FullName="System.Speech.Recognition.SpeechRecognitionEngine">
<TypeSignature Language="C#" Value="public class SpeechRecognitionEngine : IDisposable" />
<TypeSignature Language="ILAsm" Value=".class public auto ansi beforefieldinit SpeechRecognitionEngine extends System.Object implements class System.IDisposable" />
<TypeSignature Language="DocId" Value="T:System.Speech.Recognition.SpeechRecognitionEngine" />
<TypeSignature Language="VB.NET" Value="Public Class SpeechRecognitionEngine&#xA;Implements IDisposable" />
<TypeSignature Language="C++ CLI" Value="public ref class SpeechRecognitionEngine : IDisposable" />
<TypeSignature Language="F#" Value="type SpeechRecognitionEngine = class&#xA; interface IDisposable" />
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Base>
<BaseTypeName>System.Object</BaseTypeName>
</Base>
<Interfaces>
<Interface>
<InterfaceName>System.IDisposable</InterfaceName>
</Interface>
</Interfaces>
<Docs>
<summary>Provides the means to access and manage an in-process speech recognition engine.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
You can create an instance of this class for any of the installed speech recognizers. To get information about which recognizers are installed, use the static <xref:System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers%2A> method.
This class is for running speech recognition engines in-process, and provides control over various aspects of speech recognition, as follows:
- To create an in-process speech recognizer, use one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.%23ctor%2A> constructors.
- To manage speech recognition grammars, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.UnloadGrammar%2A>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.UnloadAllGrammars%2A> methods, and the <xref:System.Speech.Recognition.SpeechRecognitionEngine.Grammars%2A> property.
- To configure the input to the recognizer, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>, or <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A> method.
- To perform speech recognition, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync%2A> method.
- To modify how recognition handles silence or unexpected input, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout%2A>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous%2A> properties.
- To change the number of alternates the recognizer returns, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.MaxAlternates%2A> property. The recognizer returns recognition results in a <xref:System.Speech.Recognition.RecognitionResult> object.
- To synchronize changes to the recognizer, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RequestRecognizerUpdate%2A> method. The recognizer uses more than one thread to perform tasks.
- To emulate input to the recognizer, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize%2A> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync%2A> methods.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine> object is for the sole use of the process that instantiated it. By contrast, the <xref:System.Speech.Recognition.SpeechRecognizer> class uses a single recognizer that is shared with any application that wants to use it.
> [!NOTE]
> Always call <xref:System.Speech.Recognition.SpeechRecognitionEngine.Dispose%2A> before you release your last reference to the speech recognizer. Otherwise, the resources it is using will not be freed until the garbage collector calls the recognizer object's `Finalize` method.
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. Because this example uses the `Multiple` mode of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync%2A> method, it performs recognition until you close the console window or stop debugging.
```csharp
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create an in-process speech recognizer for the en-US locale.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                // Create and load a dictation grammar.
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.Grammar" />
<altmember cref="T:System.Speech.Recognition.SpeechRecognizer" />
</Docs>
<Members>
<MemberGroup MemberName=".ctor">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> class.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
You can construct a <xref:System.Speech.Recognition.SpeechRecognitionEngine> instance from any of the following:
- The default speech recognition engine for the system
- A specific speech recognition engine that you specify by name
- The default speech recognition engine for a locale that you specify
- A specific recognition engine that meets the criteria that you specify in a <xref:System.Speech.Recognition.RecognizerInfo> object
Before the speech recognizer can begin recognition, you must load at least one speech recognition grammar and configure the input for the recognizer.
To load a grammar, call the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
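The following is a minimal sketch of the four construction options (hypothetical; it assumes at least one installed recognizer, including one that supports en-US):
```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;

class ConstructorSketch
{
    static void Main()
    {
        // Default recognizer for the system.
        using (var byDefault = new SpeechRecognitionEngine()) { }

        // Default recognizer for a locale.
        using (var byCulture = new SpeechRecognitionEngine(new CultureInfo("en-US"))) { }

        foreach (RecognizerInfo info in SpeechRecognitionEngine.InstalledRecognizers())
        {
            // A recognizer selected by its RecognizerInfo.
            using (var byInfo = new SpeechRecognitionEngine(info)) { }

            // The same recognizer, selected by its token name (Id).
            using (var byId = new SpeechRecognitionEngine(info.Id)) { }
            break;
        }
    }
}
```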
]]></format>
</remarks>
</Docs>
</MemberGroup>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public SpeechRecognitionEngine ();" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor() cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.#ctor" />
<MemberSignature Language="VB.NET" Value="Public Sub New ()" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; SpeechRecognitionEngine();" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters />
<Docs>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> class using the default speech recognizer for the system.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Before the speech recognizer can begin speech recognition, you must load at least one recognition grammar and configure the input for the recognizer.
To load a grammar, call the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
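As a minimal sketch (assuming a default recognizer and a default audio device are available), these steps look like the following:
```csharp
using System;
using System.Speech.Recognition;

class DefaultRecognizerSketch
{
    static void Main()
    {
        // Create a recognizer using the default speech recognizer for the system.
        using (SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine())
        {
            // Load at least one grammar before starting recognition.
            recognizer.LoadGrammar(new DictationGrammar());

            // Configure the input to the recognizer.
            recognizer.SetInputToDefaultAudioDevice();

            // Perform a single synchronous recognition operation.
            RecognitionResult result = recognizer.Recognize();
            Console.WriteLine(result != null ? result.Text : "No recognition result.");
        }
    }
}
```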
]]></format>
</remarks>
</Docs>
</Member>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public SpeechRecognitionEngine (System.Globalization.CultureInfo culture);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor(class System.Globalization.CultureInfo culture) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.#ctor(System.Globalization.CultureInfo)" />
<MemberSignature Language="VB.NET" Value="Public Sub New (culture As CultureInfo)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; SpeechRecognitionEngine(System::Globalization::CultureInfo ^ culture);" />
<MemberSignature Language="F#" Value="new System.Speech.Recognition.SpeechRecognitionEngine : System.Globalization.CultureInfo -&gt; System.Speech.Recognition.SpeechRecognitionEngine" Usage="new System.Speech.Recognition.SpeechRecognitionEngine culture" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters>
<Parameter Name="culture" Type="System.Globalization.CultureInfo" />
</Parameters>
<Docs>
<param name="culture">The locale that the speech recognizer must support.</param>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> class using the default speech recognizer for a specified locale.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Microsoft Windows and the System.Speech API accept all valid language-country codes. To perform speech recognition using the language specified in the `CultureInfo` argument, a speech recognition engine that supports that language-country code must be installed. The speech recognition engines that shipped with Microsoft Windows 7 work with the following language-country codes.
- en-GB. English (United Kingdom)
- en-US. English (United States)
- de-DE. German (Germany)
- es-ES. Spanish (Spain)
- fr-FR. French (France)
- ja-JP. Japanese (Japan)
- zh-CN. Chinese (China)
- zh-TW. Chinese (Taiwan)
Two-letter language codes such as "en", "fr", or "es" are also permitted.
Before the speech recognizer can begin recognition, you must load at least one speech recognition grammar and configure the input for the recognizer.
To load a grammar, call the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
## Examples
The following example shows part of a console application that demonstrates basic speech recognition, and initializes a speech recognizer for the en-US locale.
```csharp
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create an in-process speech recognizer for the en-US locale.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                // Create and load a dictation grammar.
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentException">None of the installed speech recognizers support the specified locale, or <paramref name="culture" /> is the invariant culture.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="Culture" /> is <see langword="null" />.</exception>
</Docs>
</Member>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public SpeechRecognitionEngine (System.Speech.Recognition.RecognizerInfo recognizerInfo);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor(class System.Speech.Recognition.RecognizerInfo recognizerInfo) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.#ctor(System.Speech.Recognition.RecognizerInfo)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; SpeechRecognitionEngine(System::Speech::Recognition::RecognizerInfo ^ recognizerInfo);" />
<MemberSignature Language="F#" Value="new System.Speech.Recognition.SpeechRecognitionEngine : System.Speech.Recognition.RecognizerInfo -&gt; System.Speech.Recognition.SpeechRecognitionEngine" Usage="new System.Speech.Recognition.SpeechRecognitionEngine recognizerInfo" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters>
<Parameter Name="recognizerInfo" Type="System.Speech.Recognition.RecognizerInfo" />
</Parameters>
<Docs>
<param name="recognizerInfo">The information for the specific speech recognizer.</param>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> using the information in a <see cref="T:System.Speech.Recognition.RecognizerInfo" /> object to specify the recognizer to use.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
You can create an instance of this class for any of the installed speech recognizers. To get information about which recognizers are installed, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers%2A> method.
Before the speech recognizer can begin recognition, you must load at least one speech recognition grammar and configure the input for the recognizer.
To load a grammar, call the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
## Examples
The following example shows part of a console application that demonstrates basic speech recognition, and initializes a speech recognizer that supports the English language.
```csharp
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Select a speech recognizer that supports English.
            RecognizerInfo info = null;
            foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
            {
                if (ri.Culture.TwoLetterISOLanguageName.Equals("en"))
                {
                    info = ri;
                    break;
                }
            }
            if (info == null) return;

            // Create the selected recognizer.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(info))
            {
                // Create and load a dictation grammar.
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
```
]]></format>
</remarks>
</Docs>
</Member>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public SpeechRecognitionEngine (string recognizerId);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor(string recognizerId) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.#ctor(System.String)" />
<MemberSignature Language="VB.NET" Value="Public Sub New (recognizerId As String)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; SpeechRecognitionEngine(System::String ^ recognizerId);" />
<MemberSignature Language="F#" Value="new System.Speech.Recognition.SpeechRecognitionEngine : string -&gt; System.Speech.Recognition.SpeechRecognitionEngine" Usage="new System.Speech.Recognition.SpeechRecognitionEngine recognizerId" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters>
<Parameter Name="recognizerId" Type="System.String" />
</Parameters>
<Docs>
<param name="recognizerId">The token name of the speech recognizer to use.</param>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> class with a string parameter that specifies the name of the recognizer to use.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The token name of the recognizer is the value of the <xref:System.Speech.Recognition.RecognizerInfo.Id%2A> property of the <xref:System.Speech.Recognition.RecognizerInfo> object returned by the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerInfo%2A> property of the recognizer. To get a collection of all the installed recognizers, use the static <xref:System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers%2A> method.
Before the speech recognizer can begin recognition, you must load at least one speech recognition grammar and configure the input for the recognizer.
To load a grammar, call the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
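To discover the valid token names on a given system, you can enumerate the installed recognizers and print each Id (a short sketch; the output varies by machine):
```csharp
foreach (RecognizerInfo info in SpeechRecognitionEngine.InstalledRecognizers())
{
    // The Id property holds the token name to pass to this constructor,
    // for example "MS-1033-80-DESK".
    Console.WriteLine("{0}: {1}", info.Id, info.Description);
}
```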
## Examples
The following example shows part of a console application that demonstrates basic speech recognition, and creates an instance of the Speech Recognizer 8.0 for Windows (English - US).
```csharp
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Create an instance of the Microsoft Speech Recognizer 8.0 for
            // Windows (English - US).
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine("MS-1033-80-DESK"))
            {
                // Create and load a dictation grammar.
                recognizer.LoadGrammar(new DictationGrammar());

                // Add a handler for the speech recognized event.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start asynchronous, continuous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Multiple);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("Recognized text: " + e.Result.Text);
        }
    }
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentException">No speech recognizer with that token name is installed, or <paramref name="recognizerId" /> is the empty string ("").</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="recognizerId" /> is <see langword="null" />.</exception>
</Docs>
</Member>
<Member MemberName="AudioFormat">
<MemberSignature Language="C#" Value="public System.Speech.AudioFormat.SpeechAudioFormatInfo AudioFormat { get; }" />
<MemberSignature Language="ILAsm" Value=".property instance class System.Speech.AudioFormat.SpeechAudioFormatInfo AudioFormat" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioFormat" />
<MemberSignature Language="VB.NET" Value="Public ReadOnly Property AudioFormat As SpeechAudioFormatInfo" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property System::Speech::AudioFormat::SpeechAudioFormatInfo ^ AudioFormat { System::Speech::AudioFormat::SpeechAudioFormatInfo ^ get(); };" />
<MemberSignature Language="F#" Value="member this.AudioFormat : System.Speech.AudioFormat.SpeechAudioFormatInfo" Usage="System.Speech.Recognition.SpeechRecognitionEngine.AudioFormat" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.AudioFormat.SpeechAudioFormatInfo</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets the format of the audio being received by the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" />.</summary>
<value>The format of audio at the input to the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> instance, or <see langword="null" /> if the input is not configured or set to the null input.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
To configure the audio input, use one of the following methods:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToAudioStream%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToDefaultAudioDevice%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveFile%2A>
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToWaveStream%2A>
## Examples
The example below uses <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioFormat%2A> to obtain and display audio format data.
```csharp
static void DisplayAudioDeviceFormat(Label label, SpeechRecognitionEngine recognitionEngine)
{
    if (recognitionEngine != null && label != null)
    {
        label.Text = String.Format("Encoding Format: {0}\n" +
            "AverageBytesPerSecond {1}\n" +
            "BitsPerSample {2}\n" +
            "BlockAlign {3}\n" +
            "ChannelCount {4}\n" +
            "SamplesPerSecond {5}",
            recognitionEngine.AudioFormat.EncodingFormat.ToString(),
            recognitionEngine.AudioFormat.AverageBytesPerSecond,
            recognitionEngine.AudioFormat.BitsPerSample,
            recognitionEngine.AudioFormat.BlockAlign,
            recognitionEngine.AudioFormat.ChannelCount,
            recognitionEngine.AudioFormat.SamplesPerSecond);
    }
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.AudioFormat.SpeechAudioFormatInfo" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognizer.AudioFormat" />
</Docs>
</Member>
<Member MemberName="AudioLevel">
<MemberSignature Language="C#" Value="public int AudioLevel { get; }" />
<MemberSignature Language="ILAsm" Value=".property instance int32 AudioLevel" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevel" />
<MemberSignature Language="VB.NET" Value="Public ReadOnly Property AudioLevel As Integer" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property int AudioLevel { int get(); };" />
<MemberSignature Language="F#" Value="member this.AudioLevel : int" Usage="System.Speech.Recognition.SpeechRecognitionEngine.AudioLevel" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Int32</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets the level of the audio being received by the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" />.</summary>
<value>The audio level of the input to the speech recognizer, from 0 through 100.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The value 0 represents silence, and 100 represents the maximum input volume.
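For example, you might sample this property from an event handler while recognition is running (a sketch; the handler wiring is assumed):
```csharp
// Report the current input volume whenever speech is detected.
void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
{
    SpeechRecognitionEngine recognizer = (SpeechRecognitionEngine)sender;
    Console.WriteLine("Audio level: {0} (0 = silence, 100 = maximum)", recognizer.AudioLevel);
}
```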
]]></format>
</remarks>
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevelUpdated" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognizer.AudioLevel" />
</Docs>
</Member>
<Member MemberName="AudioLevelUpdated">
<MemberSignature Language="C#" Value="public event EventHandler&lt;System.Speech.Recognition.AudioLevelUpdatedEventArgs&gt; AudioLevelUpdated;" />
<MemberSignature Language="ILAsm" Value=".event class System.EventHandler`1&lt;class System.Speech.Recognition.AudioLevelUpdatedEventArgs&gt; AudioLevelUpdated" />
<MemberSignature Language="DocId" Value="E:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevelUpdated" />
<MemberSignature Language="VB.NET" Value="Public Custom Event AudioLevelUpdated As EventHandler(Of AudioLevelUpdatedEventArgs) " />
<MemberSignature Language="C++ CLI" Value="public:&#xA; event EventHandler&lt;System::Speech::Recognition::AudioLevelUpdatedEventArgs ^&gt; ^ AudioLevelUpdated;" />
<MemberSignature Language="F#" Value="member this.AudioLevelUpdated : EventHandler&lt;System.Speech.Recognition.AudioLevelUpdatedEventArgs&gt; " Usage="member this.AudioLevelUpdated : System.EventHandler&lt;System.Speech.Recognition.AudioLevelUpdatedEventArgs&gt; " />
<MemberType>Event</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.EventHandler&lt;System.Speech.Recognition.AudioLevelUpdatedEventArgs&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Raised when the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> reports the level of its audio input.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The <xref:System.Speech.Recognition.SpeechRecognitionEngine> raises this event multiple times per second. The frequency with which the event is raised depends on the computer on which the application is running.
To get the audio level at the time of the event, use the <xref:System.Speech.Recognition.AudioLevelUpdatedEventArgs.AudioLevel%2A> property of the associated <xref:System.Speech.Recognition.AudioLevelUpdatedEventArgs>. To get the current audio level of the input to the recognizer, use the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevel%2A> property.
When you create an <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevelUpdated> delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see [Events and Delegates](https://go.microsoft.com/fwlink/?LinkId=162418).
## Examples
The following example adds a handler for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevelUpdated> event to a <xref:System.Speech.Recognition.SpeechRecognitionEngine> object. The handler outputs the new audio level to the console.
```csharp
private SpeechRecognitionEngine recognizer;

// Initialize the SpeechRecognitionEngine object.
private void Initialize()
{
    recognizer = new SpeechRecognitionEngine();

    // Add an event handler for the AudioLevelUpdated event.
    recognizer.AudioLevelUpdated +=
        new EventHandler<AudioLevelUpdatedEventArgs>(recognizer_AudioLevelUpdated);

    // Add other initialization code here.
}

// Write the audio level to the console when the AudioLevelUpdated event is raised.
void recognizer_AudioLevelUpdated(object sender, AudioLevelUpdatedEventArgs e)
{
    Console.WriteLine("The audio level is now: {0}.", e.AudioLevel);
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.AudioLevelUpdatedEventArgs" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevel" />
</Docs>
</Member>
<Member MemberName="AudioPosition">
<MemberSignature Language="C#" Value="public TimeSpan AudioPosition { get; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.TimeSpan AudioPosition" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioPosition" />
<MemberSignature Language="VB.NET" Value="Public ReadOnly Property AudioPosition As TimeSpan" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property TimeSpan AudioPosition { TimeSpan get(); };" />
<MemberSignature Language="F#" Value="member this.AudioPosition : TimeSpan" Usage="System.Speech.Recognition.SpeechRecognitionEngine.AudioPosition" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.TimeSpan</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets the current location in the audio stream being generated by the device that is providing input to the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" />.</summary>
<value>The current location in the audio stream being generated by the input device.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioPosition%2A> property references the input device's position in its generated audio stream. By contrast, the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerAudioPosition%2A> property references the recognizer's position within its audio input. These positions can differ. For example, if the recognizer has received input for which it has not yet generated a recognition result, then the value of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerAudioPosition%2A> property is less than the value of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioPosition%2A> property.
## Examples
In the following example, the in-process speech recognizer uses a dictation grammar to match speech input. A handler for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected> event writes to the console the <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioPosition%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerAudioPosition%2A>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioLevel%2A> when the speech recognizer detects speech at its input.
```csharp
using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
    class Program
    {
        private static SpeechRecognitionEngine recognizer;

        public static void Main(string[] args)
        {
            // Initialize an in-process speech recognition engine for US English.
            using (recognizer = new SpeechRecognitionEngine(
                new System.Globalization.CultureInfo("en-US")))
            {
                recognizer.SetInputToDefaultAudioDevice();

                // Create a grammar for finding services in different cities.
                Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
                Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });

                GrammarBuilder findServices = new GrammarBuilder("Find");
                findServices.Append(services);
                findServices.Append("near");
                findServices.Append(cities);

                // Create a Grammar object from the GrammarBuilder and load it to the recognizer.
                Grammar servicesGrammar = new Grammar(findServices);
                recognizer.LoadGrammarAsync(servicesGrammar);

                // Add handlers for events.
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
                recognizer.SpeechDetected +=
                    new EventHandler<SpeechDetectedEventArgs>(recognizer_SpeechDetected);

                // Start asynchronous recognition.
                recognizer.RecognizeAsync();
                Console.WriteLine("Starting asynchronous recognition...");

                // Keep the console window open.
                Console.ReadLine();
            }
        }

        // Gather information about detected speech and write it to the console.
        static void recognizer_SpeechDetected(object sender, SpeechDetectedEventArgs e)
        {
            Console.WriteLine();
            Console.WriteLine("Speech detected:");
            Console.WriteLine("  Audio level: " + recognizer.AudioLevel);
            Console.WriteLine("  Audio position at the event: " + e.AudioPosition);
            Console.WriteLine("  Current audio position: " + recognizer.AudioPosition);
            Console.WriteLine("  Current recognizer audio position: " +
                recognizer.RecognizerAudioPosition);
        }

        // Write the text of the recognition result to the console.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            Console.WriteLine("\nSpeech recognized: " + e.Result.Text);

            // Add event handler code here.
        }
    }
}
```
]]></format>
</remarks>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerAudioPosition" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognizer.AudioPosition" />
</Docs>
</Member>
<Member MemberName="AudioSignalProblemOccurred">
<MemberSignature Language="C#" Value="public event EventHandler&lt;System.Speech.Recognition.AudioSignalProblemOccurredEventArgs&gt; AudioSignalProblemOccurred;" />
<MemberSignature Language="ILAsm" Value=".event class System.EventHandler`1&lt;class System.Speech.Recognition.AudioSignalProblemOccurredEventArgs&gt; AudioSignalProblemOccurred" />
<MemberSignature Language="DocId" Value="E:System.Speech.Recognition.SpeechRecognitionEngine.AudioSignalProblemOccurred" />
<MemberSignature Language="VB.NET" Value="Public Custom Event AudioSignalProblemOccurred As EventHandler(Of AudioSignalProblemOccurredEventArgs) " />
<MemberSignature Language="C++ CLI" Value="public:&#xA; event EventHandler&lt;System::Speech::Recognition::AudioSignalProblemOccurredEventArgs ^&gt; ^ AudioSignalProblemOccurred;" />
<MemberSignature Language="F#" Value="member this.AudioSignalProblemOccurred : EventHandler&lt;System.Speech.Recognition.AudioSignalProblemOccurredEventArgs&gt; " Usage="member this.AudioSignalProblemOccurred : System.EventHandler&lt;System.Speech.Recognition.AudioSignalProblemOccurredEventArgs&gt; " />
<MemberType>Event</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.EventHandler&lt;System.Speech.Recognition.AudioSignalProblemOccurredEventArgs&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Raised when the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> detects a problem in the audio signal.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
To determine which problem occurred, use the <xref:System.Speech.Recognition.AudioSignalProblemOccurredEventArgs.AudioSignalProblem%2A> property of the associated <xref:System.Speech.Recognition.AudioSignalProblemOccurredEventArgs>.
When you create an <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioSignalProblemOccurred> delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see [Events and Delegates](https://go.microsoft.com/fwlink/?LinkId=162418).
## Examples
The following example defines an event handler that gathers information about an <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioSignalProblemOccurred> event.
```csharp
private SpeechRecognitionEngine recognizer;

// Initialize the speech recognition engine.
private void Initialize()
{
    recognizer = new SpeechRecognitionEngine();

    // Add a handler for the AudioSignalProblemOccurred event.
    recognizer.AudioSignalProblemOccurred +=
        new EventHandler<AudioSignalProblemOccurredEventArgs>(
            recognizer_AudioSignalProblemOccurred);
}

// Gather information when the AudioSignalProblemOccurred event is raised.
void recognizer_AudioSignalProblemOccurred(object sender, AudioSignalProblemOccurredEventArgs e)
{
    // Requires a using directive for System.Text.
    StringBuilder details = new StringBuilder();

    details.AppendLine("Audio signal problem information:");
    details.AppendFormat(
        "  Audio level: {0}" + Environment.NewLine +
        "  Audio position: {1}" + Environment.NewLine +
        "  Audio signal problem: {2}" + Environment.NewLine +
        "  Recognition engine audio position: {3}" + Environment.NewLine,
        e.AudioLevel, e.AudioPosition, e.AudioSignalProblem,
        e.RecognizerAudioPosition);

    // Insert additional event handler code here.
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.AudioSignalProblem" />
<altmember cref="T:System.Speech.Recognition.AudioSignalProblemOccurredEventArgs" />
</Docs>
</Member>
<Member MemberName="AudioState">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.AudioState AudioState { get; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.Speech.Recognition.AudioState AudioState" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioState" />
<MemberSignature Language="VB.NET" Value="Public ReadOnly Property AudioState As AudioState" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property System::Speech::Recognition::AudioState AudioState { System::Speech::Recognition::AudioState get(); };" />
<MemberSignature Language="F#" Value="member this.AudioState : System.Speech.Recognition.AudioState" Usage="System.Speech.Recognition.SpeechRecognitionEngine.AudioState" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.Recognition.AudioState</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets the state of the audio being received by the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" />.</summary>
<value>The state of the audio input to the speech recognizer.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioState%2A> property represents the audio state with a member of the <xref:System.Speech.Recognition.AudioState> enumeration.
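For example, a short sketch that branches on the enumeration members (Stopped, Silence, and Speech):
```csharp
// Describe the recognizer's current audio input state.
static string DescribeAudioState(SpeechRecognitionEngine recognizer)
{
    switch (recognizer.AudioState)
    {
        case AudioState.Stopped: return "No audio input.";
        case AudioState.Silence: return "Input is silence.";
        case AudioState.Speech:  return "Input contains speech.";
        default:                 return recognizer.AudioState.ToString();
    }
}
```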
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.AudioState" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognizer.AudioState" />
</Docs>
</Member>
<Member MemberName="AudioStateChanged">
<MemberSignature Language="C#" Value="public event EventHandler&lt;System.Speech.Recognition.AudioStateChangedEventArgs&gt; AudioStateChanged;" />
<MemberSignature Language="ILAsm" Value=".event class System.EventHandler`1&lt;class System.Speech.Recognition.AudioStateChangedEventArgs&gt; AudioStateChanged" />
<MemberSignature Language="DocId" Value="E:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged" />
<MemberSignature Language="VB.NET" Value="Public Custom Event AudioStateChanged As EventHandler(Of AudioStateChangedEventArgs) " />
<MemberSignature Language="C++ CLI" Value="public:&#xA; event EventHandler&lt;System::Speech::Recognition::AudioStateChangedEventArgs ^&gt; ^ AudioStateChanged;" />
<MemberSignature Language="F#" Value="member this.AudioStateChanged : EventHandler&lt;System.Speech.Recognition.AudioStateChangedEventArgs&gt; " Usage="member this.AudioStateChanged : System.EventHandler&lt;System.Speech.Recognition.AudioStateChangedEventArgs&gt; " />
<MemberType>Event</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.EventHandler&lt;System.Speech.Recognition.AudioStateChangedEventArgs&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Raised when the state changes in the audio being received by the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" />.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
To get the audio state at the time of the event, use the <xref:System.Speech.Recognition.AudioStateChangedEventArgs.AudioState%2A> property of the associated <xref:System.Speech.Recognition.AudioStateChangedEventArgs>. To get the current audio state of the input to the recognizer, use the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioState%2A> property. For more information about audio state, see the <xref:System.Speech.Recognition.AudioState> enumeration.
When you create an <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged> delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see [Events and Delegates](https://go.microsoft.com/fwlink/?LinkId=162418).
## Examples
The following example uses a handler for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged> event to write the recognizer's new <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioState%2A> to the console each time it changes, using a member of the <xref:System.Speech.Recognition.AudioState> enumeration.
```csharp
using System;
using System.Speech.Recognition;

namespace SampleRecognition
{
    class Program
    {
        static void Main(string[] args)
        {
            // Initialize an in-process speech recognition engine.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
            {
                // Create and load a grammar.
                Choices animals = new Choices(new string[] { "cow", "pig", "goat" });
                GrammarBuilder farm = new GrammarBuilder("On this farm he had a");
                farm.Append(animals);

                Grammar farmAnimals = new Grammar(farm);
                farmAnimals.Name = "Farm";
                recognizer.LoadGrammar(farmAnimals);

                // Attach event handlers.
                recognizer.AudioStateChanged +=
                    new EventHandler<AudioStateChangedEventArgs>(recognizer_AudioStateChanged);
                recognizer.SpeechRecognized +=
                    new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
                recognizer.LoadGrammarCompleted +=
                    new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);

                // Set the input to the recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                // Start recognition.
                recognizer.RecognizeAsync();

                // Keep the console window open.
                Console.ReadLine();
            }
        }

        // Handle the LoadGrammarCompleted event.
        static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
        {
            Console.WriteLine("Grammar loaded: " + e.Grammar.Name);
        }

        // Handle the SpeechRecognized event.
        static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
        {
            if (e.Result != null && e.Result.Text != null)
            {
                Console.WriteLine();
                Console.WriteLine("  Recognized text = {0}", e.Result.Text);
                Console.WriteLine();
            }
            else
            {
                Console.WriteLine("  Recognized text not available.");
            }

            Console.WriteLine();
            Console.WriteLine("Done.");
            Console.WriteLine();
            Console.WriteLine("Press any key to exit...");
            Console.ReadKey();
        }

        // Handle the AudioStateChanged event.
        static void recognizer_AudioStateChanged(object sender, AudioStateChangedEventArgs e)
        {
            Console.WriteLine("The new audio state is: " + e.AudioState);
        }
    }
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.AudioState" />
<altmember cref="T:System.Speech.Recognition.AudioStateChangedEventArgs" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.AudioState" />
</Docs>
</Member>
<Member MemberName="BabbleTimeout">
<MemberSignature Language="C#" Value="public TimeSpan BabbleTimeout { get; set; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.TimeSpan BabbleTimeout" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<MemberSignature Language="VB.NET" Value="Public Property BabbleTimeout As TimeSpan" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property TimeSpan BabbleTimeout { TimeSpan get(); void set(TimeSpan value); };" />
<MemberSignature Language="F#" Value="member this.BabbleTimeout : TimeSpan with get, set" Usage="System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute>
<AttributeName>System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Advanced)</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.TimeSpan</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets or sets the time interval during which a <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> accepts input containing only background noise, before finalizing recognition.</summary>
<value>The duration of the time interval.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Each speech recognizer has an algorithm to distinguish between silence and speech. The recognizer classifies as background noise any non-silence input that does not match the initial rule of any of the recognizer's loaded and enabled speech recognition grammars. If the recognizer receives only background noise and silence within the babble timeout interval, then the recognizer finalizes that recognition operation.
- For asynchronous recognition operations, the recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> event, where the <xref:System.Speech.Recognition.RecognizeCompletedEventArgs.BabbleTimeout%2A?displayProperty=nameWithType> property is `true`, and the <xref:System.Speech.Recognition.RecognizeCompletedEventArgs.Result%2A?displayProperty=nameWithType> property is `null`.
- For synchronous recognition operations and emulation, the recognizer returns `null`, instead of a valid <xref:System.Speech.Recognition.RecognitionResult>.
If the babble timeout period is set to 0, the recognizer does not perform a babble timeout check. The timeout interval can be any non-negative value. The default is 0 seconds.
## Examples
The following example shows part of a console application that sets the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A> properties of a <xref:System.Speech.Recognition.SpeechRecognitionEngine> before initiating speech recognition. Handlers for the speech recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> events write event information to the console to demonstrate how these timeout properties affect recognition operations.
```csharp
using System;
using System.Speech.Recognition;

namespace SpeechRecognitionApp
{
    class Program
    {
        static void Main(string[] args)
        {
            // Initialize an in-process speech recognizer.
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                // Load a Grammar object.
                recognizer.LoadGrammar(CreateServicesGrammar("FindServices"));

                // Add event handlers.
                recognizer.AudioStateChanged +=
                    new EventHandler<AudioStateChangedEventArgs>(AudioStateChangedHandler);
                recognizer.RecognizeCompleted +=
                    new EventHandler<RecognizeCompletedEventArgs>(RecognizeCompletedHandler);

                // Configure input to the speech recognizer.
                recognizer.SetInputToDefaultAudioDevice();

                recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(3);
                recognizer.BabbleTimeout = TimeSpan.FromSeconds(2);
                recognizer.EndSilenceTimeout = TimeSpan.FromSeconds(1);
                recognizer.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(1.5);

                Console.WriteLine("BabbleTimeout: {0}", recognizer.BabbleTimeout);
                Console.WriteLine("InitialSilenceTimeout: {0}", recognizer.InitialSilenceTimeout);
                Console.WriteLine("EndSilenceTimeout: {0}", recognizer.EndSilenceTimeout);
                Console.WriteLine("EndSilenceTimeoutAmbiguous: {0}", recognizer.EndSilenceTimeoutAmbiguous);
                Console.WriteLine();

                // Start asynchronous speech recognition.
                recognizer.RecognizeAsync(RecognizeMode.Single);

                // Keep the console window open.
                while (true)
                {
                    Console.ReadLine();
                }
            }
        }

        // Create a grammar and build it into a Grammar object.
        static Grammar CreateServicesGrammar(string grammarName)
        {
            // Create a grammar for finding services in different cities.
            Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
            Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });

            GrammarBuilder findServices = new GrammarBuilder("Find");
            findServices.Append(services);
            findServices.Append("near");
            findServices.Append(cities);

            // Create a Grammar object from the GrammarBuilder.
            Grammar servicesGrammar = new Grammar(findServices);
            servicesGrammar.Name = grammarName;
            return servicesGrammar;
        }

        // Handle the AudioStateChanged event.
        static void AudioStateChangedHandler(object sender, AudioStateChangedEventArgs e)
        {
            Console.WriteLine("AudioStateChanged ({0}): {1}",
                DateTime.Now.ToString("mm:ss.f"), e.AudioState);
        }

        // Handle the RecognizeCompleted event.
        static void RecognizeCompletedHandler(object sender, RecognizeCompletedEventArgs e)
        {
            Console.WriteLine("RecognizeCompleted ({0}):", DateTime.Now.ToString("mm:ss.f"));

            string resultText;
            if (e.Result != null) { resultText = e.Result.Text; }
            else { resultText = "<null>"; }

            Console.WriteLine(
                "  BabbleTimeout: {0}; InitialSilenceTimeout: {1}; Result text: {2}",
                e.BabbleTimeout, e.InitialSilenceTimeout, resultText);

            if (e.Error != null)
            {
                Console.WriteLine("  Exception message: {0}", e.Error.Message);
            }

            // Start the next asynchronous recognition operation.
            ((SpeechRecognitionEngine)sender).RecognizeAsync(RecognizeMode.Single);
        }
    }
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentOutOfRangeException">This property is set to less than 0 seconds.</exception>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
</Docs>
</Member>
<MemberGroup MemberName="Dispose">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Disposes the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> object.</summary>
</Docs>
</MemberGroup>
<Member MemberName="Dispose">
<MemberSignature Language="C#" Value="public void Dispose ();" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig newslot virtual instance void Dispose() cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.Dispose" />
<MemberSignature Language="VB.NET" Value="Public Sub Dispose ()" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; virtual void Dispose();" />
<MemberSignature Language="F#" Value="abstract member Dispose : unit -&gt; unit&#xA;override this.Dispose : unit -&gt; unit" Usage="speechRecognitionEngine.Dispose " />
<MemberType>Method</MemberType>
<Implements>
<InterfaceMember>M:System.IDisposable.Dispose</InterfaceMember>
</Implements>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters />
<Docs>
<summary>Disposes the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> object.</summary>
<remarks>To be added.</remarks>
</Docs>
</Member>
<Member MemberName="Dispose">
<MemberSignature Language="C#" Value="protected virtual void Dispose (bool disposing);" />
<MemberSignature Language="ILAsm" Value=".method familyhidebysig newslot virtual instance void Dispose(bool disposing) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.Dispose(System.Boolean)" />
<MemberSignature Language="VB.NET" Value="Protected Overridable Sub Dispose (disposing As Boolean)" />
<MemberSignature Language="C++ CLI" Value="protected:&#xA; virtual void Dispose(bool disposing);" />
<MemberSignature Language="F#" Value="abstract member Dispose : bool -&gt; unit&#xA;override this.Dispose : bool -&gt; unit" Usage="speechRecognitionEngine.Dispose disposing" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="disposing" Type="System.Boolean" />
</Parameters>
<Docs>
<param name="disposing">
<see langword="true" /> to release both managed and unmanaged resources; <see langword="false" /> to release only unmanaged resources.</param>
<summary>Disposes the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> object and releases resources used during the session.</summary>
<remarks>To be added.</remarks>
</Docs>
</Member>
<MemberGroup MemberName="EmulateRecognize">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Emulates input to the speech recognizer, using text in place of audio for synchronous speech recognition.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
These methods bypass the system audio input and provide text to the recognizer as <xref:System.String> objects or as an array of <xref:System.Speech.Recognition.RecognizedWordUnit> objects. This can be helpful when you are testing or debugging an application or grammar. For example, you can use emulation to determine whether a word is in a grammar and what semantics are returned when the word is recognized. Use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A> method to disable audio input to the speech recognition engine during emulation operations.
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated. The recognizer ignores new lines and extra white space and treats punctuation as literal input.
> [!NOTE]
> The <xref:System.Speech.Recognition.RecognitionResult> object generated by the speech recognizer in response to emulated input has a value of `null` for its <xref:System.Speech.Recognition.RecognitionResult.Audio%2A> property.
To emulate asynchronous recognition, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync%2A> method.
]]></format>
</remarks>
</Docs>
</MemberGroup>
<Member MemberName="EmulateRecognize">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.RecognitionResult EmulateRecognize (string inputText);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance class System.Speech.Recognition.RecognitionResult EmulateRecognize(string inputText) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String)" />
<MemberSignature Language="VB.NET" Value="Public Function EmulateRecognize (inputText As String) As RecognitionResult" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(System::String ^ inputText);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognize : string -&gt; System.Speech.Recognition.RecognitionResult" Usage="speechRecognitionEngine.EmulateRecognize inputText" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute FrameworkAlternate="netframework-4.0">
<AttributeName>System.Runtime.TargetedPatchingOptOut("Performance critical to inline this type of method across NGen image boundaries")</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.Speech.Recognition.RecognitionResult</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="inputText" Type="System.String" />
</Parameters>
<Docs>
<param name="inputText">The input for the recognition operation.</param>
<summary>Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition.</summary>
<returns>The result for the recognition operation, or <see langword="null" /> if the operation is not successful or the recognizer is not enabled.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated.
The recognizers that ship with Windows Vista and Windows 7 ignore case and character width when applying grammar rules to the input phrase. For more information about this type of comparison, see the <xref:System.Globalization.CompareOptions> enumeration values <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> and <xref:System.Globalization.CompareOptions.IgnoreWidth>. The recognizers also ignore new lines and extra white space and treat punctuation as literal input.
## Examples
The code example below is part of a console application that demonstrates emulated input, the associated recognition results, and the associated events raised by the speech recognizer. The example generates the following output.
```
TestRecognize("Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = Smith
...Recognition result text = Smith
TestRecognize("Jones")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Jones; Text = Jones
...Recognition result text = Jones
TestRecognize("Mister")...
SpeechDetected event raised.
SpeechHypothesized event raised.
Grammar = Smith; Text = mister
SpeechRecognitionRejected event raised.
Grammar = <not available>; Text =
...No recognition result.
TestRecognize("Mister Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = mister Smith
...Recognition result text = mister Smith
press any key to exit...
```
```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;
namespace Sre_EmulateRecognize
{
class Program
{
static void Main(string[] args)
{
// Create an in-process speech recognizer for the en-US locale.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
// Load grammars.
recognizer.LoadGrammar(CreateNameGrammar("Smith"));
recognizer.LoadGrammar(CreateNameGrammar("Jones"));
// Disable audio input to the recognizer.
recognizer.SetInputToNull();
// Add handlers for events raised by the EmulateRecognize method.
recognizer.SpeechDetected +=
new EventHandler<SpeechDetectedEventArgs>(
SpeechDetectedHandler);
recognizer.SpeechHypothesized +=
new EventHandler<SpeechHypothesizedEventArgs>(
SpeechHypothesizedHandler);
recognizer.SpeechRecognitionRejected +=
new EventHandler<SpeechRecognitionRejectedEventArgs>(
SpeechRecognitionRejectedHandler);
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(
SpeechRecognizedHandler);
// Start four synchronous emulated recognition operations.
TestRecognize(recognizer, "Smith");
TestRecognize(recognizer, "Jones");
TestRecognize(recognizer, "Mister");
TestRecognize(recognizer, "Mister Smith");
}
Console.WriteLine("press any key to exit...");
Console.ReadKey(true);
}
// Create a simple name grammar.
// Set the grammar name to the surname.
private static Grammar CreateNameGrammar(string surname)
{
GrammarBuilder builder = new GrammarBuilder("mister", 0, 1);
builder.Append(surname);
Grammar nameGrammar = new Grammar(builder);
nameGrammar.Name = surname;
return nameGrammar;
}
// Send emulated input to the recognizer for synchronous recognition.
private static void TestRecognize(
SpeechRecognitionEngine recognizer, string input)
{
Console.WriteLine("TestRecognize(\"{0}\")...", input);
RecognitionResult result =
recognizer.EmulateRecognize(input, CompareOptions.IgnoreCase);
if (result != null)
{
Console.WriteLine("...Recognition result text = {0}",
result.Text ?? "<null>");
}
else
{
Console.WriteLine("...No recognition result.");
}
Console.WriteLine();
}
static void SpeechDetectedHandler(
object sender, SpeechDetectedEventArgs e)
{
Console.WriteLine(" SpeechDetected event raised.");
}
// Handle events.
static void SpeechHypothesizedHandler(
object sender, SpeechHypothesizedEventArgs e)
{
Console.WriteLine(" SpeechHypothesized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void SpeechRecognitionRejectedHandler(
object sender, SpeechRecognitionRejectedEventArgs e)
{
Console.WriteLine(" SpeechRecognitionRejected event raised.");
if (e.Result != null)
{
string grammarName;
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name ?? "<none>";
}
else
{
grammarName = "<not available>";
}
Console.WriteLine(" Grammar = {0}; Text = {1}",
grammarName, e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine(" SpeechRecognized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
}
}
```
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="inputText" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="inputText" /> is the empty string ("").</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
</Docs>
</Member>
<Member MemberName="EmulateRecognize">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.RecognitionResult EmulateRecognize (System.Speech.Recognition.RecognizedWordUnit[] wordUnits, System.Globalization.CompareOptions compareOptions);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance class System.Speech.Recognition.RecognitionResult EmulateRecognize(class System.Speech.Recognition.RecognizedWordUnit[] wordUnits, valuetype System.Globalization.CompareOptions compareOptions) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(cli::array &lt;System::Speech::Recognition::RecognizedWordUnit ^&gt; ^ wordUnits, System::Globalization::CompareOptions compareOptions);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognize : System.Speech.Recognition.RecognizedWordUnit[] * System.Globalization.CompareOptions -&gt; System.Speech.Recognition.RecognitionResult" Usage="speechRecognitionEngine.EmulateRecognize (wordUnits, compareOptions)" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.Recognition.RecognitionResult</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="wordUnits" Type="System.Speech.Recognition.RecognizedWordUnit[]" />
<Parameter Name="compareOptions" Type="System.Globalization.CompareOptions" />
</Parameters>
<Docs>
<param name="wordUnits">An array of word units that contains the input for the recognition operation.</param>
<param name="compareOptions">A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.</param>
<summary>Emulates input of specific words to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.</summary>
<returns>The result for the recognition operation, or <see langword="null" /> if the operation is not successful or the recognizer is not enabled.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated.
The recognizer uses `compareOptions` when it applies grammar rules to the input phrase. The recognizers that ship with Windows Vista and Windows 7 ignore case if the <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> or <xref:System.Globalization.CompareOptions.IgnoreCase> value is present. The recognizers always ignore the character width and never ignore the Kana type. The recognizers also ignore new lines and extra white space and treat punctuation as literal input. For more information about character width and Kana type, see the <xref:System.Globalization.CompareOptions> enumeration.
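## Examples
The following is a minimal sketch, not a complete sample, of how this overload might be called; the helper method name is hypothetical. It assumes a recognizer with a loaded grammar that accepts the optional word "mister" followed by the surname "Smith", as in the name grammar used elsewhere on this page, and with audio input disabled by <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>.
```csharp
// Hypothetical helper: emulate recognition from an array of word units.
// Assumes the recognizer already has a suitable grammar loaded and audio
// input disabled with SetInputToNull.
private static void TestEmulateWordUnits(SpeechRecognitionEngine recognizer)
{
    RecognizedWordUnit[] wordUnits = new RecognizedWordUnit[]
    {
        new RecognizedWordUnit(
            "mister",                            // displayed text
            1.0f,                                // confidence score
            null,                                // pronunciation (not needed for emulation)
            "mister",                            // lexical form
            DisplayAttributes.OneTrailingSpace,  // display formatting
            TimeSpan.Zero,                       // audio position (no audio)
            TimeSpan.Zero),                      // audio duration (no audio)
        new RecognizedWordUnit("Smith", 1.0f, null, "smith",
            DisplayAttributes.OneTrailingSpace, TimeSpan.Zero, TimeSpan.Zero)
    };

    RecognitionResult result =
        recognizer.EmulateRecognize(wordUnits, CompareOptions.OrdinalIgnoreCase);

    if (result != null)
    {
        Console.WriteLine("Recognition result text = {0}", result.Text);
    }
    else
    {
        Console.WriteLine("No recognition result.");
    }
}
```
Because emulation supplies no audio, the word units use zero for the audio position and duration, and the <xref:System.Speech.Recognition.RecognitionResult.Audio%2A> property of the result is `null`.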
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="wordUnits" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="wordUnits" /> contains one or more <see langword="null" /> elements.</exception>
<exception cref="T:System.NotSupportedException">
<paramref name="compareOptions" /> contains the <see cref="F:System.Globalization.CompareOptions.IgnoreNonSpace" />, <see cref="F:System.Globalization.CompareOptions.IgnoreSymbols" />, or <see cref="F:System.Globalization.CompareOptions.StringSort" /> flag.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
</Docs>
</Member>
<Member MemberName="EmulateRecognize">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.RecognitionResult EmulateRecognize (string inputText, System.Globalization.CompareOptions compareOptions);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance class System.Speech.Recognition.RecognitionResult EmulateRecognize(string inputText, valuetype System.Globalization.CompareOptions compareOptions) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String,System.Globalization.CompareOptions)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Speech::Recognition::RecognitionResult ^ EmulateRecognize(System::String ^ inputText, System::Globalization::CompareOptions compareOptions);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognize : string * System.Globalization.CompareOptions -&gt; System.Speech.Recognition.RecognitionResult" Usage="speechRecognitionEngine.EmulateRecognize (inputText, compareOptions)" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.Recognition.RecognitionResult</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="inputText" Type="System.String" />
<Parameter Name="compareOptions" Type="System.Globalization.CompareOptions" />
</Parameters>
<Docs>
<param name="inputText">The input phrase for the recognition operation.</param>
<param name="compareOptions">A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.</param>
<summary>Emulates input of a phrase to the speech recognizer, using text in place of audio for synchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.</summary>
<returns>The result for the recognition operation, or <see langword="null" /> if the operation is not successful or the recognizer is not enabled.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated.
The recognizer uses `compareOptions` when it applies grammar rules to the input phrase. The recognizers that ship with Windows Vista and Windows 7 ignore case if the <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> or <xref:System.Globalization.CompareOptions.IgnoreCase> value is present. The recognizers always ignore the character width and never ignore the Kana type. The recognizers also ignore new lines and extra white space and treat punctuation as literal input. For more information about character width and Kana type, see the <xref:System.Globalization.CompareOptions> enumeration.
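## Examples
The following is a minimal sketch, not a complete sample, showing a call to this overload; the helper method name is hypothetical. It assumes a recognizer with loaded grammars and disabled audio input, as in the example for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String)> overload.
```csharp
// Hypothetical helper: emulate recognition of a phrase, explicitly
// requesting a case-insensitive comparison against the loaded grammars.
private static void TestEmulatePhrase(SpeechRecognitionEngine recognizer)
{
    RecognitionResult result =
        recognizer.EmulateRecognize("MISTER SMITH", CompareOptions.IgnoreCase);

    Console.WriteLine(result != null
        ? "Recognition result text = " + result.Text
        : "No recognition result.");
}
```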
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="inputText" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="inputText" /> is the empty string ("").</exception>
<exception cref="T:System.NotSupportedException">
<paramref name="compareOptions" /> contains the <see cref="F:System.Globalization.CompareOptions.IgnoreNonSpace" />, <see cref="F:System.Globalization.CompareOptions.IgnoreSymbols" />, or <see cref="F:System.Globalization.CompareOptions.StringSort" /> flag.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
</Docs>
</Member>
<MemberGroup MemberName="EmulateRecognizeAsync">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Emulates input to the speech recognizer, using text in place of audio for asynchronous speech recognition.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
These methods bypass the system audio input and provide text to the recognizer as <xref:System.String> objects or as an array of <xref:System.Speech.Recognition.RecognizedWordUnit> objects. This can be helpful when you are testing or debugging an application or grammar. For example, you can use emulation to determine whether a word is in a grammar and what semantics are returned when the word is recognized. Use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A> method to disable audio input to the speech recognition engine during emulation operations.
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated. When the recognizer completes the asynchronous recognition operation, it raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event. The recognizer ignores new lines and extra white space and treats punctuation as literal input.
> [!NOTE]
> The <xref:System.Speech.Recognition.RecognitionResult> object generated by the speech recognizer in response to emulated input has a value of `null` for its <xref:System.Speech.Recognition.RecognitionResult.Audio%2A> property.
To emulate synchronous recognition, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize%2A> method.
]]></format>
</remarks>
</Docs>
</MemberGroup>
<Member MemberName="EmulateRecognizeAsync">
<MemberSignature Language="C#" Value="public void EmulateRecognizeAsync (string inputText);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void EmulateRecognizeAsync(string inputText) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String)" />
<MemberSignature Language="VB.NET" Value="Public Sub EmulateRecognizeAsync (inputText As String)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; void EmulateRecognizeAsync(System::String ^ inputText);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognizeAsync : string -&gt; unit" Usage="speechRecognitionEngine.EmulateRecognizeAsync inputText" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute FrameworkAlternate="netframework-4.0">
<AttributeName>System.Runtime.TargetedPatchingOptOut("Performance critical to inline this type of method across NGen image boundaries")</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="inputText" Type="System.String" />
</Parameters>
<Docs>
<param name="inputText">The input for the recognition operation.</param>
<summary>Emulates input of a phrase to the speech recognizer, using text in place of audio for asynchronous speech recognition.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated. When the recognizer completes the asynchronous recognition operation, it raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event.
The recognizers that ship with Windows Vista and Windows 7 ignore case and character width when applying grammar rules to the input phrase. For more information about this type of comparison, see the <xref:System.Globalization.CompareOptions> enumeration values <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> and <xref:System.Globalization.CompareOptions.IgnoreWidth>. The recognizers also ignore new lines and extra white space and treat punctuation as literal input.
## Examples
The code example below is part of a console application that demonstrates asynchronous emulated input, the associated recognition results, and the associated events raised by the speech recognizer. The example generates the following output.
```
TestRecognizeAsync("Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = Smith
EmulateRecognizeCompleted event raised.
Grammar = Smith; Text = Smith
Done.
TestRecognizeAsync("Jones")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Jones; Text = Jones
EmulateRecognizeCompleted event raised.
Grammar = Jones; Text = Jones
Done.
TestRecognizeAsync("Mister")...
SpeechDetected event raised.
SpeechHypothesized event raised.
Grammar = Smith; Text = mister
SpeechRecognitionRejected event raised.
Grammar = <not available>; Text =
EmulateRecognizeCompleted event raised.
No recognition result available.
Done.
TestRecognizeAsync("Mister Smith")...
SpeechDetected event raised.
SpeechRecognized event raised.
Grammar = Smith; Text = mister Smith
EmulateRecognizeCompleted event raised.
Grammar = Smith; Text = mister Smith
Done.
press any key to exit...
```
```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;
using System.Threading;
namespace SreEmulateRecognizeAsync
{
class Program
{
// Indicate when an asynchronous operation is finished.
static bool completed;
static void Main(string[] args)
{
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new CultureInfo("en-US")))
{
// Load grammars.
recognizer.LoadGrammar(CreateNameGrammar("Smith"));
recognizer.LoadGrammar(CreateNameGrammar("Jones"));
// Configure the audio input.
recognizer.SetInputToNull();
// Add event handlers for the events raised by the
// EmulateRecognizeAsync method.
recognizer.SpeechDetected +=
new EventHandler<SpeechDetectedEventArgs>(
SpeechDetectedHandler);
recognizer.SpeechHypothesized +=
new EventHandler<SpeechHypothesizedEventArgs>(
SpeechHypothesizedHandler);
recognizer.SpeechRecognitionRejected +=
new EventHandler<SpeechRecognitionRejectedEventArgs>(
SpeechRecognitionRejectedHandler);
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(
SpeechRecognizedHandler);
recognizer.EmulateRecognizeCompleted +=
new EventHandler<EmulateRecognizeCompletedEventArgs>(
EmulateRecognizeCompletedHandler);
// Start four asynchronous emulated recognition operations.
TestRecognizeAsync(recognizer, "Smith");
TestRecognizeAsync(recognizer, "Jones");
TestRecognizeAsync(recognizer, "Mister");
TestRecognizeAsync(recognizer, "Mister Smith");
}
Console.WriteLine("press any key to exit...");
Console.ReadKey(true);
}
// Create a simple name grammar.
// Set the grammar name to the surname.
private static Grammar CreateNameGrammar(string surname)
{
GrammarBuilder builder = new GrammarBuilder("mister", 0, 1);
builder.Append(surname);
Grammar nameGrammar = new Grammar(builder);
nameGrammar.Name = surname;
return nameGrammar;
}
// Send emulated input to the recognizer for asynchronous
// recognition.
private static void TestRecognizeAsync(
SpeechRecognitionEngine recognizer, string input)
{
completed = false;
Console.WriteLine("TestRecognizeAsync(\"{0}\")...", input);
recognizer.EmulateRecognizeAsync(input);
// Wait for the operation to complete.
while (!completed)
{
Thread.Sleep(333);
}
Console.WriteLine(" Done.");
Console.WriteLine();
}
static void SpeechDetectedHandler(
object sender, SpeechDetectedEventArgs e)
{
Console.WriteLine(" SpeechDetected event raised.");
}
static void SpeechHypothesizedHandler(
object sender, SpeechHypothesizedEventArgs e)
{
Console.WriteLine(" SpeechHypothesized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
// Handle events.
static void SpeechRecognitionRejectedHandler(
object sender, SpeechRecognitionRejectedEventArgs e)
{
Console.WriteLine(" SpeechRecognitionRejected event raised.");
if (e.Result != null)
{
string grammarName;
if (e.Result.Grammar != null)
{
grammarName = e.Result.Grammar.Name ?? "<none>";
}
else
{
grammarName = "<not available>";
}
Console.WriteLine(" Grammar = {0}; Text = {1}",
grammarName, e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine(" SpeechRecognized event raised.");
if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
}
static void EmulateRecognizeCompletedHandler(
object sender, EmulateRecognizeCompletedEventArgs e)
{
Console.WriteLine(" EmulateRecognizeCompleted event raised.");
if (e.Error != null)
{
Console.WriteLine(" {0} exception encountered: {1}:",
e.Error.GetType().Name, e.Error.Message);
}
else if (e.Cancelled)
{
Console.WriteLine(" Operation cancelled.");
}
else if (e.Result != null)
{
Console.WriteLine(" Grammar = {0}; Text = {1}",
e.Result.Grammar.Name ?? "<none>", e.Result.Text);
}
else
{
Console.WriteLine(" No recognition result available.");
}
completed = true;
}
}
}
```
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="inputText" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="inputText" /> is the empty string ("").</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted" />
</Docs>
</Member>
<Member MemberName="EmulateRecognizeAsync">
<MemberSignature Language="C#" Value="public void EmulateRecognizeAsync (System.Speech.Recognition.RecognizedWordUnit[] wordUnits, System.Globalization.CompareOptions compareOptions);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void EmulateRecognizeAsync(class System.Speech.Recognition.RecognizedWordUnit[] wordUnits, valuetype System.Globalization.CompareOptions compareOptions) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; void EmulateRecognizeAsync(cli::array &lt;System::Speech::Recognition::RecognizedWordUnit ^&gt; ^ wordUnits, System::Globalization::CompareOptions compareOptions);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognizeAsync : System.Speech.Recognition.RecognizedWordUnit[] * System.Globalization.CompareOptions -&gt; unit" Usage="speechRecognitionEngine.EmulateRecognizeAsync (wordUnits, compareOptions)" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="wordUnits" Type="System.Speech.Recognition.RecognizedWordUnit[]" />
<Parameter Name="compareOptions" Type="System.Globalization.CompareOptions" />
</Parameters>
<Docs>
<param name="wordUnits">An array of word units that contains the input for the recognition operation.</param>
<param name="compareOptions">A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.</param>
<summary>Emulates input of specific words to the speech recognizer, using an array of <see cref="T:System.Speech.Recognition.RecognizedWordUnit" /> objects in place of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the words and the loaded speech recognition grammars.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated. When the recognizer completes the asynchronous recognition operation, it raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event.
The recognizer uses `compareOptions` when it applies grammar rules to the input phrase. The recognizers that ship with Windows Vista and Windows 7 ignore case if the <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> or <xref:System.Globalization.CompareOptions.IgnoreCase> value is present. The recognizers always ignore the character width and never ignore the Kana type. The recognizers also ignore new lines and extra white space and treat punctuation as literal input. For more information about character width and Kana type, see the <xref:System.Globalization.CompareOptions> enumeration.
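## Examples
The following is a minimal sketch, not a complete sample, of how this overload might be called; the helper method name is hypothetical. It assumes a recognizer with a loaded grammar that accepts the name "Jones" and with audio input disabled by <xref:System.Speech.Recognition.SpeechRecognitionEngine.SetInputToNull%2A>.
```csharp
// Hypothetical helper: asynchronous emulation from an array of word units.
private static void TestEmulateWordUnitsAsync(SpeechRecognitionEngine recognizer)
{
    RecognizedWordUnit[] wordUnits = new RecognizedWordUnit[]
    {
        new RecognizedWordUnit("Jones", 1.0f, null, "jones",
            DisplayAttributes.OneTrailingSpace, TimeSpan.Zero, TimeSpan.Zero)
    };

    // Report the outcome when the asynchronous operation completes.
    recognizer.EmulateRecognizeCompleted +=
        delegate(object sender, EmulateRecognizeCompletedEventArgs e)
        {
            Console.WriteLine(e.Result != null
                ? "Completed; text = " + e.Result.Text
                : "Completed; no recognition result.");
        };

    recognizer.EmulateRecognizeAsync(wordUnits, CompareOptions.OrdinalIgnoreCase);
}
```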
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="wordUnits" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="wordUnits" /> contains one or more <see langword="null" /> elements.</exception>
<exception cref="T:System.NotSupportedException">
<paramref name="compareOptions" /> contains the <see cref="F:System.Globalization.CompareOptions.IgnoreNonSpace" />, <see cref="F:System.Globalization.CompareOptions.IgnoreSymbols" />, or <see cref="F:System.Globalization.CompareOptions.StringSort" /> flag.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted" />
</Docs>
</Member>
<Member MemberName="EmulateRecognizeAsync">
<MemberSignature Language="C#" Value="public void EmulateRecognizeAsync (string inputText, System.Globalization.CompareOptions compareOptions);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void EmulateRecognizeAsync(string inputText, valuetype System.Globalization.CompareOptions compareOptions) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String,System.Globalization.CompareOptions)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; void EmulateRecognizeAsync(System::String ^ inputText, System::Globalization::CompareOptions compareOptions);" />
<MemberSignature Language="F#" Value="member this.EmulateRecognizeAsync : string * System.Globalization.CompareOptions -&gt; unit" Usage="speechRecognitionEngine.EmulateRecognizeAsync (inputText, compareOptions)" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="inputText" Type="System.String" />
<Parameter Name="compareOptions" Type="System.Globalization.CompareOptions" />
</Parameters>
<Docs>
<param name="inputText">The input phrase for the recognition operation.</param>
<param name="compareOptions">A bitwise combination of the enumeration values that describe the type of comparison to use for the emulated recognition operation.</param>
<summary>Emulates input of a phrase to the speech recognizer, using text in place of audio for asynchronous speech recognition, and specifies how the recognizer handles Unicode comparison between the phrase and the loaded speech recognition grammars.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events as if the recognition operation is not emulated. When the recognizer completes the asynchronous recognition operation, it raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event.
The recognizer uses `compareOptions` when it applies grammar rules to the input phrase. The recognizers that ship with Windows Vista and Windows 7 ignore case if the <xref:System.Globalization.CompareOptions.OrdinalIgnoreCase> or <xref:System.Globalization.CompareOptions.IgnoreCase> value is present. The recognizers always ignore the character width and never ignore the Kana type. The recognizers also ignore new lines and extra white space and treat punctuation as literal input. For more information about character width and Kana type, see the <xref:System.Globalization.CompareOptions> enumeration.
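## Examples
The following is a minimal sketch, not a complete sample, showing a call to this overload; the helper method name is hypothetical. It assumes a recognizer with loaded grammars and disabled audio input, and a `using System.Threading;` directive for <xref:System.Threading.ManualResetEvent>.
```csharp
// Hypothetical helper: emulate a phrase asynchronously with explicit
// comparison options, and block until the operation completes.
private static void TestEmulatePhraseAsync(SpeechRecognitionEngine recognizer)
{
    ManualResetEvent done = new ManualResetEvent(false);

    recognizer.EmulateRecognizeCompleted +=
        delegate(object sender, EmulateRecognizeCompletedEventArgs e)
        {
            Console.WriteLine(e.Result != null
                ? "Recognition result text = " + e.Result.Text
                : "No recognition result.");
            done.Set();   // signal that the asynchronous operation finished
        };

    recognizer.EmulateRecognizeAsync("mister jones", CompareOptions.IgnoreCase);
    done.WaitOne();   // wait for the EmulateRecognizeCompleted event
}
```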
]]></format>
</remarks>
<exception cref="T:System.InvalidOperationException">The recognizer has no speech recognition grammars loaded, or the recognizer has an asynchronous recognition operation that is not yet complete.</exception>
<exception cref="T:System.ArgumentNullException">
<paramref name="inputText" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="inputText" /> is the empty string ("").</exception>
<exception cref="T:System.NotSupportedException">
<paramref name="compareOptions" /> contains the <see cref="F:System.Globalization.CompareOptions.IgnoreNonSpace" />, <see cref="F:System.Globalization.CompareOptions.IgnoreSymbols" />, or <see cref="F:System.Globalization.CompareOptions.StringSort" /> flag.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted" />
</Docs>
</Member>
<Member MemberName="EmulateRecognizeCompleted">
<MemberSignature Language="C#" Value="public event EventHandler&lt;System.Speech.Recognition.EmulateRecognizeCompletedEventArgs&gt; EmulateRecognizeCompleted;" />
<MemberSignature Language="ILAsm" Value=".event class System.EventHandler`1&lt;class System.Speech.Recognition.EmulateRecognizeCompletedEventArgs&gt; EmulateRecognizeCompleted" />
<MemberSignature Language="DocId" Value="E:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted" />
<MemberSignature Language="VB.NET" Value="Public Custom Event EmulateRecognizeCompleted As EventHandler(Of EmulateRecognizeCompletedEventArgs) " FrameworkAlternate="netframework-3.0;netframework-3.5;netframework-4.0;netframework-4.5;netframework-4.5.1;netframework-4.5.2" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; event EventHandler&lt;System::Speech::Recognition::EmulateRecognizeCompletedEventArgs ^&gt; ^ EmulateRecognizeCompleted;" />
<MemberSignature Language="F#" Value="member this.EmulateRecognizeCompleted : EventHandler&lt;System.Speech.Recognition.EmulateRecognizeCompletedEventArgs&gt; " Usage="member this.EmulateRecognizeCompleted : System.EventHandler&lt;System.Speech.Recognition.EmulateRecognizeCompletedEventArgs&gt; " />
<MemberSignature Language="VB.NET" Value="Public Event EmulateRecognizeCompleted As EventHandler(Of EmulateRecognizeCompletedEventArgs) " FrameworkAlternate="netframework-4.6;netframework-4.6.1;netframework-4.6.2;netframework-4.7;netframework-4.7.1;netframework-4.7.2;netframework-4.8" />
<MemberType>Event</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.EventHandler&lt;System.Speech.Recognition.EmulateRecognizeCompletedEventArgs&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Raised when the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> finalizes an asynchronous recognition operation of emulated input.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Each <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync%2A> method begins an asynchronous recognition operation. The <xref:System.Speech.Recognition.SpeechRecognitionEngine> raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event when it finalizes the asynchronous operation.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync%2A> operation can raise the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events. The <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event is the last such event that the recognizer raises for a given operation.
If emulated recognition was successful, you can access the recognition result using either of the following:
- The <xref:System.Speech.Recognition.EmulateRecognizeCompletedEventArgs.Result%2A> property in the <xref:System.Speech.Recognition.EmulateRecognizeCompletedEventArgs> object in the handler for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> event.
- The <xref:System.Speech.Recognition.RecognitionEventArgs.Result%2A> property in the <xref:System.Speech.Recognition.SpeechRecognizedEventArgs> object in the handler for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> event.
If emulated recognition was not successful, the <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> event is not raised, and the <xref:System.Speech.Recognition.EmulateRecognizeCompletedEventArgs.Result%2A> property is `null`.
<xref:System.Speech.Recognition.EmulateRecognizeCompletedEventArgs> derives from <xref:System.ComponentModel.AsyncCompletedEventArgs>.
<xref:System.Speech.Recognition.SpeechRecognizedEventArgs> derives from <xref:System.Speech.Recognition.RecognitionEventArgs>.
When you create an <xref:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeCompleted> delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see [Events and Delegates](https://go.microsoft.com/fwlink/?LinkId=162418).
## Examples
The following example is part of a console application that loads a speech recognition grammar and demonstrates asynchronous emulated input, the associated recognition results, and the associated events raised by the speech recognizer.
```csharp
using System;
using System.Speech.Recognition;
using System.Threading;
namespace InProcessRecognizer
{
class Program
{
// Indicate whether the asynchronous emulate recognition
// operation has completed.
static bool completed;
static void Main(string[] args)
{
// Initialize an instance of an in-process recognizer.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
{
// Create and load a sample grammar.
Grammar testGrammar =
new Grammar(new GrammarBuilder("testing testing"));
testGrammar.Name = "Test Grammar";
recognizer.LoadGrammar(testGrammar);
// Attach event handlers for recognition events.
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(SpeechRecognizedHandler);
recognizer.EmulateRecognizeCompleted +=
new EventHandler<EmulateRecognizeCompletedEventArgs>(
EmulateRecognizeCompletedHandler);
completed = false;
// This EmulateRecognizeAsync call matches the grammar
// and generates a SpeechRecognized event.
recognizer.EmulateRecognizeAsync("testing testing");
// Wait for the asynchronous operation to complete.
while (!completed)
{
Thread.Sleep(333);
}
completed = false;
// This EmulateRecognizeAsync call does not match the grammar
// or generate a SpeechRecognized event.
recognizer.EmulateRecognizeAsync("testing one two three");
// Wait for the asynchronous operation to complete.
while (!completed)
{
Thread.Sleep(333);
}
}
Console.WriteLine();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
}
// Handle the SpeechRecognized event.
static void SpeechRecognizedHandler(
object sender, SpeechRecognizedEventArgs e)
{
if (e.Result != null)
{
Console.WriteLine("Result of 1st call to EmulateRecognizeAsync = {0}",
e.Result.Text ?? "<no text>");
Console.WriteLine();
}
else
{
Console.WriteLine("No recognition result");
}
}
// Handle the EmulateRecognizeCompleted event.
static void EmulateRecognizeCompletedHandler(
object sender, EmulateRecognizeCompletedEventArgs e)
{
if (e.Result == null)
{
Console.WriteLine("Result of 2nd call to EmulateRecognizeAsync = No result generated.");
}
// Indicate the asynchronous operation is complete.
completed = true;
}
}
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.EmulateRecognizeCompletedEventArgs" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted" />
</Docs>
</Member>
<Member MemberName="EndSilenceTimeout">
<MemberSignature Language="C#" Value="public TimeSpan EndSilenceTimeout { get; set; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.TimeSpan EndSilenceTimeout" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<MemberSignature Language="VB.NET" Value="Public Property EndSilenceTimeout As TimeSpan" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property TimeSpan EndSilenceTimeout { TimeSpan get(); void set(TimeSpan value); };" />
<MemberSignature Language="F#" Value="member this.EndSilenceTimeout : TimeSpan with get, set" Usage="System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute>
<AttributeName>System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Advanced)</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.TimeSpan</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets or sets the interval of silence that the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> will accept at the end of unambiguous input before finalizing a recognition operation.</summary>
<value>The duration of the interval of silence.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer uses this timeout interval when the recognition input is unambiguous. For example, for a speech recognition grammar that supports recognition of either "new game please" or "new game", "new game please" is an unambiguous input, and "new game" is an ambiguous input.
This property determines how long the speech recognition engine will wait for additional input before finalizing a recognition operation. The timeout interval can be from 0 seconds to 10 seconds, inclusive. The default is 150 milliseconds.
To set the timeout interval for ambiguous input, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous%2A> property.
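## Examples
The following is a minimal sketch, not part of the product sample set, that shows how the timeout might be adjusted; the helper method name is hypothetical. It assumes a recognizer with loaded grammars and configured audio input.
```csharp
// Hypothetical helper: allow a longer pause at the end of unambiguous
// input before the recognizer finalizes a recognition operation.
private static void RecognizeWithLongerEndSilence(SpeechRecognitionEngine recognizer)
{
    // The value must be between 0 and 10 seconds, inclusive;
    // the default is 150 milliseconds.
    recognizer.EndSilenceTimeout = TimeSpan.FromMilliseconds(500);

    RecognitionResult result = recognizer.Recognize();
    Console.WriteLine(result != null ? result.Text : "No recognition result.");
}
```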
]]></format>
</remarks>
<exception cref="T:System.ArgumentOutOfRangeException">This property is set to less than 0 seconds or greater than 10 seconds.</exception>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
</Docs>
</Member>
<Member MemberName="EndSilenceTimeoutAmbiguous">
<MemberSignature Language="C#" Value="public TimeSpan EndSilenceTimeoutAmbiguous { get; set; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.TimeSpan EndSilenceTimeoutAmbiguous" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<MemberSignature Language="VB.NET" Value="Public Property EndSilenceTimeoutAmbiguous As TimeSpan" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property TimeSpan EndSilenceTimeoutAmbiguous { TimeSpan get(); void set(TimeSpan value); };" />
<MemberSignature Language="F#" Value="member this.EndSilenceTimeoutAmbiguous : TimeSpan with get, set" Usage="System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute>
<AttributeName>System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Advanced)</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.TimeSpan</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets or sets the interval of silence that the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> will accept at the end of ambiguous input before finalizing a recognition operation.</summary>
<value>The duration of the interval of silence.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The speech recognizer uses this timeout interval when the recognition input is ambiguous. For example, for a speech recognition grammar that supports recognition of either "new game please" or "new game", "new game please" is an unambiguous input, and "new game" is an ambiguous input.
This property determines how long the speech recognition engine will wait for additional input before finalizing a recognition operation. The timeout interval can be from 0 seconds to 10 seconds, inclusive. The default is 500 milliseconds.
To set the timeout interval for unambiguous input, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout%2A> property.
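## Examples
The following is a minimal sketch, not part of the product sample set, that shows how the timeout might be adjusted; the helper method name is hypothetical. It assumes a recognizer with loaded grammars and configured audio input.
```csharp
// Hypothetical helper: give the recognizer extra time to wait for more
// speech when the input so far is ambiguous ("new game" could still
// become "new game please").
private static void RecognizeWithAmbiguousEndSilence(SpeechRecognitionEngine recognizer)
{
    // The value must be between 0 and 10 seconds, inclusive;
    // the default is 500 milliseconds.
    recognizer.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(1);

    RecognitionResult result = recognizer.Recognize();
    Console.WriteLine(result != null ? result.Text : "No recognition result.");
}
```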
]]></format>
</remarks>
<exception cref="T:System.ArgumentOutOfRangeException">This property is set to less than 0 seconds or greater than 10 seconds.</exception>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
</Docs>
</Member>
<Member MemberName="Grammars">
<MemberSignature Language="C#" Value="public System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.Grammar&gt; Grammars { get; }" />
<MemberSignature Language="ILAsm" Value=".property instance class System.Collections.ObjectModel.ReadOnlyCollection`1&lt;class System.Speech.Recognition.Grammar&gt; Grammars" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.Grammars" />
<MemberSignature Language="VB.NET" Value="Public ReadOnly Property Grammars As ReadOnlyCollection(Of Grammar)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property System::Collections::ObjectModel::ReadOnlyCollection&lt;System::Speech::Recognition::Grammar ^&gt; ^ Grammars { System::Collections::ObjectModel::ReadOnlyCollection&lt;System::Speech::Recognition::Grammar ^&gt; ^ get(); };" />
<MemberSignature Language="F#" Value="member this.Grammars : System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.Grammar&gt;" Usage="System.Speech.Recognition.SpeechRecognitionEngine.Grammars" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.Grammar&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets a collection of the <see cref="T:System.Speech.Recognition.Grammar" /> objects that are loaded in this <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> instance.</summary>
<value>The collection of <see cref="T:System.Speech.Recognition.Grammar" /> objects.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Examples
The following example outputs information to the console for each speech recognition grammar that is currently loaded by a speech recognizer.
> [!IMPORTANT]
> Copy the grammar collection to avoid errors if the collection is modified while this method enumerates the elements of the collection.
```csharp
private static void ListGrammars(SpeechRecognitionEngine recognizer)
{
string qualifier;
List<Grammar> grammars = new List<Grammar>(recognizer.Grammars);
foreach (Grammar g in grammars)
{
qualifier = (g.Enabled) ? "enabled" : "disabled";
Console.WriteLine("Grammar {0} is loaded and is {1}.",
g.Name, qualifier);
}
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.Grammar" />
</Docs>
</Member>
<Member MemberName="InitialSilenceTimeout">
<MemberSignature Language="C#" Value="public TimeSpan InitialSilenceTimeout { get; set; }" />
<MemberSignature Language="ILAsm" Value=".property instance valuetype System.TimeSpan InitialSilenceTimeout" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<MemberSignature Language="VB.NET" Value="Public Property InitialSilenceTimeout As TimeSpan" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property TimeSpan InitialSilenceTimeout { TimeSpan get(); void set(TimeSpan value); };" />
<MemberSignature Language="F#" Value="member this.InitialSilenceTimeout : TimeSpan with get, set" Usage="System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Attributes>
<Attribute>
<AttributeName>System.ComponentModel.EditorBrowsable(System.ComponentModel.EditorBrowsableState.Advanced)</AttributeName>
</Attribute>
</Attributes>
<ReturnValue>
<ReturnType>System.TimeSpan</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets or sets the time interval during which a <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> accepts input containing only silence before finalizing recognition.</summary>
<value>The duration of the interval of silence.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Each speech recognizer has an algorithm to distinguish between silence and speech. If the recognizer input is silence during the initial silence timeout period, then the recognizer finalizes that recognition operation.
- For asynchronous recognition operations and emulation, the recognizer raises the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> event, where the <xref:System.Speech.Recognition.RecognizeCompletedEventArgs.InitialSilenceTimeout%2A?displayProperty=nameWithType> property is `true`, and the <xref:System.Speech.Recognition.RecognizeCompletedEventArgs.Result%2A?displayProperty=nameWithType> property is `null`.
- For synchronous recognition operations and emulation, the recognizer returns `null`, instead of a valid <xref:System.Speech.Recognition.RecognitionResult>.
If the initial silence timeout interval is set to 0, the recognizer does not perform an initial silence timeout check. The timeout interval can be any non-negative value. The default is 0 seconds.
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. The example sets the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A> properties of a <xref:System.Speech.Recognition.SpeechRecognitionEngine> before initiating speech recognition. Handlers for the speech recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.AudioStateChanged> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> events output event information to the console to demonstrate how these properties affect recognition operations.
```csharp
using System;
using System.Speech.Recognition;
namespace SpeechRecognitionApp
{
class Program
{
static void Main(string[] args)
{
// Initialize an in-process speech recognizer.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(
new System.Globalization.CultureInfo("en-US")))
{
// Load a Grammar object.
recognizer.LoadGrammar(CreateServicesGrammar("FindServices"));
// Add event handlers.
recognizer.AudioStateChanged +=
new EventHandler<AudioStateChangedEventArgs>(
AudioStateChangedHandler);
recognizer.RecognizeCompleted +=
new EventHandler<RecognizeCompletedEventArgs>(
RecognizeCompletedHandler);
// Configure input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(3);
recognizer.BabbleTimeout = TimeSpan.FromSeconds(2);
recognizer.EndSilenceTimeout = TimeSpan.FromSeconds(1);
recognizer.EndSilenceTimeoutAmbiguous = TimeSpan.FromSeconds(1.5);
Console.WriteLine("BabbleTimeout: {0}", recognizer.BabbleTimeout);
Console.WriteLine("InitialSilenceTimeout: {0}", recognizer.InitialSilenceTimeout);
Console.WriteLine("EndSilenceTimeout: {0}", recognizer.EndSilenceTimeout);
Console.WriteLine("EndSilenceTimeoutAmbiguous: {0}", recognizer.EndSilenceTimeoutAmbiguous);
Console.WriteLine();
// Start asynchronous speech recognition.
recognizer.RecognizeAsync(RecognizeMode.Single);
// Keep the console window open.
while (true)
{
Console.ReadLine();
}
}
}
// Create a grammar and build it into a Grammar object.
static Grammar CreateServicesGrammar(string grammarName)
{
// Create a grammar for finding services in different cities.
Choices services = new Choices(new string[] { "restaurants", "hotels", "gas stations" });
Choices cities = new Choices(new string[] { "Seattle", "Boston", "Dallas" });
GrammarBuilder findServices = new GrammarBuilder("Find");
findServices.Append(services);
findServices.Append("near");
findServices.Append(cities);
// Create a Grammar object from the GrammarBuilder.
Grammar servicesGrammar = new Grammar(findServices);
servicesGrammar.Name = grammarName;
return servicesGrammar;
}
// Handle the AudioStateChanged event.
static void AudioStateChangedHandler(
object sender, AudioStateChangedEventArgs e)
{
Console.WriteLine("AudioStateChanged ({0}): {1}",
DateTime.Now.ToString("mm:ss.f"), e.AudioState);
}
// Handle the RecognizeCompleted event.
static void RecognizeCompletedHandler(
object sender, RecognizeCompletedEventArgs e)
{
Console.WriteLine("RecognizeCompleted ({0}):",
DateTime.Now.ToString("mm:ss.f"));
string resultText;
if (e.Result != null) { resultText = e.Result.Text; }
else { resultText = "<null>"; }
Console.WriteLine(
" BabbleTimeout: {0}; InitialSilenceTimeout: {1}; Result text: {2}",
e.BabbleTimeout, e.InitialSilenceTimeout, resultText);
if (e.Error != null)
{
Console.WriteLine(" Exception message: {0}", e.Error.Message);
}
// Start the next asynchronous recognition operation.
((SpeechRecognitionEngine)sender).RecognizeAsync(RecognizeMode.Single);
}
}
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentOutOfRangeException">This property is set to less than 0 seconds.</exception>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.Speech.Recognition.RecognizedWordUnit[],System.Globalization.CompareOptions)" />
</Docs>
</Member>
<Member MemberName="InstalledRecognizers">
<MemberSignature Language="C#" Value="public static System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.RecognizerInfo&gt; InstalledRecognizers ();" />
<MemberSignature Language="ILAsm" Value=".method public static hidebysig class System.Collections.ObjectModel.ReadOnlyCollection`1&lt;class System.Speech.Recognition.RecognizerInfo&gt; InstalledRecognizers() cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers" />
<MemberSignature Language="VB.NET" Value="Public Shared Function InstalledRecognizers () As ReadOnlyCollection(Of RecognizerInfo)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; static System::Collections::ObjectModel::ReadOnlyCollection&lt;System::Speech::Recognition::RecognizerInfo ^&gt; ^ InstalledRecognizers();" />
<MemberSignature Language="F#" Value="static member InstalledRecognizers : unit -&gt; System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.RecognizerInfo&gt;" Usage="System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers " />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Collections.ObjectModel.ReadOnlyCollection&lt;System.Speech.Recognition.RecognizerInfo&gt;</ReturnType>
</ReturnValue>
<Parameters />
<Docs>
<summary>Returns information for all of the installed speech recognizers on the current system.</summary>
<returns>A read-only collection of the <see cref="T:System.Speech.Recognition.RecognizerInfo" /> objects that describe the installed recognizers.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
To get information about the current recognizer, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerInfo%2A> property.
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. The example uses the collection returned by the <xref:System.Speech.Recognition.SpeechRecognitionEngine.InstalledRecognizers%2A> method to find a speech recognizer that supports the English language.
```csharp
using System;
using System.Speech.Recognition;
namespace SpeechRecognitionApp
{
class Program
{
static void Main(string[] args)
{
// Select a speech recognizer that supports English.
RecognizerInfo info = null;
foreach (RecognizerInfo ri in SpeechRecognitionEngine.InstalledRecognizers())
{
if (ri.Culture.TwoLetterISOLanguageName.Equals("en"))
{
info = ri;
break;
}
}
if (info == null) return;
// Create the selected recognizer.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(info))
{
// Create and load a dictation grammar.
recognizer.LoadGrammar(new DictationGrammar());
// Add a handler for the speech recognized event.
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
// Configure input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
// Start asynchronous, continuous speech recognition.
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// Keep the console window open.
while (true)
{
Console.ReadLine();
}
}
}
// Handle the SpeechRecognized event.
static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine("Recognized text: " + e.Result.Text);
}
}
}
```
]]></format>
</remarks>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.#ctor(System.Speech.Recognition.RecognizerInfo)" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerInfo" />
</Docs>
</Member>
<Member MemberName="LoadGrammar">
<MemberSignature Language="C#" Value="public void LoadGrammar (System.Speech.Recognition.Grammar grammar);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void LoadGrammar(class System.Speech.Recognition.Grammar grammar) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar(System.Speech.Recognition.Grammar)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; void LoadGrammar(System::Speech::Recognition::Grammar ^ grammar);" />
<MemberSignature Language="F#" Value="member this.LoadGrammar : System.Speech.Recognition.Grammar -&gt; unit" Usage="speechRecognitionEngine.LoadGrammar grammar" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="grammar" Type="System.Speech.Recognition.Grammar" />
</Parameters>
<Docs>
<param name="grammar">The grammar object to load.</param>
<summary>Synchronously loads a <see cref="T:System.Speech.Recognition.Grammar" /> object.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The recognizer throws an exception if the <xref:System.Speech.Recognition.Grammar> object is already loaded, is being asynchronously loaded, or has failed to load into any recognizer. You cannot load the same <xref:System.Speech.Recognition.Grammar> object into multiple instances of <xref:System.Speech.Recognition.SpeechRecognitionEngine>. Instead, create a new <xref:System.Speech.Recognition.Grammar> object for each <xref:System.Speech.Recognition.SpeechRecognitionEngine> instance.
If the recognizer is running, applications must use <xref:System.Speech.Recognition.SpeechRecognitionEngine.RequestRecognizerUpdate%2A> to pause the speech recognition engine before loading, unloading, enabling, or disabling a grammar.
When you load a grammar, it is enabled by default. To disable a loaded grammar, use the <xref:System.Speech.Recognition.Grammar.Enabled%2A> property.
To load a <xref:System.Speech.Recognition.Grammar> object asynchronously, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
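 As an illustration of the pause-then-load pattern described above, the following sketch (the helper and handler names are illustrative, not part of the API) requests a recognizer update and loads the grammar when the engine pauses:
```csharp
using System;
using System.Speech.Recognition;

class PauseThenLoadSketch
{
    // Illustrative helper: loads a grammar into a recognizer that may
    // already be running an asynchronous recognition operation.
    public static void LoadWhileRunning(
        SpeechRecognitionEngine recognizer, Grammar grammar)
    {
        recognizer.RecognizerUpdateReached += UpdateReachedHandler;

        // Pass the grammar as the user token; the recognizer pauses at
        // the next safe point and raises RecognizerUpdateReached.
        recognizer.RequestRecognizerUpdate(grammar);
    }

    static void UpdateReachedHandler(object sender, RecognizerUpdateReachedEventArgs e)
    {
        SpeechRecognitionEngine recognizer = (SpeechRecognitionEngine)sender;

        // The recognizer is paused here, so loading the grammar is safe.
        recognizer.LoadGrammar((Grammar)e.UserToken);
        recognizer.RecognizerUpdateReached -= UpdateReachedHandler;
    }
}
```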
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. The example creates a <xref:System.Speech.Recognition.DictationGrammar> and loads it into a speech recognizer.
```csharp
using System;
using System.Speech.Recognition;
namespace SpeechRecognitionApp
{
class Program
{
static void Main(string[] args)
{
// Create an in-process speech recognizer for the en-US locale.
using (
SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(
new System.Globalization.CultureInfo("en-US")))
{
// Create and load a dictation grammar.
recognizer.LoadGrammar(new DictationGrammar());
// Add a handler for the speech recognized event.
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
// Configure input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
// Start asynchronous, continuous speech recognition.
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// Keep the console window open.
while (true)
{
Console.ReadLine();
}
}
}
// Handle the SpeechRecognized event.
static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine("Recognized text: " + e.Result.Text);
}
}
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentNullException">
<paramref name="grammar" /> is <see langword="null" />.</exception>
<exception cref="T:System.InvalidOperationException">
<paramref name="grammar" /> is not in a valid state.</exception>
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerUpdateReached" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync(System.Speech.Recognition.Grammar)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.UnloadAllGrammars" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.UnloadGrammar(System.Speech.Recognition.Grammar)" />
</Docs>
</Member>
<Member MemberName="LoadGrammarAsync">
<MemberSignature Language="C#" Value="public void LoadGrammarAsync (System.Speech.Recognition.Grammar grammar);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void LoadGrammarAsync(class System.Speech.Recognition.Grammar grammar) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync(System.Speech.Recognition.Grammar)" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; void LoadGrammarAsync(System::Speech::Recognition::Grammar ^ grammar);" />
<MemberSignature Language="F#" Value="member this.LoadGrammarAsync : System.Speech.Recognition.Grammar -&gt; unit" Usage="speechRecognitionEngine.LoadGrammarAsync grammar" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="grammar" Type="System.Speech.Recognition.Grammar" />
</Parameters>
<Docs>
<param name="grammar">The speech recognition grammar to load.</param>
<summary>Asynchronously loads a speech recognition grammar.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
When the recognizer completes loading a <xref:System.Speech.Recognition.Grammar> object, it raises a <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted> event. The recognizer throws an exception if the <xref:System.Speech.Recognition.Grammar> object is already loaded, is being asynchronously loaded, or has failed to load into any recognizer. You cannot load the same <xref:System.Speech.Recognition.Grammar> object into multiple instances of <xref:System.Speech.Recognition.SpeechRecognitionEngine>. Instead, create a new <xref:System.Speech.Recognition.Grammar> object for each <xref:System.Speech.Recognition.SpeechRecognitionEngine> instance.
If the recognizer is running, applications must use <xref:System.Speech.Recognition.SpeechRecognitionEngine.RequestRecognizerUpdate%2A> to pause the speech recognition engine before loading, unloading, enabling, or disabling a grammar.
When you load a grammar, it is enabled by default. To disable a loaded grammar, use the <xref:System.Speech.Recognition.Grammar.Enabled%2A> property.
To load a speech recognition grammar synchronously, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> method.
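 As a brief sketch (the class and handler names are illustrative; a fuller example appears under the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted> event), the following console application loads a dictation grammar asynchronously and reports the outcome in the completion handler:
```csharp
using System;
using System.Speech.Recognition;

namespace AsyncLoadSketch
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                recognizer.SetInputToDefaultAudioDevice();

                // Attach the completion handler before starting the load.
                recognizer.LoadGrammarCompleted +=
                    new EventHandler<LoadGrammarCompletedEventArgs>(LoadGrammarCompletedHandler);

                // The call returns immediately; loading continues in the background.
                recognizer.LoadGrammarAsync(new DictationGrammar());

                // Keep the console window open.
                Console.ReadLine();
            }
        }

        // Handle the LoadGrammarCompleted event.
        static void LoadGrammarCompletedHandler(object sender, LoadGrammarCompletedEventArgs e)
        {
            if (e.Error != null)
            {
                Console.WriteLine("Grammar failed to load: {0}", e.Error.Message);
            }
            else
            {
                Console.WriteLine("Grammar loaded: {0}", e.Grammar.Loaded);
            }
        }
    }
}
```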
]]></format>
</remarks>
<exception cref="T:System.ArgumentNullException">
<paramref name="grammar" /> is <see langword="null" />.</exception>
<exception cref="T:System.InvalidOperationException">
<paramref name="grammar" /> is not in a valid state.</exception>
<exception cref="T:System.OperationCanceledException">The asynchronous operation was canceled.</exception>
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerUpdateReached" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar(System.Speech.Recognition.Grammar)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.UnloadAllGrammars" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.UnloadGrammar(System.Speech.Recognition.Grammar)" />
</Docs>
</Member>
<Member MemberName="LoadGrammarCompleted">
<MemberSignature Language="C#" Value="public event EventHandler&lt;System.Speech.Recognition.LoadGrammarCompletedEventArgs&gt; LoadGrammarCompleted;" />
<MemberSignature Language="ILAsm" Value=".event class System.EventHandler`1&lt;class System.Speech.Recognition.LoadGrammarCompletedEventArgs&gt; LoadGrammarCompleted" />
<MemberSignature Language="DocId" Value="E:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted" />
<MemberSignature Language="VB.NET" Value="Public Custom Event LoadGrammarCompleted As EventHandler(Of LoadGrammarCompletedEventArgs) " FrameworkAlternate="netframework-3.0;netframework-3.5;netframework-4.0;netframework-4.5;netframework-4.5.1;netframework-4.5.2" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; event EventHandler&lt;System::Speech::Recognition::LoadGrammarCompletedEventArgs ^&gt; ^ LoadGrammarCompleted;" />
<MemberSignature Language="F#" Value="member this.LoadGrammarCompleted : EventHandler&lt;System.Speech.Recognition.LoadGrammarCompletedEventArgs&gt; " Usage="member this.LoadGrammarCompleted : System.EventHandler&lt;System.Speech.Recognition.LoadGrammarCompletedEventArgs&gt; " />
<MemberSignature Language="VB.NET" Value="Public Event LoadGrammarCompleted As EventHandler(Of LoadGrammarCompletedEventArgs) " FrameworkAlternate="netframework-4.6;netframework-4.6.1;netframework-4.6.2;netframework-4.7;netframework-4.7.1;netframework-4.7.2;netframework-4.8" />
<MemberType>Event</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.EventHandler&lt;System.Speech.Recognition.LoadGrammarCompletedEventArgs&gt;</ReturnType>
</ReturnValue>
<Docs>
<summary>Raised when the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> finishes the asynchronous loading of a <see cref="T:System.Speech.Recognition.Grammar" /> object.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method initiates an asynchronous operation. The <xref:System.Speech.Recognition.SpeechRecognitionEngine> raises this event when it completes the operation. To get the <xref:System.Speech.Recognition.Grammar> object that the recognizer loaded, use the <xref:System.Speech.Recognition.LoadGrammarCompletedEventArgs.Grammar%2A> property of the associated <xref:System.Speech.Recognition.LoadGrammarCompletedEventArgs>. To get the current <xref:System.Speech.Recognition.Grammar> objects the recognizer has loaded, use the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.Grammars%2A> property.
If the recognizer is running, applications must use <xref:System.Speech.Recognition.SpeechRecognitionEngine.RequestRecognizerUpdate%2A> to pause the speech recognition engine before loading, unloading, enabling, or disabling a grammar.
When you create a <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted> delegate, you identify the method that will handle the event. To associate the event with your event handler, add an instance of the delegate to the event. The event handler is called whenever the event occurs, unless you remove the delegate. For more information about event-handler delegates, see [Events and Delegates](https://go.microsoft.com/fwlink/?LinkId=162418).
## Examples
The following example creates an in-process speech recognizer, and then creates two types of grammars for recognizing specific words and for accepting free dictation. The example constructs a <xref:System.Speech.Recognition.Grammar> object from each of the completed speech recognition grammars, and then asynchronously loads the <xref:System.Speech.Recognition.Grammar> objects into the <xref:System.Speech.Recognition.SpeechRecognitionEngine> instance. Handlers for the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarCompleted> and <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized> events write to the console the name of the <xref:System.Speech.Recognition.Grammar> object that was used to perform the recognition and the text of the recognition result, respectively.
```csharp
using System;
using System.Speech.Recognition;
namespace SampleRecognition
{
class Program
{
private static SpeechRecognitionEngine recognizer;
public static void Main(string[] args)
{
// Initialize an in-process speech recognition engine and set its input.
recognizer = new SpeechRecognitionEngine();
recognizer.SetInputToDefaultAudioDevice();
// Add a handler for the LoadGrammarCompleted event.
recognizer.LoadGrammarCompleted +=
new EventHandler<LoadGrammarCompletedEventArgs>(recognizer_LoadGrammarCompleted);
// Add a handler for the SpeechRecognized event.
recognizer.SpeechRecognized +=
new EventHandler<SpeechRecognizedEventArgs>(recognizer_SpeechRecognized);
// Create the "yesno" grammar.
Choices yesChoices = new Choices(new string[] { "yes", "yup", "yeah" });
SemanticResultValue yesValue =
new SemanticResultValue(yesChoices, (bool)true);
Choices noChoices = new Choices(new string[] { "no", "nope", "nah" });
SemanticResultValue noValue =
new SemanticResultValue(noChoices, (bool)false);
SemanticResultKey yesNoKey =
new SemanticResultKey("yesno", new Choices(new GrammarBuilder[] { yesValue, noValue }));
Grammar yesnoGrammar = new Grammar(yesNoKey);
yesnoGrammar.Name = "yesNo";
// Create the "done" grammar.
Grammar doneGrammar =
new Grammar(new Choices(new string[] { "done", "exit", "quit", "stop" }));
doneGrammar.Name = "Done";
// Create a dictation grammar.
Grammar dictation = new DictationGrammar();
dictation.Name = "Dictation";
// Load grammars to the recognizer.
recognizer.LoadGrammarAsync(yesnoGrammar);
recognizer.LoadGrammarAsync(doneGrammar);
recognizer.LoadGrammarAsync(dictation);
// Start asynchronous, continuous recognition.
recognizer.RecognizeAsync(RecognizeMode.Multiple);
// Keep the console window open.
Console.ReadLine();
}
// Handle the LoadGrammarCompleted event.
static void recognizer_LoadGrammarCompleted(object sender, LoadGrammarCompletedEventArgs e)
{
string grammarName = e.Grammar.Name;
bool grammarLoaded = e.Grammar.Loaded;
if (e.Error != null)
{
Console.WriteLine("LoadGrammar for {0} failed with a {1}.",
grammarName, e.Error.GetType().Name);
// Add exception handling code here.
}
Console.WriteLine("Grammar {0} {1} loaded.",
grammarName, (grammarLoaded) ? "is" : "is not");
}
// Handle the SpeechRecognized event.
static void recognizer_SpeechRecognized(object sender, SpeechRecognizedEventArgs e)
{
Console.WriteLine("Grammar({0}): {1}", e.Result.Grammar.Name, e.Result.Text);
// Add event handler code here.
}
}
}
```
]]></format>
</remarks>
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.RecognizerUpdateReached" />
<altmember cref="T:System.Speech.Recognition.Grammar" />
<altmember cref="T:System.Speech.Recognition.LoadGrammarCompletedEventArgs" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync(System.Speech.Recognition.Grammar)" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.Grammars" />
</Docs>
</Member>
<Member MemberName="MaxAlternates">
<MemberSignature Language="C#" Value="public int MaxAlternates { get; set; }" />
<MemberSignature Language="ILAsm" Value=".property instance int32 MaxAlternates" />
<MemberSignature Language="DocId" Value="P:System.Speech.Recognition.SpeechRecognitionEngine.MaxAlternates" />
<MemberSignature Language="VB.NET" Value="Public Property MaxAlternates As Integer" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; property int MaxAlternates { int get(); void set(int value); };" />
<MemberSignature Language="F#" Value="member this.MaxAlternates : int with get, set" Usage="System.Speech.Recognition.SpeechRecognitionEngine.MaxAlternates" />
<MemberType>Property</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Int32</ReturnType>
</ReturnValue>
<Docs>
<summary>Gets or sets the maximum number of alternate recognition results that the <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> returns for each recognition operation.</summary>
<value>The number of alternate results to return.</value>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The <xref:System.Speech.Recognition.RecognitionResult.Alternates%2A> property of the <xref:System.Speech.Recognition.RecognitionResult> class contains the collection of <xref:System.Speech.Recognition.RecognizedPhrase> objects that represent possible interpretations of the input.
The default value for <xref:System.Speech.Recognition.SpeechRecognitionEngine.MaxAlternates%2A> is 10.
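 As a brief sketch (not one of the original examples; the value 5 is arbitrary), the following console application requests up to five alternates and writes each alternate's text and confidence to the console:
```csharp
using System;
using System.Speech.Recognition;

namespace MaxAlternatesSketch
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                recognizer.LoadGrammar(new DictationGrammar());
                recognizer.SetInputToDefaultAudioDevice();

                // Request up to five alternates per recognition operation.
                recognizer.MaxAlternates = 5;

                RecognitionResult result = recognizer.Recognize();
                if (result != null)
                {
                    foreach (RecognizedPhrase phrase in result.Alternates)
                    {
                        Console.WriteLine("{0} (confidence: {1})",
                            phrase.Text, phrase.Confidence);
                    }
                }
            }
        }
    }
}
```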
]]></format>
</remarks>
<exception cref="T:System.ArgumentOutOfRangeException">
<see cref="P:System.Speech.Recognition.SpeechRecognitionEngine.MaxAlternates" /> is set to a value less than 0.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognizeAsync(System.String)" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
</Docs>
</Member>
<Member MemberName="QueryRecognizerSetting">
<MemberSignature Language="C#" Value="public object QueryRecognizerSetting (string settingName);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance object QueryRecognizerSetting(string settingName) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.QueryRecognizerSetting(System.String)" />
<MemberSignature Language="VB.NET" Value="Public Function QueryRecognizerSetting (settingName As String) As Object" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Object ^ QueryRecognizerSetting(System::String ^ settingName);" />
<MemberSignature Language="F#" Value="member this.QueryRecognizerSetting : string -&gt; obj" Usage="speechRecognitionEngine.QueryRecognizerSetting settingName" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Object</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="settingName" Type="System.String" />
</Parameters>
<Docs>
<param name="settingName">The name of the setting to return.</param>
<summary>Returns the value of the specified setting for the recognizer.</summary>
<returns>The value of the setting.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
Recognizer settings can contain string, 64-bit integer, or memory address data. The following table describes the settings that are defined for a Microsoft Speech API (SAPI)-compliant recognizer. These settings must have the same range for each recognizer that supports them. A SAPI-compliant recognizer is not required to support these settings and can support additional settings.
|Name|Description|
|----------|-----------------|
|`ResourceUsage`|Specifies the recognizer's CPU consumption. The range is from 0 to 100. The default value is 50.|
|`ResponseSpeed`|Indicates the length of silence at the end of unambiguous input before the speech recognizer completes a recognition operation. The range is from 0 to 10,000 milliseconds (ms). This setting corresponds to the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout%2A> property. The default value is 150 ms.|
|`ComplexResponseSpeed`|Indicates the length of silence at the end of ambiguous input before the speech recognizer completes a recognition operation. The range is from 0 to 10,000 ms. This setting corresponds to the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous%2A> property. The default value is 500 ms.|
|`AdaptationOn`|Indicates whether adaptation of the acoustic model is ON (value = `1`) or OFF (value = `0`). The default value is `1` (ON).|
|`PersistedBackgroundAdaptation`|Indicates whether background adaptation is ON (value = `1`) or OFF (value = `0`), and persists the setting in the registry. The default value is `1` (ON).|
To update a setting for the recognizer, use one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.UpdateRecognizerSetting%2A> methods.
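 As a short sketch (not one of the original examples; the value 200 is arbitrary), an application might update the `ResponseSpeed` setting and then query it to confirm the change:
```csharp
using System;
using System.Speech.Recognition;

namespace UpdateSettingSketch
{
    class Program
    {
        static void Main(string[] args)
        {
            using (SpeechRecognitionEngine recognizer =
                new SpeechRecognitionEngine(
                    new System.Globalization.CultureInfo("en-US")))
            {
                // Lengthen ResponseSpeed, the milliseconds of end silence the
                // recognizer accepts after unambiguous input. 200 is illustrative.
                recognizer.UpdateRecognizerSetting("ResponseSpeed", 200);

                // Read the setting back to confirm the update.
                Console.WriteLine("ResponseSpeed = {0}",
                    recognizer.QueryRecognizerSetting("ResponseSpeed"));
            }
        }
    }
}
```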
## Examples
The following example is part of a console application that outputs the values for a number of the settings defined for the recognizer that supports the en-US locale. The example generates the following output.
```
Settings for recognizer MS-1033-80-DESK:
ResourceUsage is not supported by this recognizer.
ResponseSpeed = 150
ComplexResponseSpeed = 500
AdaptationOn = 1
PersistedBackgroundAdaptation = 1
Press any key to exit...
```
```csharp
using System;
using System.Globalization;
using System.Speech.Recognition;
namespace RecognizerSettings
{
class Program
{
static readonly string[] settings = new string[] {
"ResourceUsage",
"ResponseSpeed",
"ComplexResponseSpeed",
"AdaptationOn",
"PersistedBackgroundAdaptation"
};
static void Main(string[] args)
{
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(new System.Globalization.CultureInfo("en-US")))
{
Console.WriteLine("Settings for recognizer {0}:",
recognizer.RecognizerInfo.Name);
Console.WriteLine();
foreach (string setting in settings)
{
try
{
object value = recognizer.QueryRecognizerSetting(setting);
Console.WriteLine(" {0,-30} = {1}", setting, value);
}
catch
{
Console.WriteLine(" {0,-30} is not supported by this recognizer.",
setting);
}
}
}
Console.WriteLine();
Console.WriteLine("Press any key to exit...");
Console.ReadKey();
}
}
}
```
]]></format>
</remarks>
<exception cref="T:System.ArgumentNullException">
<paramref name="settingName" /> is <see langword="null" />.</exception>
<exception cref="T:System.ArgumentException">
<paramref name="settingName" /> is the empty string ("").</exception>
<exception cref="T:System.Collections.Generic.KeyNotFoundException">The recognizer does not have a setting by that name.</exception>
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.UpdateRecognizerSetting(System.String,System.Int32)" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
</Docs>
</Member>
<MemberGroup MemberName="Recognize">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Starts a synchronous speech recognition operation.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
These methods perform a single, synchronous recognition operation. The recognizer performs this operation against its loaded and enabled speech recognition grammars.
During a call to this method, the recognizer can raise the following events:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>. Raised when the recognizer detects input that it can identify as speech.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>. Raised when input creates an ambiguous match with one of the active grammars.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized>. Raised when the recognizer finalizes a recognition operation.
The recognizer does not raise the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> event when using one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize%2A> methods.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize%2A> methods return a <xref:System.Speech.Recognition.RecognitionResult> object, or `null` if the operation is not successful or the recognizer is not enabled.
A synchronous recognition operation can fail for the following reasons:
- Speech is not detected before the timeout intervals expire for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A> properties, or for the `initialSilenceTimeout` parameter of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize%2A> method.
- The recognition engine detects speech but finds no matches in any of its loaded and enabled <xref:System.Speech.Recognition.Grammar> objects.
To modify how the recognizer handles the timing of speech or silence with respect to recognition, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A>, <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout%2A>, and <xref:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous%2A> properties.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine> must have at least one <xref:System.Speech.Recognition.Grammar> object loaded before performing recognition. To load a speech recognition grammar, use the <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammar%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.LoadGrammarAsync%2A> method.
To perform asynchronous recognition, use one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync%2A> methods.
]]></format>
</remarks>
</Docs>
</MemberGroup>
<Member MemberName="Recognize">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.RecognitionResult Recognize ();" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance class System.Speech.Recognition.RecognitionResult Recognize() cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize" />
<MemberSignature Language="VB.NET" Value="Public Function Recognize () As RecognitionResult" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Speech::Recognition::RecognitionResult ^ Recognize();" />
<MemberSignature Language="F#" Value="member this.Recognize : unit -&gt; System.Speech.Recognition.RecognitionResult" Usage="speechRecognitionEngine.Recognize " />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.Recognition.RecognitionResult</ReturnType>
</ReturnValue>
<Parameters />
<Docs>
<summary>Performs a synchronous speech recognition operation.</summary>
<returns>The recognition result for the input, or <see langword="null" /> if the operation is not successful or the recognizer is not enabled.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
This method performs a single recognition operation. The recognizer performs this operation against its loaded and enabled speech recognition grammars.
During a call to this method, the recognizer can raise the following events:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>. Raised when the recognizer detects input that it can identify as speech.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>. Raised when input creates an ambiguous match with one of the active grammars.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized>. Raised when the recognizer finalizes a recognition operation.
The recognizer does not raise the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> event when using this method.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize> method returns a <xref:System.Speech.Recognition.RecognitionResult> object, or `null` if the operation is not successful.
A synchronous recognition operation can fail for the following reasons:
- Speech is not detected before the timeout intervals expire for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A> properties.
- The recognition engine detects speech but finds no matches in any of its loaded and enabled <xref:System.Speech.Recognition.Grammar> objects.
To perform asynchronous recognition, use one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync%2A> methods.
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. The example creates a <xref:System.Speech.Recognition.DictationGrammar>, loads it into an in-process speech recognizer, and performs one recognition operation.
```csharp
using System;
using System.Speech.Recognition;
namespace SynchronousRecognition
{
class Program
{
static void Main(string[] args)
{
// Create an in-process speech recognizer for the en-US locale.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(
new System.Globalization.CultureInfo("en-US")))
{
// Create and load a dictation grammar.
recognizer.LoadGrammar(new DictationGrammar());
// Configure input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
// Modify the initial silence time-out value.
recognizer.InitialSilenceTimeout = TimeSpan.FromSeconds(5);
// Start synchronous speech recognition.
RecognitionResult result = recognizer.Recognize();
if (result != null)
{
Console.WriteLine("Recognized text = {0}", result.Text);
}
else
{
Console.WriteLine("No recognition result available.");
}
}
Console.WriteLine();
Console.WriteLine("Press any key to continue...");
Console.ReadKey();
}
}
}
```
]]></format>
</remarks>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String)" />
</Docs>
</Member>
<Member MemberName="Recognize">
<MemberSignature Language="C#" Value="public System.Speech.Recognition.RecognitionResult Recognize (TimeSpan initialSilenceTimeout);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance class System.Speech.Recognition.RecognitionResult Recognize(valuetype System.TimeSpan initialSilenceTimeout) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.SpeechRecognitionEngine.Recognize(System.TimeSpan)" />
<MemberSignature Language="VB.NET" Value="Public Function Recognize (initialSilenceTimeout As TimeSpan) As RecognitionResult" />
<MemberSignature Language="C++ CLI" Value="public:&#xA; System::Speech::Recognition::RecognitionResult ^ Recognize(TimeSpan initialSilenceTimeout);" />
<MemberSignature Language="F#" Value="member this.Recognize : TimeSpan -&gt; System.Speech.Recognition.RecognitionResult" Usage="speechRecognitionEngine.Recognize initialSilenceTimeout" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Speech.Recognition.RecognitionResult</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="initialSilenceTimeout" Type="System.TimeSpan" />
</Parameters>
<Docs>
<param name="initialSilenceTimeout">The interval of time a speech recognizer accepts input containing only silence before finalizing recognition.</param>
<summary>Performs a synchronous speech recognition operation with a specified initial silence timeout period.</summary>
<returns>The recognition result for the input, or <see langword="null" /> if the operation is not successful or the recognizer is not enabled.</returns>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
If the speech recognition engine detects speech within the time interval specified by the `initialSilenceTimeout` parameter, <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize%28System.TimeSpan%29> performs a single recognition operation and then terminates. The `initialSilenceTimeout` parameter supersedes the recognizer's <xref:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout%2A> property.
During a call to this method, the recognizer can raise the following events:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected>. Raised when the recognizer detects input that it can identify as speech.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized>. Raised when input creates an ambiguous match with one of the active grammars.
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected> or <xref:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized>. Raised when the recognizer finalizes a recognition operation.
The recognizer does not raise the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeCompleted> event when using this method.
The <xref:System.Speech.Recognition.SpeechRecognitionEngine.Recognize> method returns a <xref:System.Speech.Recognition.RecognitionResult> object, or `null` if the operation is not successful.
A synchronous recognition operation can fail for the following reasons:
- Speech is not detected before the timeout interval expires for the <xref:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout%2A> property or for the `initialSilenceTimeout` parameter.
- The recognition engine detects speech but finds no matches in any of its loaded and enabled <xref:System.Speech.Recognition.Grammar> objects.
To perform asynchronous recognition, use one of the <xref:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync%2A> methods.
## Examples
The following example shows part of a console application that demonstrates basic speech recognition. The example creates a <xref:System.Speech.Recognition.DictationGrammar>, loads it into an in-process speech recognizer, and performs one recognition operation.
```csharp
using System;
using System.Speech.Recognition;
namespace SynchronousRecognition
{
class Program
{
static void Main(string[] args)
{
// Create an in-process speech recognizer for the en-US locale.
using (SpeechRecognitionEngine recognizer =
new SpeechRecognitionEngine(
new System.Globalization.CultureInfo("en-US")))
{
// Create and load a dictation grammar.
recognizer.LoadGrammar(new DictationGrammar());
// Configure input to the speech recognizer.
recognizer.SetInputToDefaultAudioDevice();
// Start synchronous speech recognition.
RecognitionResult result = recognizer.Recognize(TimeSpan.FromSeconds(5));
if (result != null)
{
Console.WriteLine("Recognized text = {0}", result.Text);
}
else
{
Console.WriteLine("No recognition result available.");
}
}
Console.WriteLine();
Console.WriteLine("Press any key to continue...");
Console.ReadKey();
}
}
}
```
]]></format>
</remarks>
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.BabbleTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.InitialSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeout" />
<altmember cref="P:System.Speech.Recognition.SpeechRecognitionEngine.EndSilenceTimeoutAmbiguous" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechDetected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechHypothesized" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognitionRejected" />
<altmember cref="E:System.Speech.Recognition.SpeechRecognitionEngine.SpeechRecognized" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.RecognizeAsync" />
<altmember cref="M:System.Speech.Recognition.SpeechRecognitionEngine.EmulateRecognize(System.String)" />
</Docs>
</Member>
<MemberGroup MemberName="RecognizeAsync">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Starts an asynchronous speech recognition operation.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
These methods perform single or multiple asynchronous recognition operations. The recognizer performs each operation against its loaded and enabled speech recognition grammars.
During a call to this method, the recognizer can raise the following events:
- <xref:System.Speech.Recognition.SpeechRecognitionEngine.Spe