<Type Name="DictationGrammar" FullName="System.Speech.Recognition.DictationGrammar">
<TypeSignature Language="C#" Value="public class DictationGrammar : System.Speech.Recognition.Grammar" />
<TypeSignature Language="ILAsm" Value=".class public auto ansi beforefieldinit DictationGrammar extends System.Speech.Recognition.Grammar" />
<TypeSignature Language="DocId" Value="T:System.Speech.Recognition.DictationGrammar" />
<TypeSignature Language="VB.NET" Value="Public Class DictationGrammar
Inherits Grammar" />
<TypeSignature Language="F#" Value="type DictationGrammar = class
 inherit Grammar" />
<TypeSignature Language="C++ CLI" Value="public ref class DictationGrammar : System::Speech::Recognition::Grammar" />
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
<AssemblyVersion>8.0.0.0</AssemblyVersion>
<AssemblyVersion>9.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Base>
<BaseTypeName>System.Speech.Recognition.Grammar</BaseTypeName>
</Base>
<Interfaces />
<Docs>
<summary>Represents a speech recognition grammar used for free text dictation.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
This class provides applications with a predefined language model that can process spoken user input into text. This class supports both default and custom <xref:System.Speech.Recognition.DictationGrammar> objects. For information about selecting a dictation grammar, see the <xref:System.Speech.Recognition.DictationGrammar.%23ctor%28System.String%29> constructor.

By default, the <xref:System.Speech.Recognition.DictationGrammar> language model is context free. It does not make use of specific words or word order to identify and interpret audio input. To add context to the dictation grammar, use the <xref:System.Speech.Recognition.DictationGrammar.SetDictationContext%2A> method.

> [!NOTE]
> <xref:System.Speech.Recognition.DictationGrammar> objects do not support the <xref:System.Speech.Recognition.Grammar.Priority%2A> property. <xref:System.Speech.Recognition.DictationGrammar> throws a <xref:System.NotSupportedException> if <xref:System.Speech.Recognition.Grammar.Priority%2A> is set.

## Examples
The following example creates three dictation grammars, adds them to a new <xref:System.Speech.Recognition.SpeechRecognitionEngine> object, and returns the new object. The first grammar is the default dictation grammar. The second grammar is the spelling dictation grammar. The third grammar is the default dictation grammar that includes a context phrase. The <xref:System.Speech.Recognition.DictationGrammar.SetDictationContext%2A> method is used to associate the context phrase with the dictation grammar after it is loaded to the <xref:System.Speech.Recognition.SpeechRecognitionEngine> object.
```csharp
private SpeechRecognitionEngine LoadDictationGrammars()
{
    // Create a default dictation grammar.
    DictationGrammar defaultDictationGrammar = new DictationGrammar();
    defaultDictationGrammar.Name = "default dictation";
    defaultDictationGrammar.Enabled = true;

    // Create the spelling dictation grammar.
    DictationGrammar spellingDictationGrammar =
        new DictationGrammar("grammar:dictation#spelling");
    spellingDictationGrammar.Name = "spelling dictation";
    spellingDictationGrammar.Enabled = true;

    // Create the question dictation grammar (the default model,
    // to be used with a context phrase).
    DictationGrammar customDictationGrammar =
        new DictationGrammar("grammar:dictation");
    customDictationGrammar.Name = "question dictation";
    customDictationGrammar.Enabled = true;

    // Create a SpeechRecognitionEngine object and add the grammars to it.
    SpeechRecognitionEngine recoEngine = new SpeechRecognitionEngine();
    recoEngine.LoadGrammar(defaultDictationGrammar);
    recoEngine.LoadGrammar(spellingDictationGrammar);
    recoEngine.LoadGrammar(customDictationGrammar);

    // Add a context to customDictationGrammar.
    customDictationGrammar.SetDictationContext("How do you", null);

    return recoEngine;
}
```
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.Grammar" />
</Docs>
<Members>
<MemberGroup MemberName=".ctor">
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Docs>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.DictationGrammar" /> class.</summary>
</Docs>
</MemberGroup>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public DictationGrammar ();" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor() cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.DictationGrammar.#ctor" />
<MemberSignature Language="VB.NET" Value="Public Sub New ()" />
<MemberSignature Language="C++ CLI" Value="public:
 DictationGrammar();" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
<AssemblyVersion>8.0.0.0</AssemblyVersion>
<AssemblyVersion>9.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters />
<Docs>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.DictationGrammar" /> class for the default dictation grammar provided by Windows Desktop Speech Technology.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The default dictation grammar emulates standard dictation practices, including punctuation. It does not support spelling out words; for that, use the spelling dictation grammar (`grammar:dictation#spelling`).
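For example, the default grammar can be created and enabled as follows (a minimal sketch; the grammar name is illustrative):

```csharp
using System.Speech.Recognition;

// The parameterless constructor selects the default dictation grammar,
// equivalent to new DictationGrammar("grammar:dictation").
DictationGrammar grammar = new DictationGrammar();
grammar.Name = "default dictation";
grammar.Enabled = true;
```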
]]></format>
</remarks>
</Docs>
</Member>
<Member MemberName=".ctor">
<MemberSignature Language="C#" Value="public DictationGrammar (string topic);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig specialname rtspecialname instance void .ctor(string topic) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.DictationGrammar.#ctor(System.String)" />
<MemberSignature Language="VB.NET" Value="Public Sub New (topic As String)" />
<MemberSignature Language="F#" Value="new System.Speech.Recognition.DictationGrammar : string -> System.Speech.Recognition.DictationGrammar" Usage="new System.Speech.Recognition.DictationGrammar topic" />
<MemberSignature Language="C++ CLI" Value="public:
 DictationGrammar(System::String ^ topic);" />
<MemberType>Constructor</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
<AssemblyVersion>8.0.0.0</AssemblyVersion>
<AssemblyVersion>9.0.0.0</AssemblyVersion>
</AssemblyInfo>
<Parameters>
<Parameter Name="topic" Type="System.String" />
</Parameters>
<Docs>
<param name="topic">An XML-compliant Uniform Resource Identifier (URI) that specifies the dictation grammar, either <c>grammar:dictation</c> or <c>grammar:dictation#spelling</c>.</param>
<summary>Initializes a new instance of the <see cref="T:System.Speech.Recognition.DictationGrammar" /> class with a specific dictation grammar.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
The Speech platform uses a specialized URI syntax to define the custom dictation grammar. The value `grammar:dictation` indicates the default dictation grammar. The value `grammar:dictation#spelling` indicates the spelling dictation grammar.
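For instance, the two supported topic URIs might be used as follows (a minimal sketch; the variable names are illustrative):

```csharp
using System.Speech.Recognition;

// Equivalent to the parameterless constructor: the default dictation model.
DictationGrammar defaultGrammar = new DictationGrammar("grammar:dictation");

// A grammar tuned for spelling out words.
DictationGrammar spellingGrammar = new DictationGrammar("grammar:dictation#spelling");
```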
]]></format>
</remarks>
</Docs>
</Member>
<Member MemberName="SetDictationContext">
<MemberSignature Language="C#" Value="public void SetDictationContext (string precedingText, string subsequentText);" />
<MemberSignature Language="ILAsm" Value=".method public hidebysig instance void SetDictationContext(string precedingText, string subsequentText) cil managed" />
<MemberSignature Language="DocId" Value="M:System.Speech.Recognition.DictationGrammar.SetDictationContext(System.String,System.String)" />
<MemberSignature Language="VB.NET" Value="Public Sub SetDictationContext (precedingText As String, subsequentText As String)" />
<MemberSignature Language="F#" Value="member this.SetDictationContext : string * string -> unit" Usage="dictationGrammar.SetDictationContext (precedingText, subsequentText)" />
<MemberSignature Language="C++ CLI" Value="public:
 void SetDictationContext(System::String ^ precedingText, System::String ^ subsequentText);" />
<MemberType>Method</MemberType>
<AssemblyInfo>
<AssemblyName>System.Speech</AssemblyName>
<AssemblyVersion>3.0.0.0</AssemblyVersion>
<AssemblyVersion>4.0.0.0</AssemblyVersion>
<AssemblyVersion>8.0.0.0</AssemblyVersion>
<AssemblyVersion>9.0.0.0</AssemblyVersion>
</AssemblyInfo>
<ReturnValue>
<ReturnType>System.Void</ReturnType>
</ReturnValue>
<Parameters>
<Parameter Name="precedingText" Type="System.String" />
<Parameter Name="subsequentText" Type="System.String" />
</Parameters>
<Docs>
<param name="precedingText">Text that indicates the start of a dictation context.</param>
<param name="subsequentText">Text that indicates the end of a dictation context.</param>
<summary>Adds a context to a dictation grammar that has been loaded by a <see cref="T:System.Speech.Recognition.SpeechRecognizer" /> or a <see cref="T:System.Speech.Recognition.SpeechRecognitionEngine" /> object.</summary>
<remarks>
<format type="text/markdown"><![CDATA[
## Remarks
By default, the dictation grammar does not make use of specific words or word order to identify and interpret audio input. When a context is added to a dictation grammar, the recognition engine uses `precedingText` and `subsequentText` to identify when to interpret speech as dictation.

> [!NOTE]
> A dictation grammar must be loaded by a <xref:System.Speech.Recognition.SpeechRecognizer> or <xref:System.Speech.Recognition.SpeechRecognitionEngine> object before you can use <xref:System.Speech.Recognition.DictationGrammar.SetDictationContext%2A> to add a context.

The following table describes how the recognition engine uses the two parameters to determine when to apply the dictation grammar.

|`precedingText`|`subsequentText`|Description|
|---------------------|----------------------|-----------------|
|not `null`|not `null`|The recognition engine uses the terms to bracket possible candidate phrases.|
|`null`|not `null`|The recognition engine uses `subsequentText` to end dictation.|
|not `null`|`null`|The recognition engine uses `precedingText` to start dictation.|
|`null`|`null`|The recognition engine does not use a context with the dictation grammar.|
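The rows in the table above can be sketched as follows (assuming the grammar has already been loaded into a recognizer; the anchor phrases are illustrative):

```csharp
using System.Speech.Recognition;

SpeechRecognitionEngine recognizer = new SpeechRecognitionEngine();
DictationGrammar dictation = new DictationGrammar();
recognizer.LoadGrammar(dictation);  // the grammar must be loaded first

// Both anchors: bracket candidate phrases between two terms.
dictation.SetDictationContext("begin note", "end note");

// Preceding anchor only: start dictation after the term.
dictation.SetDictationContext("dear", null);

// Both null: remove the context.
dictation.SetDictationContext(null, null);
```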
]]></format>
</remarks>
<altmember cref="T:System.Speech.Recognition.Grammar" />
</Docs>
</Member>
</Members>
</Type>