CustomAnalyzer.yml
### YamlMime:TSType
name: CustomAnalyzer
uid: '@azure/search-documents.CustomAnalyzer'
package: '@azure/search-documents'
summary: >-
  Allows you to take control over the process of converting text into
  indexable/searchable tokens. It's a user-defined configuration consisting of
  a single predefined tokenizer and one or more filters. The tokenizer is
  responsible for breaking text into tokens, and the filters for modifying
  tokens emitted by the tokenizer.
fullName: CustomAnalyzer
remarks: ''
isDeprecated: false
type: interface
properties:
  - name: charFilters
    uid: '@azure/search-documents.CustomAnalyzer.charFilters'
    package: '@azure/search-documents'
    summary: >-
      A list of character filters used to prepare input text before it is
      processed by the tokenizer. For instance, they can replace certain
      characters or symbols. The filters are run in the order in which they are
      listed.
    fullName: charFilters
    remarks: ''
    isDeprecated: false
    syntax:
      content: 'charFilters?: string[]'
      return:
        description: ''
        type: string[]
  - name: name
    uid: '@azure/search-documents.CustomAnalyzer.name'
    package: '@azure/search-documents'
    summary: >-
      The name of the analyzer. It must only contain letters, digits, spaces,
      dashes or underscores, can only start and end with alphanumeric
      characters, and is limited to 128 characters.
    fullName: name
    remarks: ''
    isDeprecated: false
    syntax:
      content: 'name: string'
      return:
        description: ''
        type: string
  - name: odatatype
    uid: '@azure/search-documents.CustomAnalyzer.odatatype'
    package: '@azure/search-documents'
    summary: Polymorphic Discriminator
    fullName: odatatype
    remarks: ''
    isDeprecated: false
    syntax:
      content: 'odatatype: "#Microsoft.Azure.Search.CustomAnalyzer"'
      return:
        description: ''
        type: '"#<xref uid="Microsoft.Azure.Search.CustomAnalyzer" />"'
  - name: tokenFilters
    uid: '@azure/search-documents.CustomAnalyzer.tokenFilters'
    package: '@azure/search-documents'
    summary: >-
      A list of token filters used to filter out or modify the tokens generated
      by a tokenizer. For example, you can specify a lowercase filter that
      converts all characters to lowercase. The filters are run in the order in
      which they are listed.
    fullName: tokenFilters
    remarks: ''
    isDeprecated: false
    syntax:
      content: 'tokenFilters?: string[]'
      return:
        description: ''
        type: string[]
  - name: tokenizerName
    uid: '@azure/search-documents.CustomAnalyzer.tokenizerName'
    package: '@azure/search-documents'
    summary: >-
      The name of the tokenizer to use to divide continuous text into a
      sequence of tokens, such as breaking a sentence into words.
      [KnownTokenizerNames](xref:@azure/search-documents.KnownTokenizerNames)
      is an enum containing built-in tokenizer names.
    fullName: tokenizerName
    remarks: ''
    isDeprecated: false
    syntax:
      content: 'tokenizerName: string'
      return:
        description: ''
        type: string
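As a minimal sketch of how the interface above fits together, the TypeScript value below combines a tokenizer with a token filter. The field names and the `odatatype` discriminator come from the documentation above; the local interface merely mirrors the documented shape so the snippet is self-contained (in real code you would import `CustomAnalyzer` from `@azure/search-documents`), and the specific tokenizer and filter names (`standard_v2`, `lowercase`) are assumed built-in names, not taken from this file.

```typescript
// Local mirror of the documented CustomAnalyzer shape, so this sketch
// compiles without the @azure/search-documents package installed.
interface CustomAnalyzer {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer";
  name: string;            // letters, digits, spaces, dashes, underscores; max 128 chars
  tokenizerName: string;   // a built-in tokenizer name (see KnownTokenizerNames)
  tokenFilters?: string[]; // run in listed order, after tokenization
  charFilters?: string[];  // run in listed order, before tokenization
}

// A custom analyzer that splits text with a standard tokenizer and then
// lowercases every emitted token. Tokenizer/filter names are assumed
// built-ins for illustration.
const myAnalyzer: CustomAnalyzer = {
  odatatype: "#Microsoft.Azure.Search.CustomAnalyzer",
  name: "my_lowercase_analyzer",
  tokenizerName: "standard_v2",
  tokenFilters: ["lowercase"],
};

console.log(myAnalyzer.name); // → my_lowercase_analyzer
```

In the SDK, a value like this would typically be placed in the `analyzers` array of an index definition before the index is created or updated.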