Full-text-search: add an option to use ICU tokenizer #1095
Merged
Probably fixes #1090.

The currently used FTS tokenizer (`unicode61`) doesn't know anything about CJK, so it doesn't split words in these languages. I'm not sure about the quality, but the `icu` tokenizer seems to do a better job at this (to my understanding, `unicode61` is still better for Latin-based languages, hence it remains the default).

Here are some tests I ran on an emulator (Android 8.1): `icu`, `icu zh_CN`, and `icu zh_TW` produced the same result in this case. I also tried to find this article using the query `据台湾`.
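For context, the tokenizer is chosen in the `tokenize=` clause when the FTS table is created, so the option effectively switches between two table definitions. A minimal sketch of the two variants, with hypothetical table and column names (the ICU tokenizer is only available when SQLite is built with `SQLITE_ENABLE_ICU`, as it is on Android):

```sql
-- Default: unicode61 handles Latin scripts well but does not
-- segment CJK text into words.
CREATE VIRTUAL TABLE search USING fts4(title, body, tokenize=unicode61);

-- With the option enabled: the ICU tokenizer uses ICU word-boundary
-- analysis, optionally tuned to a locale, and can segment CJK text.
CREATE VIRTUAL TABLE search_icu USING fts4(title, body, tokenize=icu zh_CN);
```

Changing the tokenizer requires recreating the FTS table and reindexing existing rows, since tokenization happens at insert time.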