ElasticSearch Part 2: IK Tokenization
Having installed Elasticsearch in Part 1, this post looks at tokenization. ES ships with a standard analyzer by default. For example, analyzing the string sojson在线工具:
http://192.168.74.129:9200/_analyze?analyzer=standard&pretty=true&text=sojson在线工具
Output:
{
  "tokens" : [
    {
      "token" : "sojson",
      "start_offset" : 0,
      "end_offset" : 6,
      "type" : "<ALPHANUM>",
      "position" : 0
    },
    {
      "token" : "在",
      "start_offset" : 6,
      "end_offset" : 7,
      "type" : "<IDEOGRAPHIC>",
      "position" : 1
    },
    {
      "token" : "线",
      "start_offset" : 7,
      "end_offset" : 8,
      "type" : "<IDEOGRAPHIC>",
      "position" : 2
    },
    {
      "token" : "工",
      "start_offset" : 8,
      "end_offset" : 9,
      "type" : "<IDEOGRAPHIC>",
      "position" : 3
    },
    {
      "token" : "具",
      "start_offset" : 9,
      "end_offset" : 10,
      "type" : "<IDEOGRAPHIC>",
      "position" : 4
    }
  ]
}
The standard analyzer splits Chinese one character at a time, while in practice we usually want word-level tokens, e.g. sojson, 在线, 工具.
elasticsearch-analysis-ik is a Chinese analysis plugin for Elasticsearch; it ships with a default dictionary and also supports custom dictionaries.
Get the 5.0.1 version of IK from GitHub: https://github.com/yangyp8110/elasticsearch-analysis-ik/tree/v5.0.1. Check out the tag matching your ES version with git checkout tags/{version}, build with mvn package, then take the zip from elasticsearch-analysis-ik-5.0.1/target/releases and upload it to es/plugins/ik, as shown below.
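A minimal build sketch, assuming git and Maven are installed:

git clone https://github.com/yangyp8110/elasticsearch-analysis-ik.git
cd elasticsearch-analysis-ik
git checkout tags/v5.0.1    # the tag must match your ES version
mvn clean package           # the zip is produced under target/releases/

With the zip on the ES host, unpack it under plugins/ik: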
[esuser@yyp ~]$ cp elasticsearch-analysis-ik-5.0.1.zip install/elasticsearch/plugins/ik/
[esuser@yyp ~]$ cd install/elasticsearch/plugins/ik/
[esuser@yyp ik]$ ll
total 3216
-rw-r--r--. 1 esuser esuser 3290326 Feb 19 02:30 elasticsearch-analysis-ik-5.0.1.zip
[esuser@yyp ik]$ unzip elasticsearch-analysis-ik-5.0.1.zip
[esuser@yyp ik]$ ll
total 5824
-rw-r--r--. 1 esuser esuser 263965 Feb 19 2017 commons-codec-1.9.jar
-rw-r--r--. 1 esuser esuser 61829 Feb 19 2017 commons-logging-1.2.jar
drwxr-xr-x. 3 esuser esuser 4096 Feb 19 2017 config
-rw-r--r--. 1 esuser esuser 51330 Feb 19 2017 elasticsearch-analysis-ik-5.0.1.jar
-rw-r--r--. 1 esuser esuser 4502014 Feb 19 02:40 elasticsearch-analysis-ik-5.0.1.zip
-rw-r--r--. 1 esuser esuser 736658 Feb 19 2017 httpclient-4.5.2.jar
-rw-r--r--. 1 esuser esuser 326724 Feb 19 2017 httpcore-4.4.4.jar
-rw-r--r--. 1 esuser esuser 2666 Feb 19 2017 plugin-descriptor.properties
[esuser@yyp ik]$ rm -rf elasticsearch-analysis-ik-5.0.1.zip
[esuser@yyp ik]$
[esuser@yyp elasticsearch]$ bin/elasticsearch
Started successfully!
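To confirm the plugin was picked up, the installed plugins can be listed (a quick check, not part of the original session):

[esuser@yyp elasticsearch]$ curl http://192.168.74.129:9200/_cat/plugins
# expect a line listing analysis-ik 5.0.1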
IK provides two analyzers:
ik_max_word: splits the text at the finest granularity. For example, 中华人民共和国国歌 is split into 中华人民共和国, 中华人民, 中华, 华人, 人民共和国, 人民, 人, 民, 共和国, 共和, 和, 国国, 国歌, exhausting every possible combination.
ik_smart: does the coarsest-grained split. For example, 中华人民共和国国歌 is split into 中华人民共和国 and 国歌.
Testing ik_max_word and ik_smart:
ik_max_word: http://192.168.74.129:9200/_analyze?analyzer=ik_max_word&pretty=true&text=中华人民共和国国歌
{
  "tokens" : [
    { "token" : "中华人民共和国", "start_offset" : 9, "end_offset" : 16, "type" : "CN_WORD", "position" : 1 },
    { "token" : "中华人民", "start_offset" : 9, "end_offset" : 13, "type" : "CN_WORD", "position" : 2 },
    { "token" : "中华", "start_offset" : 9, "end_offset" : 11, "type" : "CN_WORD", "position" : 3 },
    { "token" : "华人", "start_offset" : 10, "end_offset" : 12, "type" : "CN_WORD", "position" : 4 },
    { "token" : "人民共和国", "start_offset" : 11, "end_offset" : 16, "type" : "CN_WORD", "position" : 5 },
    { "token" : "人民", "start_offset" : 11, "end_offset" : 13, "type" : "CN_WORD", "position" : 6 },
    { "token" : "共和国", "start_offset" : 13, "end_offset" : 16, "type" : "CN_WORD", "position" : 7 },
    { "token" : "共和", "start_offset" : 13, "end_offset" : 15, "type" : "CN_WORD", "position" : 8 },
    { "token" : "国", "start_offset" : 15, "end_offset" : 16, "type" : "CN_CHAR", "position" : 9 },
    { "token" : "国歌", "start_offset" : 16, "end_offset" : 18, "type" : "CN_WORD", "position" : 10 }
  ]
}
ik_smart: http://192.168.74.129:9200/_analyze?analyzer=ik_smart&pretty=true&text=中华人民共和国国歌
{
  "tokens" : [
    { "token" : "中华人民共和国", "start_offset" : 0, "end_offset" : 7, "type" : "CN_WORD", "position" : 0 },
    { "token" : "国歌", "start_offset" : 7, "end_offset" : 9, "type" : "CN_WORD", "position" : 1 }
  ]
}
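In ES 5.x the _analyze API also accepts a JSON request body, which sidesteps URL-encoding issues with Chinese text (equivalent to the GET form above):

curl -XPOST 'http://192.168.74.129:9200/_analyze?pretty' -d'
{"analyzer":"ik_smart","text":"中华人民共和国国歌"}'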
Next, create a test index:
[esuser@yyp ik]$ curl -XPUT http://192.168.74.129:9200/index
{"acknowledged":true,"shards_acknowledged":true}[esuser@yyp ik]$
Create a mapping that applies ik_max_word to the content field:
[esuser@yyp ik]$ curl -XPOST http://192.168.74.129:9200/index/fulltext/_mapping -d'
{
  "fulltext": {
    "_all": {
      "analyzer": "ik_max_word",
      "search_analyzer": "ik_max_word",
      "term_vector": "no",
      "store": "false"
    },
    "properties": {
      "content": {
        "type": "text",
        "analyzer": "ik_max_word",
        "search_analyzer": "ik_max_word",
        "include_in_all": "true",
        "boost": 8
      }
    }
  }
}'
{"acknowledged":true}[esuser@yyp ik]$
{"acknowledged":true}[esuser@yyp ik]$ curl -XPOST http://192.168.74.129:9200/index/fulltext/1 -d'
{"content":"美国留给伊拉克的是个烂摊子吗"}
'
{"_index":"index","_type":"fulltext","_id":"1","_version":1,"result":"created","_shards":{"total":2,"successful":1,"fai
led":0},"created":true}[esuser@yyp ik]$ curl -XPOST http://192.168.74.129:9200/index/fulltext/3 -d'
{"content":"中韩渔警冲突调查:韩警平均每天扣1艘中国渔船"}
'
{"_index":"index","_type":"fulltext","_id":"3","_version":1,"result":"created","_shards":{"total":2,"successful":1,"fai
led":0},"created":true}[esuser@yyp ik]$
Search for 中国 with highlighting:
[esuser@yyp ik]$ curl -XPOST http://192.168.74.129:9200/index/fulltext/_search -d'
{
  "query" : { "match" : { "content" : "中国" }},
  "highlight" : {
    "pre_tags" : ["<tag1>", "<tag2>"],
    "post_tags" : ["</tag1>", "</tag2>"],
    "fields" : {
      "content" : {}
    }
  }
}
'
Search result:
{"took":185,"timed_out":false,"_shards":{"total":5,"successful":5,"failed":0},"hits":{"total":1,"max_score":2.2110996,"
hits":[{"_index":"index","_type":"fulltext","_id":"3","_score":2.2110996,"_source":{"content":"中韩渔警冲突调查:韩警平均每天扣1艘中国渔船"}
,"highlight":{"content":["中韩渔警冲突调查:韩警平均每天扣1艘<tag1>中国</tag1>渔船"]}}]}}[esuser@yyp ik]$
IK's dictionaries and configuration live under ik/config:
[esuser@yyp plugins]$ ll ik/config/
total 3016
drwxr-xr-x. 2 esuser esuser 4096 Feb 19 2017 custom
-rw-r--r--. 1 esuser esuser 697 Nov 16 11:45 IKAnalyzer.cfg.xml
-rw-r--r--. 1 esuser esuser 3058510 Nov 16 11:45 main.dic
-rw-r--r--. 1 esuser esuser 123 Nov 16 11:45 preposition.dic
-rw-r--r--. 1 esuser esuser 1824 Nov 16 11:45 quantifier.dic
-rw-r--r--. 1 esuser esuser 164 Nov 16 11:45 stopword.dic
-rw-r--r--. 1 esuser esuser 192 Nov 16 11:45 suffix.dic
-rw-r--r--. 1 esuser esuser 752 Nov 16 11:45 surname.dic
[esuser@yyp plugins]$ cd ik/config/
[esuser@yyp config]$ cat IKAnalyzer.cfg.xml
<?xml version="1.0" encoding="UTF-8"?>
<!DOCTYPE properties SYSTEM "http://java.sun.com/dtd/properties.dtd">
<properties>
    <comment>IK Analyzer extension configuration</comment>
    <!-- users can configure their own extension dictionaries here -->
    <entry key="ext_dict">custom/mydict.dic;custom/single_word_low_freq.dic</entry>
    <!-- users can configure their own extension stopword dictionaries here -->
    <entry key="ext_stopwords">custom/ext_stopword.dic</entry>
    <!-- users can configure a remote extension dictionary here -->
    <!-- <entry key="remote_ext_dict">words_location</entry> -->
    <!-- users can configure a remote extension stopword dictionary here -->
    <!-- <entry key="remote_ext_stopwords">words_location</entry> -->
</properties>
[esuser@yyp config]$
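To extend the local dictionary, append words to one of the files listed under ext_dict, one word per line in UTF-8, then restart ES (a hypothetical example; the added word is arbitrary):

[esuser@yyp config]$ echo "渔警" >> custom/mydict.dic
# local dictionary changes take effect only after restarting Elasticsearch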
The plugin also supports hot updates of the IK dictionaries, via the remote-dictionary entries mentioned in the config file above:
<!-- users can configure a remote extension dictionary here -->
<entry key="remote_ext_dict">location</entry>
<!-- users can configure a remote extension stopword dictionary here -->
<entry key="remote_ext_stopwords">location</entry>
Here location is a URL, e.g. http://yoursite.com/getCustomDict. The request only needs to satisfy the following two points for hot updates to work:
1. The HTTP response must carry Last-Modified and ETag headers (both strings); whenever either value changes, the plugin fetches the URL again and updates its dictionary.
2. The response body must contain one word per line, with \n as the line separator.
With these two requirements met, the dictionary is hot-updated without restarting the ES instance.
In practice, you can put the hot words in a UTF-8 encoded .txt file served by nginx or another simple HTTP server; when the .txt file is modified, the server automatically returns fresh Last-Modified and ETag values on client requests. A separate tool can extract relevant terms from your business system and keep this .txt file updated, as sketched below.
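A minimal sketch of that setup (hypothetical paths and filenames; nginx returns Last-Modified and ETag for static files automatically):

# publish a word list under nginx's web root
echo "新热词" >> /usr/share/nginx/html/hot_words.txt
# then point IK at it in IKAnalyzer.cfg.xml:
#   <entry key="remote_ext_dict">http://yoursite.com/hot_words.txt</entry>
# the plugin polls the URL and reloads whenever Last-Modified or ETag changes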