Japanese Tokenizer (Kuromoji) cannot build UniDic dictionary [LUCENE-4056] #5128
Comments
Christian Moen (@cmoen) (migrated from JIRA) Hello Kazu, Only the IPA dictionary is currently supported. Adding support for UniDic shouldn't be a big technical issue, but if we add support for other dictionaries, I think we should perhaps introduce some notion of a dictionary concept in Kuromoji, since the weights for the search-mode heuristic need to be tuned on a per-dictionary basis. Also note that the stop tag set used by JapanesePartOfSpeechStopFilter needs to be aligned with the relevant POS tag set of UniDic, and we might also want to update the stop words list based on UniDic segmentation in order to properly support it end-to-end. (I'd expect it to be the same or nearly the same as with IPA, though.) Hence, adding UniDic support end-to-end creates a cascade of changes. Even though UniDic is indeed a good dictionary, its license is quite restrictive: it doesn't allow redistribution, requires a license for commercial use, only permits personal use, etc. This is from
As a result, we won't be able to redistribute it and support it out-of-the-box with Lucene/Solr, and using it will most likely require a custom dictionary build like the one you have attempted. Would fixing the dictionary builder for UniDic be a useful starting point in your case? Do you have any information you can share regarding what sort of improvements you expect to see with UniDic? Does it relate to compound segmentation in general, or to katakana compounds? If you can also share some information on how you think we should support UniDic and how big the demand for such support is, that would be very useful. Thanks.
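The per-dictionary concept Christian describes could be sketched roughly as below. This is an illustrative sketch only: the class, the penalty values, and the stop tags are placeholders, not actual Lucene code or tuned values; the point is that each dictionary format would carry its own search-mode weights and its own default POS stop-tag set.

```java
// Hypothetical sketch of a per-dictionary configuration concept.
// All names and numbers here are illustrative, not from Lucene.
import java.util.Set;

public class DictionaryProfiles {
    enum DictionaryFormat { IPADIC, UNIDIC }

    static final class Profile {
        final int searchModeKanjiPenalty;   // would need tuning per dictionary
        final int searchModeOtherPenalty;
        final Set<String> defaultStopTags;  // must match the dictionary's POS tag set

        Profile(int kanji, int other, Set<String> stopTags) {
            this.searchModeKanjiPenalty = kanji;
            this.searchModeOtherPenalty = other;
            this.defaultStopTags = stopTags;
        }
    }

    static Profile profileFor(DictionaryFormat format) {
        switch (format) {
            case IPADIC:
                // Placeholder weights and stop tags in ipadic's POS scheme.
                return new Profile(3000, 1700, Set.of("助詞-格助詞-一般", "助動詞"));
            case UNIDIC:
                // UniDic uses a different POS tag scheme, so its stop tags
                // (and weights) cannot simply be copied from ipadic.
                return new Profile(2500, 1500, Set.of("助詞", "助動詞"));
            default:
                throw new IllegalArgumentException(String.valueOf(format));
        }
    }

    public static void main(String[] args) {
        Profile p = profileFor(DictionaryFormat.UNIDIC);
        System.out.println("UniDic kanji penalty (placeholder): " + p.searchModeKanjiPenalty);
    }
}
```

A design like this would let the tokenizer and JapanesePartOfSpeechStopFilter select weights and stop tags by format instead of hard-coding ipadic assumptions.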
Kazuaki Hiraga (@hkazuakey) (migrated from JIRA) Hi Christian, Thank you for your comment. I understand the situation. I didn't expect UniDic to be bundled and shipped with Kuromoji. For the time being, I just want to build it and use it with Kuromoji for Lucene/Solr. We just started evaluating UniDic and it's at a very early stage, so we haven't concluded that we have to or need to use UniDic instead of the IPA dictionary. Although we haven't finished our evaluation of UniDic, I like the concept and policy of UniDic, which strictly defines how tokens are specified, and I am satisfied with the tokenization results. I think it's better than the IPA dictionary regarding katakana segmentation and compound segmentation. On the other hand, I understand there's a license issue that we have to resolve if we decide to use it in our internal services. Thanks for reminding me. Thanks.
Robert Muir (@rmuir) (migrated from JIRA)
That assert from the stacktrace would probably be pretty tricky. It's an optimization that works for
To fix it, this optimization would have to either be conditionalized or pulled into a subclass for
Still, UniDic support seems pretty tricky to maintain, because if we want to share any code at all,
Anyway, that's the background for that particular assert; it's my fault, but I don't have an easy fix!
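For illustration, the kind of space optimization that an assert like the one in the stack trace typically guards can be sketched as follows. This is a simplified, hypothetical model, not the actual BinaryDictionaryWriter code: the assumption sketched here is that every entry's left and right connection IDs are equal (which holds for mecab-ipadic), so the writer can store a single ID per entry, and an entry with differing IDs, as UniDic can produce, trips the assertion.

```java
// Hypothetical sketch of a single-connection-ID compaction that holds for
// mecab-ipadic-style entries but not in general. Not the actual Lucene code.
import java.util.ArrayList;
import java.util.List;

public class CompactEntryWriter {
    private final List<Integer> connectionIds = new ArrayList<>();

    // Store one connection ID per entry, relying on leftId == rightId.
    // (The word cost is elided in this sketch.)
    public void put(int leftId, int rightId, int wordCost) {
        if (leftId != rightId) {
            // This models the AssertionError seen in the stack trace:
            // the compaction is invalid when the IDs differ.
            throw new AssertionError("leftId != rightId: " + leftId + " vs " + rightId);
        }
        connectionIds.add(leftId);
    }

    public int size() {
        return connectionIds.size();
    }

    public static void main(String[] args) {
        CompactEntryWriter w = new CompactEntryWriter();
        w.put(1285, 1285, 5543);      // ipadic-style entry: accepted
        try {
            w.put(4789, 5123, 8000);  // entry with differing IDs: assertion trips
        } catch (AssertionError e) {
            System.out.println("rejected: " + e.getMessage());
        }
    }
}
```

As Robert notes, fixing this would mean either conditionalizing the optimization per dictionary format or moving it into a format-specific subclass.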
Prashant Pol (migrated from JIRA) I found support for the UNIDIC dictionary at, It reads the dictionary format as UNIDIC, but later does not use it when creating the UnknownDictionaryWriter. Any update on this?
Tomoko Uchida (@mocobeta) (migrated from JIRA) Hi, as far as licensing goes, UniDic is now distributed under GPL, LGPL, and BSD 3-Clause. To my knowledge, the last one is compatible with ALv2. Please see: https://unidic.ninjal.ac.jp/download and https://unidic.ninjal.ac.jp/copying/BSD Personally, I am interested in using UniDic from Kuromoji, because the dictionary is still maintained by researchers and is more suitable for search purposes than the current search mode based on mecab-ipadic. If there is a possibility of moving this issue forward, I'd like to help with it.
Kazuaki Hiraga (@hkazuakey) (migrated from JIRA) I agree with
Tomoko Uchida (@mocobeta) (migrated from JIRA) @hkazuakey: thanks, do you have a patch for this? I think we can work together. Even if it isn't merged into Lucene master, it would be valuable for users to have the patch available here. While the mecab-ipadic dictionary is becoming obsolete (this fact sometimes affects search quality, so search engineers in Japan often suffer from it), UniDic and its extensions are still actively maintained to adapt to changes in the language. Of course, it would be much better if we could provide substantial evidence here.
Kazuaki Hiraga (@hkazuakey) (migrated from JIRA)
Jun Ohtani (@johtani) (migrated from JIRA) I succeeded in building "unidic-mecab-2.1.2_src.zip" with the attached patch file. Unfortunately, my patch contains my local directory path, but we can start the discussion with it :)
Jun Ohtani (@johtani) (migrated from JIRA) I made a pull request on the GitHub repo.
I tried to build a UniDic dictionary to use along with Kuromoji on Solr 3.6. I think UniDic is a better dictionary than the IPA dictionary, so Kuromoji for Lucene/Solr should support the UniDic dictionary as standalone Kuromoji does.
The following is my procedure:
I modified build.xml under the lucene/contrib/analyzers/kuromoji directory and ran 'ant build-dict', and got the error below.
build-dict:
[java] dictionary builder
[java]
[java] dictionary format: UNIDIC
[java] input directory: /home/kazu/Work/src/solr/brunch_3_6/lucene/build/contrib/analyzers/kuromoji/unidic-mecab1312src
[java] output directory: /home/kazu/Work/src/solr/brunch_3_6/lucene/contrib/analyzers/kuromoji/src/resources
[java] input encoding: utf-8
[java] normalize entries: false
[java]
[java] building tokeninfo dict...
[java] parse...
[java] sort...
[java] encode...
[java] Exception in thread "main" java.lang.AssertionError
[java] at org.apache.lucene.analysis.ja.util.BinaryDictionaryWriter.put(BinaryDictionaryWriter.java:113)
[java] at org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.buildDictionary(TokenInfoDictionaryBuilder.java:141)
[java] at org.apache.lucene.analysis.ja.util.TokenInfoDictionaryBuilder.build(TokenInfoDictionaryBuilder.java:76)
[java] at org.apache.lucene.analysis.ja.util.DictionaryBuilder.build(DictionaryBuilder.java:37)
[java] at org.apache.lucene.analysis.ja.util.DictionaryBuilder.main(DictionaryBuilder.java:82)
And the diff of build.xml:
===================================================================
--- build.xml (revision 1338023)
+++ build.xml (working copy)
@@ -28,19 +28,31 @@
<property name="maven.dist.dir" location="../../../dist/maven" />
<!-- default configuration: uses mecab-ipadic -->
<!--
<property name="ipadic.version" value="mecab-ipadic-2.7.0-20070801" />
<property name="dict.src.file" value="${ipadic.version}.tar.gz" />
<property name="dict.url" value="http://mecab.googlecode.com/files/${dict.src.file}"/>
-->
<!-- alternative configuration: uses mecab-naist-jdic
<property name="ipadic.version" value="mecab-naist-jdic-0.6.3b-20111013" />
<property name="dict.src.file" value="${ipadic.version}.tar.gz" />
<property name="dict.url" value="http://sourceforge.jp/frs/redir.php?m=iij&f=/naist-jdic/53500/${dict.src.file}"/>
-->
<property name="dict.encoding" value="euc-jp"/>
<property name="dict.format" value="ipadic"/>
<property name="dict.target.dir" location="./src/resources"/>
@@ -58,7 +70,8 @@
<target name="compile-core" depends="jar-analyzers-common, common.compile-core" />
<target name="download-dict" unless="dict.available">
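Following the style of the commented-out mecab-ipadic and mecab-naist-jdic blocks above, an alternative UniDic configuration might look like the fragment below. The version string matches the archive mentioned later in the thread; the download URL is a placeholder, and the exact dict.format value accepted by the builder is an assumption:

```xml
<!-- alternative configuration: uses unidic-mecab (URL is a placeholder)
<property name="ipadic.version" value="unidic-mecab-2.1.2_src" />
<property name="dict.src.file" value="${ipadic.version}.zip" />
<property name="dict.url" value="https://example.org/unidic/${dict.src.file}"/>
<property name="dict.encoding" value="utf-8"/>
<property name="dict.format" value="unidic"/>
-->
```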
Migrated from LUCENE-4056 by Kazuaki Hiraga (@hkazuakey), updated Oct 16 2019
Attachments: LUCENE-4056.patch