# lucene-stanford-lemmatizer

This is a library that adds NLP capabilities to Lucene-based search engines: lemmatization and filtering based on part-of-speech (POS) tags. It uses the state-of-the-art Stanford POS Tagger for NLP support.

Lemmatization is similar to stemming, except smarter: it takes the context of a word into account to determine the correct lemma/stem. For example, *saw* can be lemmatized to *see* when it is used as a verb, while a context-blind stemmer must treat every occurrence the same way. POS filtering is a smarter replacement for stop lists: it allows filtering out all pronouns, adverbs, etc.

For lemmatization and POS tagging to work best, your queries should be English sentences instead of just bunches of keywords.
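As an illustration, the sketch below parses a sentence-style query with Lucene's `QueryParser` (assuming Lucene 3.x, which this library targets; the field name `contents` and the model path are examples, and the analyzer is constructed as described under "Getting started" below):

```java
import org.apache.lucene.analysis.Analyzer;
import org.apache.lucene.queryParser.QueryParser;
import org.apache.lucene.search.Query;
import org.apache.lucene.util.Version;

import nl.rug.eco.lucene.EnglishLemmaAnalyzer;

public class SentenceQueryExample {
    public static void main(String[] args) throws Exception {
        // Example model path; see "Getting started" below.
        Analyzer analyzer =
            new EnglishLemmaAnalyzer("models/english-left3words-distsim.tagger");

        // Parse a full English sentence rather than isolated keywords,
        // so the tagger has enough context to assign POS tags and lemmas.
        QueryParser parser = new QueryParser(Version.LUCENE_36, "contents", analyzer);
        Query query = parser.parse("the cats were chasing mice");
        System.out.println(query);
    }
}
```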

## Getting started

Download this package and the Stanford POS Tagger.

Set your `CLASSPATH` to include the above, then issue `ant jar`.

In your search code, construct an `EnglishLemmaAnalyzer` instead of a `StandardAnalyzer` (or whatever you normally use). Pass the filename of a Stanford POS Tagger model file to the constructor (model files can be found in the `models/` directory of the Stanford POS Tagger source directory).
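For instance, a minimal indexing setup might look like this (a sketch assuming Lucene 3.x, which this library targets; the model path and field name are examples, not part of this library):

```java
import java.io.File;

import org.apache.lucene.document.Document;
import org.apache.lucene.document.Field;
import org.apache.lucene.index.IndexWriter;
import org.apache.lucene.index.IndexWriterConfig;
import org.apache.lucene.store.Directory;
import org.apache.lucene.store.FSDirectory;
import org.apache.lucene.util.Version;

import nl.rug.eco.lucene.EnglishLemmaAnalyzer;

public class LemmaIndexingExample {
    public static void main(String[] args) throws Exception {
        // Use any model file from the tagger's models/ directory;
        // this particular filename is just an example.
        EnglishLemmaAnalyzer analyzer =
            new EnglishLemmaAnalyzer("models/english-left3words-distsim.tagger");

        Directory dir = FSDirectory.open(new File("index"));
        IndexWriterConfig config = new IndexWriterConfig(Version.LUCENE_36, analyzer);
        IndexWriter writer = new IndexWriter(dir, config);

        Document doc = new Document();
        doc.add(new Field("contents", "The cats were chasing mice.",
                          Field.Store.YES, Field.Index.ANALYZED));
        writer.addDocument(doc);
        writer.close();
    }
}
```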

## Going further

It is possible to determine which parts of speech should be indexed by subclassing the tokenizer. See the API docs for details.
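The exact extension point is described in the API docs; as an illustration of the idea only (the class and logic below are not part of this library's API), such a subclass would implement a predicate over Penn Treebank tags, along these lines:

```java
import java.util.Arrays;
import java.util.HashSet;
import java.util.Set;

// Illustration only: this predicate is not part of lucene-stanford-lemmatizer.
// It shows the kind of decision a tokenizer subclass would make, using the
// Penn Treebank tags produced by the Stanford POS Tagger.
public class PosFilterSketch {
    // Keep content-bearing parts of speech (nouns, verbs, adjectives);
    // drop pronouns, adverbs, determiners, etc.
    private static final Set<String> KEPT_TAG_PREFIXES =
        new HashSet<String>(Arrays.asList("NN", "VB", "JJ"));

    public static boolean shouldIndex(String posTag) {
        for (String prefix : KEPT_TAG_PREFIXES) {
            if (posTag.startsWith(prefix)) {
                return true;
            }
        }
        return false;
    }

    public static void main(String[] args) {
        System.out.println(shouldIndex("NNS")); // true: plural noun
        System.out.println(shouldIndex("PRP")); // false: personal pronoun
    }
}
```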

## Bugs

Lucene 4.x support is missing. Please don't email me (Lars) about this; I don't have the time to learn the new APIs and fix it. If you know a fix, please fork this project and publish your changes.

The implementation is limited to English, because the Stanford lemmatizer only handles that language. The POS tagger also handles Chinese and German, so it should be possible to add support for those languages.