Added third highlighter type based on lucene postings highlighter
Requires the field's index_options to be set to "offsets" so that positions and offsets are stored in the postings list.
Considerably faster than the plain highlighter since it doesn't need to reanalyze the text to be highlighted: the larger the documents, the bigger the performance gain should be.
Requires less disk space than term vectors, which the fast_vector_highlighter needs.
Breaks the text into sentences and highlights them, using a BreakIterator to find sentence boundaries. Works really well with natural text, less so if the text contains html markup for instance.
Treats the document as the whole corpus, and scores individual sentences as if they were documents in this corpus, using the BM25 algorithm.

Uses a forked version of the lucene postings highlighter to support:
- per-value discrete highlighting for fields that have multiple values, needed when number_of_fragments=0 since we want to return one snippet per value
- manually passing in query terms to avoid extracting terms multiple times, since we use a different highlighter instance per doc/field while the query is always the same

The lucene postings highlighter api is quite different compared to the existing highlighters api, the main difference being that it allows highlighting multiple fields in multiple docs with a single call, ensuring sequential IO.
The way it is introduced in elasticsearch in this first round is a compromise that avoids changing the current highlight api, which works per document, per field. The main disadvantage is that we lose the sequential IO, but we can always refactor the highlight api later to work with multiple documents.

Supports pre_tag, post_tag, number_of_fragments (0 highlights the whole field), require_field_match, no_match_size, order by score and html encoding.
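For illustration only (not part of the commit), a request combining several of these options might look like the following sketch; the `content` field name is hypothetical:

```
{
    "query" : {...},
    "highlight" : {
        "pre_tags" : ["<tag1>"],
        "post_tags" : ["</tag1>"],
        "order" : "score",
        "fields" : {
            "content" : {
                "number_of_fragments" : 0,
                "no_match_size" : 100
            }
        }
    }
}
```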

Closes elastic#3704
javanna committed Oct 24, 2013
1 parent d3ebd8e commit dbb0399
Showing 21 changed files with 4,771 additions and 115 deletions.
126 changes: 105 additions & 21 deletions docs/reference/search/request/highlighting.asciidoc
@@ -2,8 +2,9 @@
=== Highlighting

Allows to highlight search results on one or more fields. The
-implementation uses either the lucene `fast-vector-highlighter` or
-`highlighter`. The search request body:
+implementation uses either the lucene `highlighter`, `fast-vector-highlighter`
+or `postings-highlighter`. The following is an example of the search request
+body:

[source,js]
--------------------------------------------------
@@ -24,16 +25,56 @@ fragments).

In order to perform highlighting, the actual content of the field is
required. If the field in question is stored (has `store` set to `yes`
-in the mapping), it will be used, otherwise, the actual `_source` will
+in the mapping) it will be used, otherwise, the actual `_source` will
be loaded and the relevant field will be extracted from it.

-If `term_vector` information is provided by setting `term_vector` to
-`with_positions_offsets` in the mapping then the fast vector
-highlighter will be used instead of the plain highlighter. The fast vector highlighter:
+Since `0.20.2` the field name supports wildcard notation. For example, using
+`comment_*` will cause all fields that match the expression to be highlighted.
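Building on the wildcard notation above, a highlight request could look like this sketch (query body elided as in the other examples):

[source,js]
--------------------------------------------------
{
    "query" : {...},
    "highlight" : {
        "fields" : {
            "comment_*" : {}
        }
    }
}
--------------------------------------------------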

==== Postings highlighter

coming[0.90.6]

If `index_options` is set to `offsets` in the mapping the postings highlighter
will be used instead of the plain highlighter. The postings highlighter:

* Is faster since it doesn't need to reanalyze the text to be highlighted:
the larger the documents, the better the performance gain should be
* Requires less disk space than term_vectors, needed for the fast vector
highlighter
* Breaks the text into sentences and highlights them. Plays really well with
natural language text, not as well with fields containing for instance html markup
* Treats the document as the whole corpus, and scores individual sentences as
if they were documents in this corpus, using the BM25 algorithm

Here is an example of setting the `content` field to allow for
highlighting using the postings highlighter on it:

[source,js]
--------------------------------------------------
{
"type_name" : {
"content" : {"index_options" : "offsets"}
}
}
--------------------------------------------------

Note that the postings highlighter is meant to perform simple query term
highlighting, regardless of positions. That means that when used for
instance in combination with a phrase query, it will highlight all the terms
that the query is composed of, whether or not they are actually part of
a phrase match.
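For instance, with a hypothetical phrase query such as the following, the postings highlighter would highlight every occurrence of both terms, not only those inside matching phrases (the field name and phrase text are illustrative):

[source,js]
--------------------------------------------------
{
    "query" : {
        "match_phrase" : {
            "content" : "quick fox"
        }
    },
    "highlight" : {
        "fields" : {
            "content" : {}
        }
    }
}
--------------------------------------------------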


==== Fast vector highlighter

If `term_vector` information is provided by setting `term_vector` to
`with_positions_offsets` in the mapping then the fast vector highlighter
will be used instead of the plain highlighter. The fast vector highlighter:

* Is faster especially for large fields (> `1MB`)
* Can be customized with `boundary_chars`, `boundary_max_scan`, and
-`fragment_offset` (see below)
+`fragment_offset` (see <<boundary-characters,below>>)
* Requires setting `term_vector` to `with_positions_offsets` which
increases the size of the index

@@ -50,9 +91,24 @@ the index to be bigger):
}
--------------------------------------------------

-Since `0.20.2` the field name support wildcard notation, for example,
-using `comment_*` which will cause all fields that match the expression
-to be highlighted.
==== Force highlighter type

The `type` field allows forcing a specific highlighter type. This is useful
for instance when needing to use the plain highlighter on a field that has
`term_vectors` enabled. The allowed values are: `plain`, `postings` and `fvh`.
The following is an example that forces the use of the plain highlighter:

[source,js]
--------------------------------------------------
{
"query" : {...},
"highlight" : {
"fields" : {
"content" : { "type" : "plain"}
}
}
}
--------------------------------------------------

[[tags]]
==== Highlighting Tags
@@ -61,6 +117,23 @@ By default, the highlighting will wrap highlighted text in `<em>` and
`</em>`. This can be controlled by setting `pre_tags` and `post_tags`,
for example:

[source,js]
--------------------------------------------------
{
"query" : {...},
"highlight" : {
"pre_tags" : ["<tag1>"],
"post_tags" : ["</tag1>"],
"fields" : {
"_all" : {}
}
}
}
--------------------------------------------------

When using the fast vector highlighter there can be more tags, and the
"importance" is ordered.

[source,js]
--------------------------------------------------
{
@@ -75,9 +148,8 @@ for example:
}
--------------------------------------------------

-There can be a single tag or more, and the "importance" is ordered.
There are also built in "tag" schemas, with currently a single schema
-called `styled` with `pre_tags` of:
+called `styled` with the following `pre_tags`:

[source,js]
--------------------------------------------------
@@ -87,7 +159,7 @@
<em class="hlt10">
--------------------------------------------------

-And post tag of `</em>`. If you think of more nice to have built in tag
+and `</em>` as `post_tags`. If you think of more nice to have built in tag
schemas, just send an email to the mailing list or open an issue. Here
is an example of switching tag schemas:

@@ -104,6 +176,9 @@ is an example of switching tag schemas:
}
--------------------------------------------------


==== Encoder

An `encoder` parameter can be used to define how highlighted text will
be encoded. It can be either `default` (no encoding) or `html` (will
escape html, if you use html highlighting tags).
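A minimal sketch of enabling html encoding (the `content` field name is illustrative):

[source,js]
--------------------------------------------------
{
    "query" : {...},
    "highlight" : {
        "encoder" : "html",
        "fields" : {
            "content" : {}
        }
    }
}
--------------------------------------------------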
@@ -112,7 +187,8 @@ escape html, if you use html highlighting tags.

Each field highlighted can control the size of the highlighted fragment
in characters (defaults to `100`), and the maximum number of fragments
-to return (defaults to `5`). For example:
+to return (defaults to `5`).
+For example:

[source,js]
--------------------------------------------------
@@ -126,8 +202,11 @@ to return (defaults to `5`). For example:
}
--------------------------------------------------

-On top of this it is possible to specify that highlighted fragments are
-order by score:
+The `fragment_size` is ignored when using the postings highlighter, as it
+outputs sentences regardless of their length.
+
+On top of this it is possible to specify that highlighted fragments need
+to be sorted by score:

[source,js]
--------------------------------------------------
@@ -170,7 +249,10 @@ In the case where there is no matching fragment to highlight, the default is
to not return anything. Instead, we can return a snippet of text from the
beginning of the field by setting `no_match_size` (default `0`) to the length
of the text that you want returned. The actual length may be shorter than
-specified as it tries to break on a word boundary.
+specified as it tries to break on a word boundary. When using the postings
+highlighter it is not possible to control the actual size of the snippet,
+therefore the first sentence gets returned whenever `no_match_size` is
+greater than `0`.

[source,js]
--------------------------------------------------
@@ -260,9 +342,11 @@ query and the rescore query in `highlight_query`.
}
--------------------------------------------------

-Note the score of text fragment in this case is calculated by Lucene
-highlighting framework. For implementation details you can check
-`ScoreOrderFragmentsBuilder.java` class.
+Note that the score of text fragment in this case is calculated by the Lucene
+highlighting framework. For implementation details you can check the
+`ScoreOrderFragmentsBuilder.java` class. On the other hand when using the
+postings highlighter the fragments are scored using, as mentioned above,
+the BM25 algorithm.

[[highlighting-settings]]
==== Global Settings
@@ -299,7 +383,7 @@ matches specifically on them.
[[boundary-characters]]
==== Boundary Characters

-When highlighting a field that is mapped with term vectors,
+When highlighting a field using the fast vector highlighter,
`boundary_chars` can be configured to define what constitutes a boundary
for highlighting. It's a single string with each boundary character
defined in it. It defaults to `.,!? \t\n`.
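As a sketch, the boundary characters could be narrowed to whitespace only (the `content` field is assumed to be mapped with `term_vector` set to `with_positions_offsets`):

[source,js]
--------------------------------------------------
{
    "query" : {...},
    "highlight" : {
        "fields" : {
            "content" : {
                "boundary_chars" : " \t\n"
            }
        }
    }
}
--------------------------------------------------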
@@ -0,0 +1,78 @@
/*
* Licensed to ElasticSearch and Shay Banon under one
* or more contributor license agreements. See the NOTICE file
* distributed with this work for additional information
* regarding copyright ownership. ElasticSearch licenses this
* file to you under the Apache License, Version 2.0 (the
* "License"); you may not use this file except in compliance
* with the License. You may obtain a copy of the License at
*
* http://www.apache.org/licenses/LICENSE-2.0
*
* Unless required by applicable law or agreed to in writing, software
* distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
* WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
* License for the specific language governing permissions and limitations under
* the License.
*/

package org.apache.lucene.search.postingshighlight;

import org.apache.lucene.search.highlight.Encoder;
import org.elasticsearch.search.highlight.HighlightUtils;

/**
 * Custom passage formatter that allows us to:
 * 1) extract different snippets (instead of a single big string) together with their scores ({@link Snippet})
 * 2) use the {@link Encoder} implementations that are already used with the other highlighters
 */
public class CustomPassageFormatter extends XPassageFormatter {

    private final String preTag;
    private final String postTag;
    private final Encoder encoder;

    public CustomPassageFormatter(String preTag, String postTag, Encoder encoder) {
        this.preTag = preTag;
        this.postTag = postTag;
        this.encoder = encoder;
    }

    @Override
    public Snippet[] format(Passage[] passages, String content) {
        Snippet[] snippets = new Snippet[passages.length];
        int pos;
        for (int j = 0; j < passages.length; j++) {
            Passage passage = passages[j];
            StringBuilder sb = new StringBuilder();
            pos = passage.startOffset;
            for (int i = 0; i < passage.numMatches; i++) {
                int start = passage.matchStarts[i];
                int end = passage.matchEnds[i];
                // it's possible to have overlapping terms
                if (start > pos) {
                    append(sb, content, pos, start);
                }
                if (end > pos) {
                    sb.append(preTag);
                    append(sb, content, Math.max(pos, start), end);
                    sb.append(postTag);
                    pos = end;
                }
            }
            // it's possible a "term" from the analyzer could span a sentence boundary.
            append(sb, content, pos, Math.max(pos, passage.endOffset));
            // we remove the paragraph separator if present at the end of the snippet (we used it as separator between values)
            if (sb.charAt(sb.length() - 1) == HighlightUtils.PARAGRAPH_SEPARATOR) {
                sb.deleteCharAt(sb.length() - 1);
            }
            // and we trim the snippets too
            snippets[j] = new Snippet(sb.toString().trim(), passage.score, passage.numMatches > 0);
        }
        return snippets;
    }

    protected void append(StringBuilder dest, String content, int start, int end) {
        dest.append(encoder.encodeText(content.substring(start, end)));
    }
}
