
Mapping entities to the index structure

Mapping an entity

In [getting-started] you have already seen that all the metadata information needed to index entities is described through annotations. There is no need for XML mapping files. You can still use Hibernate mapping files for the basic Hibernate configuration, but the Hibernate Search specific configuration has to be expressed via annotations.

Note

There is no XML configuration available for Hibernate Search, but we provide a programmatic mapping API that elegantly replaces this kind of configuration (see Programmatic API for more information).

If you want to contribute the XML mapping implementation, see HSEARCH-210.

Basic mapping

Let's start with the most commonly used annotations when mapping an entity.

@Indexed

Foremost you must declare a persistent class as indexable by annotating the class with @Indexed. All entities not annotated with @Indexed will be ignored by the indexing process.

Example 1. Making a class indexable with @Indexed
@Entity
@Indexed
public class Essay {
    ...
}

You can optionally specify the Indexed.index attribute to change the default name of the index. For more information regarding index naming see [search-configuration-directory].
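For example, a minimal sketch of overriding the index name (the name "essays" is purely illustrative):

@Entity
@Indexed(index = "essays")
public class Essay {
    ...
}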

You can also specify an optional indexing interceptor. For more information see conditional indexing.

@Field

For each property of your entity, you have the ability to describe whether and how it will be indexed. Adding the @Field annotation declares a property as indexed and allows you to configure various aspects of the indexing process. Without @Field the property is ignored by the indexing process.

Hibernate Search tries to determine the best way to index your property. In most cases this will be as a string, but for the types int, long, double and float (and their respective Java wrapper types) Lucene’s numeric field encoding (see @NumericField) is used. This numeric encoding uses a so-called Trie structure which allows for efficient range queries and sorting, resulting in query response times that are orders of magnitude faster than with plain string encoding. Byte and short properties will only be encoded in numeric fields if explicitly marked with the @NumericField annotation.

Caution

Prior to Search 5, numeric field encoding was only chosen if explicitly requested via @NumericField. As of Search 5 this encoding is automatically chosen for numeric types. To avoid numeric encoding you can explicitly specify a non-numeric field bridge via @Field.bridge or @FieldBridge. The package org.hibernate.search.bridge.builtin contains a set of bridges which encode numbers as strings, for example org.hibernate.search.bridge.builtin.IntegerBridge.
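For instance, the following sketch forces string encoding for an int property using one of these built-in bridges (the property name is illustrative):

@Field
@FieldBridge(impl = org.hibernate.search.bridge.builtin.IntegerBridge.class)
private int stars;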

The following attributes of the @Field annotation help you control the indexing outcome:

  • name: describes under which name the property should be stored in the Lucene Document. The default value is the property name (following the JavaBeans convention)

  • store: describes whether or not the property is stored in the Lucene index. You can store the value Store.YES (consuming more space in the index but allowing projection), store it in a compressed way Store.COMPRESS (this does consume more CPU), or avoid any storage Store.NO (this is the default value). When a property is stored, you can retrieve its original value from the Lucene Document. Storing the property has no impact on whether the value is searchable or not.

  • index: describes whether the property is indexed or not. The different values are Index.NO (no indexing, meaning the value cannot be found by a query), Index.YES (the element gets indexed and is searchable). The default value is Index.YES. Index.NO can be useful for cases where a property is not required to be searchable, but needed for projection.

    Tip

    Index.NO in combination with Analyze.YES or Norms.YES is not useful, since analyzing and norms require the property to be indexed.

  • analyze: determines whether the property is analyzed (Analyze.YES) or not (Analyze.NO). The default value is Analyze.YES.

    Tip

    Whether or not you want to analyze a property depends on whether you wish to search the element as is, or by the words it contains. It makes sense to analyze a text field, but probably not a date field.

    Tip

    Fields used for sorting or faceting must not be analyzed.

  • norms: describes whether index time boosting information should be stored (Norms.YES) or not (Norms.NO). Not storing the norms can save a considerable amount of memory, but index time boosting will not be available in this case. The default value is Norms.YES.

  • termVector: describes collections of term-frequency pairs. This attribute enables the storing of the term vectors within the documents during indexing. The default value is TermVector.NO.

    The different values of this attribute are:

    TermVector.YES: Store the term vectors of each document. This produces two synchronized arrays, one containing the document terms and the other containing each term’s frequency.

    TermVector.NO: Do not store term vectors.

    TermVector.WITH_OFFSETS: Store the term vector and token offset information. This is the same as TermVector.YES plus it contains the starting and ending offset position information for the terms.

    TermVector.WITH_POSITIONS: Store the term vector and token position information. This is the same as TermVector.YES plus it contains the ordinal positions of each occurrence of a term in a document.

    TermVector.WITH_POSITION_OFFSETS: Store the term vector, token position and offset information. This is a combination of TermVector.YES, WITH_OFFSETS and WITH_POSITIONS.

  • indexNullAs: By default null values are ignored and not indexed. However, using indexNullAs you can specify a string to be inserted as token for the null value. By default this attribute is set to org.hibernate.search.annotations.Field.DO_NOT_INDEX_NULL, indicating that null values should not be indexed. You can set it to DEFAULT_NULL_TOKEN to indicate that a default null token should be used, as shown in the snippet after this list. This default null token can be specified in the configuration using hibernate.search.default_null_token. If this property is not set, the string "null" will be used as default.

    Note

    When indexNullAs is used, it is important to use the chosen null token in search queries (see [search-query]) in order to find null values. It is also advisable to use this feature only with un-analyzed fields (analyze=Analyze.NO).

    Note

    When implementing a custom FieldBridge or TwoWayFieldBridge it is up to the developer to handle the indexing of null values (see JavaDocs of LuceneOptions.indexNullAs()).

  • boost: Refer to section about boosting

  • bridge: Refer to section about field bridges
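To illustrate indexNullAs, here is a minimal sketch (the property name is illustrative) which indexes the configured default null token whenever the value is null:

@Field(analyze = Analyze.NO, indexNullAs = Field.DEFAULT_NULL_TOKEN)
private String coverArtist;

With such a mapping, a query on the null token (for example the default "null") matches entities whose coverArtist property is null.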

@NumericField

@NumericField is a companion annotation to @Field. It can be specified in the same scope as @Field, but only on properties of numeric type like byte, short, int, long, double and float (and their respective Java wrapper types). It allows you to define a custom precisionStep for the numeric encoding of the property value.

@NumericField accepts the following parameters:

  • forField: (Optional) Specifies the name of the related @Field that will be indexed numerically. It is only mandatory when the property contains more than one @Field declaration.

  • precisionStep: (Optional) Changes the way the Trie structure is stored in the index. Smaller precisionSteps lead to more disk space usage and faster range and sort queries. Larger values lead to less space used and range query performance closer to the range query using string encoding. The default value is 4.

Lucene supports the numeric types: Double, Long, Integer and Float. For properties of types Byte and Short, an Integer field will be used in the index. Other numeric types should use the default string encoding (via @Field), unless the application can deal with a potential loss in precision, in which case a custom NumericFieldBridge can be used. See Defining a custom NumericFieldBridge for BigDecimal.

Example 2. Defining a custom NumericFieldBridge for BigDecimal
public class BigDecimalNumericFieldBridge extends NumericFieldBridge {
    private static final BigDecimal storeFactor = BigDecimal.valueOf(100);

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        if ( value != null ) {
            BigDecimal decimalValue = (BigDecimal) value;
            long tmpLong = decimalValue.multiply( storeFactor ).longValue();
            Long indexedValue = Long.valueOf( tmpLong );
            luceneOptions.addNumericFieldToDocument( name, indexedValue, document );
        }
    }

    @Override
    public Object get(String name, Document document) {
        String fromLucene = document.get( name );
        BigDecimal storedBigDecimal = new BigDecimal( fromLucene );
        return storedBigDecimal.divide( storeFactor );
    }
}

You would use this custom bridge as seen in Use of BigDecimalNumericFieldBridge. In this case three annotations are used - @Field, @NumericField and @FieldBridge. @Field is required to mark the property as indexed (a standalone @NumericField is never allowed). @NumericField might be omitted in this specific case, because the @FieldBridge annotation used already refers to a NumericFieldBridge instance. However, using @NumericField makes the numeric nature of the property explicit.

Example 3. Use of BigDecimalNumericFieldBridge
@Entity
@Indexed
public static class Item {
    @Id
    @GeneratedValue
    private int id;

    @Field
    @NumericField
    @FieldBridge(impl = BigDecimalNumericFieldBridge.class)
    private BigDecimal price;

    public int getId() {
        return id;
    }

    public BigDecimal getPrice() {
       return price;
    }

    public void setPrice(BigDecimal price) {
        this.price = price;
    }
}
@Id

Finally, the id property of an entity is a special property used by Hibernate Search to ensure index uniqueness of a given entity. By design, an id has to be stored and must not be tokenized. It is also always string encoded, even if the id is a number. To mark a property as index id, use the @DocumentId annotation. If you are using JPA and have annotated the property with @Id you can omit @DocumentId. The chosen entity id will also be used as document id.

Example 4. Specifying indexed properties
@Entity
@Indexed
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", store=Store.YES)
    public String getSummary() { return summary; }

    @Lob
    @Field
    public String getText() { return text; }

    @Field
    @NumericField(precisionStep = 6)
    public float getGrade() { return grade; }
}

Specifying indexed properties defines an index with four fields: id, Abstract, text and grade. Note that by default the field name is de-capitalized, following the JavaBean specification. The grade field is indexed numerically with a slightly larger precisionStep than the default.

Mapping properties multiple times

Sometimes one has to map a property multiple times per index, with slightly different indexing strategies. For example, sorting a query by field requires the field to be un-analyzed. If one wants to search by words in this property and still sort it, one needs to index it twice - once analyzed and once un-analyzed. @Fields allows you to achieve this goal.

Example 5. Using @Fields to map a property multiple times
@Entity
@Indexed(index = "Book")
public class Book {
    @Fields( {
            @Field,
            @Field(name = "summary_forSort", analyze = Analyze.NO, store = Store.YES)
            } )
    public String getSummary() {
        return summary;
    }

    // ...
}

In Using @Fields to map a property multiple times the field summary is indexed twice, once as summary in a tokenized way, and once as summary_forSort in an un-tokenized way. @Field supports two attributes useful when @Fields is used:

  • analyzer: defines a @Analyzer annotation per field rather than per property

  • bridge: defines a @FieldBridge annotation per field rather than per property

See below for more information about analyzers and field bridges.
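As a quick sketch of these attributes (assuming an analyzer definition named "phoneticanalyzer" has been declared via @AnalyzerDef), a per-field analyzer can be declared like this:

@Fields({
    @Field,
    @Field(name = "summary_phonetic",
           analyzer = @Analyzer(definition = "phoneticanalyzer"))
})
public String getSummary() {
    return summary;
}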

Embedded and associated objects

Associated objects as well as embedded objects can be indexed as part of the root entity index. This is useful if you expect to search a given entity based on properties of the associated objects.

In the example Indexing associations the aim is to return places where the associated city is Atlanta (in Lucene query parser language, it would translate into address.city:Atlanta). All place fields are added to the Place index, but the address-related fields address.street and address.city will also be added and made queryable. The embedded object id, address.id, is not added by default. To include it you need to also set @IndexedEmbedded(includeEmbeddedObjectId=true, …​).

Tip

Only actual indexed fields (properties annotated with @Field) are added to the root entity index when embedded objects are indexed. The embedded object identifiers are treated differently and need to be included explicitly.

Example 6. Indexing associations
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;
    ....
}
@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    @ContainedIn
    @OneToMany(mappedBy="address")
    private Set<Place> places;
    ...
}

Be careful. Because the data is de-normalized in the Lucene index when using the @IndexedEmbedded technique, Hibernate Search needs to be aware of any change in the Place object and any change in the Address object to keep the index up to date. To make sure the Place Lucene document is updated when its Address changes, you need to mark the other side of the bidirectional relationship with @ContainedIn.

Tip

@ContainedIn is useful on both associations pointing to entities and on embedded (collection of) objects.

Let’s make Indexing associations a bit more complex by nesting @IndexedEmbedded as seen in Nested usage of @IndexedEmbedded and @ContainedIn.

Example 7. Nested usage of @IndexedEmbedded and @ContainedIn
@Entity
@Indexed
public class Place {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String name;

    @OneToOne(cascade = { CascadeType.PERSIST, CascadeType.REMOVE })
    @IndexedEmbedded
    private Address address;

    // ...
}
@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @Field
    private String city;

    @IndexedEmbedded(depth = 1, prefix = "ownedBy_")
    private Owner ownedBy;

    @ContainedIn
    @OneToMany(mappedBy="address")
    private Set<Place> places;

    // ...
}
@Embeddable
public class Owner {
    @Field
    private String name;
    // ...
}

As you can see, any @*ToMany, @*ToOne or @Embedded attribute can be annotated with @IndexedEmbedded. The attributes of the associated class will then be added to the main entity index. In Nested usage of @IndexedEmbedded and @ContainedIn the index will contain the following fields:

  • id

  • name

  • address.street

  • address.city

  • address.ownedBy_name

The default prefix is propertyName., following the traditional object navigation convention. You can override it using the prefix attribute, as shown on the ownedBy property.

Note

The prefix cannot be set to the empty string.

The depth property is necessary when the object graph contains a cyclic dependency of classes (not instances), for example if Owner points to Place. Hibernate Search stops including indexed embedded attributes after reaching the expected depth (or when the object graph boundaries are reached). A class having a self reference is an example of cyclic dependency. In our example, because depth is set to 1, any @IndexedEmbedded attribute in Owner (if any) will be ignored.
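To illustrate the self-reference case, here is a minimal sketch (class and property names are illustrative):

@Entity
@Indexed
public class Person {
    @Field
    private String name;

    // Without a depth limit this self-reference would recurse indefinitely at
    // index time; depth = 2 indexes manager.name and manager.manager.name only.
    @ManyToOne
    @IndexedEmbedded(depth = 2)
    private Person manager;

    // ...
}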

Using @IndexedEmbedded for object associations allows you to express queries (using Lucene’s query syntax) such as:

  • Return places where name contains JBoss and where address city is Atlanta. In Lucene query this would be

+name:jboss +address.city:atlanta
  • Return places where name contains JBoss and where owner’s name contains Joe. In Lucene query this would be

+name:jboss +address.ownedBy_name:joe

This mimics the relational join operation in a more efficient way (at the cost of data duplication). Remember that, out of the box, Lucene indexes have no notion of association; the join operation simply does not exist. It might help to keep the relational model normalized while benefiting from the full text index speed and feature richness.

Note

An associated object can itself (but does not have to) be @Indexed

When @IndexedEmbedded points to an entity, the association has to be bidirectional and the other side has to be annotated with @ContainedIn. If not, Hibernate Search has no way to update the root index when the associated entity is updated (in our example, a Place index document has to be updated when the associated Address instance is updated).

Sometimes, the object type annotated by @IndexedEmbedded is not the object type targeted by Hibernate and Hibernate Search. This is especially the case when interfaces are used in lieu of their implementation. For this reason you can override the object type targeted by Hibernate Search using the targetElement parameter.

Example 8. Using the targetElement property of @IndexedEmbedded
@Entity
@Indexed
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    @Field
    private String street;

    @IndexedEmbedded(depth = 1, prefix = "ownedBy_", targetElement = Owner.class)
    @Target(Owner.class)
    private Person ownedBy;

    // ...
}
@Embeddable
public class Owner implements Person { ... }
Limiting object embedding to specific paths

The @IndexedEmbedded annotation also provides an attribute, includePaths, which can be used as an alternative to depth, or in combination with it.

When using only depth all indexed fields of the embedded type will be added recursively at the same depth; this makes it harder to pick only a specific path without adding all other fields as well, which might not be needed.

To avoid unnecessarily loading and indexing entities you can specify exactly which paths are needed. A typical application might need different depths for different paths, or in other words it might need to specify paths explicitly, as shown in Using the includePaths property of @IndexedEmbedded.

Example 9. Using the includePaths property of @IndexedEmbedded
@Entity
@Indexed
public class Person {

   @Id
   public int getId() {
      return id;
   }

   @Field
   public String getName() {
      return name;
   }

   @Field
   public String getSurname() {
      return surname;
   }

   @OneToMany
   @IndexedEmbedded(includePaths = { "name" })
   public Set<Person> getParents() {
      return parents;
   }

   @ContainedIn
   @ManyToOne
   public Human getChild() {
      return child;
   }

   // ... other fields omitted
}

Using a mapping as in Using the includePaths property of @IndexedEmbedded, you would be able to search on a Person by name and/or surname, and/or the name of the parent. The surname of the parent will not be indexed, so searching on parents’ surnames will not be possible; on the other hand this speeds up indexing, saves space and improves overall performance.

The @IndexedEmbedded.includePaths attribute will include the specified paths in addition to what you would index normally when specifying a limited value for depth. Using includePaths with an undefined (default) value for depth is equivalent to setting depth=0: only the included paths are indexed.

Example 10. Using the includePaths property of @IndexedEmbedded
@Entity
@Indexed
public class Human {

   @Id
   public int getId() {
      return id;
   }

   @Field
   public String getName() {
      return name;
   }

   @Field
   public String getSurname() {
      return surname;
   }

   @OneToMany
   @IndexedEmbedded(depth = 2, includePaths = { "parents.parents.name" })
   public Set<Human> getParents() {
      return parents;
   }

   @ContainedIn
   @ManyToOne
   public Human getChild() {
      return child;
   }

    // ... other fields omitted
}

In Using the includePaths property of @IndexedEmbedded, every human will have its name and surname attributes indexed. The name and surname of parents will be indexed too, recursively up to the second level because of the depth attribute. It will be possible to search by the name or surname of the person directly, of his parents, or of his grandparents. Beyond the second level, we will in addition index one more level, but only the name, not the surname.

This results in the following fields in the index:

  • id - as primary key

  • _hibernate_class - stores entity type

  • name - as direct field

  • surname - as direct field

  • parents.name - as embedded field at depth 1

  • parents.surname - as embedded field at depth 1

  • parents.parents.name - as embedded field at depth 2

  • parents.parents.surname - as embedded field at depth 2

  • parents.parents.parents.name - as additional path as specified by includePaths. The first parents. is inferred from the field name, the remaining path is the attribute of includePaths

Tip

You can explicitly include the id of the embedded object using includePaths, for example @IndexedEmbedded(includePaths = { "parents.id" }). This will work regardless of the includeEmbeddedObjectId attribute. However, it is recommended to just set includeEmbeddedObjectId=true.

Tip

Having explicit control of the indexed paths might be easier if you’re designing your application by defining the needed queries first, as at that point you might know exactly which fields you need, and which other fields are unnecessary to implement your use case.

Associated objects: building a dependency graph with @ContainedIn

While @ContainedIn is often seen as the counterpart of @IndexedEmbedded, it can also be used on its own to build an indexing dependency graph.

When an entity is reindexed, all the entities pointed to by @ContainedIn associations are also going to be reindexed.
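As a hedged sketch of this standalone usage: assume a class bridge on Place (the bridge name below is hypothetical) reads data from the associated Address without any @IndexedEmbedded; @ContainedIn still ensures the places are reindexed when their Address changes:

@Entity
@Indexed
// hypothetical bridge combining Place and Address data into one field
@ClassBridge(impl = PlaceWithAddressBridge.class)
public class Place {
    @Id
    @GeneratedValue
    private Long id;

    @OneToOne
    private Address address;

    // ...
}

@Entity
public class Address {
    @Id
    @GeneratedValue
    private Long id;

    // no matching @IndexedEmbedded exists, but changes to an Address
    // still trigger reindexing of the places referencing it
    @ContainedIn
    @OneToMany(mappedBy = "address")
    private Set<Place> places;

    // ...
}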

Boosting

Lucene has the notion of boosting which allows you to give certain documents or fields more or less importance than others. Lucene differentiates between index and search time boosting. The following sections show you how you can achieve index time boosting using Hibernate Search.

Static index time boosting

To define a static boost value for an indexed class or property you can use the @Boost annotation. You can use this annotation within @Field or specify it directly on method or class level.

Example 11. Different ways of using @Boost
@Entity
@Indexed
@Boost(1.7f)
public class Essay {
    ...

    @Id
    @DocumentId
    public Long getId() { return id; }

    @Field(name="Abstract", store=Store.YES, boost=@Boost(2f))
    @Boost(1.5f)
    public String getSummary() { return summary; }

    @Lob
    @Field(boost=@Boost(1.2f))
    public String getText() { return text; }

    @Field
    public String getISBN() { return isbn; }

}

In Different ways of using @Boost, Essay’s probability to reach the top of the search list will be multiplied by 1.7. The summary field will be 3.0 times (2 * 1.5, because @Field.boost and @Boost on a property are cumulative) more important than the isbn field. The text field will be 1.2 times more important than the isbn field. Note that this explanation is wrong in strictest terms, but it is simple and close enough to reality for all practical purposes. Please check the Lucene documentation or the excellent Lucene in Action by Otis Gospodnetić and Erik Hatcher.

Dynamic index time boosting

The @Boost annotation used in Static index time boosting defines a static boost factor which is independent of the state of the indexed entity at runtime. However, there are use cases in which the boost factor may depend on the actual state of the entity. In this case you can use the @DynamicBoost annotation together with an accompanying custom BoostStrategy.

Example 12. Dynamic boost example
public enum PersonType {
    NORMAL,
    VIP
}
@Entity
@Indexed
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    private PersonType type;

    // ...
}
public class VIPBoostStrategy implements BoostStrategy {
    public float defineBoost(Object value) {
        Person person = ( Person ) value;
        if ( person.getType().equals( PersonType.VIP ) ) {
            return 2.0f;
        }
        else {
            return 1.0f;
        }
    }
}

In Dynamic boost example a dynamic boost is defined at class level, specifying VIPBoostStrategy as the implementation of the BoostStrategy interface to be used at indexing time. You can place the @DynamicBoost either at class or field level. Depending on the placement of the annotation either the whole entity or just the annotated field/property value is passed to the defineBoost method. It’s up to you to cast the passed object to the correct type. In the example all indexed values of a VIP person would be twice as important as the values of a normal person.

Note

The specified BoostStrategy implementation must define a public no-arg constructor.

Of course you can mix and match @Boost and @DynamicBoost annotations in your entity. All defined boost factors are cumulative.
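For instance, in the following sketch a VIP person ends up with a cumulative boost factor of 1.2 * 2.0 = 2.4 while a normal person keeps 1.2:

@Entity
@Indexed
@Boost(1.2f)
@DynamicBoost(impl = VIPBoostStrategy.class)
public class Person {
    // ...
}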

Analysis

Analysis is the process of converting text into single terms (words) and can be considered one of the key features of a fulltext search engine. Lucene uses the concept of Analyzers to control this process. In the following section we cover the multiple ways Hibernate Search offers to configure the analyzers.

Default analyzer and analyzer by class

The default analyzer class used to index tokenized fields is configurable through the hibernate.search.analyzer property. The default value for this property is org.apache.lucene.analysis.standard.StandardAnalyzer.
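For example, to switch the default analyzer you would set the property in your Hibernate configuration (shown here in hibernate.properties form; the chosen analyzer is just an illustration):

hibernate.search.analyzer = org.apache.lucene.analysis.core.KeywordAnalyzer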

You can also define the analyzer class per entity, property and even per @Field (useful when multiple fields are indexed from a single property).

Example 13. Different ways of using @Analyzer
@Entity
@Indexed
@Analyzer(impl = EntityAnalyzer.class)
public class MyEntity {
    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    private String name;

    @Field
    @Analyzer(impl = PropertyAnalyzer.class)
    private String summary;

    @Field(analyzer = @Analyzer(impl = FieldAnalyzer.class))
    private String body;

    ...
}

In this example, EntityAnalyzer is used to index all tokenized properties (e.g. name), except summary and body, which are indexed with PropertyAnalyzer and FieldAnalyzer respectively.

Caution

Mixing different analyzers in the same entity is most of the time a bad practice. It makes query building more complex and results less predictable (for the novice), especially if you are using a QueryParser (which uses the same analyzer for the whole query). As a rule of thumb, for any given field the same analyzer should be used for indexing and querying.

Named analyzers

Analyzers can become quite complex to deal with. For this reason Hibernate Search introduces the notion of analyzer definitions. An analyzer definition can be reused by many @Analyzer declarations and is composed of:

  • a name: the unique string used to refer to the definition

  • a list of char filters: each char filter is responsible for pre-processing input characters before tokenization. Char filters can add, change or remove characters; one common usage is character normalization

  • a tokenizer: responsible for tokenizing the input stream into individual words

  • a list of filters: each filter is responsible for removing, modifying or sometimes even adding words to the stream provided by the tokenizer

This separation of tasks - a list of char filters, and a tokenizer followed by a list of filters - allows for easy reuse of each individual component and lets you build your customized analyzer in a very flexible way (just like Lego). Generally speaking the char filters do some pre-processing of the character input, then the Tokenizer starts the tokenizing process by turning the character input into tokens which are then further processed by the TokenFilters. Hibernate Search supports this infrastructure by utilizing the advanced analyzers provided by Lucene; this is often referred to as the Analyzer Framework.

Note

Some of the analyzers and filters will require additional dependencies. For example to use the snowball stemmer you have to also include the lucene-snowball jar and for the PhoneticFilterFactory you need the commons-codec jar. Your distribution of Hibernate Search provides these dependencies in its lib/optional directory. Have a look at Example of available tokenizers and Examples of available filters to see which analyzers and filters have additional dependencies

Prior to Hibernate Search 5 it was required to add the Apache Solr dependency to your project as well; this is no longer required.

Let’s have a look at a concrete example now - @AnalyzerDef and the Analyzer Framework. First a char filter is defined by its factory. In our example, a mapping char filter is used, which will replace characters in the input based on the rules specified in the mapping file. Next a tokenizer is defined. This example uses the standard tokenizer. Last but not least, a list of filters is defined by their factories. In our example, the StopFilter filter is built by reading the dedicated words property file. The filter is also expected to ignore case.

Example 14. @AnalyzerDef and the Analyzer Framework
@AnalyzerDef(name="customanalyzer",
  charFilters = {
    @CharFilterDef(factory = MappingCharFilterFactory.class, params = {
      @Parameter(name = "mapping",
        value = "org/hibernate/search/test/analyzer/mapping-chars.properties")
    })
  },
  tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
  filters = {
    @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
    @TokenFilterDef(factory = LowerCaseFilterFactory.class),
    @TokenFilterDef(factory = StopFilterFactory.class, params = {
      @Parameter(name="words",
        value= "org/hibernate/search/test/analyzer/stoplist.properties" ),
      @Parameter(name="ignoreCase", value="true")
    })
})
public class Team {
    // ...
}
Tip

Filters and char filters are applied in the order they are defined in the @AnalyzerDef annotation. Order matters!

Some tokenizers, token filters or char filters load resources like a configuration or metadata file. This is the case for the stop filter and the synonym filter. If the resource file does not use the VM default charset, you can specify it explicitly by adding a resource_charset parameter, as shown in Use a specific charset to load the property file.

Example 15. Use a specific charset to load the property file
@AnalyzerDef(name="customanalyzer",
  charFilters = {
    @CharFilterDef(factory = MappingCharFilterFactory.class, params = {
      @Parameter(name = "mapping",
        value = "org/hibernate/search/test/analyzer/mapping-chars.properties")
    })
  },
  tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
  filters = {
    @TokenFilterDef(factory = ASCIIFoldingFilterFactory.class),
    @TokenFilterDef(factory = LowerCaseFilterFactory.class),
    @TokenFilterDef(factory = StopFilterFactory.class, params = {
      @Parameter(name="words",
        value= "org/hibernate/search/test/analyzer/stoplist.properties" ),
      @Parameter(name="resource_charset", value = "UTF-16BE"),
      @Parameter(name="ignoreCase", value="true")
    })
})
public class Team {
    // ...
}

Once defined, an analyzer definition can be reused by an @Analyzer declaration as seen in Referencing an analyzer by name.

Example 16. Referencing an analyzer by name
@Entity
@Indexed
@AnalyzerDef(name="customanalyzer", ... )
public class Team {
    @Id
    @DocumentId
    @GeneratedValue
    private Integer id;

    @Field
    private String name;

    @Field
    private String location;

    @Field
    @Analyzer(definition = "customanalyzer")
    private String description;
}

Analyzer instances declared by @AnalyzerDef are also available by their name in the SearchFactory, which is quite useful when building queries.

Analyzer analyzer = fullTextSession.getSearchFactory().getAnalyzer("customanalyzer");

Fields in queries should be analyzed with the same analyzer used to index the field so that they speak a common "language": the same tokens are reused between the query and the indexing process. This rule has some exceptions but is true most of the time. Respect it unless you know what you are doing.

Available analyzers

Apache Lucene comes with a lot of useful default char filters, tokenizers and filters. You can find a complete list of char filter factories, tokenizer factories and filter factories at http://wiki.apache.org/solr/AnalyzersTokenizersTokenFilters. Let’s check a few of them.

Table 1. Example of available char filters

MappingCharFilterFactory
Description: Replaces one or more characters with one or more characters, based on mappings specified in the resource file.
Parameters: mapping - points to a resource file containing the mappings, one per line, using the format: "á" ⇒ "a", "ñ" ⇒ "n", "ø" ⇒ "o"
Additional dependencies: lucene-analyzers-common

HTMLStripCharFilterFactory
Description: Removes standard HTML tags, keeping the text.
Parameters: none
Additional dependencies: lucene-analyzers-common

Table 2. Example of available tokenizers

StandardTokenizerFactory
Description: Uses the Lucene StandardTokenizer.
Parameters: none
Additional dependencies: lucene-analyzers-common

HTMLStripCharFilterFactory
Description: Removes HTML tags, keeps the text and passes it to a StandardTokenizer.
Parameters: none
Additional dependencies: lucene-analyzers-common

PatternTokenizerFactory
Description: Breaks text at the specified regular expression pattern.
Parameters: pattern - the regular expression to use for tokenizing; group - says which pattern group to extract into tokens
Additional dependencies: lucene-analyzers-common

Table 3. Examples of available filters

StandardFilterFactory
Description: Removes dots from acronyms and 's from words.
Parameters: none
Additional dependencies: lucene-analyzers-common

LowerCaseFilterFactory
Description: Lowercases all words.
Parameters: none
Additional dependencies: lucene-analyzers-common

StopFilterFactory
Description: Removes words (tokens) matching a list of stop words.
Parameters: words - points to a resource file containing the stop words; ignoreCase - true if case should be ignored when comparing stop words, false otherwise
Additional dependencies: lucene-analyzers-common

SnowballPorterFilterFactory
Description: Reduces a word to its root in a given language (e.g. protect, protects and protection share the same root). Using such a filter allows searches to match related words.
Parameters: language - Danish, Dutch, English, Finnish, French, German, Italian, Norwegian, Portuguese, Russian, Spanish, Swedish and a few more
Additional dependencies: lucene-analyzers-common

ASCIIFoldingFilterFactory
Description: Removes accents for languages like French.
Parameters: none
Additional dependencies: lucene-analyzers-common

PhoneticFilterFactory
Description: Inserts phonetically similar tokens into the token stream.
Parameters: encoder - one of DoubleMetaphone, Metaphone, Soundex or RefinedSoundex; inject - true will add tokens to the stream, false will replace the existing token; maxCodeLength - sets the maximum length of the code to be generated, supported only for Metaphone and DoubleMetaphone encodings
Additional dependencies: lucene-analyzers-phonetic and commons-codec

CollationKeyFilterFactory
Description: Converts each token into its java.text.CollationKey, and then encodes the CollationKey with IndexableBinaryStringTools, to allow it to be stored as an index term.
Parameters: custom, language, country, variant, strength, decomposition - see Lucene’s CollationKeyFilter javadocs for more info
Additional dependencies: lucene-analyzers-common and commons-io

We recommend checking out the implementations of org.apache.lucene.analysis.util.TokenizerFactory and org.apache.lucene.analysis.util.TokenFilterFactory in your IDE to see which implementations are available.

Dynamic analyzer selection

So far all the introduced ways to specify an analyzer were static. However, there are use cases where it is useful to select an analyzer depending on the current state of the entity to be indexed, for example in multilingual applications. For a BlogEntry class, for example, the analyzer could depend on the language property of the entry. Depending on this property the correct language-specific stemmer should be chosen to index the actual text.

To enable this dynamic analyzer selection Hibernate Search introduces the AnalyzerDiscriminator annotation. The example Usage of @AnalyzerDiscriminator demonstrates how this annotation is used.

Example 17. Usage of @AnalyzerDiscriminator
@Entity
@Indexed
@AnalyzerDefs({
  @AnalyzerDef(name = "en",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = EnglishPorterFilterFactory.class
      )
    }),
  @AnalyzerDef(name = "de",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = GermanStemFilterFactory.class)
    })
})
public class BlogEntry {

    @Id
    @GeneratedValue
    @DocumentId
    private Integer id;

    @Field
    @AnalyzerDiscriminator(impl = LanguageDiscriminator.class)
    private String language;

    @Field
    private String text;

    private Set<BlogEntry> references;

    // standard getter/setter
    // ...
}
public class LanguageDiscriminator implements Discriminator {

    public String getAnalyzerDefinitionName(Object value, Object entity, String field) {
        if ( value == null || !( entity instanceof BlogEntry ) ) {
            return null;
        }
        return (String) value;

    }
}

The prerequisite for using @AnalyzerDiscriminator is that all analyzers which are going to be used dynamically are predefined via @AnalyzerDef definitions. If this is the case, one can place the @AnalyzerDiscriminator annotation either on the class or on a specific property of the entity for which to dynamically select an analyzer. Via the impl parameter of the AnalyzerDiscriminator you specify a concrete implementation of the Discriminator interface. It is up to you to provide an implementation for this interface. The only method you have to implement is getAnalyzerDefinitionName() which gets called for each field added to the Lucene document. The entity which is getting indexed is also passed to the interface method. The value parameter is only set if the AnalyzerDiscriminator is placed on property level instead of class level. In this case the value represents the current value of this property.

An implementation of the Discriminator interface has to return the name of an existing analyzer definition or null if the default analyzer should not be overridden. The example Usage of @AnalyzerDiscriminator assumes that the language property is either 'de' or 'en', which matches the names specified in the @AnalyzerDefs.

Retrieving an analyzer

In some situations retrieving analyzers can be handy. For example, if your domain model makes use of multiple analyzers (maybe to benefit from stemming, use phonetic approximation and so on), you need to make sure to use the same analyzers when you build your query.

Note

This rule can be broken but you need a good reason for it. If you are unsure, use the same analyzers. If you use the Hibernate Search query DSL (see [search-query-querydsl]), you don’t have to think about it. The query DSL does use the right analyzer transparently for you.

Whether you are using the Lucene programmatic API or the Lucene query parser, you can retrieve the scoped analyzer for a given entity. A scoped analyzer is an analyzer which applies the right analyzers depending on the field indexed. Remember, multiple analyzers can be defined on a given entity, each one working on an individual field. A scoped analyzer unifies all these analyzers into a context-aware analyzer. While the theory seems a bit complex, using the right analyzer in a query is very easy.

Example 18. Using the scoped analyzer when building a full-text query
org.apache.lucene.queryparser.classic.QueryParser parser = new QueryParser(
    "title",
    fullTextSession.getSearchFactory().getAnalyzer( Song.class )
);

org.apache.lucene.search.Query luceneQuery =
    parser.parse( "title:sky OR title_stemmed:diamond" );

org.hibernate.Query fullTextQuery =
    fullTextSession.createFullTextQuery( luceneQuery, Song.class );

List result = fullTextQuery.list(); //return a list of managed objects

In the example above, the song title is indexed in two fields: the standard analyzer is used in the field title and a stemming analyzer is used in the field title_stemmed. By using the analyzer provided by the search factory, the query uses the appropriate analyzer depending on the field targeted.

Tip

You can also retrieve analyzers defined via @AnalyzerDef by their definition name using searchFactory.getAnalyzer(String).

Bridges

When discussing the basic mapping for an entity one important fact was so far disregarded: in Lucene all index fields have to be represented as strings. All entity properties annotated with @Field have to be converted to strings to be indexed. The reason we have not mentioned this so far is that for most of your properties Hibernate Search does the translation job for you thanks to a set of built-in bridges. However, in some cases you need finer-grained control over the translation process.

Built-in bridges

Hibernate Search comes bundled with a set of built-in bridges between a Java property type and its full text representation.

null

By default null elements are not indexed; Lucene does not support null elements. However, in some situations it can be useful to insert a custom token representing the null value. See @Field for more information.

java.lang.String

Strings are indexed as is

short, Short, int, Integer, long, Long, float, Float, double, Double

These are indexed numerically by default, using a Trie structure. You need to use a NumericRangeQuery to search for values. See also @Field and @NumericField

BigInteger, BigDecimal

BigInteger and BigDecimal are converted into their string representation and indexed. Note that in this form the values cannot be compared by Lucene using for example a TermRangeQuery. For that the string representation would need to be padded. An alternative using numeric encoding with a potential loss in precision can be seen in Defining a custom NumericFieldBridge for BigDecimal.

java.util.Date, java.util.Calendar

Dates are indexed as a long value representing the number of milliseconds since January 1, 1970, 00:00:00 GMT. You shouldn’t really bother with the internal format. It is important, however, to query a numerically indexed date via a NumericRangeQuery.

Usually, storing the date up to the millisecond is not necessary. @DateBridge defines the appropriate resolution you are willing to store in the index.

@Entity
@Indexed
public class Meeting {
    @Field(analyze=Analyze.NO)
    @DateBridge(resolution=Resolution.MINUTE)
    private Date date;
    // ...
}

You can also choose to encode the date as a string using encoding=EncodingType.STRING on @DateBridge. In this case the dates are stored in the format yyyyMMddHHmmssSSS (using GMT time).
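A minimal sketch of the string encoding variant (the property name is illustrative):

@Field(analyze = Analyze.NO)
@DateBridge(resolution = Resolution.DAY, encoding = EncodingType.STRING)
private Date publicationDate;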

Important

A Date whose resolution is lower than MILLISECOND cannot be a @DocumentId

Important

The default date bridge uses Lucene’s DateTools to convert from Date or Calendar to its indexed value. This means that all dates are expressed in GMT time. If your requirements are to store dates in a fixed time zone you have to implement a custom date bridge. Make sure you understand the requirements of your application regarding date indexing and searching.

java.net.URI, java.net.URL

URI and URL are converted to their string representation

java.lang.Class

Classes are converted to their fully qualified class name. The thread context classloader is used when the class is rehydrated

Tika bridge

Hibernate Search allows you to extract text from various document types using the built-in TikaBridge, which utilizes Apache Tika to extract text and metadata from the provided documents. The TikaBridge annotation can be used with String, URI, byte[] or java.sql.Blob properties. In the case of String and URI the bridge interprets the values as file paths and tries to open a file to parse the document. In the case of byte[] and Blob the values are directly passed to Tika for parsing.

Tika uses metadata as in- and output of the parsing process and it also allows you to provide additional context information. This process is described in the Tika Parser interface documentation. The Hibernate Search Tika bridge allows you to make use of these additional configuration options by providing two interfaces in conjunction with TikaBridge. The first interface is TikaParseContextProvider. It allows you to create a custom ParseContext for the document parsing. The second interface is TikaMetadataProcessor, which has two methods - prepareMetadata() and set(String, Object, Document, LuceneOptions, Metadata metadata). The former allows you to add additional metadata to the parsing process (for example the file name) and the latter allows you to index metadata discovered during the parsing process.

TikaParseContextProvider as well as TikaMetadataProcessor implementation classes can both be specified as parameters on the TikaBridge annotation.

Example 19. Example mapping with Apache Tika
@Entity
@Indexed
public class Song {
    @Id
    @GeneratedValue
    long id;

    @Field
    @TikaBridge(metadataProcessor = Mp3TikaMetadataProcessor.class)
    String mp3FileName;

    // ...
}
QueryBuilder queryBuilder = fullTextSession.getSearchFactory()
    .buildQueryBuilder()
    .forEntity( Song.class )
    .get();
Query query = queryBuilder.keyword()
    .onField( "mp3FileName" )
    .ignoreFieldBridge() //mandatory
    .matching( "Apes" )
    .createQuery();
List result = fullTextSession.createFullTextQuery( query ).list();

In the Example mapping with Apache Tika the property mp3FileName represents a path to an MP3 file; the headers of this file will be indexed and so the performed query will be able to match the MP3 metadata.

Warning

TikaBridge does not implement TwoWayFieldBridge: queries built using the DSL (as in the Example mapping with Apache Tika) need to explicitly enable the option ignoreFieldBridge().
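As a hedged sketch (not necessarily the implementation used in the Hibernate Search tests), a metadata processor like the Mp3TikaMetadataProcessor referenced above could look as follows:

public class Mp3TikaMetadataProcessor implements TikaMetadataProcessor {

    @Override
    public Metadata prepareMetadata() {
        // no additional input metadata is needed before parsing
        return new Metadata();
    }

    @Override
    public void set(String name, Object value, Document document,
                    LuceneOptions luceneOptions, Metadata metadata) {
        // index the title discovered by the Tika parser as an additional field
        String title = metadata.get( Metadata.TITLE );
        if ( title != null ) {
            luceneOptions.addFieldToDocument( name + ".title", title, document );
        }
    }
}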

Custom bridges

Sometimes, the built-in bridges of Hibernate Search do not cover some of your property types, or the String representation used by the bridge does not meet your requirements. The following paragraphs describe several solutions to this problem.

StringBridge

The simplest custom solution is to give Hibernate Search an implementation of your expected Object to String bridge. To do so you need to implement the org.hibernate.search.bridge.StringBridge interface. All implementations have to be thread-safe as they are used concurrently.

Example 20. Custom StringBridge implementation
/**
 * Padding Integer bridge.
 * All numbers will be padded with 0 to match 5 digits
 *
 * @author Emmanuel Bernard
 */
public class PaddedIntegerBridge implements StringBridge {

    private int padding = 5;

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding)
            throw new IllegalArgumentException("Number too big to be padded");
        StringBuilder paddedInteger = new StringBuilder();
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append( rawInteger ).toString();
    }
}

Given the string bridge defined in Custom StringBridge implementation, any property or field can use this bridge thanks to the @FieldBridge annotation:

@FieldBridge(impl = PaddedIntegerBridge.class)
private Integer length;
Parameterized bridge

Parameters can also be passed to the bridge implementation, making it more flexible. To achieve this the bridge implementation implements the ParameterizedBridge interface, and the parameters are passed through the @FieldBridge annotation.

Example 21. Passing parameters to your bridge implementation
public class PaddedIntegerBridge implements StringBridge, ParameterizedBridge {

    public static String PADDING_PROPERTY = "padding";
    private int padding = 5; //default

    public void setParameterValues(Map<String,String> parameters) {
        String padding = parameters.get( PADDING_PROPERTY );
        if (padding != null) this.padding = Integer.parseInt( padding );
    }

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding)
            throw new IllegalArgumentException("Number too big to be padded");
        StringBuilder paddedInteger = new StringBuilder( );
        for (int padIndex = rawInteger.length(); padIndex < padding; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }
}
//on the property:
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name="padding", value="10")
            )
private Integer length;

The ParameterizedBridge interface can be implemented by StringBridge, TwoWayStringBridge and FieldBridge implementations.

All implementations have to be thread-safe, but the parameters are set during initialization and no special care is required at this stage.

Type aware bridge

It is sometimes useful to get the type the bridge is applied on:

  • the return type of the property for field/getter-level bridges

  • the class type for class-level bridges

An example is a bridge that deals with enums in a custom fashion but needs to access the actual enum type. Any bridge implementing AppliedOnTypeAwareBridge will get the type it is applied on injected. Like parameters, the injected type needs no particular care with regard to thread-safety.
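As a minimal sketch (the bridge name is hypothetical), a type-aware bridge indexing any enum by its constant name could look like this:

public class EnumNameBridge implements StringBridge, AppliedOnTypeAwareBridge {

    private Class<?> enumType;

    @Override
    public void setAppliedOnType(Class<?> returnType) {
        // called once at bootstrap with the property (or class) type
        this.enumType = returnType;
    }

    @Override
    public String objectToString(Object object) {
        return object == null ? null : ( (Enum<?>) object ).name();
    }
}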

Two-way bridge

If you expect to use your bridge implementation on an id property (i.e. annotated with @DocumentId), you need to use a slightly extended version of StringBridge named TwoWayStringBridge. Hibernate Search needs to read the string representation of the identifier and generate the object out of it. There is no difference in the way the @FieldBridge annotation is used.

Example 22. Implementing a TwoWayStringBridge usable for id properties
public class PaddedIntegerBridge implements TwoWayStringBridge, ParameterizedBridge {

    public static String PADDING_PROPERTY = "padding";
    private int padding = 5; //default

    public void setParameterValues(Map<String,String> parameters) {
        String padding = parameters.get( PADDING_PROPERTY );
        if (padding != null) this.padding = Integer.parseInt( padding );
    }
    }

    public String objectToString(Object object) {
        String rawInteger = ((Integer) object).toString();
        if (rawInteger.length() > padding)
            throw new IllegalArgumentException("Number too big to be padded");
        StringBuilder paddedInteger = new StringBuilder();
        for (int padIndex = rawInteger.length(); padIndex < padding ; padIndex++) {
            paddedInteger.append('0');
        }
        return paddedInteger.append(rawInteger).toString();
    }

    public Object stringToObject(String stringValue) {
        return Integer.valueOf( stringValue );
    }
}
//On an id property:
@DocumentId
@FieldBridge(impl = PaddedIntegerBridge.class,
             params = @Parameter(name="padding", value="10") )
private Integer id;
Important

It is important for the two-way process to be idempotent (i.e. object = stringToObject( objectToString( object ) )).

FieldBridge

Some use cases require more than a simple object to string translation when mapping a property to a Lucene index. To give you the greatest possible flexibility you can also implement a bridge as a FieldBridge. This interface gives you a property value and lets you map it the way you want in your Lucene Document. You can for example store a property in two different document fields. The interface is very similar in its concept to the Hibernate ORM UserTypes.

Example 23. Implementing the FieldBridge interface
/**
 * Store the date in 3 different fields - year, month, day - to ease the creation of RangeQuery per
 * year, month or day (eg get all the elements of December for the last 5 years).
 * @author Emmanuel Bernard
 */
public class DateSplitBridge implements FieldBridge {
    private final static TimeZone GMT = TimeZone.getTimeZone("GMT");

    public void set(String name, Object value, Document document,
                    LuceneOptions luceneOptions) {
        Date date = (Date) value;
        Calendar cal = GregorianCalendar.getInstance(GMT);
        cal.setTime(date);
        int year = cal.get(Calendar.YEAR);
        int month = cal.get(Calendar.MONTH) + 1;
        int day = cal.get(Calendar.DAY_OF_MONTH);

        // set year
        luceneOptions.addFieldToDocument(
            name + ".year",
            String.valueOf( year ),
            document );

        // set month and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".month",
            ( month < 10 ? "0" : "" ) + String.valueOf( month ),
            document );

        // set day and pad it if needed
        luceneOptions.addFieldToDocument(
            name + ".day",
            ( day < 10 ? "0" : "" ) + String.valueOf( day ),
            document );
    }
}
//property
@FieldBridge(impl = DateSplitBridge.class)
private Date date;

In Implementing the FieldBridge interface the fields are not added directly to the Document. Instead the addition is delegated to the LuceneOptions helper; this helper will apply the options you have selected on @Field, like Store or TermVector, or apply the chosen @Boost value. It is especially useful to encapsulate the complexity of COMPRESS implementations. Even though it is recommended to delegate to LuceneOptions to add fields to the Document, nothing stops you from editing the Document directly and ignoring the LuceneOptions in case you need to.

Tip

Classes like LuceneOptions are created to shield your application from changes in Lucene API and simplify your code. Use them if you can, but if you need more flexibility you’re not required to.

ClassBridge

It is sometimes useful to combine more than one property of a given entity and index this combination in a specific way into the Lucene index. The @ClassBridge and @ClassBridges annotations can be defined at the class level (as opposed to the property level). In this case the custom field bridge implementation receives the entity instance as the value parameter instead of a particular property. Though not shown in Implementing a class bridge, @ClassBridge supports the termVector attribute discussed in section Basic mapping.

Example 24. Implementing a class bridge
@Entity
@Indexed
@ClassBridge(name="branchnetwork",
             store=Store.YES,
             impl = CatFieldsClassBridge.class,
             params = @Parameter( name="sepChar", value=" " ) )
public class Department {
    private int id;
    private String network;
    private String branchHead;
    private String branch;
    private Integer maxEmployees;
    // ...
}
public class CatFieldsClassBridge implements FieldBridge, ParameterizedBridge {
    private String sepChar;

    public void setParameterValues(Map parameters) {
        this.sepChar = (String) parameters.get( "sepChar" );
    }

    public void set(
        String name, Object value, Document document, LuceneOptions luceneOptions) {
        // In this particular class the name of the new field was passed
        // from the name field of the ClassBridge Annotation. This is not
        // a requirement. It just works that way in this instance. The
        // actual name could be supplied by hard coding it below.
        Department dep = (Department) value;
        String fieldValue1 = dep.getBranch();
        if ( fieldValue1 == null ) {
            fieldValue1 = "";
        }
        String fieldValue2 = dep.getNetwork();
        if ( fieldValue2 == null ) {
            fieldValue2 = "";
        }
        String fieldValue = fieldValue1 + sepChar + fieldValue2;
        Field field = new Field( name, fieldValue, luceneOptions.getStore(),
            luceneOptions.getIndex(), luceneOptions.getTermVector() );
        field.setBoost( luceneOptions.getBoost() );
        document.add( field );
    }
}

In this example, the particular CatFieldsClassBridge is applied to the department instance; the field bridge then concatenates both branch and network and indexes the concatenation.

BridgeProvider: associate a bridge to a given return type

Custom field bridges are very flexible, but it can be tedious and error prone to apply the same custom @FieldBridge annotation every time a property of a given type is present in your domain model. That is what BridgeProviders are for.

Let’s imagine that you have a type Currency in your application and that you want to apply your very own CurrencyFieldBridge every time an indexed property returns Currency. You can do it the hard way:

Example 25. Applying the same @FieldBridge for a type the hard way
@Entity @Indexed
public class User {
    @FieldBridge(impl=CurrencyFieldBridge.class)
    public Currency getDefaultCurrency();

    // ...
}

@Entity @Indexed
public class Account {
    @FieldBridge(impl=CurrencyFieldBridge.class)
    public Currency getCurrency();

    // ...
}

// continue to add @FieldBridge(impl=CurrencyFieldBridge.class) everywhere Currency is

Or you can write your own BridgeProvider implementation for Currency.

Example 26. Writing a BridgeProvider
public class CurrencyBridgeProvider implements BridgeProvider {

    //needs a default no-arg constructor

    @Override
    public FieldBridge provideFieldBridge(BridgeContext bridgeProviderContext) {
        if ( bridgeProviderContext.getReturnType().equals( Currency.class ) ) {
            return CurrencyFieldBridge.INSTANCE;
        }
        return null;
    }
}
# service file named META-INF/services/org.hibernate.search.bridge.spi.BridgeProvider
com.acme.myapps.hibernatesearch.CurrencyBridgeProvider

You need to implement BridgeProvider and create a service file named META-INF/services/org.hibernate.search.bridge.spi.BridgeProvider. This file must contain the fully qualified class name(s) of the BridgeProvider implementations. This is the classic Service Loader discovery mechanism.

Now, any indexed property of type Currency will use CurrencyFieldBridge automatically.

Example 27. An explicit @FieldBridge is no longer needed
@Entity @Indexed
public class User {

    @Field
    public Currency getDefaultCurrency();

    // ...
}

@Entity @Indexed
public class Account {

    @Field
    public Currency getCurrency();

    // ...
}

//CurrencyFieldBridge is applied automatically everywhere Currency is found on an indexed property
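For completeness, here is a minimal sketch of what the CurrencyFieldBridge singleton referenced by the provider could look like; only the INSTANCE field is implied by Writing a BridgeProvider, and the ISO-code encoding is an assumption:

public class CurrencyFieldBridge implements FieldBridge {

    public static final CurrencyFieldBridge INSTANCE = new CurrencyFieldBridge();

    @Override
    public void set(String name, Object value, Document document, LuceneOptions luceneOptions) {
        if ( value != null ) {
            // index the currency as its ISO 4217 code, e.g. "EUR"
            luceneOptions.addFieldToDocument( name, ( (Currency) value ).getCurrencyCode(), document );
        }
    }
}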

A few more things you need to know:

  • a BridgeProvider must have a no-arg constructor

  • a BridgeProvider must only return a FieldBridge instance if it is meaningful for the calling context, and return null otherwise. In our example, the return type must be Currency to be meaningful to our provider.

  • if two or more bridge providers return a FieldBridge instance for a given return type, an exception will be raised.

Note
What is a calling context

A calling context is represented by the BridgeContext object: it describes the environment for which we are looking for a bridge. BridgeContext gives access to the return type of the indexed property as well as the ServiceManager, which gives access to the ClassLoaderService for everything class loader related.

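// the ServiceManager is handed to you by the BridgeContext passed to provideFieldBridge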
ClassLoaderService classLoaderService = serviceManager.requestService( ClassLoaderService.class );
//use the classLoaderService
serviceManager.releaseService( ClassLoaderService.class );

Conditional indexing

Important

This feature is considered experimental. More operation types might be added in the future depending on user feedback.

In some situations, you want to index an entity only when it is in a given state, for example:

  • only index blog entries marked as published

  • no longer index invoices when they are marked archived

This serves both functional and technical needs. Functionally, you don't want your blog readers to find your draft entries, and filtering them out of every query is a bit annoying. Technically, if only a few of your entities actually need to be indexed, skipping the rest limits indexing overhead and keeps indexes small and fast.

Hibernate Search lets you intercept entity indexing operations and override them. It is quite simple:

  • Write an EntityIndexingInterceptor class with your entity state based logic

  • Mark the entity as intercepted by this implementation

Example 28. Index blog entries only when they are published and remove them when they are in a different state
/**
 * Only index blog when it is in published state
 *
 * @author Emmanuel Bernard <emmanuel@hibernate.org>
 */
public class IndexWhenPublishedInterceptor implements EntityIndexingInterceptor<Blog> {
    @Override
    public IndexingOverride onAdd(Blog entity) {
        if (entity.getStatus() == BlogStatus.PUBLISHED) {
            return IndexingOverride.APPLY_DEFAULT;
        }
        return IndexingOverride.SKIP;
    }

    @Override
    public IndexingOverride onUpdate(Blog entity) {
        if (entity.getStatus() == BlogStatus.PUBLISHED) {
            return IndexingOverride.UPDATE;
        }
        return IndexingOverride.REMOVE;
    }

    @Override
    public IndexingOverride onDelete(Blog entity) {
        return IndexingOverride.APPLY_DEFAULT;
    }

    @Override
    public IndexingOverride onCollectionUpdate(Blog entity) {
        return onUpdate(entity);
    }
}
@Entity
@Indexed(interceptor=IndexWhenPublishedInterceptor.class)
public class Blog {
    @Id
    @GeneratedValue
    public Integer getId() { return id; }
    public void setId(Integer id) {  this.id = id; }
    private Integer id;

    @Field
    public String getTitle() { return title; }
    public void setTitle(String title) {  this.title = title; }
    private String title;

    public BlogStatus getStatus() { return status; }
    public void setStatus(BlogStatus status) {  this.status = status; }
    private BlogStatus status;

    // ...
}

We mark the Blog entity with @Indexed.interceptor. As you can see, IndexWhenPublishedInterceptor implements EntityIndexingInterceptor and accepts Blog entities (it could have accepted super classes as well - for example Object if you create a generic interceptor).

You can react to several planned indexing events:

  • when an entity is added to your datastore

  • when an entity is updated in your datastore

  • when an entity is deleted from your datastore

  • when a collection owned by this entity is updated in your datastore

For each occurring event you can respond with one of the following actions:

  • APPLY_DEFAULT: that’s the basic operation that lets Hibernate Search update the index as expected - creating, updating or removing the document

  • SKIP: ask Hibernate Search to not do anything to the index for this event - data will not be created, updated or removed from the index in any way

  • REMOVE: ask Hibernate Search to remove indexing data about this entity - you can safely ask for REMOVE even if the entity has not yet been indexed

  • UPDATE: ask Hibernate Search to either index or update the index for this entity - it is safe to ask for UPDATE even if the entity has never been indexed

Note

Be careful: not every combination makes sense. For example, asking to UPDATE the index upon onDelete does not. Note that you could ask for SKIP in this situation if saving indexing time is critical for you; that's rarely the case though.

By default, no interceptor is applied on an entity. You have to explicitly define an interceptor via the @Indexed annotation (see @Indexed) or programmatically (see Programmatic API). This class and all its subclasses will then be intercepted. You can stop or change the interceptor used in a subclass by overriding @Indexed.interceptor. Hibernate Search provides DontInterceptEntityInterceptor which will explicitly not intercept any call. This is useful to reset interception within a class hierarchy.
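For example, to switch interception off again in a subclass (a sketch assuming a hypothetical Blog subclass):

@Entity
@Indexed(interceptor = DontInterceptEntityInterceptor.class)
public class TechnicalBlog extends Blog {
    // TechnicalBlog instances are indexed unconditionally again
    // ...
}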

Note

The dirty checking optimization is disabled when interceptors are used. This optimization checks what has changed in an entity and only triggers an index update if indexed properties have changed. The reason is simple: your interceptor might depend on a non-indexed property, which would be ignored by this optimization.

Warning

An EntityIndexingInterceptor can never override an explicit indexing operation such as index(T), purge(T, id) or purgeAll(class).
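In other words, a manual indexing call always wins over the interceptor. A quick sketch, assuming a FullTextSession obtained from the current Session:

FullTextSession fullTextSession = Search.getFullTextSession( session );
// the interceptor is not consulted: the entity is indexed even if it is a draft
fullTextSession.index( blog );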

Providing your own id

You can provide your own id for Hibernate Search if you are extending the internals. You will have to generate a unique value so it can be given to Lucene to be indexed, and pass it to Hibernate Search when you create an org.hibernate.search.Work object - the document id is required in the constructor.

The ProvidedId annotation

Unlike @DocumentId, which is applied at the field level, @ProvidedId is used at the class level. Optionally you can specify your own bridge implementation using the bridge property. Also, if you annotate a class with @ProvidedId, your subclasses will also get the annotation - but this is not done via @java.lang.annotation.Inherited. Be careful, however, not to combine this annotation with @DocumentId, as your system will break.

Example 29. Providing your own id
@ProvidedId(bridge = org.my.own.package.MyCustomBridge.class)
@Indexed
public class MyClass {
    @Field
    String myString;
    ...
}

Programmatic API

Although the recommended approach for mapping indexed entities is to use annotations, it is sometimes more convenient to use a different approach:

  • the same entity is mapped differently depending on deployment needs (customization for clients)

  • some automation process requires the dynamic mapping of many entities sharing common traits

While it has been a popular demand in the past, the Hibernate team never found the idea of an XML alternative to annotations appealing: it leads to heavy duplication, it lacks the refactoring safety of code, it did not cover the whole use case spectrum, and we are in the 21st century :)

The idea of a programmatic API was much more appealing and has now become a reality: you define entities and fields as indexable by using mapping classes which effectively mirror the annotation concepts in Hibernate Search. Note that fans of the XML approach can design their own schema and use the programmatic API to create the mapping while parsing the XML stream.

In order to use the programmatic model you must first construct a SearchMapping object which you can do in two ways:

  • directly

  • via a factory

You can pass the SearchMapping object directly via the property key hibernate.search.model_mapping or the constant Environment.MODEL_MAPPING. Use the Configuration API or the Map passed to the JPA Persistence bootstrap methods.

Example 30. Programmatic mapping
SearchMapping mapping = new SearchMapping();
// ... configure mapping
Configuration config = new Configuration();
config.getProperties().put( Environment.MODEL_MAPPING, mapping );
SessionFactory sf = config.buildSessionFactory();
Example 31. Programmatic mapping with JPA
SearchMapping mapping = new SearchMapping();
// ... configure mapping
Map<String,Object> props = new HashMap<String,Object>();
props.put( Environment.MODEL_MAPPING, mapping );
EntityManagerFactory emf = Persistence.createEntityManagerFactory( "userPU", props );

Alternatively, you can create a factory class (i.e. a class hosting a method annotated with @Factory) whose factory method returns the SearchMapping object. The factory class must have a no-arg constructor and its fully qualified class name is passed to the property key hibernate.search.model_mapping or its type-safe representation Environment.MODEL_MAPPING. This approach is useful when you do not necessarily control the bootstrap process, like in a Java EE, CDI or Spring Framework container.

Example 32. Use a mapping factory
public class MyAppSearchMappingFactory {
    @Factory
    public SearchMapping getSearchMapping() {
        SearchMapping mapping = new SearchMapping();
        mapping
                .analyzerDef( "ngram", StandardTokenizerFactory.class )
                    .filter( LowerCaseFilterFactory.class )
                    .filter( NGramFilterFactory.class )
                        .param( "minGramSize", "3" )
                        .param( "maxGramSize", "3" );
        return mapping;
    }
}
<persistence ...>
    <persistence-unit name="users">
        ...
        <properties>
            <property name="hibernate.search.model_mapping"
                      value="com.acme.MyAppSearchMappingFactory"/>
        </properties>
    </persistence-unit>
</persistence>

The SearchMapping is the root object which contains all the necessary indexable entities and fields. From there, the SearchMapping object exposes a fluent (and thus intuitive) API to express your mappings: it contextually exposes the relevant mapping options in a type-safe way. Just let your IDE auto-completion feature guide you through.

Today, the programmatic API cannot be used on a class annotated with Hibernate Search annotations; choose one approach or the other. Also note that the same default values apply to annotations and the programmatic API. For example, @Field.name defaults to the property name and does not have to be set.

Each core concept of the programmatic API has a corresponding example showing how the same definition would look using annotations. Therefore, seeing the annotation equivalent of a programmatic example should give you a clear picture of what Hibernate Search will build from the marked entities and associated properties.

Mapping an entity as indexable

The first concept of the programmatic API is to define an entity as indexable. Using the annotation approach a user would mark the entity with @Indexed; the following example demonstrates how to achieve the same programmatically.

Example 33. Marking an entity indexable
SearchMapping mapping = new SearchMapping();

mapping.entity(Address.class)
           .indexed()
               .indexName("Address_Index") //optional
               .interceptor(IndexWhenPublishedInterceptor.class); //optional

cfg.getProperties().put("hibernate.search.model_mapping", mapping);

As you can see you must first create a SearchMapping object, which is the root object that is then passed to the Configuration object as a property. You must declare an entity and, if you wish to make that entity indexable, call the indexed() method on it. The indexed() method offers an optional indexName(String indexName) which can be used to change the default index name created by Hibernate Search. Likewise, an interceptor(Class<? extends EntityIndexingInterceptor>) is available. Using the annotation model the above can be achieved as:

Example 34. Annotation example of indexing entity
@Entity
@Indexed(index="Address_Index", interceptor=IndexWhenPublishedInterceptor.class)
public class Address {
   // ...
}

Adding DocumentId to indexed entity

To set a property as a document id:

Example 35. Enabling document id with programmatic model
SearchMapping mapping = new SearchMapping();

mapping.entity(Address.class).indexed()
           .property("addressId", ElementType.FIELD) //field access
               .documentId()
                   .name("id");

cfg.getProperties().put( "hibernate.search.model_mapping", mapping);

The above is equivalent to annotating a property in the entity as @DocumentId as seen in the following example:

Example 36. DocumentId annotation definition
@Entity
@Indexed
public class Address {
 @Id
 @GeneratedValue
 @DocumentId(name="id")
 private Long addressId;

 // ...
}

Defining analyzers

Analyzers can be programmatically defined using the analyzerDef(String analyzerDef, Class<? extends TokenizerFactory> tokenizerFactory) method. This method also enables you to define filters for the analyzer definition. Each filter that you define can optionally take parameters, as seen in the following example:

Example 37. Defining analyzers using programmatic model
SearchMapping mapping = new SearchMapping();

mapping
    .analyzerDef( "ngram", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( NGramFilterFactory.class )
            .param( "minGramSize", "3" )
            .param( "maxGramSize", "3" )
    .analyzerDef( "en", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( EnglishPorterFilterFactory.class )
    .analyzerDef( "de", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( GermanStemFilterFactory.class )
    .entity(Address.class).indexed()
        .property("addressId", ElementType.METHOD) //getter access
            .documentId()
                .name("id");

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The analyzer mapping defined above is equivalent to the annotation model using @AnalyzerDef in conjunction with @AnalyzerDefs:

Example 38. Analyzer definition using annotation
@Indexed
@Entity
@AnalyzerDefs({
  @AnalyzerDef(name = "ngram",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = NGramFilterFactory.class,
        params = {
          @Parameter(name = "minGramSize",value = "3"),
          @Parameter(name = "maxGramSize",value = "3")
       })
   }),
  @AnalyzerDef(name = "en",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
   }),

  @AnalyzerDef(name = "de",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = GermanStemFilterFactory.class)
  })

})
public class Address {
   // ...
}

Defining full text filter definitions

The programmatic API provides an easy mechanism for defining full text filter definitions, the equivalent of @FullTextFilterDef and @FullTextFilterDefs (see [query-filter]). The next example depicts the creation of a full text filter definition using the fullTextFilterDef method.

Example 39. Defining full text definition programmatically
SearchMapping mapping = new SearchMapping();

mapping
    .analyzerDef( "en", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( EnglishPorterFilterFactory.class )
    .fullTextFilterDef("security", SecurityFilterFactory.class)
            .cache(FilterCacheModeType.INSTANCE_ONLY)
    .entity(Address.class)
        .indexed()
        .property("addressId", ElementType.METHOD)
            .documentId()
                .name("id")
        .property("street1", ElementType.METHOD)
            .field()
                .analyzer("en")
                .store(Store.YES)
            .field()
                .name("address_data")
                .analyzer("en")
                .store(Store.NO);

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The previous example can effectively be seen as annotating your entity with @FullTextFilterDef like below:

Example 40. Using annotation to define full text filter definition
@Entity
@Indexed
@AnalyzerDefs({
  @AnalyzerDef(name = "en",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
   })
})
@FullTextFilterDefs({
 @FullTextFilterDef(name = "security", impl = SecurityFilterFactory.class, cache = FilterCacheModeType.INSTANCE_ONLY)
})
public class Address {

 @Id
 @GeneratedValue
 @DocumentId(name="id")
 public Long getAddressId() {...}

 @Fields({
      @Field(store=Store.YES, analyzer=@Analyzer(definition="en")),
      @Field(name="address_data", analyzer=@Analyzer(definition="en"))
 })
 public String getAddress1() {...}

 // ...

}

Defining fields for indexing

When defining fields for indexing using the programmatic API, call field() after selecting a property via property(String propertyName, ElementType elementType). From field() you can specify the name, index, store, bridge and analyzer definitions.

Example 41. Indexing fields using programmatic API
SearchMapping mapping = new SearchMapping();

mapping
    .analyzerDef( "en", StandardTokenizerFactory.class )
        .filter( LowerCaseFilterFactory.class )
        .filter( EnglishPorterFilterFactory.class )
    .entity(Address.class).indexed()
        .property("addressId", ElementType.METHOD)
            .documentId()
                .name("id")
        .property("street1", ElementType.METHOD)
            .field()
                .analyzer("en")
                .store(Store.YES)
            .field()
                .name("address_data")
                .analyzer("en");

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The above example of marking fields as indexable is equivalent to defining fields using @Field as seen below:

Example 42. Indexing fields using annotation
@Entity
@Indexed
@AnalyzerDefs({
  @AnalyzerDef(name = "en",
    tokenizer = @TokenizerDef(factory = StandardTokenizerFactory.class),
    filters = {
      @TokenFilterDef(factory = LowerCaseFilterFactory.class),
      @TokenFilterDef(factory = EnglishPorterFilterFactory.class)
   })
})
public class Address {

 @Id
 @GeneratedValue
 @DocumentId(name="id")
 public Long getAddressId() {...}

 @Fields({
      @Field(store=Store.YES, analyzer=@Analyzer(definition="en")),
      @Field(name="address_data", analyzer=@Analyzer(definition="en"))
 })
 public String getAddress1() {...}

 // ...
}
Note

When using a programmatic mapping for a given type X, you can only refer to fields defined on X. Fields or methods inherited from a super type are not configurable. In case you need to configure a super class property, you need to either override the property in X or create a programmatic mapping for the super class. This mimics the usage of annotations where you cannot annotate a field or method of a super class either, unless it is redefined in the given type.

Programmatically defining embedded entities

In this section you will see how to programmatically define entities to be embedded into the indexed entity, similar to using the @IndexedEmbedded model. In order to define this you must mark the property as indexEmbedded. There is the option to add a prefix to the embedded entity definition, which can be done by calling prefix as seen in the example below:

Example 43. Programmatically defining embedded entities
SearchMapping mapping = new SearchMapping();

mapping
    .entity(ProductCatalog.class)
        .indexed()
        .property("catalogId", ElementType.METHOD)
            .documentId()
                .name("id")
        .property("title", ElementType.METHOD)
            .field()
                .index(Index.YES)
                .store(Store.NO)
        .property("description", ElementType.METHOD)
             .field()
                 .index(Index.YES)
                 .store(Store.NO)
        .property("items", ElementType.METHOD)
            .indexEmbedded()
                .prefix("catalog.items"); //optional

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The next example shows the same definition using annotation (@IndexedEmbedded):

Example 44. Using @IndexedEmbedded
@Entity
@Indexed
public class ProductCatalog {
 @Id
 @GeneratedValue
 @DocumentId(name="id")
 public Long getCatalogId() {...}

 @Field
 public String getTitle() {...}

 @Field
 public String getDescription() {...}

 @OneToMany(fetch = FetchType.LAZY)
 @IndexColumn(name = "list_position")
 @Cascade(org.hibernate.annotations.CascadeType.ALL)
 @IndexedEmbedded(prefix="catalog.items")
 public List<Item> getItems() {...}

 // ...
}

Contained In definition

@ContainedIn can be defined as seen in the example below:

Example 45. Programmatically defining ContainedIn
SearchMapping mapping = new SearchMapping();

mapping
    .entity(ProductCatalog.class)
        .indexed()
        .property("catalogId", ElementType.METHOD)
            .documentId()
        .property("title", ElementType.METHOD)
            .field()
        .property("description", ElementType.METHOD)
            .field()
        .property("items", ElementType.METHOD)
            .indexEmbedded()

    .entity(Item.class)
        .property("description", ElementType.METHOD)
            .field()
        .property("productCatalog", ElementType.METHOD)
            .containedIn();

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

This is equivalent to defining @ContainedIn in your entity:

Example 46. Annotation approach for ContainedIn
@Entity
@Indexed
public class ProductCatalog {

 @Id
 @GeneratedValue
 @DocumentId
 public Long getCatalogId() {...}

 @Field
 public String getTitle() {...}

 @Field
 public String getDescription() {...}

 @OneToMany(fetch = FetchType.LAZY)
 @IndexColumn(name = "list_position")
 @Cascade(org.hibernate.annotations.CascadeType.ALL)
 @IndexedEmbedded
 public List<Item> getItems() {...}

 // ...
}
@Entity
public class Item {

 @Id
 @GeneratedValue
 private Long itemId;

 @Field
 public String getDescription() {...}

 @ManyToOne( cascade = { CascadeType.PERSIST, CascadeType.REMOVE } )
 @ContainedIn
 public ProductCatalog getProductCatalog() {...}

 // ...
}

Date/Calendar Bridge

In order to define a calendar or date bridge mapping, call the dateBridge(Resolution resolution) or calendarBridge(Resolution resolution) methods after you have defined a field() in the SearchMapping hierarchy.

Example 47. Programmatic model for defining calendar/date bridge
SearchMapping mapping = new SearchMapping();

mapping
    .entity(Address.class)
        .indexed()
        .property("addressId", ElementType.FIELD)
            .documentId()
    .property("street1", ElementType.FIELD()
        .field()
    .property("createdOn", ElementType.FIELD)
        .field()
        .dateBridge(Resolution.DAY)
    .property("lastUpdated", ElementType.FIELD)
        .calendarBridge(Resolution.DAY);

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

See below for defining the above using @CalendarBridge and @DateBridge:

Example 48. @CalendarBridge and @DateBridge definition
@Entity
@Indexed
public class Address {

 @Id
 @GeneratedValue
 @DocumentId
 private Long addressId;

 @Field
 private String address1;

 @Field
 @DateBridge(resolution=Resolution.DAY)
 private Date createdOn;

 @Field
 @CalendarBridge(resolution=Resolution.DAY)
 private Calendar lastUpdated;

 // ...
}

Declaring bridges

It is possible to associate bridges with programmatically defined fields. When you define a field() programmatically, you can use bridge(Class<?> impl) to associate a FieldBridge implementation class. The bridge method also provides optional methods to pass any parameters required by the bridge class. The example below programmatically defines a bridge:

Example 49. Declaring field bridges programmatically
SearchMapping mapping = new SearchMapping();

mapping
    .entity(Address.class)
        .indexed()
        .property("addressId", ElementType.FIELD)
            .documentId()
        .property("street1", ElementType.FIELD)
            .field()
            .field()
                .name("street1_abridged")
                .bridge( ConcatStringBridge.class )
                    .param( "size", "4" );

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The above can equally be defined using annotations, as seen in the next example.

Example 50. Declaring field bridges using annotation
@Entity
@Indexed
public class Address {

 @Id
 @GeneratedValue
 @DocumentId(name="id")
 private Long addressId;

 @Fields({
      @Field,
      @Field(name="street1_abridged",
             bridge = @FieldBridge( impl = ConcatStringBridge.class,
             params = @Parameter( name="size", value="4" ))
 })
 private String address1;

 // ...
}

Mapping class bridge

You can define class bridges on entities programmatically. This is shown in the next example:

Example 51. Defining class bridges using API
SearchMapping mapping = new SearchMapping();

mapping
    .entity(Departments.class)
      .classBridge(CatDeptsFieldsClassBridge.class)
         .name("branchnetwork")
         .index(Index.YES)
         .store(Store.YES)
         .param("sepChar", " ")
      .classBridge(EquipmentType.class)
         .name("equiptype")
         .index(Index.YES)
         .store(Store.YES)
         .param("C", "Cisco")
         .param("D", "D-Link")
         .param("K", "Kingston")
         .param("3", "3Com")
      .indexed();

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );

The above is similar to using @ClassBridge as seen in the next example:

Example 52. Using @ClassBridge
@Entity
@Indexed
@ClassBridges ( {
  @ClassBridge(name="branchnetwork",
     store= Store.YES,
     impl = CatDeptsFieldsClassBridge.class,
     params = @Parameter( name="sepChar", value=" " ) ),
  @ClassBridge(name="equiptype",
     store= Store.YES,
     impl = EquipmentType.class,
     params = {@Parameter( name="C", value="Cisco" ),
        @Parameter( name="D", value="D-Link" ),
        @Parameter( name="K", value="Kingston" ),
        @Parameter( name="3", value="3Com" )
   })
})
public class Departments {
   // ...
}

Mapping dynamic boost

You can apply a dynamic boost factor on either a field or a whole entity:

Example 53. DynamicBoost mapping using programmatic model
SearchMapping mapping = new SearchMapping();
mapping
    .entity(DynamicBoostedDescriptionLibrary.class)
        .indexed()
        .dynamicBoost(CustomBoostStrategy.class)
        .property("libraryId", ElementType.FIELD)
            .documentId().name("id")
        .property("name", ElementType.FIELD)
            .dynamicBoost(CustomFieldBoostStrategy.class)
            .field()
                .store(Store.YES);

cfg.getProperties().put( "hibernate.search.model_mapping", mapping );
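
Both this example and the annotation equivalent that follows reference boost strategy implementations; these implement org.hibernate.search.engine.BoostStrategy. A minimal sketch of the entity-level strategy is shown here; the dynScore-based logic and its getter are assumptions made for illustration:

public class CustomBoostStrategy implements BoostStrategy {

    @Override
    public float defineBoost(Object value) {
        // for a class-level @DynamicBoost, value is the entity instance itself
        DynamicBoostedDescriptionLibrary library = (DynamicBoostedDescriptionLibrary) value;
        return library.getDynScore();
    }
}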

The next example shows the equivalent mapping using the @DynamicBoost annotation:

Example 54. Using the @DynamicBoost
@Entity
@Indexed
@DynamicBoost(impl = CustomBoostStrategy.class)
public class DynamicBoostedDescriptionLibrary {

 @Id
 @GeneratedValue
 @DocumentId
 private int id;

 private float dynScore;

 @Field(store = Store.YES)
 @DynamicBoost(impl = CustomFieldBoostStrategy.class)
 private String name;

 public DynamicBoostedDescriptionLibrary() {
  dynScore = 1.0f;
 }

 // ...
}