

Evren Sirin edited this page · 5 revisions

Pellet FAQ


Are there any known bugs or issues with Pellet?

Yes, like any complex software system, Pellet has some bugs and lacks some features that you might want. Those issues that have been documented can be reviewed in the public issue tracker.

If you believe you've found a new bug, please review the tickets that already exist in the issue tracker. If your bug does not match an existing ticket, send a report detailing the error condition, including the data necessary to reproduce the bug, to the pellet-users mailing list. See this page for more details about how to file a bug report.

Which version of library X is shipped within Pellet?

Libraries shipped with Pellet come in a separate directory which, as of Pellet 1.5, contains a file named "version.txt" giving version information for each library.

Which version of Pellet am I using?

If you are using an official release of Pellet, you can get version information by passing the --version option to the Pellet command line utility. For example, on a Unix-like system, from the distribution directory one would run

pellet --version

And expect to see something like

Version: 2.0 Released: November 16 2009

Alternatively, the org.mindswap.pellet.utils.VersionInfo class can be used in a Java program to determine the Pellet version.

I’m new to Pellet. How can I start?

Pellet's basic functionality can be accessed through a command line interface using the class org.mindswap.pellet.Pellet. The Pellet distribution includes convenient shell scripts that will run this class on most Java platforms. When run without arguments, it will print a usage message describing the available options and arguments.

To access this tool, download the latest release, unzip the distribution, and open a command shell in the base directory of the distribution. If you are on a Windows platform, run the pellet.bat script. If you are on a Unix-like platform (including newer Macs), run the pellet.sh script.

For example, the following classifies an ontology, such as the commonly referenced wine ontology:

sh pellet.sh classify <ontology-URI>

If you are new to OWL-DL, you may find the OWL 2 Overview helpful.

If you are interested in using the reasoner in a Java program, a good place to start is the collection of example code included in the examples directory of the Pellet distribution. It includes source code demonstrating the use of Pellet with Jena and the OWL-API.

Which version of Java does Pellet require?

Pellet requires a Java 5 (or greater) JVM; we test each Pellet release with the Sun JVM. Other JVMs may not work at all, or may not work as well as the Sun JVM.

How do I file a Pellet bug report?

Please read How to Make Good Bug Reports for Pellet carefully.

In summary, try to isolate the bug, both in code and data. If you can reproduce the bug with either (1) a subset of the data or (2) a subset of your code, that will make the bug report much more valuable. The ideal is to isolate the minimal data and code necessary for someone else to reproduce the bug; though, like all ideals, approximations are helpful, too.


How can I control logging messages printed by Pellet?

If you are using Pellet 2.0.0-rc1 or later:

Pellet uses JDK logging. The verbosity of this logging can be controlled with a configuration file; the distribution includes a sample configuration file in the examples directory. The location of the configuration file can be specified by setting the system property java.util.logging.config.file. The location should be specified as a (relative or absolute) path. For example, the following command would use a configuration file located in the current directory:

java -Djava.util.logging.config.file=<config-file> -jar lib\pellet-cli.jar

Note that, in the configuration file, in addition to changing the log level for specific loggers, you must change the handler’s level (you can think of it as two levels of severity checks — one at log record generation, one at consumption). You can find more information about JDK logging here.
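As an illustration of the two severity checks, a minimal JDK logging configuration along these lines might look as follows; the logger name org.mindswap.pellet is an assumption based on Pellet's package naming, and the file name is up to you:

```properties
# Send log records to the console
handlers = java.util.logging.ConsoleHandler

# First check: the level at which loggers generate records
.level = INFO
org.mindswap.pellet.level = FINE

# Second check: the level at which the handler consumes records
java.util.logging.ConsoleHandler.level = FINE
```

If the handler's level stays at the default INFO, FINE records generated by the Pellet logger are silently dropped at consumption time.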

If you are using Pellet 1.5.2 or earlier:

Pellet uses log4j for logging. The verbosity of this logging can be controlled with a log4j configuration file; the distribution includes a sample configuration file in the src directory. The location of the configuration file can be specified by setting the system property log4j.configuration. The location should be specified as a (relative or absolute) URL. For example, the following command would use a configuration file located in the current directory:

java -Dlog4j.configuration=<config-file-URL> -jar lib\pellet.jar

If the system property is not set, the default configuration file inside pellet.jar will be used. By rebuilding the jar file you can change the default configuration.

Note that jena.jar also contains a log4j configuration file, which will take precedence if jena.jar precedes pellet.jar in the classpath.

There has been some sample code posted to the mailing list that might be helpful.

How is Pellet configured?

Several Pellet features can be controlled through a configuration file. A sample configuration file can be found in the Pellet distribution. The location of the configuration file can be specified by setting the system property pellet.configuration. The location should be specified as a (relative or absolute) URL. For example, the following command would use a configuration file located in the current directory:

java -Dpellet.configuration=<config-file-URL> -jar lib\pellet.jar

If the system property is not set, the configuration file inside pellet.jar will be used. By rebuilding the jar file you can change the default configuration. See the sample configuration file for a description of the parameters that can be configured. Note that these configuration parameters can also be controlled programmatically through the PelletOptions class.


Does Pellet reason with OWL-Full ontologies?

OWL-DL restricts OWL-Full ontologies in several different ways as explained in Section 8.2 of OWL reference. Pellet relaxes most of the OWL-DL restrictions and handles OWL-Full ontologies (see the list below for more details).

Any OWL-Full feature that is not supported by the reasoner will be ignored. When an OWL-Full ontology is loaded into the reasoner, a series of warning messages will be printed about the axioms violating the OWL-DL restrictions. This behavior can be changed so that Pellet refuses to load an OWL-Full ontology by throwing an exception; this can be achieved by setting the configuration option IGNORE_UNSUPPORTED_AXIOMS to false.
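For example, assuming the configuration file uses the option name as its property key (the sample file shipped with the distribution documents the exact syntax), the stricter behavior could be enabled with a line such as:

```properties
# Refuse to load OWL-Full ontologies by throwing an exception
IGNORE_UNSUPPORTED_AXIOMS=false
```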

The following list contains all the OWL-DL restrictions defined in the OWL specification and for each restriction explains if/how Pellet relaxes that restriction:

  • **OWL-DL Restriction:** OWL-DL requires a pairwise separation between classes, datatypes, datatype properties, object properties, annotation properties, ontology properties (i.e., the import and versioning constructs), individuals, data values, and the built-in vocabulary. This means that, for example, a class cannot at the same time be an individual.

    Pellet Restriction: Pellet uses punning semantics of OWL 1.1 to partially overcome the vocabulary separation restriction. The punning support in Pellet is explained here.

  • **OWL-DL Restriction:** In OWL-DL the sets of object properties and datatype properties are disjoint. This implies that the following four property characteristics:
    o inverse of,
    o inverse functional,
    o symmetric, and
    o transitive
    can never be specified for datatype properties.

    Pellet Restriction: Pellet enforces these restrictions, with the exception of inverse functional datatype properties, which Pellet fully supports. Axioms violating the other restrictions are ignored.

  • **OWL-DL Restriction:** OWL-DL requires that no cardinality constraints (local or global) can be placed on transitive properties, their inverses, or any of their superproperties.

    Pellet Restriction: Pellet enforces this restriction. Any transitivity axioms violating it are ignored (cardinality restrictions are not ignored).

  • **OWL-DL Restriction:** Annotations are allowed only under certain conditions. See Sec. 7.1 for details.

    Pellet Restriction: Annotations have no effect on reasoning results and will be completely ignored by Pellet regardless of the configuration option.

  • **OWL-DL Restriction:** Most RDF vocabulary cannot be used within OWL-DL. See the OWL Semantics and Abstract Syntax document [OWL S&AS] for details.

    Pellet Restriction: Pellet relaxes this restriction by extending punning to built-in RDFS vocabulary. For example, if rdf:List is used in individual assertions it will be treated as a new class.

  • **OWL-DL Restriction:** All axioms must be well-formed, with no missing or extra components, and must form a tree-like structure. This constraint implies that all classes and properties one refers to are explicitly typed as OWL classes or properties, respectively.

    Pellet Restriction: Pellet partially enforces this restriction. All axioms need to be well-formed; malformed axioms are ignored. However, some URIs might be untyped, and Pellet uses heuristics to infer the types of those URIs. For example, if a URI is used in an owl:subClassOf axiom, that URI will be treated as if it were defined as an OWL class.

  • **OWL-DL Restriction:** Axioms (facts) about individual equality and difference must be about named individuals.

    Pellet Restriction: Pellet relaxes this restriction. If one uses owl:sameAs or owl:differentFrom on URIs that are not defined as individuals, Pellet will use punning to infer that those URIs also denote individuals.

Does Pellet support closed world reasoning?

Some forms of closed world reasoning can be encoded in an OWL ontology by explicitly limiting the universe to the known individuals, e.g. by setting owl:Thing equivalent to an enumeration of all the known individuals (using owl:oneOf). The extension of a class can also be closed by setting it equivalent to the enumeration of all its members. A similar thing can be done for property assertions.
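As an illustrative sketch, closing the universe to three known individuals can be written in Turtle as follows; the ex: namespace and the individual names are invented for the example:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/> .

# owl:Thing is equivalent to the enumeration of the known individuals
owl:Thing owl:equivalentClass [
    a owl:Class ;
    owl:oneOf ( ex:alice ex:bob ex:carol )
] .
```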

A common motivation for closed world reasoning is integrity constraint validation: one might want to use OWL to validate RDF, Linked Data, virtual RDF, and so on. The Pellet Integrity Constraint Validator (Pellet ICV) extends core Pellet by interpreting OWL axioms with integrity constraint semantics. This means you can write ontologies that treat OWL as a schema or validation language for RDF data, via auto-generated SPARQL queries that can be executed on any SPARQL-enabled RDF store.

Does Pellet support the Unique Name Assumption ( UNA)?

With UNA, every named individual is assumed to be different from every other. OWL semantics do not adopt UNA, but it is possible to mimic UNA by creating an owl:AllDifferent description and adding all the named individuals to the owl:distinctMembers list. However, maintaining such a list is costly: the size of the input file is increased considerably, and each individual addition and removal affects the list. Pellet provides an option, USE_UNIQUE_NAME_ASSUMPTION, to enable UNA in the reasoning process. You can turn this option on by changing the configuration file or programmatically through the PelletOptions class.
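A sketch of the owl:AllDifferent encoding in Turtle (the ex: namespace and individual names are invented for the example):

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/> .

# Every named individual must appear in this list,
# and the list must be maintained as individuals come and go
[] a owl:AllDifferent ;
   owl:distinctMembers ( ex:alice ex:bob ex:carol ) .
```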

Does Pellet support rules? SWRL? Which Builtins?

Pellet supports reasoning with SWRL rules. Pellet interprets SWRL using the DL-safe rules notion, which means rules are applied only to named individuals in the ontology. No additional atoms are needed in the rules to enforce DL-safety. Pellet supports all SWRL atoms described in the specification; see below for details about which built-in functions are supported.

You don't need to use any utility function to use SWRL in Pellet. You can directly load a file that contains SWRL rules into Pellet, and the rules will be parsed and processed. SWRL rules can also be mixed with OWL axioms in an ontology. See the examples directory in the Pellet distribution for a sample program that shows how to use SWRL rules with Pellet. You can use either the OWL-API or the Jena interface; both handle SWRL. Using the configuration option DL_SAFE_RULES, the reasoner can be configured to ignore SWRL rules. Rules support is enabled by default, and only rules that use one of the unsupported features mentioned below will be ignored (or an exception will be thrown, depending on the value of the configuration option IGNORE_UNSUPPORTED_AXIOMS).
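For illustration, here is the classic rule from the SWRL specification in human-readable syntax; because of DL-safety, Pellet will apply it only to named individuals:

```
hasParent(?x1, ?x2) ∧ hasBrother(?x2, ?x3) → hasUncle(?x1, ?x3)
```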

Which SWRL builtins does Pellet support?

As of Pellet 2.0, we support all of the SWRL builtins with the following exceptions:

  1. We don’t support any builtins for Lists
  2. We only support the first 5 Builtins for Date, Time, and Duration

Pellet will almost certainly never support the List builtins because of OWL DL restrictions; however, we will support all the date, time, and duration builtins in some future release.

What is punning? Does Pellet support punning?

Punning is the meta-modeling practice of using a single name to refer to any or all of an individual, a class, or a property. Complications of this practice are managed by restricting its use such that, for a given use of a name, context sufficiently disambiguates the entities to which it may refer. Punning is discussed in the OWL 1.1 overview document.

Yes, Pellet supports punning with a minor caveat. The same name should not be used for both an object property and a datatype property. Similarly, the same name should not be used for both a class and a datatype.
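A sketch of punning in Turtle (the ex: names are invented for the example): the same IRI is used both as a class and as an individual, which Pellet accepts:

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .
@prefix ex:  <http://example.org/> .

ex:EndangeredSpecies a owl:Class .

# ex:Eagle punned: used as a class and as an individual
ex:Eagle a owl:Class .
ex:Eagle a ex:EndangeredSpecies .
```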

Does Pellet support incremental reasoning?

Incremental reasoning means the ability of the reasoner to process updates (additions or removals) applied to an ontology without having to perform all the reasoning steps from scratch. Pellet supports two different incremental reasoning techniques: incremental consistency checking and incremental classification. These techniques are applicable under different conditions and provide benefits for different use cases.

Incremental Classification

Incremental classification is used to update classification results incrementally when the class hierarchy changes. This feature works by utilizing the module extraction feature provided in Pellet. The first time an ontology is classified, Pellet also computes modules for each class. Initial classification and module extraction can be performed serially or in parallel. When the ontology is changed in a way that affects the class hierarchy, Pellet will determine which module is affected and reclassify only that module, which is typically much smaller than the original ontology.

Incremental classification is most suitable for cases where the ontology contains a large and/or complex class hierarchy. It is not intended to improve instance reasoning and the benefits of incremental classification are minimized when the ontology uses enumerated classes.

Incremental classification is accessible only through the OWL-API interface via a specialized reasoner interface. This reasoner interface supports only queries about classes, not instance-related queries. See the IncrementalClassifierExample in the Pellet distribution for details. The incremental classifier is robust, well-tested, and suitable for production systems.

Incremental Consistency Checking

Incremental consistency checking is used to update consistency checking results incrementally when the instance data changes. The only changes it supports are additions or removals of instance assertions. If the ontology is changed by adding new class or property axioms, incremental consistency checking will not be used. It is also not used if the ontology contains enumerations, inverse properties, or rules.

Incremental consistency checking is most suitable for cases where consistency checking takes a long time or there are very frequent changes to the instance data. It is not intended to improve classification time.

This feature is accessible from both the Jena and OWL-API interfaces. It is disabled by default and needs to be enabled manually. See the IncrementalConsistencyExample in the Pellet distribution for details. Unlike the incremental classifier, incremental consistency checking is not robust, not well-tested, and not suitable for production systems.


How can I answer SPARQL queries with Pellet?

Pellet includes an optimized query engine capable of answering ABox queries. It can be accessed directly, using the API defined in the org.mindswap.pellet.query Java package. It has also been integrated with Pellet's Jena bindings so that Jena's query objects can be used with Pellet's query engine (see org.mindswap.pellet.jena.PelletQueryExecution). Note that if the query to be answered is not an ABox query, the Pellet query engine is inapplicable and the bindings will fall back to the default Jena query engine.

An example demonstrating the use of the Pellet query engine in Jena is included in the examples directory of the Pellet distribution.

For interactive use, Pellet’s command line interface provides access to the query engine (through the Jena interface).

What is an ABox query?

An ABox query is a query about individuals and their relationships to data values and other individuals through (datatype or object) properties. In SPARQL, an ABox query must satisfy three conditions:

  1. It contains no variables in the predicate position.
  2. Each property used in the predicate position is either a (datatype or object) property defined in the ontology or one of the following built-in properties: rdf:type, owl:sameIndividualAs, owl:differentFrom.
  3. If rdf:type is used in the predicate position, a constant URI denoting an OWL class (or a class expression) is used in the object position.
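For example, the following query satisfies all three conditions; the ex: names are invented for illustration, and ex:hasName is assumed to be a datatype property defined in the ontology:

```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX ex:  <http://example.org/>

# Constant predicates only; rdf:type has a constant class in object position
SELECT ?x ?name WHERE {
  ?x rdf:type ex:Person .
  ?x ex:hasName ?name .
}
```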

Jena Interface

How can I use Pellet with Jena?

There are two different ways to use Pellet in a Jena program: using the direct Pellet interface (highly recommended) or using the Jena DIG interface (not recommended). The direct Pellet interface is much more efficient (e.g. it does not have the HTTP communication overhead) and provides more inferences (the DIG protocol has some limitations). Using the direct interface is no different from using any other Jena reasoner:

// ontology that will be used
String ont = "";

// create an empty ontology model using the Pellet spec
OntModel model = ModelFactory.createOntologyModel( PelletReasonerFactory.THE_SPEC );

// read the ontology into the model ont );

// get the instances of a class
OntClass Person = model.getOntClass( "" );
Iterator instances = Person.listInstances();

See the examples directory in the Pellet distribution for more examples.

How do I use Pellet with Jena concurrently?

We document here the thread-safe use of Pellet in Jena applications, including operations and configuration options that may be problematic. We assume that the KB is represented and manipulated using a JenaInfModel associated with aPelletReasoner. (Examples demonstrating this use model are included in the Pellet release distribution: seeexamples/org/mindswap/pellet/examples/

Concurrency in Jena, independent of Pellet, is discussed in the Jena documentation. This includes discussion of locking mechanisms to control concurrent access to a single Model object.

Basic Approach

Pellet cannot be used via the Jena interface in a multi-threaded fashion without explicit calls to prepare the knowledge base after any model modification. Without such calls, the knowledge base will perform processing operations lazily as needed, which may lead to concurrent modification of internal data structures and incorrect query results.

To ensure thread safety, modifications should be followed by a call to the classify method on the PelletInfGraph underlying the InfModel. The following code example illustrates this pattern.

OntModel model = ModelFactory.createOntologyModel( PelletReasonerFactory.THE_SPEC );
"" );

/* ... any additional model modification ... */

((PelletInfGraph) model.getGraph()).classify();
model.query( ... );

Note that because the classify method performs many operations, including loading the knowledge base, classification, and individual realization, it may be a costly operation. classify does not need to be called after each modification, but it must be called between any modification and subsequent query operations.


In addition to forcing processing between modification and query operations, some additional use constraints should be enforced.

Incremental Reasoning

Incremental reasoning, as implemented in Pellet 1.5.1, is not thread-safe. It is disabled by default, and should not be enabled if Pellet is used concurrently by multiple threads.

Complex Concept Descriptions

Concurrent query answering is possible, in part, because the reasoner is able to classify the concept hierarchy, as a batch operation, prior to any query requests. Queries that involve unnamed concepts require classification of the concept at query time, causing modifications to internal data structures that are not thread-safe. The following SPARQL query includes a concept description that makes the query unsafe:

SELECT ?C WHERE { ?C rdfs:subClassOf [
        rdf:type owl:Restriction ;
        owl:onProperty ub:takesCourse ;
        owl:minCardinality 2
] }
To avoid problems during concurrent querying, queries should use only named concepts. If necessary, queries using unnamed concepts should be executed without any concurrent model access.
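For instance, if the restriction in the query above is given a name in the ontology, say ub:BusyStudent (an invented name; the ub: namespace is likewise assumed), an equivalent query over named concepts is safe for concurrent execution:

```sparql
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX ub:   <http://example.org/univ-bench#>

# Only named concepts appear, so no query-time classification is needed
SELECT ?C WHERE { ?C rdfs:subClassOf ub:BusyStudent }
```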

Queries with Posited Models

Jena provides a method listStatements(Resource, Property, RDFNode, Model) in the InfModel interface that permits queries to use vocabulary terms that exist in a posited Model but not in the InfModel being queried. This method causes temporary modifications to internal data structures and cannot safely be used with concurrent query operations. If necessary, the method should be executed without any concurrent model access.

Why does Pellet produce different results than the Jena OWL reasoner?

There is one fundamental difference between Pellet and the other existing Jena reasoners. Pellet treats the anonymous restrictions defined in an OWL ontology as syntactic expressions and does not return them as answers to any query. Consider the following example:

Class(<a:Person> partial restriction(<a:hasAddress> someValuesFrom(<a:Address>)))
Class(<a:Student> partial <a:Person>)
ObjectProperty(<a:hasAddress> Functional)

If this ontology is loaded into an OntModel backed by Pellet, calling Student.listSuperClasses() would not include the restriction in the result. The main reason for this behavior is that treating each restriction as a named class would make it harder to reason with the ontology, and the results you get in the end are not that useful.

For example, in the above ontology, the reasoner could also return restriction(<a:hasAddress> minCardinality(1)), restriction(<a:hasAddress> maxCardinality(1)), or restriction(<a:hasAddress> allValuesFrom(<a:Address>)) as a superclass, because they are all entailed by the above definitions. Once class expressions are considered, there are infinitely many possibilities, e.g. maxCardinality(1) implies maxCardinality(2), and so on.

For this reason, Pellet will not return a restriction in an answer just because it exists in the ontology. However, when one needs the syntactic definitions from the ontology, all the results Pellet returns are concatenated with the answers from the raw model. This means calling Person.listSuperClasses() for the above example would include the restriction in the results.

Note that all the boolean functions still work as expected. For example, asking the question Student.hasSuperClass( restriction ), where restriction is the anonymous resource corresponding to the someValuesFrom restriction, Pellet will return true.

Why does Pellet run slowly in TopBraid Composer?

We don’t support TopBraid Composer or any TopQuadrant products. Often, Pellet appears to run slowly in TopBraid Composer due to the nature of the integration performed by TopQuadrant. If you are experiencing problems using Pellet via TopBraid Composer, we recommend trying Pellet directly before reporting an issue or bug.

What is the difference between the Pellet and Jena query engines?

The Jena query engine is RDF triple based, so to produce variable bindings it works one triple at a time. In contrast, the Pellet query engine considers the entire conjunctive query. This difference causes the engines to have different performance characteristics and in some cases yields different results.

First, by using a level of representation consistent with the logic, the Pellet query engine can perform optimizations that are inaccessible to the Jena query engine. These optimizations often yield a significant speed-up in query answering.

Second, results may differ based on the handling of blank nodes in queries. The semantics of SPARQL are such that blank nodes need not be bound to asserted individuals and can match individuals that are inferred (such as those implied by existential restrictions and minimum cardinality constraints). The Pellet engine will produce results using these inferred individuals; the Jena engine will not. An example demonstrating how this can change results is included in the examples directory of the Pellet distribution and is accessible in the svn repository.

How can I extract all inferences?

The most straightforward way to retrieve all the inferences from a Jena reasoner is the following call:

   Iterator i = model.listStatements();

However, getting all inferences is problematic for some datasets since it requires a lot of computation. For example, for ontologies with a large number of individuals, computing differentFrom inferences can be expensive. Similarly, for ontologies with a large number of classes, computing disjointWith inferences can be slow. Typically only a subset of all inferences is required for a specific application, and it is not necessary to compute reasoning results that will not be used.

The org.mindswap.pellet.jena.ModelExtractor class provides a convenient way to extract a specific subset of inferences. The inference extractor can be used as follows:

   // Create an inference extractor
   ModelExtractor extractor = new ModelExtractor(model);

   // Extract default set of inferences
   Model inferences = extractor.extractModel();

The default setting extracts the following inferences: DIRECT_SUBCLASS, EQUIVALENT_CLASS, DIRECT_SUBPROPERTY, EQUIVALENT_PROPERTY, INVERSE_PROPERTY, OBJECT_PROPERTY_VALUE, DATA_PROPERTY_VALUE, DIRECT_INSTANCE. This setting is designed to be an acceptable trade-off of performance and coverage. For example, it excludes SAME_AS, DIFFERENT_FROM and DISJOINT_CLASS inferences.

The default setting can be customized to a different set of inferences. For example, extractor can be configured to only retrieve direct subclass relationships as follows:

   extractor.setSelector( EnumSet.of( StatementType.DIRECT_SUBCLASS ) );

If disjointness inferences between classes are also needed, then:

   extractor.setSelector( EnumSet.of( StatementType.DIRECT_SUBCLASS, StatementType.DISJOINT_CLASS ) );

Note that for convenience, ModelExtractor contains some constant fields that can be passed directly to setSelector, including ALL_CLASS_STATEMENTS, ALL_PROPERTY_STATEMENTS, ALL_INDIVIDUAL_STATEMENTS, and DEFAULT_STATEMENTS.

The command “extract” makes this function available from the command line. For usage information run

   pellet extract -h

Can I use a different version of Jena with Pellet?

Each Pellet distribution includes a version of the Jena libraries that is compatible with that version of Pellet. It is possible (though not guaranteed) that Pellet will work fine with another version of Jena. To try this, instead of including the pellet/lib/jena/*.jar files in your classpath, simply include the Jena jar files from the other distribution. Note, however, that the command-line utilities in Pellet (e.g. pellet-cli.jar) include a classpath declaration in their manifest file that points to the pellet/lib/jena/*.jar files. If you include pellet-cli.jar in your classpath along with the other version of the Jena libraries, you will end up with two different versions of Jena in your classpath, which will cause problems; you can exclude pellet-cli.jar from your classpath to avoid this. If you want to use the Pellet command line with a different version of Jena, you can replace the files in the pellet/lib/jena directory with the version you want, after making sure that your version of Pellet is compatible with that version of Jena.

OWL-API Interface

How can I use Pellet with the OWL-API?

Pellet implements various reasoner interfaces provided in the OWL-API. You can create a Pellet reasoner and then use it as any other reasoner:

// import org.mindswap.pellet.owlapi.Reasoner

// create an ontology manager
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

// create the Pellet reasoner
Reasoner reasoner = new Reasoner( manager );

// ontology that will be used
String file = "";

// Load the ontology 
OWLOntology ontology = manager.loadOntology( URI.create( file ) );
reasoner.loadOntology( ontology );

// get the instances of a class
URI PersonURI = URI.create( "" );
OWLClass Person = manager.getOWLDataFactory().getOWLClass( PersonURI );   
Set instances  = reasoner.getInstances( Person, false );

Pellet 1.5 uses a newer snapshot of the OWL-API, including support for OWL 1.1, built from development sources. See the version.txt included with the OWL-API in the Pellet source tree for details on obtaining the source for this revision of the OWL-API.

The OWL-API is under rapid development, and the best source of documentation, including general examples, is likely to be the project page at SourceForge.

How can the reasoner be updated after a loaded ontology is modified?

When using OWLAPI v3.0

The Pellet reasoner can be registered as an ontology change listener so that it will be notified when a loaded ontology is modified and will automatically update the loaded information. However, whether the modifications are immediately visible in the reasoning results depends on whether the reasoner was created as buffering or non-buffering. For a non-buffering reasoner, the changes will impact the reasoning results for the next query asked. The following code snippet shows how to create a non-buffering reasoner and register it as a listener:

// import com.clarkparsia.pellet.owlapiv3.PelletReasoner;
// import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;

// create an ontology manager
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

// create the Pellet reasoner
PelletReasoner reasoner = PelletReasonerFactory.getInstance().createNonBufferingReasoner( ontology );

// add the reasoner as an ontology change listener
manager.addOntologyChangeListener( reasoner );

If a large number of updates occur before the next query, tracking these changes may unnecessarily impact the performance of the reasoner. In such a case, a buffering reasoner may be a better choice: the modifications affect the reasoning results only after the reasoner is explicitly notified by the following call:

reasoner.flush();

When using OWLAPI v2

The Pellet reasoner can be registered as an ontology change listener so that it will be notified when a loaded ontology is modified and will automatically update the loaded information. The following code snippet shows how to do this:

// import org.mindswap.pellet.owlapi.Reasoner

// create an ontology manager
OWLOntologyManager manager = OWLManager.createOWLOntologyManager();

// create the Pellet reasoner
Reasoner reasoner = new Reasoner( manager );

// add the reasoner as an ontology change listener
manager.addOntologyChangeListener( reasoner );

When an ontology is modified, the reasoner will only load the new set of axioms into its internal memory but will not perform any reasoning. Reasoning will be performed the next time a query is asked. If there are a large number of ontology change events, tracking these changes as they occur might still slow down performance. If that is the case, Pellet should not be registered as an ontology change listener and should instead be updated manually after changes occur (e.g. by reloading the ontology into the reasoner).


There is no native SPARQL support in the OWLAPI. However, a Jena model can be created with the contents of an OWLAPI reasoner so that SPARQL query answering can be done on the Jena model. The OWLAPI reasoner and the Jena model can be queried together, but all updates should be made through the OWLAPI ontology. The following snippet shows the general idea behind this approach:

   // import com.clarkparsia.pellet.owlapiv3.PelletReasoner;
   // import com.clarkparsia.pellet.owlapiv3.PelletReasonerFactory;
   // import org.mindswap.pellet.KnowledgeBase;
   // import org.mindswap.pellet.jena.PelletInfGraph;

   // Create OWLAPI ontology as usual
   OWLOntology ontology = ...
   // Create Pellet-OWLAPI reasoner (non-buffering mode makes synchronization easier)
   PelletReasoner reasoner = 
         PelletReasonerFactory.getInstance().createNonBufferingReasoner( ontology );
   // Get the KB from the reasoner
   KnowledgeBase kb = reasoner.getKB();
   // Create a Pellet graph using the KB from OWLAPI
   PelletInfGraph graph = new org.mindswap.pellet.jena.PelletReasoner().bind( kb );
   // Wrap the graph in a model
   InfModel model = ModelFactory.createInfModel( graph );
   // Use the model to answer SPARQL queries

The model created this way cannot be used to answer SPARQL queries that query for RDF syntax triples, e.g. SELECT * WHERE { ?x owl:onProperty ?p }.
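Continuing the snippet above, a query can be run against the wrapped model with Jena's ARQ API. This is a sketch under the assumption that `model` was created as shown; the query string and the class IRI are purely illustrative.

```java
// import com.hp.hpl.jena.query.*;

String sparql =
    "PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#> " +
    "SELECT ?sub WHERE { ?sub rdfs:subClassOf <http://example.org#Person> }";

// parse and execute the query against the inference model
Query query = QueryFactory.create( sparql );
QueryExecution qe = QueryExecutionFactory.create( query, model );
try {
    ResultSet results = qe.execSelect();
    while( results.hasNext() ) {
        QuerySolution soln = results.nextSolution();
        System.out.println( soln.get( "sub" ) );
    }
}
finally {
    // always free the resources associated with the execution
    qe.close();
}
```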


How can I use Pellet with Protégé 4?

The Pellet Reasoner Plug-in for Protégé 4 provides full integration of Pellet in the Protégé 4 environment. It is fully described, with installation directions, on its product page.

What version of Pellet is included in the Pellet Reasoner Plug-in for Protégé 4?

If you have the Pellet Reasoner Plug-in installed in Protégé 4, you can determine the plug-in version by selecting “Help | About” from the menu. The list below maps the reasoner plug-in version to the corresponding Pellet release version.

  • 2.2.0 → Pellet 2.3.0
  • 2.1.2 → Pellet 2.2.2
  • 1.2 build 1 → Pellet 2.2.1
  • 1.2 build 0 → Pellet 2.2.0
  • 1.1 build 1 → Pellet 2.1.1
  • 1.1 build 0 → Pellet 2.1.0
  • 1.0 build 3 → Pellet 2.0.2
  • 1.0 build 2 → Pellet 2.0.1
  • 1.0 build 1 → Pellet 2.0.0
  • 0.9 build 3 → Pellet 2.0.0-rc7
  • 0.9 build 2 → Pellet 2.0.0-rc6
  • 0.9 build 1 → Pellet 2.0.0-rc5
  • 0.9 build 0 → Pellet 2.0.0-rc4

DIG Interface

How can I use Pellet with Protégé 3.x?

Pellet comes with a DIG server that you can use with Protégé 3.x. You can start the DIG server by running pellet-dig.bat on Windows systems and pellet-dig.sh on Unix-like systems. You need to make sure that the port number used by Pellet and Protégé is the same. When the Pellet DIG server starts, you will see a message like:

PelletDIGServer Version 1.3 (April 17 2006)
Port: 8081

In Protégé, go to the “OWL” menu and select “Preferences”. For the “Reasoner URL” value, enter “http://localhost:8081” and hit the close button. Alternatively, you can start the Pellet DIG server using the port number defined in Protégé, e.g. by typing the following command at the command prompt:

pellet-dig -port 8080


How can I process KRSS files with Pellet?

There is a separate simple command line program provided to process files written in KRSS syntax. This program is similar to the Pellet command line version and provides options to classify or realize KRSS ontologies. You can run the following command to see the options available:

java -cp lib/pellet.jar org.mindswap.pellet.PelletKRSS

The KnowledgeBase.loadKRSS(Reader) function also provides programmatic support for loading KRSS files into a Pellet knowledge base.
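As a minimal sketch of this API (the file name is illustrative, and exception handling is omitted for brevity):

```java
// import java.io.FileReader;
// import org.mindswap.pellet.KnowledgeBase;

// create an empty knowledge base
KnowledgeBase kb = new KnowledgeBase();

// load the KRSS ontology from a file (name is illustrative)
kb.loadKRSS( new FileReader( "ontology.krss" ) );

// check consistency and classify the loaded ontology
System.out.println( "Consistent: " + kb.isConsistent() );
kb.classify();
```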

What parts of the KRSS syntax does Pellet support?

Pellet’s KRSS support was implemented to process the KRSS ontologies found in the DL Benchmark Suite. These ontologies use a slightly different syntax than the official KRSS specification. The main difference is in define-primitive-role definitions, where super roles need to be defined with the identifier :parents. Pellet also supports the definition of domain and range restrictions inside define-primitive-role definitions via the :domain and :range identifiers, respectively (some example KRSS ontologies can be found in the Pellet SVN).

Pellet does not support any KRSS feature that is not directly expressible in OWL-DL, e.g. role composition, transitive closure, role closure, rules, etc. As in the KRSS specification, Pellet requires that any concept, role, or individual name be defined before its first use.

Pellet also supports the native FaCT++ syntax, which is similar to but not compatible with the KRSS syntax. As far as we know, there is no explicit documentation describing this syntax, but the OWL Ontology Converter can be used to convert OWL ontologies into FaCT++ syntax.

Note that KRSS support in Pellet is not considered robust or usable. There is limited support for loading KRSS files into Pellet (see the other FAQ entry) and there are no plans to update or maintain the KRSS parser in the future.

Tuning Performance

How can I tune Pellet for ontologies with rules?

Pellet 1.5.1 and 1.5.2 include an optimization that is disabled by default but has been shown to be stable and much faster when reasoning with ontologies that include DL-Safe rules. To enable this optimization, add the following initialization:

PelletOptions.USE_CONTINUOUS_RULES = true;

This setting will be the default beginning with Pellet 2.0.

How do I debug classification performance issues in my ontology using logging?

Logging messages printed by Pellet can be a valuable source of information when trying to determine which classes in the ontology cause performance issues. Information about how to control which messages get printed is available in this FAQ entry: How can I control logging messages printed by Pellet?

The first step in debugging performance issues is to set the following property in the logging configuration file:

org.mindswap.pellet.taxonomy.level = FINE

This will cause Pellet to print every class processed by the classifier. A long pause on a class is a good indication of a problematic class. If this information is insufficient, setting the log level to FINER will additionally print each subclass check the reasoner performs. Even more detail (the sizes of the completion graphs) can be obtained by setting:

org.mindswap.pellet.ABox.level = FINE
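Putting the two properties together, a minimal java.util.logging configuration file might look like the following sketch (the console handler lines are generic java.util.logging boilerplate, not Pellet-specific):

```
# send fine-grained messages to the console
handlers = java.util.logging.ConsoleHandler
java.util.logging.ConsoleHandler.level = FINEST

# print every class processed by the classifier
org.mindswap.pellet.taxonomy.level = FINE

# print the sizes of the completion graphs
org.mindswap.pellet.ABox.level = FINE
```

The file can then be passed to the JVM with -Djava.util.logging.config.file=logging.properties.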

Once you find the problematic class, you can check the lint tool (pellet lint), if any of the problematic patterns apply to this class. Lint will provide suggestions about how to change the pattern to improve performance. Seedoc/PELLINT-README.txt for more information about lint, anddoc/PELLINT-PATTERNS.txt for more details about problematic patterns.
