JENA-1806: Updates for moving of code to jena-examples
afs committed Dec 9, 2021
1 parent 8590d1e commit 55434f5ebf1cffebb5c36d825d561a5b0d98ab28
Showing 14 changed files with 55 additions and 58 deletions.
@@ -234,38 +234,38 @@ The base URI for reading models will be the original URI, not the alternative lo

## Advanced examples

Example code may be found in [jena-examples:arq/examples](

### Iterating over parser output

One of the capabilities of the RIOT API is the ability to treat parser output as an iterator.
This is useful when you don't want to go to the trouble of writing a full sink implementation
and can easily express your logic in normal iterator style.

To do this, use `AsyncParser.asyncParseTriples`, which parses the input on
another thread:

```java
Iterator<Triple> iter = AsyncParser.asyncParseTriples(filename);
iter.forEachRemaining(triple -> {
    // Do something with triple
});
```

For N-Triples and N-Quads, you can use
`RiotParsers.createIteratorNTriples(input)`, which parses the input on the
calling thread.

See [RIOT example 9](

### Filter the output of parsing

When working with very large files, it can be useful to
process the stream of triples or quads produced by the parser
as it is generated, so that the data is handled in a streaming fashion.

See [RIOT example 4](
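One way to process the stream, sketched here as an illustration (the file name and the rdf:type filter are made up, not the example's own code), is to subclass `StreamRDFWrapper` and forward only the triples of interest to a destination `StreamRDF`:

```java
import org.apache.jena.graph.Triple;
import org.apache.jena.riot.RDFParser;
import org.apache.jena.riot.system.StreamRDF;
import org.apache.jena.riot.system.StreamRDFLib;
import org.apache.jena.riot.system.StreamRDFWrapper;
import org.apache.jena.vocabulary.RDF;

public class FilterStream {
    public static void main(String[] args) {
        // Destination: write N-Triples to stdout.
        StreamRDF sink = StreamRDFLib.writer(System.out);
        // Wrapper that passes through only rdf:type triples.
        StreamRDF filtered = new StreamRDFWrapper(sink) {
            @Override public void triple(Triple t) {
                if ( t.getPredicate().equals(RDF.type.asNode()) )
                    super.triple(t);
            }
        };
        RDFParser.source("data.ttl").parse(filtered);  // "data.ttl" is illustrative
    }
}
```

Because the wrapper is itself a `StreamRDF`, filters of this kind can be chained before any destination, including a writer or a dataset loader.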

### Add a new language

The set of languages is not fixed. A new language,
together with a parser, can be added to RIOT as shown in
[RIOT example 5](
@@ -368,7 +368,7 @@ pass the "frame" in the `JSONLD_FRAME_PRETTY` and `JSONLD_FRAME_FLAT`

What can be done, and how it can be, is explained in the
[sample code](

### RDF Binary {#rdf-binary}

@@ -403,7 +403,7 @@ while the jena writer name defaults to a streaming plain output.

## Examples {#examples}

Example code may be found in [jena-examples:arq/examples](

### Ways to write a model
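As a minimal sketch (the resource and property URIs are made up for illustration), one common way is `RDFDataMgr.write`, choosing the syntax via a `Lang` constant:

```java
import org.apache.jena.rdf.model.Model;
import org.apache.jena.rdf.model.ModelFactory;
import org.apache.jena.riot.Lang;
import org.apache.jena.riot.RDFDataMgr;

public class WriteModel {
    public static void main(String[] args) {
        Model model = ModelFactory.createDefaultModel();
        model.createResource("http://example/s")
             .addProperty(model.createProperty("http://example/p"), "object");
        // Write the model as Turtle to stdout; other Lang constants
        // select other syntaxes (e.g. Lang.NTRIPLES, Lang.RDFXML).
        RDFDataMgr.write(System.out, model, Lang.TURTLE);
    }
}
```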

@@ -457,7 +457,7 @@ might give:
### Adding a new output format

A complete example of adding a new output format is given in the example file:
[RIOT Output example 7](

## Notes {#notes}

@@ -86,15 +86,14 @@ SSE is simply passing the calls to the writer operation from the

## Creating an algebra expression programmatically

See the example in `arq.examples.algebra.AlgebraExec`.

To produce the complete javadoc for ARQ, download an ARQ
distribution and run the ant task 'javadoc-all'.

## Evaluating an algebra expression

See the example in `arq.examples.algebra.AlgebraExec`.

```java
QueryIterator qIter = Algebra.exec(op, datasetGraph) ;
```
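Putting the steps together, a minimal sketch of compiling a query to its algebra form and evaluating it (the query string and the empty in-memory dataset are illustrative):

```java
import org.apache.jena.query.Query;
import org.apache.jena.query.QueryFactory;
import org.apache.jena.sparql.algebra.Algebra;
import org.apache.jena.sparql.algebra.Op;
import org.apache.jena.sparql.core.DatasetGraph;
import org.apache.jena.sparql.core.DatasetGraphFactory;
import org.apache.jena.sparql.engine.QueryIterator;
import org.apache.jena.sparql.engine.binding.Binding;

public class AlgebraEval {
    public static void main(String[] args) {
        // Compile the query to its algebra expression, optionally optimize it.
        Query query = QueryFactory.create("SELECT ?s ?p ?o WHERE { ?s ?p ?o }");
        Op op = Algebra.compile(query);
        op = Algebra.optimize(op);
        // Evaluate directly against a dataset graph.
        DatasetGraph datasetGraph = DatasetGraphFactory.create();
        QueryIterator qIter = Algebra.exec(op, datasetGraph);
        while (qIter.hasNext()) {
            Binding binding = qIter.next();
            System.out.println(binding);
        }
        qIter.close();
    }
}
```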
@@ -31,8 +31,8 @@ For higher SPARQL performance, ARQ can be extended at the

Applications writers who extend ARQ at the query execution level
should be prepared to work with the source code for ARQ for
specific details and for finding code to reuse. Some examples can be
found in the [arq/examples directory](

- [Overview of ARQ Query processing](#overview-of-arq-query-processing)
- [The Main Query Engine](#the-main-query-engine)
@@ -230,8 +230,8 @@ called to return a `Plan` object for the execution. The main
operation of the `Plan` interface is to get the `QueryIterator` for
the query.

See the example `arq.examples.engine.MyQueryEngine` at

## The Main Query Engine

@@ -363,7 +363,8 @@ has a convenience operation to do this):
```java
StageBuilder.setGenerator(ARQ.getContext(), myStageGenerator) ;
```

Example: `src-examples/arq.examples.bgpmatching`.
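For instance, a hedged sketch (the logging behaviour is made up for illustration) of a `StageGenerator` that wraps the currently registered one and delegates to it after printing each basic graph pattern:

```java
import org.apache.jena.query.ARQ;
import org.apache.jena.sparql.engine.main.StageBuilder;
import org.apache.jena.sparql.engine.main.StageGenerator;

public class InstallStageGenerator {
    public static void main(String[] args) {
        // Keep the currently registered generator so we can delegate to it.
        StageGenerator orig = StageBuilder.getGenerator(ARQ.getContext());
        // StageGenerator has a single method, so a lambda suffices.
        StageGenerator logging = (pattern, input, execCxt) -> {
            System.out.println("BGP: " + pattern);
            return orig.execute(pattern, input, execCxt);
        };
        StageBuilder.setGenerator(ARQ.getContext(), logging);
    }
}
```

Delegating to the previous generator means the custom code only adds behaviour; BGP matching itself is still done by the standard machinery.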

## OpExecutor

@@ -483,7 +484,8 @@ Generator or a combination of all three extension mechanism.

Only a small, skeleton custom query engine is needed to intercept
the initial setup. See the example in

While it is possible to replace the entire process of query
evaluation, this is a substantial endeavour. `QueryExecutionBase`
@@ -105,7 +105,8 @@ on the default graph. For instance:
```java
// The part of "GRAPH ?g1 { ?s1 ?p1 ?o1 }" will be ignored. Only "?s ?p ?o" in the default graph will be returned.
Iterator<Triple> triples = qexec.execConstructTriples();
```

More examples can be found at `` at

## Fuseki Support

@@ -4,4 +4,5 @@ title: ARQ - Custom aggregates

ARQ supports custom aggregate functions as allowed by the SPARQL 1.1 specification.

See [example code](
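As a hedged sketch (the URI `http://example/count2` and the counting behaviour are made up for illustration), a custom aggregate is registered with `AggregateRegistry`, supplying an `AccumulatorFactory` and a value to return when aggregating no groups:

```java
import org.apache.jena.graph.Node;
import org.apache.jena.sparql.engine.binding.Binding;
import org.apache.jena.sparql.expr.NodeValue;
import org.apache.jena.sparql.expr.aggregate.Accumulator;
import org.apache.jena.sparql.expr.aggregate.AccumulatorFactory;
import org.apache.jena.sparql.expr.aggregate.AggregateRegistry;
import org.apache.jena.sparql.function.FunctionEnv;

public class RegisterAggregate {
    public static void main(String[] args) {
        // Accumulator that simply counts the bindings it sees.
        AccumulatorFactory factory = (agg, distinct) -> new Accumulator() {
            private long count = 0;
            @Override public void accumulate(Binding binding, FunctionEnv env) { count++; }
            @Override public NodeValue getValue() { return NodeValue.makeInteger(count); }
        };
        // Value returned when there is no group to aggregate over.
        Node noGroupValue = NodeValue.makeInteger(0).asNode();
        AggregateRegistry.register("http://example/count2", factory, noGroupValue);
    }
}
```

Once registered, the aggregate can be used in a query by its URI, e.g. `SELECT (<http://example/count2>(?x) AS ?c)`.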
@@ -7,13 +7,9 @@ tree (as the parser does) or by building the algebra expression for
the query.  It is usually better to work with the algebra form as
it is more regular.

See the examples such as `arq.examples.algebra.AlgebraExec` at

See also [ARQ - SPARQL Algebra](algebra.html)

[ARQ documentation index](index.html)
@@ -11,7 +11,8 @@ operations, so in a single request graphs can be created, loaded
with RDF data and modified.

Some examples of ARQ's SPARQL Update support are to be found in the
download in
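At its simplest, an update can be sketched as follows (the subject/predicate URIs are made up for illustration):

```java
import org.apache.jena.query.Dataset;
import org.apache.jena.query.DatasetFactory;
import org.apache.jena.update.UpdateAction;
import org.apache.jena.update.UpdateFactory;
import org.apache.jena.update.UpdateRequest;

public class UpdateExample {
    public static void main(String[] args) {
        Dataset dataset = DatasetFactory.create();   // empty in-memory dataset
        // Parse an update request, then execute it against the dataset.
        UpdateRequest req = UpdateFactory.create(
            "INSERT DATA { <http://example/s> <http://example/p> 123 }");
        UpdateAction.execute(req, dataset);
        System.out.println(dataset.getDefaultModel().size());
    }
}
```

A single request can contain several operations separated by `;`, executed in order.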

The main API classes are:

@@ -117,9 +117,9 @@ At its simplest, it is:
which uses the default settings used by `RDFConnectionFactory.connect`.

See [example
and [example

There are many options, including setting HTTP headers for content types
@@ -138,7 +138,7 @@ which uses settings tuned to Fuseki, including round-trip handling of
blank nodes.

See [example
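A hedged sketch of connecting to Fuseki (the endpoint URL assumes a local Fuseki server with a dataset at `/ds`; the query is illustrative):

```java
import org.apache.jena.rdfconnection.RDFConnection;
import org.apache.jena.rdfconnection.RDFConnectionFuseki;

public class FusekiQuery {
    public static void main(String[] args) {
        // Assumes a Fuseki server running locally; the URL is illustrative.
        try (RDFConnection conn = RDFConnectionFuseki.create()
                .destination("http://localhost:3030/ds")
                .build()) {
            conn.querySelect("SELECT * { ?s ?p ?o }",
                row -> System.out.println(row));
        }
    }
}
```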

## Graph Store Protocol

@@ -92,7 +92,7 @@ The package `org.apache.jena.shacl` has the main classes.

## API Examples

@@ -57,7 +57,7 @@ The package `org.apache.jena.shex` has the main classes.


public static void main(String ...args) {
@@ -109,8 +109,8 @@ objects have been removed.
## Substitution
All query and update builders provide operations to use a query and substitute
variables for concrete RDF terms in the execution.
Unlike "initial bindings", substitution is provided in query and update builders
for both local and remote cases.
@@ -121,7 +121,7 @@ DATA` but can be used with `INSERT { ?s ?p ?o } WHERE {}` and
`DELETE { ?s ?p ?o } WHERE {}`.
Full example:
```java
// Hedged completion: the query(), substitution() and build() builder calls
// below are assumptions (Jena 4.x style); "s" and subject are illustrative.
ResultSet resultSet1 = QueryExecution.dataset(dataset)
        .query(query)
        .substitution("s", subject)
        .build()
        .execSelect();
```
@@ -136,3 +136,7 @@ used.

## Examples

@@ -6,8 +6,9 @@ This page describes how to filter quads at the lowest level of TDB.
It can be used to hide certain quads (triples in named graphs) or

The code for the example on this page can be found in the
[TDB examples](

Filtering quads should be used with care. The performance of the tuple
filter callback is critical.

@@ -24,20 +25,13 @@ graph pattern processing.
A rejected quad is simply not processed further in the basic graph
pattern and it is as if it is not in the dataset.

The filter has a signature of `java.util.function.Predicate<Tuple<NodeId>>`.
`NodeId` is the low-level internal identifier TDB uses for RDF terms.
`Tuple` is a class for an immutable tuple of values of the same type.

```java
/** Create a filter to exclude the graph http://example/g2 */
private static Predicate<Tuple<NodeId>> createFilter(Dataset ds) {
    DatasetGraphTransaction dst = (DatasetGraphTransaction)(ds.asDatasetGraph()) ;
    DatasetGraphTDB dsg = dst.getBaseDatasetGraph() ;
```
@@ -48,29 +42,28 @@ an immutable tuples of values of the same type.
```java
    final NodeId target = nodeTable.getNodeIdForNode(NodeFactory.createURI("http://example/g2")) ;

    // Filter for accept/reject of a quad as being visible.
    // Quads are 4-tuples, triples are 3-tuples.
    Predicate<Tuple<NodeId>> filter = item -> {
        if ( item.size() == 4 && item.get(0).equals(target) )
            return false ;   // Reject
        return true ;        // Accept
    } ;
    return filter ;
}
```

To install a filter, put it in the context of a query execution
under the symbol `SystemTDB.symTupleFilter` then execute the query as normal.

```java
Dataset ds = ... ;
Predicate<Tuple<NodeId>> filter = createFilter(ds) ;
Query query = ... ;
try (QueryExecution qExec = QueryExecution.dataset(ds)
        .query(query)
        .set(SystemTDB.symTupleFilter, filter)
        .build() ) {
    ResultSet rs = qExec.execSelect() ;
    ...
}
```
