
Please add your own use cases to this list.

JSON to/from any RDF

A client or server application can access any LDPC or LDPR resource using the Accept header to request its preferred resource representation, while still accepting any representation format, including JSON-LD, Turtle, RDF/XML, N3, etc. What the JavaScript client ultimately sees and uses, however, is usually JSON-LD.
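
A minimal sketch of such content negotiation using the Fetch API (the resource URL is illustrative):

// Request an LDP resource, preferring JSON-LD but accepting other RDF serializations
fetch('http://example.org/container/resource', {
  headers: { 'Accept': 'application/ld+json, text/turtle;q=0.8, application/rdf+xml;q=0.5' }
})
  .then(function (response) { return response.json(); })   // assumes JSON-LD was returned
  .then(function (jsonld) { console.log(jsonld); });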

Turtle parser

A Turtle parser takes as input a Turtle stream, and gives as output a stream of triples. It can also take a prefix map as input, and can give a prefix map as output.
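
A usage sketch of such a parser, based on the N3.js callback API (the Turtle input is illustrative):

var N3 = require('n3');
var parser = N3.Parser();
parser.parse('@prefix c: <http://example.org/cartoons#>. c:Tom a c:Cat.',
  function (error, triple, prefixes) {
    if (triple)
      console.log(triple.subject, triple.predicate, triple.object);
    else
      console.log('Done. Prefix map:', prefixes);  // the prefix map is delivered at the end
  });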

N3.js parser

// Inside the N3.js parser: each parsed statement is passed to the callback as a triple (with optional graph)
this._callback(null, { subject:   subject,
                       predicate: this._predicate,
                       object:    this._object,
                       graph:     graph || '' });

Ruben Verborgh (@RubenVerborgh)

TriG serializer

A TriG serializer takes as input a stream of quads and outputs a stream of TriG text. It can also take a prefix map and formatting settings as input.
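
A usage sketch based on the N3.js writer, assuming it accepts an optional graph argument and a prefix map (the IRIs are illustrative):

var N3 = require('n3');
var writer = N3.Writer({ prefixes: { c: 'http://example.org/cartoons#' } });
writer.addTriple('http://example.org/cartoons#Tom',
                 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
                 'http://example.org/cartoons#Cat',
                 'http://example.org/graphs#g1');  // the graph component turns the output into TriG
writer.end(function (error, result) { console.log(result); });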

N3.js writer

// Inside the N3.js writer: each incoming triple/quad is encoded and appended to the output stream
this._write((this._subject === null ? '' : '.\n') +
            this._encodeSubject(this._subject = subject) + ' ' +
            this._encodePredicate(this._predicate = predicate) + ' ' +
            this._encodeObject(object), done);

Ruben Verborgh (@RubenVerborgh)

SPARQL query engine

A SPARQL query engine takes as input a SPARQL query, and outputs either a stream of variable mappings or a stream of quads.
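
A usage sketch of such an engine, along the lines of the Linked Data Fragments client's documented API (the endpoint URL is illustrative):

var ldf = require('ldf-client');
var fragmentsClient = new ldf.FragmentsClient('http://fragments.dbpedia.org/2015/en');
var query = 'SELECT ?p ?o WHERE { <http://dbpedia.org/resource/Linked_data> ?p ?o } LIMIT 10';
var results = new ldf.SparqlIterator(query, { fragmentsClient: fragmentsClient });
results.on('data', function (mapping) { console.log(mapping); });  // a stream of variable mappings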

Linked Data Fragments Client

// Inside the Linked Data Fragments client: extending a solution mapping by binding `left` to `right`
if (util.isVariableOrBlank(left)) {
  if (!(left in mapping))
    mapping[left] = right;
  else if (right !== mapping[left])
    throw new Error(['Cannot bind', left, 'to', right,
                     'because it was already bound to', mapping[left] + '.'].join(' '));
}
else if (left !== right) {
  throw new Error(['Cannot bind', left, 'to', right].join(' '));
}

Ruben Verborgh (@RubenVerborgh)

Regex object value search

For wildcard searches, a regular expression can be used to search graphs/datasets for matching triples.

graph.match(null, null, new RegExp('.*test.*'))

Thomas Bergwinkl (@bergos)

Gremlin language to query graphs/datasets

To traverse complex graphs, the Gremlin language could be used.

// Draft
gremlin> g.V().has('s:name','Die Hard').inE('s:rated').values('s:stars').mean()

Thomas Bergwinkl (@bergos)

Parsers with debug information

Parsers can produce additional debug information, like line and column numbers or the DOM element for HTML-based formats, that should be attached to nodes/triples.

ShExDemo, Green Turtle

// TODO: example code
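
A hypothetical sketch (the parser and the `debug` property are assumptions, not an existing API):

// Hypothetical: a parser that attaches source positions to every emitted triple
var parser = new DebugTurtleParser();
parser.parse(turtleStream, function (error, triple) {
  if (triple)
    console.log(triple.subject + ' parsed at line ' + triple.debug.line +
                ', column ' + triple.debug.column);
});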

Thomas Bergwinkl (@bergos)

Local copy of graph with post sync

A portion of a graph is copied to non-persistent storage for modifications, either from a server or locally. After the modifications are done (probably validated by the user), it is possible to sync the portion back to the origin graph.

// TODO: example code
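
A hypothetical sketch of the flow (the graph API names are assumptions):

// Hypothetical: copy a portion of the origin graph, edit it locally, then sync back
var localCopy = originGraph.match('http://example.org/alice', null, null).clone();
localCopy.add(editedTriple);                   // user edits, possibly validated afterwards
var changes = localCopy.difference(originGraph.match('http://example.org/alice', null, null));
originGraph.addAll(changes);                   // sync the accepted changes back to the origin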

Michael Luggen (@l00mi)

Client graph in sync with server graph

Either via pull or via WebSocket push, the server keeps the local graph (or a subgraph of it) in sync with the persistent representation on the server.

// TODO: example code
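
A hypothetical sketch of the push variant over WebSockets (the message format and graph API are assumptions):

// Hypothetical: the server pushes graph changes, the client applies them locally
var socket = new WebSocket('wss://example.org/graph-updates');
socket.onmessage = function (event) {
  var change = JSON.parse(event.data);          // e.g. { added: [...], removed: [...] }
  change.removed.forEach(function (triple) { clientGraph.remove(triple); });
  change.added.forEach(function (triple) { clientGraph.add(triple); });
};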

Michael Luggen (@l00mi)

Using a graph as a reactive data source

As the common data model for a JS application, changes to the graph get propagated to all currently active views in the interface. For this use case, the graph representation needs to emit events for changes.

References:

https://www.meteor.com/tracker (Meteor Implementation in JS)

https://github.com/meteor/meteor/wiki/Tracker-Manual (Extensive Manual)

https://www.meteor.com/blaze (Templating on top of Tracker)

https://facebook.github.io/react/ (Facebook Implementation in JS)

// TODO: example code
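
A hypothetical sketch of such change events (the event and graph API names are assumptions):

// Hypothetical: the graph emits a 'change' event that active views subscribe to
graph.on('change', function (added, removed) {
  views.forEach(function (view) { view.render(graph); });
});
graph.add(triple);   // adding a triple triggers the 'change' event and re-renders the views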

UI Events and UI State as RDF

Following the Functional Reactive Programming idea, the RDF graph(s) would be updated when an event is received. If the UI events and the UI state (e.g., of the browser) are represented as reactive RDF sources themselves, the logic may be expressed entirely on top of the RDF data model (e.g., with SPARQL rules).

// TODO: example code

Miguel Ceriani (@miguel76)

RDF-based Web Components

This use case addresses building reusable UI Web Components to view, edit, and browse RDF data. The LD-R framework employs ReactJS components in an isomorphic software architecture to realize the vision of Adaptive Linked Data-driven User Interfaces.

Ali Khalili (@ali1k)

Persisting data indexed for fast searches

LevelGraph uses the hexastore approach for indexing stored triples. Following this approach, LevelGraph creates six indices for every triple, in order to access them as fast as possible.

Currently, both LevelGraph-N3 and LevelGraph-JSONLD use the N3.js representation of triples when storing them. As of today, LevelGraph doesn't provide free-text search, and matching literals requires providing the exact datatype or language tag.

var level = require('level')

// Write: store the same triple under all six hexastore key permutations
var db = level('your_database')
var triple = JSON.stringify({
  subject: 'A', predicate: 'C', object: 'B'
})
db.batch([
  { key: 'spo::A::C::B', value: triple, type: 'put' },
  { key: 'sop::A::B::C', value: triple, type: 'put' },
  { key: 'ops::B::C::A', value: triple, type: 'put' },
  { key: 'osp::B::A::C', value: triple, type: 'put' },
  { key: 'pso::C::A::B', value: triple, type: 'put' },
  { key: 'pos::C::B::A', value: triple, type: 'put' }
])

// Read: a range scan over the pso index returns all triples with predicate 'C'
var stream = db.createReadStream({
  start: 'pso::C::',
  end:   'pso::C::\xff'
})

stream.on('data', function(data) {
  console.log(data.value)
})

elf Pavlik (@elf-pavlik)

Data-Aware CMS

I heard about the call for use cases from a mail after the original post and misunderstood it as being about RDFJS in general. I've just discovered the actual purpose :) But most of this is still valid here, I reckon, with minor tweaking.

Tweak: call it the ability to pass data between two separate Data-Aware CMSs...

In its simplest form, a blog engine.

User launches page, enters content (HTML, markdown, whatever), adds data annotations, clicks Post. System augments data as appropriate (e.g. from links embedded in content), publishes material on web.

Templating Engine

Not strictly a use case, but nice-to-have

I've been getting a lot of mileage out of using Mustache templating (the Hogan engine) inside SPARQL queries/inserts in the browser, running those against a remote SPARQL 1.1 store. I've already built a simple wiki that way and am currently working on a vocab editor using the same technique. It actually bypasses the need for an RDF representation in the browser.
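
A sketch of that technique with Hogan.js (the template and resource IRIs are illustrative):

var Hogan = require('hogan.js');
var template = Hogan.compile(
  'INSERT DATA { <{{subject}}> <http://purl.org/dc/terms/title> "{{title}}" }');
var update = template.render({ subject: 'http://example.org/page/1', title: 'Hello' });
// `update` is then sent to the remote SPARQL 1.1 store's update endpoint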

But, the mustache side of it is only really doing string replacement, with conditionals & simple loops. Given the way RDFJS seems to be headed, support for such operations is likely to be built into the API, at a more RDF/SPARQL-aware level. So flip out mustache, flip in RDFJS.

Changelog Support

Not strictly a use case, but nice-to-have

An easy way of monitoring changes to the graph(s) in use, allowing for change reversion etc.

A prerequisite would be some form of diff. Somewhere -- Apache libs? -- RetoBG put a decent algorithm (RDF Molecules based, I think) for rdfdiff, implemented in Java. I've recently been toying with a quick & dirty approach: make a quasi-canonical representation as N-Triples, then do a line-by-line diff. (A bit stupid when it comes to bnodes, but probably adequate for my current needs.)
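
A sketch of that quick & dirty approach (assumes both inputs are already serialized as N-Triples strings):

// Canonicalize-and-compare: sort the N-Triples lines, then diff them line by line
function ntriplesDiff(before, after) {
  var beforeLines = before.trim().split('\n').sort();
  var afterLines  = after.trim().split('\n').sort();
  return {
    removed: beforeLines.filter(function (line) { return afterLines.indexOf(line) < 0; }),
    added:   afterLines.filter(function (line) { return beforeLines.indexOf(line) < 0; })
  };
}
// As noted above, this is naive with blank nodes, since their labels are not stable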

All stuff that could be a simple layer on top of what's already been proposed for rdfjs, hopefully done reasonably efficiently.

Reasoning: asserted or inferred triple?

Keeping reasoners in mind, the triple/quad representation could provide an easy way to distinguish asserted triples from inferred ones.
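
A hypothetical sketch of how this distinction might be exposed (the flag and graph IRI are assumptions):

// Hypothetical: mark provenance directly on the quad, or use a dedicated named graph
var quad = { subject: 'http://example.org/socrates',
             predicate: 'http://www.w3.org/1999/02/22-rdf-syntax-ns#type',
             object: 'http://example.org/Mortal',
             inferred: true };   // flag set by the reasoner
// alternatively, inferred triples could live in a dedicated named graph such as
// 'http://example.org/graphs#inferred'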

elf Pavlik (@elf-pavlik)

Full text search

A full-text search graph implementation with the graph interface and additional methods for full-text search would keep memory usage low, because there are no copy operations. This requires a way to pass the graph constructor to any library that produces triples/graphs.

var parser = new Parser({rdf: FullTextSearchGraph})

parser.parse(textStream).then(function (graph) {
  graph.textSearch('searchstring')
})

Thomas Bergwinkl (@bergos)

Node type variable

SPARQL engines and also LDP Patch implementations require a special node type for variables.

//TODO
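
A hypothetical sketch (the factory method and interfaceName value are assumptions):

// Hypothetical: a Variable term next to NamedNode, BlankNode and Literal
var variable = rdf.createVariable('name');   // would represent ?name in a pattern
console.log(variable.interfaceName);         // 'Variable'
console.log(variable.toString());            // '?name'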

Thomas Bergwinkl (@bergos)

Graph to object mapping

To handle graph data in a JS way, there should be a wrapper around a graph that dynamically creates setters and getters to read/write the graph data. Prefixes can also be mapped to properties.

SimpleRDF

var profile = new SimpleRdf(profileGraph, {'': 'http://xmlns.com/foaf/0.1/'})

console.log(profile.name)

Thomas Bergwinkl (@bergos)

JavaScript object to RDF graph mapping

JSON-LD contexts allow a subset of JavaScript objects (JSON structures) to be represented as RDF. Specifically, JSON is acyclic, while RDF can have cycles. The subset of JavaScript objects that can be transformed to RDF could be extended to those having cyclic references too, possibly still using JSON-LD contexts. That way, JavaScript structures with back-links could be represented in RDF without resorting to more complex transformation techniques.

TODO: code example
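
A sketch of a cyclic JavaScript structure that could still be mapped to RDF via a JSON-LD context (the context and IRIs are illustrative):

var context = { name: 'http://xmlns.com/foaf/0.1/name',
                knows: { '@id': 'http://xmlns.com/foaf/0.1/knows', '@type': '@id' } };
var alice = { '@id': 'http://example.org/alice', name: 'Alice' };
var bob   = { '@id': 'http://example.org/bob',   name: 'Bob', knows: alice };
alice.knows = bob;   // back-link: not expressible in plain JSON, yet just two foaf:knows triples in RDF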

Miguel Ceriani (@miguel76)

Graph traversing

It should be possible to traverse graphs in a JS way. Gremlin is a generic graph traversal language; a more RDF/JS-specific implementation would allow more readable code.

Clownface

var profile = new clownface.Graph(profileGraph, webId)

// lists all foaf:knows -> foaf:name in the same graph
console.log(profile.out('foaf:knows').out('foaf:name').literal().join())

var profile = new clownface.Store(ldpStore)

// lists all foaf:knows -> foaf:name, but jumps to the other profiles
profile.jump(webId).out('foaf:knows').jump().out('foaf:name').literal().then(function (names) {
  console.log(names.join())
})

Thomas Bergwinkl (@bergos)

RDF Data Layer for Client Applications

JavaScript client apps are becoming more and more sophisticated -- especially in the browser -- which calls for a layered architecture to keep state and state synchronization manageable. Handling RDF data on the client poses specific challenges that could be solved in a dedicated layer. The "data layer" described here is neither concerned with how resources are loaded (through a REST API, for example), nor with how the application maintains its state; it is a layer in between.

I envision the aspects to be covered by this layer as described in the following. For every aspect I provide an example use case (UC).

Transformation: Transform between the various RDF serialization formats.

UC: Data read from a restful API in RDF/XML serialization is parsed and merged into the data layer's local graph.

UC: A sub-graph must be written back to a restful API which accepts only the Turtle serialization.

Shaping/Framing: Expose resources according to a desired form, in the case of JSON-LD through framing [1]. For other RDF serializations, I'm not aware of an equivalent.

UC: A JS app using an MV* framework with some data-binding mechanism binds to properties of resources in the data layer, exposed in JSON format. The bindings are static and rely on object structure and property names. Therefore, the polymorphism inherent in JSON-LD must be overcome.

UC: Data which has to be written back to a restful API must follow a certain shape. The data layer generates this shape from the local graph and the shape definition.
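
A sketch of the framing aspect with jsonld.js (`doc` is a JSON-LD document obtained elsewhere; the frame and context are illustrative):

var jsonld = require('jsonld');
var frame = { '@context': { name: 'http://xmlns.com/foaf/0.1/name' },
              '@type': 'http://xmlns.com/foaf/0.1/Person' };
jsonld.frame(doc, frame, function (error, framed) {
  console.log(framed);   // a predictable object structure that view bindings can rely on
});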

Validation: Resources can be validated against a constraint specified in an appropriate language like SHACL [2].

UC: Data input by users through a GUI is validated against a schema noted in SHACL.

Automated UI Generation: Resource constraints (+ additional information) can be used to dynamically generate intuitive and easy to use GUIs.

UC: An application consuming a REST API can generically render forms to prompt for user input to drive API operations. The API provides a specification in the form of a SHACL shape describing validation rules for the provided operations. The data layer can be consulted for a specification that is sufficient to generate a GUI.

l10n: The data layer is aware of rdf:langString literals, and for read access it's possible to specify the desired translation. When adding new resources or modifying existing ones, the layer merges them back accordingly.

UC: A restful API provides properties in resources as JSON-LD language containers. The application is only interested in a single language. The data layer can be asked to flatten the corresponding properties so that the client application can bind to the properties just by name, without considering the language keys.

Additional concerns that might make sense to be solved in some way:

  • Actual state synchronization
  • Caching
  • Querying

[1] http://json-ld.org/spec/latest/json-ld-framing/
[2] http://w3c.github.io/data-shapes/shacl/

Thomas Hoppe (@vanthome)

Generate Documentation for RDFS/OWL Vocabularies

When creating or browsing an RDFS or OWL vocabulary/ontology, a JavaScript tool to generate a static ReSpec-styled specification document would be useful -- similar to the LODE toolkit, but handled client-side, where special tags are used to populate the title, editors, etc. We are doing something like this with Python scripts and ReSpec for a data exchange standard in brain imaging (NIDM), but it isn't a general tool.

Nolan Nichols (@nicholsn)

SPARQL Resultset Handler

SPARQL (SELECT) results are available in several serializations: JSON, XML, CSV, and TSV. A wrapper for these serializations is needed to easily build an interface that connects with any possible SPARQL endpoint. While these serializations are part of the SPARQL standard, error handling by SPARQL endpoints is not: some endpoints provide error messages in RDF, whereas others pass the error message via the HTTP response text or via a header. A simple implementation that handles these different error-handling methods, and that provides a wrapper around the different resultset serializations, is available in the YASR library.
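
A sketch of consuming the standard SPARQL JSON results serialization (the endpoint URL is illustrative; error handling is omitted):

var query = 'SELECT ?s ?p ?o WHERE { ?s ?p ?o } LIMIT 10';
fetch('http://example.org/sparql?query=' + encodeURIComponent(query), {
  headers: { 'Accept': 'application/sparql-results+json' }
})
  .then(function (response) { return response.json(); })
  .then(function (json) {
    json.results.bindings.forEach(function (binding) {
      console.log(binding.s.value, binding.p.value, binding.o.value);
    });
  });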

Laurens Rietveld (@LaurensRietveld)

YASGUI

YASGUI (Yet Another SPARQL GUI) is a SPARQL query editor that includes many of the features that developers are used to in IDEs. Its features include sharing queries via permalinks, syntax highlighting and checking, autocompletion functionality, and rendering of SPARQL results in (pivot) tables and charts. Others can use YASGUI via the public website, or include it as a JS library in their own projects.

SHACL

The Shapes Constraint Language (SHACL) is an evolving W3C standard to describe structural constraints on RDF graphs. One aspect of SHACL is an extension mechanism that allows users to define almost arbitrary integrity checks in one of the supported extension languages. SPARQL is the only built-in extension language currently specified, but JavaScript should become another extension language. For this to work, a standard graph API with the usual classes (Node, Literal, BlankNode, IRI) and the usual functions (findSPO) would be required. Input could be a graph encoded in JSON-LD.

Holger Knublauch (https://github.com/HolgerKnublauch)

Index RDF data using JSON-LD Contexts

By applying a JSON-LD context and indexing each object (at least on type and reverse relations (OPS)), it's possible to make a graph of data accessible to developers without exposing more details than necessary.

Handling the data as a connected in-memory graph of plain JS objects is cheaper, more uniform, and limits redundant choices (in code) for a specific solution. Of course, this is harder to do (if at all possible) for very diverse data, where you cannot expect uniformity in the choice of properties, datatypes, cardinalities, etc. Still, this might be solvable by inference and/or error-correcting shape checking performed on the full RDF prior to application consumption.

It comes down to whether data shapes can be chosen up front, or have to be decided at each branch point of the consuming code.

It's possible to define a simple query and path traversal language upon this restricted view of RDF.
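
A sketch of such indexing over a flattened, compacted JSON-LD document (the document shape is an assumption):

// Index every node object by @id and by @type for fast lookups and reverse access
var byId = {}, byType = {};
doc['@graph'].forEach(function (node) {
  byId[node['@id']] = node;
  [].concat(node['@type'] || []).forEach(function (type) {
    (byType[type] = byType[type] || []).push(node);
  });
});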

References:

http://json-ld.org/spec/latest/json-ld-connect/

http://niklasl.github.io/rdfa-lab/examples/rdfa-api-comparisons.html

Niklas Lindström (@niklasl)

Treating RDF as Plain Objects

(N.B. this approach is mainly a stopgap solution and an experiment to challenge the perceived prerequisites for using RDF data in applications.)

JSON-LD can itself be used as an in-memory representation of RDF, essentially bypassing the need to define a quad representation.

Implementation (TriG to JSON-LD): https://github.com/niklasl/ldtr

Arguments for this approach:

  • Turtle can be used instead of JSON as a serialization format for simple applications
  • This can be used as a "teaching aid" to show the isomorphism of different RDF serializations (up to a certain point, since there are minute differences between serializations).

Note that this does not work for stream-based usage in general (unless the resulting object is built lazily using generators), and has educational drawbacks (since the notion of properties as real resources continues to be hidden away). See also my own arguments against this approach in another case, noting that consumers should be in control of resulting data shape: https://github.com/niklasl/rdfa-lab/wiki/RDFa-to-JSON-LD

It is worth considering the benefits though, and if they are valuable to users, considering various means of achieving this (with various underlying costs).

Niklas Lindström (@niklasl)