Mapping between Triples and IRIs #23

Open
HolgerKnublauch opened this issue Oct 30, 2020 · 11 comments
Labels
discussion Open ended discussion that does not call for a specific action

Comments

@HolgerKnublauch

There has been some discussion about "long URIs" to represent embedded triples in a backwards-compatible way. If we go down this road, we need to decide on a syntax for this mapping. The mapping should be bi-directional so that systems can parse URIs back to triples if needed. Ideally, the URIs should be as short as possible and be reasonably human-readable in case someone encounters them through a "leak".

PROPOSAL:
Given a triple S, P, O, produce an IRI using the template urn:triple:${encode(S)}:${encode(P)}:${encode(O)}, where the encode(N) function is (JavaScript) encodeURIComponent(ttl(N)) and ttl(N) is the Turtle serialization of N, without using prefixes but using absolute IRIs only. Blank nodes would become _:ID, where ID is some internal ID that the current system uses (e.g. the Jena blank node label). See the sections including https://www.w3.org/TR/turtle/#sec-iri. For literals, the available short forms should be used, e.g. "1"^^xsd:integer becomes 1; see https://www.w3.org/TR/turtle/#literals
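
For illustration, a minimal TypeScript sketch of this template (assuming the ttl(N) step has already been applied, i.e. the inputs are Turtle term strings with absolute IRIs; tripleToUrn is just an illustrative name, not part of the proposal):

  // Inputs are Turtle term strings, e.g. "<http://example.org/s>", "_:b1", or "1".
  type TermString = string;

  function tripleToUrn(s: TermString, p: TermString, o: TermString): string {
    const encode = (n: TermString) => encodeURIComponent(n);
    return `urn:triple:${encode(s)}:${encode(p)}:${encode(o)}`;
  }

  // tripleToUrn("<http://example.org/s>", "<http://example.org/p>", "1")
  // => "urn:triple:%3Chttp%3A%2F%2Fexample.org%2Fs%3E:%3Chttp%3A%2F%2Fexample.org%2Fp%3E:1"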

We might want to use 'a' for rdf:type, as there is a large number of triples of this form, but I have no strong opinion on that. Potentially the system could also rely on a number of hard-coded "well-known" prefixes such as rdf, owl, sh, skos. This would further shorten the URIs in case an implementation actually keeps them in memory.

See http://datashapes.org/reification.html#uriReification for an earlier version that is currently implemented in TopBraid. I have since convinced myself that relying on locally defined prefixes (per file) is not desirable, as prefixes may change and then these identifiers break.

@VladimirAlexiev

GraphDB and rdf4j use urn:rdf4j:triple:xxx where xxx stands for the Base64 URL-safe encoding of the N-Triples representation of the embedded triple.
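
For reference, a rough TypeScript sketch of that scheme as described (this is not rdf4j's actual code; it assumes the embedded triple is already serialized as a single N-Triples statement, and it uses Node's Buffer for URL-safe Base64):

  function toRdf4jTripleIri(ntriple: string): string {
    // Base64url-encode the N-Triples representation of the embedded triple.
    const b64url = Buffer.from(ntriple, "utf8").toString("base64url");
    return `urn:rdf4j:triple:${b64url}`;
  }

  function fromRdf4jTripleIri(iri: string): string {
    // Invert the mapping: strip the scheme prefix and decode back to N-Triples.
    const b64url = iri.slice("urn:rdf4j:triple:".length);
    return Buffer.from(b64url, "base64url").toString("utf8");
  }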

@HolgerKnublauch

OK, Base64 is an option (assuming we agree that the :rdf4j part can simply be removed in a standardized form).

Comments:

  • N-Triples doesn't use any namespace abbreviations, which would cause quite a bit of bloat, e.g. when xsd:date has to be spelled out each time. I would argue that for brevity we should define standard prefixes and require their use.
  • Base64 is not human-readable, while URL-encoded strings are at least manageable.

Why did you use Base64? Does it produce shorter URIs on average?

Does RDF4J ever store these long URIs internally, or does it use SPO pointers and only produce the URIs when needed (i.e. rarely)?
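
For a quick, purely illustrative size comparison of the two candidate encodings on a single N-Triples statement (not a systematic benchmark):

  const nt = '<http://example.org/s> <http://example.org/p> "1"^^<http://www.w3.org/2001/XMLSchema#integer> .';
  const urlEncoded = encodeURIComponent(nt);                        // grows only where characters need escaping
  const base64url = Buffer.from(nt, "utf8").toString("base64url");  // adds ~33% over the raw byte length
  console.log(urlEncoded.length, base64url.length);

Which one is shorter on average depends on how many characters of the input need percent-escaping.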

@VladimirAlexiev

I vote against relying on prefixes because they can be redefined locally and even xsd is not standardized (some people use xs).

Can we use some short hash instead of base64?

@HolgerKnublauch

HolgerKnublauch commented Nov 6, 2020

On prefixes, we had similar discussions in the SHACL-SPARQL work and noted that prefix declarations are not really an RDF graph concept, but merely a feature of serializations. They do not necessarily "survive" round-tripping and are therefore generally not reliable, as you also say. However, we need to keep in mind that some implementations of a long-URI policy may in fact store these URIs as real strings, and in that case we should aim at keeping the URIs as short as reasonable. A catalog of prefixes such as

[ rdf, rdfs, owl, sh, xsd, skos ]

would hopefully be quite easy to agree on and would shorten the majority of triples considerably, especially with datatypes and in common cases like rdf:type and rdfs:comment triples. These hard-coded abbreviations would improve not only memory consumption but also human-readability.
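
For concreteness, a sketch of what such a hard-coded catalog could look like (the namespace list is just the one suggested above; abbreviate is an illustrative helper, applied before URL-encoding the term):

  const PREFIXES: Record<string, string> = {
    "http://www.w3.org/1999/02/22-rdf-syntax-ns#": "rdf:",
    "http://www.w3.org/2000/01/rdf-schema#": "rdfs:",
    "http://www.w3.org/2002/07/owl#": "owl:",
    "http://www.w3.org/ns/shacl#": "sh:",
    "http://www.w3.org/2001/XMLSchema#": "xsd:",
    "http://www.w3.org/2004/02/skos/core#": "skos:",
  };

  // Abbreviate an absolute IRI against the fixed catalog; fall back to the <...> form.
  function abbreviate(iri: string): string {
    for (const [ns, prefix] of Object.entries(PREFIXES)) {
      if (iri.startsWith(ns)) return prefix + iri.slice(ns.length);
    }
    return `<${iri}>`;
  }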

As for hash numbers: how would they uniquely identify triples? They cannot be parsed back.

@blake-regalia

The way GraphDB does it is perfect IMO.

N-Triples doesn't use any namespace abbreviations

Exactly 👍

prefix declarations are not really an RDF graph concept, but merely a feature of serializations.

Yes, prefixes should absolutely be avoided.

However, we need to keep in mind that some implementations of a long-URI policy may in fact store these URIs as real strings, and in that case we should aim at keeping the URIs as short as reasonable.

I wouldn't worry about implementations in this regard. We should focus on the serialization; the data model does not change, and implementors will choose the appropriate data structures.

The mention of long URLs is interesting. As of today, the de facto maximum URL string length widely supported on the interwebz is about 2,000 characters, which would leave about 2,500 characters worth of content unencoded.

@HolgerKnublauch

Would you help me understand your reason why prefixes should be absolutely avoided?

Are the URL string length restrictions relevant for IRIs?

@blake-regalia

blake-regalia commented Nov 10, 2020

Are the URL string length restrictions relevant for IRIs?

Ah yes, I meant to mention that I could see this becoming a concern for dereferencing long URLs which encode several layers of embedded RDF* triples this way. Although I imagine it would likely never happen in practice.

Prefixes should be avoided mainly because they introduce ambiguity to an otherwise canonical form. If prefixes are allowed, then there can be two IRIs which encode semantically equivalent triples but manifest as different strings. While it may reduce string length and ease readability, it comes at a great cost to implementations since they must first normalize every string before storing or comparing. Also, prefixes are not in any way intrinsic to the specification (e.g., there is no ontology or set of vocabulary terms RDF-star uses other than maybe rdf) so selecting a set of prefixes would be rather arbitrary and preferential.
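
As a concrete illustration (made-up example IRIs, using the urn:triple: template from above): the same rdf:type triple could surface either as

  urn:triple:%3Chttp%3A%2F%2Fexample.org%2Fs%3E:rdf%3Atype:%3Chttp%3A%2F%2Fexample.org%2FC%3E

or, with the namespace spelled out, as

  urn:triple:%3Chttp%3A%2F%2Fexample.org%2Fs%3E:%3Chttp%3A%2F%2Fwww.w3.org%2F1999%2F02%2F22-rdf-syntax-ns%23type%3E:%3Chttp%3A%2F%2Fexample.org%2FC%3E

i.e. two different strings for the same semantic content.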

@HolgerKnublauch

HolgerKnublauch commented Nov 10, 2020 via email

@pchampin pchampin added the semantics About the semantics of RDF-star label Nov 10, 2020
@VladimirAlexiev

  • I think readability is not an important requirement, since once you go 2-3 levels of nesting you'll get an unreadable mess no matter what encoding you use.
  • Limiting length is a legitimate requirement.
  • Holger points out a requirement of parsability (invertibility). I hadn't thought about it, but I now think it's important, e.g. to parse and reconstruct RDF* from N-Triples*.

Using a set of fixed prefixes is a very small step towards limiting length and doesn't solve the problem.
E.g. what would be the encoding of this RDF* triple:

<<:Michail_Sholokhov :wrote "<full text of And Quiet Flows the Don, all 5k pages of it>" >>
  :disputedBy :A_Chernov.

I think we need to pick some compression method.
E.g. EXI (https://en.wikipedia.org/wiki/Efficient_XML_Interchange) uses Huffman coding to represent XML efficiently on constrained (IoT) devices.
See https://www.w3.org/TR/exi/, https://www.w3.org/TR/2009/WD-exi-evaluation-20090407/
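
To illustrate the general "compress, then encode" idea (using deflate rather than EXI, and a made-up urn:triple:z: marker, purely as a sketch):

  import { deflateRawSync, inflateRawSync } from "node:zlib";

  function compressToIri(nt: string): string {
    // nt is the N-Triples form of the embedded triple.
    const compressed = deflateRawSync(Buffer.from(nt, "utf8"));
    return `urn:triple:z:${compressed.toString("base64url")}`; // the "z:" marker is invented here
  }

  function decompressFromIri(iri: string): string {
    const b64url = iri.slice("urn:triple:z:".length);
    return inflateRawSync(Buffer.from(b64url, "base64url")).toString("utf8");
  }

Whether the compression pays off depends on the input; for short triples the deflate overhead can even make the result longer.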

@HolgerKnublauch

It is quite easy to come up with cases where any algorithm will behave poorly. Going down multiple levels of nesting (i.e. statements about statements about statements) is one of those, but is this really happening in practice? Likewise, if anyone stores a whole book text as an RDF literal then the database will suffer no matter what.

I am open to compression algorithms, assuming their trade-off is worth it. Keep in mind that we are talking about URIs, so any compressed binary format may require an extra level of URL encoding. You'd end up with quite a layering of algorithms, which complicates the assessment. QNames already solve compression in the RDF world, but they only work if we either define a comprehensive catalog of common prefixes or another mechanism to safely reference local prefixes (which I don't think is possible).

A proper scientific approach would be to collect realistic sample data, let the candidate conversion algorithms do their work, and compare size versus serialization/parsing performance, and also readability (which I wouldn't want to give up on yet). The problem then becomes a matter of proper engineering.

So: does anyone have some example data?
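
In that spirit, a hypothetical harness for such a comparison might look like this (sample.nt and the set of candidate schemes are placeholders; it only measures encoded size, not parsing performance or readability):

  import { readFileSync } from "node:fs";
  import { deflateRawSync } from "node:zlib";

  // Each line of the sample file is assumed to be one N-Triples statement.
  const lines = readFileSync("sample.nt", "utf8").split("\n").filter(l => l.trim());
  const schemes: Record<string, (nt: string) => string> = {
    urlEncoded: nt => encodeURIComponent(nt),
    base64url: nt => Buffer.from(nt, "utf8").toString("base64url"),
    deflated: nt => deflateRawSync(Buffer.from(nt, "utf8")).toString("base64url"),
  };
  for (const [name, encode] of Object.entries(schemes)) {
    const total = lines.reduce((sum, nt) => sum + encode(nt).length, 0);
    console.log(name, total);
  }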

@pchampin

pchampin commented Jul 1, 2021

A long time ago, I flagged this discussion as relevant to semantics, but in retrospect it seems to me that it is more about implementations. Semantically, this method raises problems as long as blank nodes are involved, because the blank node label that is put into the IRI is irrelevant for the semantics (actually, it is even irrelevant for the abstract syntax). Of course, implementations can rely on that internally and "do the right thing" under the hood with blank node labels.

Therefore, I am refiling this issue as discussion and removing the semantics label. Shout if you disagree.

@pchampin pchampin added discussion Open ended discussion that does not call for a specific action and removed semantics About the semantics of RDF-star labels Jul 1, 2021