All the basics of LinkedDataHub. From installation to customizing the model and user interface.
This guide will show how a LinkedDataHub application can be used to manage domain-specific RDF classes and instances. As an example, we will use SKOS concepts and concept schemes.
Note that most management actions can also be performed using the CLI (Command Line Interface). Where applicable, the UI and CLI instructions are shown side by side. The UNESCO Thesaurus demo app demonstrates how SKOS vocabularies can be managed in LinkedDataHub.
Setup is only required if you plan to run your own instance of LinkedDataHub. It consists of a few steps, which involve creating a configuration file and running a docker-compose command.
If you want to use an existing instance of LinkedDataHub, proceed to the next step.
With LinkedDataHub, you obtain a WebID and use its client certificate for authentication. You can obtain the WebID either by setting up your own instance or by signing up on an existing instance. Alternatively, you can authenticate using your social login.
After you log in to (authenticate with) a LinkedDataHub instance, the next step is getting an authorization from its owners that allows you to view and possibly append or edit documents. That is done by issuing an access request.
You will need to create documents if you want to store data in LinkedDataHub. Documents are RDF Linked Data resources as well as named graphs in the application's dataset.
There are two types of documents supported by LinkedDataHub: containers and items. Containers can have children documents, just like folders in a filesystem. Items usually contain content and/or are paired with non-information resources such as abstract concepts or physical objects.
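To make the distinction concrete, a description of the two document types might look roughly like this. This is an illustrative sketch only — the dh: namespace and the sioc:has_container property are assumptions, not copied verbatim from LinkedDataHub's system ontologies:

```turtle
# Illustrative sketch — class and property names are assumptions
@prefix dh:   <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .

<https://localhost:4443/concepts/> a dh:Container .         # can have children documents

<https://localhost:4443/concepts/example/> a dh:Item ;      # leaf document
    sioc:has_container <https://localhost:4443/concepts/> .
```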
Out of the box, you can create instances of classes that LinkedDataHub ships with by default. Those are built-in classes in system ontologies, such as the container and item mentioned above, all forms of SPARQL queries, charts, data imports, files etc.
LinkedDataHub also allows creation of instances of user-defined classes from ontologies imported by the user.
Learn how to create instances of built-in and user-defined classes.
In order to be able to manage instance data, we need to create classes that represent it (and ontologies where those classes are defined) in the model of our dataspace. Not only will they serve as RDF types of the instances, but they will also have constructors attached that define the default properties and their (data)types for that class.
Follow a tutorial to change the model: create a class and its constructor.
A common step at this point would be to populate your dataset with instances by importing data. LinkedDataHub currently supports CSV and RDF data imports.
User interface changes are done by adding and overriding templates in XSLT stylesheets.
Follow a tutorial to change the layout: create an XSLT stylesheet and override a template.
A list of LinkedDataHub's features and their functions, split into general and administration parts.
How to create documents, content, and instances
Documents are RDF documents stored in the application's dataset. They contain descriptions of resources, e.g. instances created by the user, imported from files, or loaded from Linked Data.
Each document can have content, which is a chain (RDF list) of URI resources and XHTML literals.
Step by step guide to importing RDF data
This guide is for importing larger amounts of data (e.g. more than a few thousand RDF triples) asynchronously. For smaller data, it is simpler to add data synchronously.
Read more on how RDF imports work.
There are 3 main components that comprise an RDF import:
CONSTRUCT query that maps RDF to another RDF representation
Either query or graph has to be specified, not both.
This method asynchronously appends RDF data to a document (see how to create a document).
LinkedDataHub allows all the import components to be created from a single dialogue, but we will go through the process in separate steps:
Check out the Command line interface (CLI) scripts into a folder on your machine. Provide a list of arguments to the import-rdf script and execute it. For example:
import-rdf.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "Concepts" \
  --file concepts.ttl \
  --file-content-type "text/turtle" \
  --graph "${base}skos/"
This example runs the same import with a CONSTRUCT mapping query (--query-file) instead of a target graph:

import-rdf.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "Concepts" \
  --file concepts.ttl \
  --file-content-type "text/turtle" \
  --query-file concepts.rq
Step by step guide to converting CSV data to RDF and importing it into the end-user application
Read more on how CSV imports work.
There are 3 main components that comprise a CSV import:
CONSTRUCT query that maps CSV to a generic RDF representation
LinkedDataHub allows all the import components to be created from a single dialogue, but we will go through the process in separate steps:
Check out the Command line interface (CLI) scripts into a folder on your machine. Provide a list of arguments to the import-csv script and execute it. For example:

import-csv.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "Places" \
  --query-file places.rq \
  --file places.csv
Search for resources using text keywords
You can look up resources by typing a phrase (it does not have to be complete; start with a few letters) into the input in the navigation bar.
A dropdown list will appear if there are any matches. Use the up/down keys or a mouse click to select one of the results, and you will be redirected to its document.
The matching is done by looking for substrings using SPARQL regex() in common literal properties such as dct:title, rdfs:label, foaf:name etc. You can find the exact query in Queries / Select labelled.
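Conceptually, the lookup boils down to a query of the following shape. This is an illustrative sketch, not the exact built-in query (which you can find under Queries / Select labelled); the exact property list and the LIMIT are assumptions:

```sparql
PREFIX dct:  <http://purl.org/dc/terms/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>

SELECT DISTINCT ?resource ?label
WHERE
{
    ?resource ?property ?label .
    FILTER (?property IN (dct:title, rdfs:label, foaf:name))
    # case-insensitive substring match on the typed phrase
    FILTER regex(str(?label), "conc", "i")
}
LIMIT 10
```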
The same widget is used for autocomplete inputs in the create/edit forms.
You can use SPARQL to query data from the application's SPARQL service.
This guide describes how the application dataset can be queried using SPARQL.
The end-user application dataset can be queried using a SPARQL 1.1 endpoint, which is available via a link in the navigation bar.
All forms of SPARQL queries are allowed; SPARQL updates are not allowed. A result limit might apply.
For DESCRIBE and CONSTRUCT results, it is possible to switch the layout mode using a button above the results.
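Besides the UI, the endpoint can be reached with any HTTP client. A sketch using curl — the endpoint path sparql, the -k flag (for the self-signed localhost certificate), and the client-certificate options are assumptions based on the default local setup, so adjust them to your instance:

```shell
# Hypothetical example — adjust base URI, endpoint path and certificate to your setup
base="https://localhost:4443/"

curl -k -G "${base}sparql" \
    -E ./ssl/owner/cert.pem:"$owner_cert_password" \
    -H "Accept: application/sparql-results+json" \
    --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 10"
```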
If you want to save the query string which is currently in the editor, click the Save button underneath the editor form. A document creation form will open with the query string filled in; you only need to enter the titles of the query and its document. You can then access the query by navigating to the query container via the dropdown in the breadcrumb bar.
You will not be able to save invalid query strings.
The SPARQL editor includes an interactive chart pane which can be used to visualize the query results. You can change the chart type and select its category and one or more series.
The chart pane supports both graph query (CONSTRUCT, DESCRIBE) and result set query (SELECT, ASK) results. For graph results, the category and series are rendered from resource properties; for result sets, they are rendered from variable names.
If you want to save the chart with the properties which are currently selected, click the Save button underneath the chart pane. That will open two document creation forms: one for the query, and a second one for the chart (as the chart depends on the query). The query string and chart properties will be filled in; you only need to enter the titles of the query and the chart and their documents. You can then access the chart by navigating to the chart container via the dropdown in the breadcrumb bar.
You will not be able to save invalid query strings.
Step by step guide to installing and uninstalling packages in your LinkedDataHub dataspace
Version: Packages were introduced in LinkedDataHub 5.2.
Package management requires Control access to the administration application. Only users with administrative privileges can install or uninstall packages.
Read more about how packages work.
Important: After installing or uninstalling a package, you must restart the Docker service for XSLT stylesheet changes to take effect:

docker-compose restart linkeddatahub
Do not use --force-recreate as that would overwrite the stylesheet file changes.
Installing a package adds new functionality to your dataspace by importing ontologies, data, and resources.
Follow these steps to install a package through the web interface:
Execute the following command:

install-package.sh \
  -b "https://localhost:4443/" \
  -f ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --package "https://packages.linkeddatahub.com/skos/#this"

Parameters:
-b — the base URI of the dataspace
-f — the client certificate file
-p — the client certificate password
--package — the URI of the package to install

Uninstalling a package removes its ontologies and resources from your dataspace. User-created data that uses the package vocabulary will remain but may not function correctly.
Follow these steps to uninstall a package through the web interface:
Uninstalling a package will remove its ontologies and associated resources. Any data you created using the package's vocabulary will remain in your dataspace but may not display or function correctly without the package.
Execute the following command:

uninstall-package.sh \
  -b "https://localhost:4443/" \
  -f ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --package "https://packages.linkeddatahub.com/skos/#this"

Parameters are the same as for installation.
After installing a package, verify that it was installed correctly:
Packages may include custom XSLT stylesheets that affect how data is displayed. You can further customize the behavior by:
See the Change layout guide for detailed instructions on stylesheet customization.
If package installation fails:
If installed package features are not visible:
How to bulk import data into LinkedDataHub
This guide is about bulk data imports running asynchronously, suitable for larger datasets. If you want to simply add RDF data to a document, see the add data guide.
Fork RDF data from Linked Data resources and SPARQL results
You can fork (copy the contents of) any RDF document to a named graph in your dataspace. Two sources of RDF are supported: RDF resources on the web (Linked Data), and RDF file uploads.
Adding data this way will cause a blocking request, so use it for small amounts of data only (e.g. a few thousand RDF triples). For larger data, use asynchronous RDF imports.
To copy the contents of the current RDF document (local or remote) into a local document, follow these steps:
To upload the contents of an RDF file into a document, simply drag the file from your filesystem into LinkedDataHub while the document's properties mode is active.
The following syntaxes are supported:
Navigate the document hierarchy, related results, and backlinks
The document tree is a widget that provides a quick overview of the application's document hierarchy. Each document is shown as a node in the tree; container documents can be expanded to reveal their children documents.
The widget can be accessed by moving the mouse to the left edge of the screen (on responsive layouts, it is always visible).
Backlinks are shown in the right-side navigation for every resource in property layout mode. They display a list of resources which have properties with the current resource as the object.
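Conceptually, the backlinks of a resource correspond to a query of this shape — an illustrative sketch (the resource URI is a placeholder, and the actual implementation may differ):

```sparql
SELECT DISTINCT ?subject
WHERE
{
    # resources that reference the current resource as an object
    ?subject ?property <https://localhost:4443/concepts/example/#this> .
}
```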
Step by step guide to creating a new dataspace backed by SPARQL services
Dataspaces are configured in config/system.trig. The TriG RDF syntax is used.
The configuration uses the Application domain ontology. A dataspace is comprised of an end-user application and an administrative application, each of them backed by its own SPARQL service. Each application can also specify its own XSLT stylesheet.
Application base URIs need to be relative to the system base URI configured in the .env file. A change of the system base URI currently requires a change of the application base URIs, otherwise they will not be reachable.
Let's say we want to use https://ec2-54-235-229-141.compute-1.amazonaws.com/linkeddatahub/ as the new base URI of our dataspaces. The easiest way is to simply replace occurrences of the default https://localhost:4443/ base URI with the new value. It can be done using the following shell command:
sed -i 's/https:\/\/localhost:4443\//https:\/\/ec2-54-235-229-141.compute-1.amazonaws.com\/linkeddatahub\//g' config/system.trig

Note that sed requires forward slashes / to be escaped with backslashes \.
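The escaping can be avoided entirely: sed accepts any character as the s command delimiter, so using | instead of / keeps the URIs readable. A sketch of the same replacement, shown on a sample line rather than on config/system.trig:

```shell
new_base='https://ec2-54-235-229-141.compute-1.amazonaws.com/linkeddatahub/'

# | as the delimiter, so the slashes in the URIs need no escaping
replaced=$(echo '<https://localhost:4443/concepts/> a dh:Container .' \
    | sed "s|https://localhost:4443/|${new_base}|g")
echo "$replaced"
```

To edit the file in place, apply the same expression with sed -i on config/system.trig.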
Add instances of lapp:EndUserApplication, lapp:AdminApplication and their corresponding sd:Services following the default dataspace in config/system.trig.
Use URIs (for example in the urn: scheme) to identify apps and services, not blank nodes. Make sure the file's syntax is valid Turtle, otherwise the setup will not work. You can use Turtle Validator to check the syntax.
Change the value of ac:stylesheet to the URI of your XSLT stylesheet. Add the property if it is absent.
The stylesheet can either be uploaded as a file or mounted in docker-compose.yml, in the volumes section of the linkeddatahub service. Mounting is useful while developing.
You will need to restart LinkedDataHub's Docker service for the new stylesheet to take effect.
It is rarely necessary to change the stylesheet of an admin application.
The LinkedDataHub service, as well as the default SPARQL services fuseki-end-user and fuseki-admin, are defined in docker-compose.yml and run as Docker containers.
Follow these steps:
The following video shows the creation of both a context and a dataspace:
create-context.sh \
  -b "${base}" \
  -f "${cert_pem_file}" \
  -p "${cert_password}" \
  --title "${title}" \
  --description "${description}" \
  --app-base "${base}${slug}/" \
  --public
Follow these steps:
The following video shows the creation of both a context and a dataspace:
create-app.sh \
  -b "${base}" \
  -f "${cert_pem_file}" \
  -p "${cert_password}" \
  --title "${title}" \
  --description "${description}" \
  --app-base "${base}${slug}/" \
  --public
How to customize the layout of an application and the UI of the resources
LinkedDataHub's user interface is simply a rendering of the underlying Linked Data resource descriptions, which are exposed via the HTTP API.
LinkedDataHub provides XSLT stylesheets that render a default UI layout. When building an application, however, you might want to render a custom layout. The recommended way of doing that is to create a new stylesheet which imports the system stylesheet and only overrides specific templates, while reusing the rest of the layout.
First we need to create a new XSLT 3.0 stylesheet and use <xsl:import> to import the system stylesheet static/com/atomgraph/linkeddatahub/xsl/bootstrap/2.3.2/layout.xsl.
See an example in the SKOS demo app.
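A minimal skeleton of such a stylesheet might look like this — a sketch, assuming the import href is resolved against the application base URI:

```xml
<?xml version="1.0" encoding="UTF-8"?>
<xsl:stylesheet version="3.0"
    xmlns:xsl="http://www.w3.org/1999/XSL/Transform">

    <!-- reuse the system layout and only override selected templates -->
    <xsl:import href="static/com/atomgraph/linkeddatahub/xsl/bootstrap/2.3.2/layout.xsl"/>

    <!-- overriding templates go here -->

</xsl:stylesheet>
```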
First, either upload the XSLT file or mount it using Docker and docker-compose.override.yml:

version: "2.3"
services:
  linkeddatahub:
    volumes:
      - ../LinkedDataHub-Apps/demo/skos/files/skos.xsl:/usr/local/tomcat/webapps/ROOT/static/com/atomgraph/linkeddatahub/demo/skos/xsl/index.xsl:ro

Then change the value of ac:stylesheet on the dataspace with base URI https://localhost:4443/ to the relative URI of the stylesheet:

<urn:linkeddatahub:apps/end-user> a lapp:EndUserApplication ;
    ...
    ac:stylesheet <static/com/atomgraph/linkeddatahub/demo/skos/xsl/index.xsl> ;
    ...
In the examples below we assume that the graph contains a SKOS taxonomy with concept instances, which can also have broader/narrower concepts.
At this point, the default layout of the current document, its topic concept, and narrower/broader concepts looks like this:
Keep the default output using <xsl:apply-imports> or <xsl:next-match> and add new output before/after it:
<xsl:key name="resources-by-broader" match="*[@rdf:about] | *[@rdf:nodeID]" use="skos:broader/@rdf:resource"/>

<xsl:template match="*[foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri][key('resources-by-broader', @rdf:about)] | *[foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri][key('resources', skos:narrower/@rdf:resource)]" priority="1">
    <xsl:next-match/>

    <h3>Narrower concepts</h3>
    <ul>
        <xsl:apply-templates select="key('resources-by-broader', @rdf:about) | key('resources', skos:narrower/@rdf:resource)" mode="bs2:List">
            <xsl:sort select="ac:label(.)"/>
        </xsl:apply-templates>
    </ul>
</xsl:template>
The match pattern will match the resource descriptions that:
We can render broader concepts by making a copy of the key and the template above and replacing all occurrences of skos:narrower with skos:broader and vice versa.
With the overriding template, the layout looks like this:
To completely change the layout without keeping the default one, use the same logic as for augmenting it, but do not call <xsl:apply-imports>/<xsl:next-match>.
In the above example, we have rendered skos:narrower properties in our own special way, which means that the default output of the same properties from bs2:PropertyList is no longer desired.
You can specify an empty template at any level (graph/property/resource) to disable output of that layout mode. For example, this will suppress the default rendering of concepts which are now shown in the customized broader/narrower list:
<xsl:template match="*[key('resources', skos:narrower/@rdf:resource)/foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri] | *[key('resources', skos:broader/@rdf:resource)/foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri] | *[@rdf:about = key('resources', key('resources', $ac:uri)/foaf:primaryTopic/@rdf:resource)/skos:narrower/@rdf:resource] | *[@rdf:about = key('resources', key('resources', $ac:uri)/foaf:primaryTopic/@rdf:resource)/skos:broader/@rdf:resource]"/>

The match pattern will match the resource descriptions that:
<xsl:template match="*[foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri][key('resources', skos:broader/@rdf:resource)]/skos:broader | *[foaf:isPrimaryTopicOf/@rdf:resource = $ac:uri][key('resources', skos:narrower/@rdf:resource)]/skos:narrower" mode="bs2:PropertyList"/>

The match pattern will match the resource properties that:
- skos:broader properties of a resource that is the primary topic of the current document and has broader concepts in the RDF graph
- skos:narrower properties of a resource that is the primary topic of the current document and has narrower concepts in the RDF graph

With the suppressing templates, the layout now looks like this:
Learn how to create container and item documents
To create a container document (which can have children documents) with a custom URL, follow these steps:
See how below:
Replace owner_cert_password with the value of the corresponding secret and execute the following command:

create-container.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "Concepts" \
  --slug "concepts" \
  --parent "https://localhost:4443/"
A container titled Concepts should appear with the URI https://localhost:4443/concepts/.
Learn how to create instances of built-in and user-defined classes
Built-in classes (such as Container, Item, SELECT etc.) ship with system ontologies in LinkedDataHub, while user-defined classes come from ontologies imported by the user (see the user guide on changing the model for how to do that).
Note that in order to create class instances, you have to be in properties mode.
To create instances of built-in classes, follow these steps:
You will be redirected to the document of the newly created instance.
To create instances of user-defined classes, follow these steps:
You will be redirected to the document of the newly created instance.
You can only create instances of classes that have constructors.
Learn how to create data-driven content
Each document (container or item) can have content, i.e. a list of blocks which is shown in the content layout mode. Currently, the following content types are supported:
- SELECT query (client-side "container")

Content other than HTML content is called object. The HTML content is part of the document, while resource content is simply embedded (transcluded) into the HTML page.
To create a new XHTML block, follow these steps:
After this, a new XHTML block should be appended to the page, replacing the form.
To edit an XHTML block, follow these steps:
To remove an XHTML block, follow these steps:
To add a new XHTML block with the value <p>A paragraph</p>, replace owner_cert_password with the value of the corresponding secret and execute the following command:
add-xhtml-block.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --fragment "xhtml-block" \
  --value "<div xmlns=\"http://www.w3.org/1999/xhtml\"><p>A paragraph</p></div>" \
  "https://localhost:4443/concepts/example/"
To create a new object block, follow these steps:
After this, a new object block should be appended to the page, replacing the form.
To edit an object block, follow these steps:
To remove an object block, follow these steps:
To add a new object block with the value http://dbpedia.org/resource/Copenhagen, replace $owner_cert_password with the value of the corresponding secret and execute the following command:
add-object-block.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --fragment "object-block" \
  --value "http://dbpedia.org/resource/Copenhagen" \
  "https://localhost:4443/concepts/example/"
Use LinkedDataHub's built-in Linked Data browser to explore remote datasources
LinkedDataHub has a built-in Linked Data browser which is accessed through the navigation bar. Enter a http:// or https:// URL and press Enter or click the search button.
The browser supports all standard RDF formats as well as JSON-LD embedded in an HTML page's <script> elements, which is often used to publish schema.org metadata.
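For example, a page might embed schema.org metadata like this (an illustrative snippet, not taken from any particular site):

```html
<script type="application/ld+json">
{
  "@context": "https://schema.org",
  "@type": "Organization",
  "name": "Example Org",
  "url": "https://example.org/"
}
</script>
```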
When the data loads successfully, you can navigate it the same way as your local documents, switch between different layout modes etc. You can also add the data into your dataspace.
If an RDF document cannot be read from the supplied URL, an error message will be shown.
Learn how to upload files to LinkedDataHub
See how below:
Replace owner_cert_password with its value from the .env file and execute the following command:

create-file.sh \
  -b "https://localhost:4443/" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "$title" \
  --file "$filename" \
  --file-content-type "$content_type"
Change the model: create constructors, classes, and constraints
In order to be able to manage instances, we need to create classes that represent them in the model of our dataspace. Not only will they serve as RDF types of the instances, but they will also have constructors attached that define the default properties and their (data)types for that class.
The model is managed in the administration application of a dataspace. Head there by clicking the icon in the action bar and then choosing Administration.
In order to manage the access control or the model of a dataspace, the agent needs to be a member of the owners group.
We will use the SKOS ontology and its skos:Concept class as an example in this guide.
We will use the following SPARQL CONSTRUCT query as a constructor for our Concept class and save it in a file under queries/construct-concept.rq.
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>

CONSTRUCT
{
- $this skos:inScheme [ a skos:ConceptScheme ] ;
- skos:topConceptOf [ a skos:ConceptScheme ] ;
- skos:prefLabel [ a xsd:string ] ;
- skos:altLabel [ a xsd:string ] ;
- skos:hiddenLabel [ a xsd:string ] ;
- skos:notation [ a xsd:string ] ;
- skos:note [ a xsd:string ] ;
- skos:changeNote [ a xsd:string ] ;
- skos:definition [ a xsd:string ] ;
- skos:editorialNote [ a xsd:string ] ;
- skos:example [ a xsd:string ] ;
- skos:historyNote [ a xsd:string ] ;
- skos:scopeNote [ a xsd:string ] ;
- skos:semanticRelation [ a skos:Concept ] ;
- skos:broader [ a skos:Concept ] ;
- skos:narrower [ a skos:Concept ] ;
- skos:related [ a skos:Concept ] ;
- skos:broaderTransitive [ a skos:Concept ] ;
- skos:narrowerTransitive [ a skos:Concept ] ;
- skos:mappingRelation [ a skos:Concept ] ;
- skos:broadMatch [ a skos:Concept ] ;
- skos:narrowMatch [ a skos:Concept ] ;
- skos:relatedMatch [ a skos:Concept ] ;
- skos:exactMatch [ a skos:Concept ] ;
- skos:closeMatch [ a skos:Concept ] .
}
WHERE {}
In the administration application, follow these steps:

pwd=$(realpath -s $PWD)

create-construct.sh \
- -b "${base}admin/" \
- -f ./ssl/owner/cert.pem \
- -p "$owner_cert_password" \
- --uri "${base}ns#ConstructConcept" \
- --label "Construct concept" \
- --slug construct-concept \
- --query-file "${pwd}/queries/construct-concept.rq" \
- "${base}admin/model/ontologies/namespace/"
Follow the same steps for Concept scheme.
To control data quality, we probably want to make some of the instance properties mandatory. For example, a skos:Concept instance should always have a skos:prefLabel value.
In the administration application, follow these steps:
create-property-constraint.sh \
- -b "$base" \
- -f ./ssl/owner/cert.pem \
- -p "$owner_cert_password" \
- --uri "https://localhost:4443/ns#MissingPrefLabel" \
- --label "Missing skos:prefLabel" \
- --slug missing-pref-label \
- --property "http://www.w3.org/2004/02/skos/core#prefLabel" \
- "${base}admin/model/ontologies/namespace/"
In the administration application, follow these steps to create the concept class:

create-class.sh \
- -b "$base" \
- -f ./ssl/owner/cert.pem \
- -p "$owner_cert_password" \
- --uri "http://www.w3.org/2004/02/skos/core#Concept" \
- --label "Concept" \
- --slug concept \
  --constructor "${base}ns#ConstructConcept" \
  --constraint "${base}ns#MissingPrefLabel" \
- "${base}admin/model/ontologies/namespace/"
Follow the same steps for Concept scheme.
Using LinkedDataHub as a low-code platform for Knowledge Graph applications
Every component in LinkedDataHub is data-driven and was designed with extensibility in mind. You can override behavior (e.g. a Java method or an XSLT template) without having to modify LinkedDataHub's codebase, and more importantly, without having to write the same logic from scratch.
The following sections are split by component/layer and explain how to extend them when building bespoke apps.
You can go a long way just by mounting files (e.g. config files, ontologies, stylesheets) into LinkedDataHub's default Docker setup. But you may also want to build a dedicated Docker image for your app using LinkedDataHub as the base. Usually this is done to COPY files inside the image or RUN additional commands.
Here's a sample Dockerfile that copies custom stylesheets into the app's Docker image:
FROM atomgraph/linkeddatahub:4.0.0

ARG SAXON_VERSION=9.9.1-2

USER root

RUN curl -fsSL https://deb.nodesource.com/setup_16.x | bash - && \
    apt-get update --allow-releaseinfo-change && \
    apt-get install -y nodejs && \
    rm -rf /var/lib/apt/lists/* && \
    mkdir /home/ldh && chown ldh:ldh /home/ldh

USER ldh

RUN npm install saxon-js && \
    npm install xslt3

WORKDIR $CATALINA_HOME/webapps/ROOT/static

COPY --chown=ldh:ldh files/layout.xsl net/example/xsl/layout.xsl
COPY --chown=ldh:ldh files/client.xsl net/example/xsl/client.xsl

# pre-process stylesheets in order to inline XML entities which SaxonJS does not support
RUN curl https://repo1.maven.org/maven2/net/sf/saxon/Saxon-HE/${SAXON_VERSION}/Saxon-HE-${SAXON_VERSION}.jar -O && \
    cat com/atomgraph/linkeddatahub/xsl/client.xsl | grep 'xsl:import' | cut -d '"' -f 2 | xargs -I{} java -cp Saxon-HE-${SAXON_VERSION}.jar net.sf.saxon.Query -qs:"." -s:com/atomgraph/linkeddatahub/xsl/{} -o:com/atomgraph/linkeddatahub/xsl/{} && \
    cat com/atomgraph/linkeddatahub/xsl/client.xsl | grep 'xsl:include' | cut -d '"' -f 2 | xargs -I{} java -cp Saxon-HE-${SAXON_VERSION}.jar net.sf.saxon.Query -qs:"." -s:com/atomgraph/linkeddatahub/xsl/{} -o:com/atomgraph/linkeddatahub/xsl/{} && \
    java -cp Saxon-HE-${SAXON_VERSION}.jar net.sf.saxon.Query -qs:"." -s:com/atomgraph/linkeddatahub/xsl/client.xsl -o:com/atomgraph/linkeddatahub/xsl/client.xsl && \
    java -cp Saxon-HE-${SAXON_VERSION}.jar net.sf.saxon.Query -qs:"." -s:net/example/xsl/client.xsl -o:net/example/xsl/client.xsl && \
    npx xslt3 -t -xsl:net/example/xsl/client.xsl -export:net/example/xsl/client.xsl.sef.json -nogo -ns:##html5 -relocate:on && \
    rm Saxon-HE-${SAXON_VERSION}.jar && \
    setfacl -Rm user:ldh:rwx net/example/xsl

WORKDIR $CATALINA_HOME
The Java layer of LinkedDataHub is a Maven project with a webapp layout. The codebase uses JAX-RS, specifically Jersey 3, as the HTTP/REST framework and uses Apache Jena for RDF I/O.
If you want to extend the Java codebase, for example to add a custom REST endpoint, you need to create a new Maven project with LinkedDataHub as a dependency:

<dependency>
    <groupId>com.atomgraph</groupId>
    <artifactId>linkeddatahub</artifactId>
    <version>5.0.14</version>
    <classifier>classes</classifier>
</dependency>
<dependency>
    <groupId>com.atomgraph</groupId>
    <artifactId>linkeddatahub</artifactId>
    <version>5.0.14</version>
    <type>war</type>
</dependency>
After adding the new endpoint, you'll also need to extend LinkedDataHub's JAX-RS application (com.atomgraph.linkeddatahub.Application) and register your class there.
By convention, static files (e.g. stylesheets, images) are placed in the src/main/webapp/static/ folder in the codebase, which then becomes /usr/local/tomcat/webapps/ROOT/static/ within the webapp deployed in the Docker container, and is available as ${base}static/ over HTTP.
This means that if you want to deploy static files as part of your LinkedDataHub app, you will have to either mount them into the /usr/local/tomcat/webapps/ROOT/static/ folder in the container, or copy them into a custom Docker image to the same effect.
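The convention can be illustrated with a short sketch that derives the HTTP URL of a static file from its path inside the container (the file name net/example/xsl/layout.xsl is just a placeholder):

```shell
base="https://localhost:4443/"
container_path="/usr/local/tomcat/webapps/ROOT/static/net/example/xsl/layout.xsl"

# strip the webapp root; what remains is the path relative to the base URI
rel_path="${container_path#/usr/local/tomcat/webapps/ROOT/}"
url="${base}${rel_path}"
echo "$url"
```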
You can allow Java debugger access using configuration properties.
LinkedDataHub uses XSLT 3.0 for UI rendering, both on the server side and on the client side.
You can specify a custom server-side XSLT 3.0 stylesheet in the dataspace configuration.
You can reuse LinkedDataHub's stylesheets by importing them, as explained in the change layout user guide.
The default server-side stylesheet layout.xsl and its imports can be found in the src/main/webapp/static/com/atomgraph/linkeddatahub/xsl/bootstrap/2.3.2 folder.
Client-side stylesheets are used to implement the interactive parts of the UI (event handling, AJAX calls etc.) using the IXSL extension provided by the SaxonJS library. This way the frontend is made completely declarative and requires almost no JavaScript code, with the exception of 3rd-party libraries such as Google Charts and OpenLayers.
Client-side stylesheets share common document-, resource-, and property-level templates with the server-side stylesheets. The client-side stylesheet may not contain any XML entity declarations (a limitation of SaxonJS) and must be compiled into a SEF file before it can be run by the SaxonJS runtime.
By default, the URL of the client-side SEF stylesheet can be specified as the $client-stylesheet param of the document-level server-side xhtml:Script template.
You can modify or extend the default RDF datasets used by LinkedDataHub. However, a better practice is to use the CLI scripts to create documents and to import CSV and RDF data.
-How to change host, port, or the path LinkedDataHub service runs on
The system base URI is the URI on which the LinkedDataHub service is accessible.
A common case is changing the system base URI from the default https://localhost:4443/ to your own.
Let's use https://ec2-54-235-229-141.compute-1.amazonaws.com/linkeddatahub/ as an example. We need to split the URI into components and set them in the .env file using the following parameters:

```
PROTOCOL=https
HTTP_PORT=80
HTTPS_PORT=443
HOST=ec2-54-235-229-141.compute-1.amazonaws.com
ABS_PATH=/linkeddatahub/
```

Dataspace URIs need to be relative to the system base URI in order to be reachable.
Step-by-step tutorials for everyday tasks with LinkedDataHub
LinkedDataHub is a Knowledge Graph application platform by AtomGraph that fully exploits the federated features of RDF and SPARQL. It can also be used as an RDF-native content management platform.
LinkedDataHub is an open-source project that has its roots as a Linked Data publishing framework. However, since the 3.x release we have focused on Linked Data consumption, as we consider publishing a solved problem but continue to see a shortage of user-friendly consumption tools.
LinkedDataHub is also a low-code RDF Knowledge Graph application platform. It offers comprehensive development and management features:
Check out the user guide on application building.
As an RDF-native CMS, LinkedDataHub provides a number of features for end users:
Architecturally, LinkedDataHub is a read-write RDF Graph Store combined with a rich Linked Data/SPARQL client. LinkedDataHub does not persist RDF data itself but rather serves it from, and stores it in, a backing triplestore, which by default is Apache Jena Fuseki.
Every document in LinkedDataHub's dataspace is also a named graph in the Graph Store and has both RDF and HTML representations. The client is implemented using XSLT 3.0, a standard, declarative data transformation language. It can connect to any Linked Data resource or SPARQL 1.1 endpoint.
Since version 3.x LinkedDataHub no longer uses Linked Data Templates. However, they can still be used to publish Linked Data from SPARQL endpoints using Processor.
You can find the changelog here.
CLI scripts can be used to perform all actions available in the UI.
The LinkedDataHub CLI wraps the HTTP API into a set of shell scripts with convenient parameters. The scripts should run on any Unix-based system. They can be used for testing, automation, scheduled execution and the like. Performing actions using the CLI is usually much quicker than using the user interface, as well as easier to reproduce.
Some scripts correspond to a single request to LinkedDataHub; others combine multiple interdependent requests into tasks, such as the CSV import.
You will need to supply a .pem file of your WebID certificate as well as its password as script arguments, among others.
The CLI scripts live in the bin folder and need to be added to the $PATH environment variable. For example:

```shell
export PATH="$(find bin -type d -exec realpath {} \; | tr '\n' ':')$PATH"
```

They also use Jena's CLI commands internally, so make sure those are on $PATH before running the scripts.
Common parameters used by most scripts include:
Other parameters are script-specific.
A usage message with a script's parameters is printed when the script is run without any arguments. There can be named parameters and default (positional) parameters, both of which can be optional. For example:

```
$ create-select.sh
Creates a SPARQL SELECT query.

Usage: create-select.sh options

Options:
    -f, --cert-pem-file CERT_FILE        .pem file with the WebID certificate of the agent
    -p, --cert-password CERT_PASSWORD    Password of the WebID certificate
    -b, --base BASE_URI                  Base URI of the application
    --proxy PROXY_URL                    The host this request will be proxied through (optional)

    --title TITLE                        Title of the chart
    --description DESCRIPTION            Description of the chart (optional)
    --slug STRING                        String that will be used as URI path segment (optional)
    --fragment STRING                    String that will be used as URI fragment identifier (optional)

    --query-file ABS_PATH                Absolute path to the text file with the SPARQL query string
    --service SERVICE_URI                URI of the SPARQL service specific to this query (optional)
```

The optional parameters are marked with (optional). In this case there is no default parameter, but some scripts require a document (named graph) URI as the default parameter, e.g. an ontology document URL.
This is how a create-select.sh invocation would look:

```shell
create-select.sh \
  -b "$base" \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --proxy "$proxy" \
  --title "Select concepts" \
  --slug select-concepts \
  --query-file "$pwd/queries/select-concepts.rq"
```
Currently supported:
| Purpose | Script |
|---|---|
| Low-level (Graph Store Protocol) | |
| GET request | get.sh |
| POST request | post.sh |
| PUT request | put.sh |
| DELETE request | delete.sh |
| Documents | |
| Create document | create-document.sh |
| Create container document | create-container.sh |
| Create item document | create-item.sh |
| Content | |
| Append object block (instance of ldh:Object) to document | add-object-block.sh |
| Append XHTML block (instance of ldh:XHTML) to document | add-xhtml-block.sh |
| Remove block from document | remove-block.sh |
| Instances of system classes | |
| Append service (instance of ldh:GenericService) to document | add-generic-service.sh |
| Append result set chart (instance of ldh:ResultSetChart) to document | add-result-set-chart.sh |
| Append SELECT query (instance of sp:Select) to document | add-select.sh |
| Append SPARQL view (instance of ldh:View) to document | add-view.sh |
| Imports | |
| Create file | imports/create-file.sh |
| Create query | imports/create-query.sh |
| Create CSV import | imports/create-csv-import.sh |
| Import CSV data | imports/import-csv.sh |
| Administration | |
| Add owl:imports to ontology | admin/add-ontology-import.sh |
| Clear and reload ontology | admin/clear-ontology.sh |
| Access control | |
| Add agent to group | admin/acl/add-agent-to-group.sh |
| Create authorization | admin/acl/create-authorization.sh |
| Create group | admin/acl/create-group.sh |
| Make application publicly readable by any agent | admin/acl/make-public.sh |
| Ontologies | |
| Create class | admin/model/create-class.sh |
| Create CONSTRUCT query | admin/model/create-construct.sh |
| Create ontology | admin/model/create-ontology.sh |
| Create property constraint | admin/model/create-property-constraint.sh |
| Create SELECT query | admin/model/create-select.sh |
| Import ontology | admin/model/import-ontology.sh |
Usage example:

```shell
create-file.sh https://localhost:4443/ \
  -f ./ssl/owner/cert.pem \
  -p "$owner_cert_password" \
  --title "Friends" \
  --file-slug 646af756-a49f-40da-a25e-ea8d81f6d306 \
  --file friends.csv \
  --file-content-type text/csv
```
See also the data import user guides.
Find the CLI scripts on GitHub or check out the demo apps that use them.
LinkedDataHub dataspaces, applications, and services
The LinkedDataHub URI address space is split into dataspaces. Every dataspace consists of a pair of LinkedDataHub applications: end-user and administration.
The end-user app will be available at the given base URI; the admin app will be available at that base URI with admin/ appended. The agent that installed the admin dataset will be the application owner.
The secretary is a special agent which represents the software application itself. It is distinct from the owner agent and is used to delegate the owner's access.
See also the administration reference.
All LinkedDataHub applications have the following traits:
Base URI must end with a forward slash (/).
In addition, LinkedDataHub applications have one more property:
The base URI of an end-user application is also the base URI of its dataspace.
Every end-user application is related to one administration application.
Every administration application is related to one end-user application. It cannot exist standalone.
The base URI(s) of an administration application is the base URI(s) of its end-user application with admin/ appended to it. Note that any URIs in the end-user application that are equal or relative to the admin application base URI will not be accessible.
The administration application provides the means to control the domain model and the access control of its end-user application. Only dataspace owners have access to the administration application.
The agent which installs the administration application dataset becomes the owner of its dataspace.
LinkedDataHub imports the default datasets for each application type into its service. The dataset URIs are rebased to be relative to the base URI of the application.
A service is a persistent SPARQL 1.1-compatible store from which the application's RDF dataset is accessible over HTTP. LinkedDataHub supports generic services as well as triplestore-specific services, which allow easier configuration and optimized access. HTTP Basic is supported as an authentication scheme. Contact us regarding support for vendor-specific authentication such as API keys.
The end-user application service must be able to federate with the administration application service using the SPARQL SERVICE keyword.
A generic service has the following properties:
LinkedDataHub has extension points for vendor-specific SPARQL services, which can be used to implement proprietary authentication schemes, for example.
The basic structure of resources in an application is analogous to the file system, but built using RDF resources and relationships between them instead. There is a hierarchy of containers, which are collections of items as well as sub-containers. Both containers and items are documents. Items cannot contain other documents.
The first level of resources in a container is referred to as its children (of which that container is the parent), while all levels down the hierarchy are collectively referred to as descendants.
When a user logs in, the application loads its root container (unless a specific URI was requested). From there, users can navigate down the resource hierarchy, starting with the children of the root container. At any moment there is only one current document per page, on which actions can be performed: it can be viewed, edited etc.
If you are ready to create a dataspace, see our step-by-step tutorial on dataspace management.
Types of data imports supported by LinkedDataHub
Built-in XSLT and CSS stylesheets
XSLT is a functional, Turing-complete XML transformation language.
LinkedDataHub's XSLT 3.0 stylesheets work by transforming the RDF/XML response body from the underlying HTTP API. Additional metadata from RDF vocabularies is used to improve the user experience.
RDF/XML is an important RDF syntax which functions as a bridge to the XML technology stack. The stylesheets use Jena's "plain" RDF/XML output, which groups statements by subject and does not nest resource descriptions. This allows for predictable XPath patterns:
- /rdf:RDF — represents the RDF graph
- /rdf:RDF/rdf:Description or /rdf:RDF/*[*][@rdf:about] | /rdf:RDF/*[*][@rdf:nodeID] — resource description which contains properties
- /rdf:RDF/rdf:Description/@rdf:about — subject resource URI
- /rdf:RDF/rdf:Description/@rdf:nodeID — subject blank node ID
- /rdf:RDF/rdf:Description/* — predicate (e.g. rdf:type) whose URI is concat(namespace-uri(), local-name())
- /rdf:RDF/rdf:Description/*/@rdf:resource — object resource URI
- /rdf:RDF/rdf:Description/*/@rdf:nodeID — object blank node ID
- /rdf:RDF/rdf:Description/*/text() — literal value

XSLT stylesheet components used by LinkedDataHub:
- <xsl:include> is used to include one stylesheet into another. The inclusion mechanism is specified in 3.10.2 Stylesheet Inclusion of the XSLT 3.0 specification. Templates from the included stylesheets have the same priority as those of the including stylesheet.
- <xsl:import> is used to import one stylesheet into another. The import mechanism is specified in 3.10.3 Stylesheet Import of the XSLT 3.0 specification. Templates from the imported stylesheets have lower priority than those of the importing stylesheet.

One XSLT stylesheet can be specified per application. In order to reuse LinkedDataHub's built-in templates, it should import the system stylesheet layout.xsl and only override the necessary templates. That is, however, not a requirement; the stylesheet could also use its own independent transformation logic.
If there is no stylesheet specified for the application, the system stylesheet is used. It defines the overall layout and imports resource-level and container-specific stylesheets, as well as per-vocabulary stylesheets.
Note that LinkedDataHub itself imports stylesheets from Web-Client, which uses the same template modes but produces a much simpler layout.
There is also a special client-side stylesheet which is not used to render a full layout, but only to manipulate DOM elements in the browser in response to user or system events. It is processed using Saxon-JS, which provides IXSL (client-side extensions for XSLT). It imports and reuses some of the same sub-stylesheets as the server-side system stylesheet does, but avoids loading per-vocabulary stylesheets in order to improve page load time. Templates of the client-side stylesheet can also be overridden.
| Prefix | Namespace | Vocabulary | Description |
|---|---|---|---|
| rdf: | http://www.w3.org/1999/02/22-rdf-syntax-ns# | The RDF Concepts Vocabulary | Namespace for the RDF/XML elements, mostly used for matching input data |
| srx: | http://www.w3.org/2005/sparql-results# | | Namespace for the SPARQL Query Results XML elements, mostly used for matching input data |
| xsl: | http://www.w3.org/1999/XSL/Transform | | Namespace for the XSLT stylesheet elements |
| ixsl: | http://saxonica.com/ns/interactiveXSLT | | Namespace for the Interactive XSL extensions |
| bs2: | http://graphity.org/xsl/bootstrap/2.3.2 | | XSLT-only namespace that is used for Bootstrap 2.3.2-based layout templates |
| xhtml: | http://www.w3.org/1999/xhtml | | XSLT-only namespace that is used for generic (X)HTML templates |
| ldt: | https://www.w3.org/ns/ldt# | Linked Data Templates | LDT processing-related concepts |
| ac: | https://w3id.org/atomgraph/client# | Web-Client vocabulary | Client-side concepts |
| lapp: | https://w3id.org/atomgraph/linkeddatahub/apps# | LinkedDataHub application ontology | LinkedDataHub application concepts |
| lacl: | https://w3id.org/atomgraph/linkeddatahub/admin/acl# | LinkedDataHub ACL ontology | ACL concepts |
Both global (i.e. stylesheet-level) and template parameters are declared using <xsl:param>. For example:

```xml
<xsl:param name="ac:uri" as="xs:anyURI"/>
```

This parameter can be accessed using $ac:uri. For example:

```xml
<xsl:if test="$ac:uri">
    <xsl:value-of select="$ac:uri"/>
</xsl:if>
```
LinkedDataHub sets these global parameters by default (the list is not exhaustive):

- $ldt:base
- $ldt:ontology
- $ac:uri
- $ac:mode
- $ac:forClass
- $lapp:Application
- $foaf:Agent

XSLT template components:
XSLT processing starts at the root of the RDF/XML document and produces HTML elements by applying templates on all of the RDF/XML nodes while moving down the XML tree. In other words, it starts at the graph level, moves down to resource description elements, then to property elements, and ends with identifier attributes and literal text nodes.
Templates are applied (invoked) using <xsl:apply-templates>. A mode can be specified, e.g. <xsl:apply-templates mode="bs2:Header">. To stay in the current mode without explicitly specifying it, use <xsl:apply-templates mode="#current">. <xsl:with-param> is used to supply parameters.
LinkedDataHub provides the following default template modes, which are used to render the layout modes:

rdf:RDF (graph level):
- bs2:BlockList renders a list of resources
- xhtml:Table renders a table with resources as rows and properties as columns
- bs2:Grid renders a gallery of thumbnails
- bs2:Form renders an RDF/POST form for the creation of new resources (when the $ac:forClass parameter is set) or the editing of an existing resource

rdf:Description (resource level):
- bs2:Header renders the resource header (by default with type information)
- bs2:PropertyList renders a definition list with property names and values (by default grouped by resource types)

When adding new user-defined modes, it is recommended to choose a new namespace for them as well as a user-defined prefix.
An example of a template that matches rdf:Description:

```xml
<xsl:template match="*[*][@rdf:about] | *[*][@rdf:nodeID]" mode="bs2:Block">
    <xsl:param name="id" as="xs:string?"/>
    <xsl:param name="class" as="xs:string?"/>

    <div>
        <xsl:if test="$id">
            <xsl:attribute name="id" select="$id"/>
        </xsl:if>
        <xsl:if test="$class">
            <xsl:attribute name="class" select="$class"/>
        </xsl:if>

        <xsl:apply-templates select="." mode="bs2:Header"/>

        <xsl:apply-templates select="." mode="bs2:PropertyList"/>
    </div>
</xsl:template>
```
ldh:ContentList mode renders the content specified by the rdf:_1 value of the current document.
There are a few special template modes such as ac:label and ac:description and related functions ac:label() and ac:description(), which are used not to render layout but to extract metadata from resource descriptions. They can be used to retrieve a resource's label and description no matter which RDF vocabularies are used in the data. They do so by invoking templates of the respective mode from vocabulary-specific stylesheets.
Templates are overridden by redefining them in the importing stylesheet, providing the same or a more specific match pattern and the same mode. The XSLT specification defines exactly how template priorities are determined in 6.4 Conflict Resolution for Template Rules.
The overriding template can then get the output of the overridden template by invoking either <xsl:apply-imports> or <xsl:next-match>. Read more in 6.7 Overriding Template Rules.
Always override the most specific template, i.e. if you want to change how a property is rendered, do not override the template for resource description, only the one for the property.
Keys are a lookup mechanism. They are defined at the stylesheet level using <xsl:key> and invoked using the key() function. For example:

```xml
<xsl:key name="resources" match="*[*][@rdf:about] | *[*][@rdf:nodeID]" use="@rdf:about | @rdf:nodeID"/>

<xsl:template match="*">
    <xsl:for-each select="key('resources', $ac:uri)">
        <xsl:value-of select="ac:label(.)"/>
    </xsl:for-each>
</xsl:template>
```

The key definition matches rdf:Description elements and uses their identifiers (URI or blank node ID). The template then looks up the RDF description of the current resource, i.e. the resource whose URI equals $ac:uri (the absolute URI of the current request), and outputs its label.
The stylesheet processes one main RDF/XML document at a time, which is supplied by LinkedDataHub's HTML writer. However, it is possible to load additional XML documents over HTTP using the document() XSLT function. To avoid XSLT errors on any possible error responses, it is advisable to do a conditional check using the doc-available() function before the actual document() call.
For example, instead of hardcoding the title of this document as Stylesheets, we can use the following code to load it and output it on the fly:

```xml
<xsl:value-of select="key('resources', 'https://linkeddatahub.com/linkeddatahub/docs/reference/stylesheet/', document('https://linkeddatahub.com/linkeddatahub/docs/reference/stylesheet/'))"/>
```

In case this document changes its title, all such references would automatically render the updated title. On the other hand, it incurs the overhead of making an HTTP request.
LinkedDataHub's default stylesheets use this feature extensively. In fact, one HTML page is rendered from a dozen RDF/XML documents.
Built-in ontologies, as well as some other system and well-known ontologies, have a local copy in each LinkedDataHub instance. As a result, retrieving their descriptions by dereferencing their URIs using document() does not incur an HTTP request and is much faster. The URI-to-file mapping is defined as Jena's location mapping and can be found in location-mapping.n3 and prefix-mapping.n3.
Client-side stylesheets use ixsl:schedule-action to load XML documents asynchronously.
Bootstrap 2.3.2 theme is used with some customizations.
The CSS stylesheets are specified in the xhtml:Style XSLT template mode.
The JavaScript files are specified in the xhtml:Script XSLT template mode.
LinkedDataHub only uses JavaScript for the functionality that cannot be achieved using client-side XSLT.
Overview of configuration options
LinkedDataHub is configured using environment variables in the docker-compose.yml file (environment-specific configuration should go into docker-compose.override.yml instead).
Below you'll find a list of environment variables and secrets grouped by service (they are defined in the environment sections in docker-compose.yml).
By default nginx is configured to guard against DoS attacks by limiting the rate of requests per second, which can be necessary on a public instance. The limiting can be disabled in platform/nginx.conf.template by commenting out all lines starting with limit_req using #.
The certificates generated by the server-cert-gen.sh script are self-signed and are therefore shown as "not secure" in web browsers. On a local machine this shouldn't be a problem; on public/production servers we recommend using Let's Encrypt certificates. They can be mounted into nginx as follows:

```yaml
  nginx:
    environment:
      - SERVER_CERT_FILE=/etc/letsencrypt/live/kgdev.net/fullchain.pem
      - SERVER_KEY_FILE=/etc/letsencrypt/live/kgdev.net/privkey.pem
    volumes:
      - /etc/letsencrypt:/etc/letsencrypt
```

SELF_SIGNED_CERT should be set to false in this case.
Access control and domain model management
Administration is provided in a separate LinkedDataHub application, which is paired with each end-user application.
This guide walks through the main features of the LinkedDataHub user interface. See the Data model guide for definitions of LinkedDataHub components such as documents, content blocks, and resources.
The user interface is only one of the interfaces LinkedDataHub provides. The other one is the command-line interface, which supports most of the UI actions.
The UI layout can be customized and extended using stylesheets.
Note that user interface features are subject to access control. For example, the search box will not be visible if the user is not authorized to access the search container.
The application title or logo in the top-left always links to the root container of the current application.
The search box lets users search for resources within the current application that have the specified keyword in their titles, descriptions etc. Results are shown in a dropdown list.
Due to current web browser limitations, it is not possible to log out when using client certificate authentication. As a workaround, you can close the browser, and click Cancel when asked to select a certificate the next time.
The Create button opens a dialog through which documents can be created.
The path leading from the current document up the parent/child hierarchy to the root container is shown in the breadcrumb bar, where the current document is always the last breadcrumb. The user can open any of the ancestor containers by clicking the breadcrumbs to the left of the current one. The icon shows the type of the current document (container or item). A label is displayed when the URL currently being browsed is external.
Further to the right, the action bar displays buttons for performing actions on the current document.
Last but not least, the settings button provides a link to the administration application.
Only administration users have access to the administration application.
The document tree shows the document hierarchy of the dataspace. Clicking on a container expands it to show its children.
In the desktop layout mode, the document tree folds out when the mouse is moved to the left edge of the screen. In the responsive layout, it is always shown.
The document tree also provides shortcuts to system containers:
The active document layout mode is displayed in, and can be changed using, the nav tabs at the top of the page.
Currently supported document layout modes:
The creation bar serves different functions depending on the current mode:
The default RDF dataset structure used by LinkedDataHub.
The default LinkedDataHub dataset structure follows these conventions:
The default dataset is installed into an application's service during setup.
By default, the URIs of the document resources represent the same parent/child hierarchy, which means the root container's URI equals the base URI of the application, and descendant document URIs are relative to their parent container URIs.
Uploaded files are an exception to this rule. They are content-addressed in the uploads/{sha1sum} namespace, where sha1sum is the SHA1 hash of the file content.
The default datasets of administration and end-user applications can be found in platform/datasets/admin.trig and platform/datasets/end-user.trig, respectively.
RDF terms (classes, properties etc.) used in the default datasets come from the well-known vocabularies FOAF and SIOC (with additional LinkedDataHub-specific assertions) as well as from system ontologies.
The dataset of each application is stored in an RDF triplestore which is accessed as a SPARQL service. End-user applications and admin applications have separate datasets and are backed by two different services.
RDF imports and transforming RDF using SPARQL CONSTRUCT
An RDF import is a combination of multiple resources:
- a CONSTRUCT query that transforms the input RDF into another RDF

Either the transformation query or the target graph needs to be specified, but not both.
If the graph is specified, the resulting RDF data is appended to a single document.
If the transformation query is specified, the resulting data should contain document instances in order to attach to the document hierarchy. The documents have to be URI resources. The document graphs have to be explicitly specified using a GRAPH block in the CONSTRUCT template (a Jena-specific extension of SPARQL 1.1); otherwise the import result will end up in the default graph of the application's RDF dataset, which is not accessible via LinkedDataHub.
The import process runs asynchronously in the background, i.e. the import item is created before the process completes. Currently the only way to determine when it completes is to refresh the import item and check the import status (completed/failed). Upon successful completion, metadata such as the number of imported RDF triples is attached to the import.
The resulting RDF is split into documents (named graphs), which are then created one by one and validated against constraints in the process. Constraint violations, if any, are attached to the import item.
Let's assume we want to import SKOS concept data:

```turtle
@prefix : <http://vocabularies.unesco.org/thesaurus/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .

:concept10 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:narrower :concept4938 , :concept7597 ;
    skos:prefLabel "Right to education"@en , "Droit à l'éducation"@fr , "Derecho a la educación"@es , "Право на образование"@ru ;
    skos:related :concept9 , :concept556 , :concept557 , :concept1519 , :concept5052 ;
    skos:topConceptOf <http://vocabularies.unesco.org/thesaurus> .

:concept1000 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:broader :concept389 ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:prefLabel "Talent"@en , "Talent"@fr , "Talento"@es , "Талант"@ru ;
    skos:related :concept993 , :concept996 , :concept3086 .

:concept10003 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:altLabel "Entrevue"@fr ;
    skos:broader :concept4725 ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:prefLabel "Interviews"@en , "Entretien"@fr , "Entrevista"@es , "Интервью"@ru .
```
This step is used to transform the RDF data being imported, if necessary (to a different vocabulary, for example). It also connects instances in the imported data to documents in LinkedDataHub's dataset.
The mapping is a user-defined SPARQL CONSTRUCT query. These are the rules that hold for mapping queries:

- The BASE value is automatically set to the imported file's URI. Do not add an explicit BASE to the query.
- The $base binding is set to the value of the application's base URI
- Use OPTIONAL for optional values
- Use BIND() to introduce new values and/or cast literals to the appropriate result datatype or URI
- Use encode_for_uri() when building URIs from literal values
- Use a GRAPH block in the constructor template to construct triples for a specific document
- dct:title values are mandatory for documents
- Connect documents to their main resources using the foaf:primaryTopic property

We plan to provide a UI-based mapping tool in the future.
In this example we pair each SKOS concept from the imported dataset with a new document:

```sparql
PREFIX dh:   <https://www.w3.org/ns/ldt/document-hierarchy#>
PREFIX sioc: <http://rdfs.org/sioc/ns#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX skos: <http://www.w3.org/2004/02/skos/core#>
PREFIX dct:  <http://purl.org/dc/terms/>

CONSTRUCT
{
    GRAPH ?item
    {
        ?concept ?p ?o .

        ?item a dh:Item ;
            foaf:primaryTopic ?concept ;
            sioc:has_container ?container ;
            dh:slug ?id ;
            dct:title ?prefLabel .
    }
}
WHERE
{
    SELECT *
    {
        ?concept a skos:Concept .
        BIND (STRAFTER(STR(?concept), "http://vocabularies.unesco.org/thesaurus/") AS ?id)
        BIND (uri(concat(str($base), "concepts/")) AS ?container)
        BIND (uri(concat(str(?container), encode_for_uri(?id), "/")) AS ?item)

        ?concept ?p ?o
        OPTIONAL
        {
            ?concept skos:prefLabel ?prefLabel
            FILTER (langMatches(lang(?prefLabel), "en"))
        }
    }
}
```
- When the import is complete, you should be able to see the imported documents as children of the ${base}concepts/ container.
The result of our mapping:

```turtle
@prefix : <http://vocabularies.unesco.org/thesaurus/> .
@prefix skos: <http://www.w3.org/2004/02/skos/core#> .
@prefix dcterms: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix foaf: <http://xmlns.com/foaf/0.1/> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy#> .

:concept10 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:narrower :concept4938 , :concept7597 ;
    skos:prefLabel "Right to education"@en , "Droit à l'éducation"@fr , "Derecho a la educación"@es , "Право на образование"@ru ;
    skos:related :concept9 , :concept556 , :concept557 , :concept1519 , :concept5052 ;
    skos:topConceptOf <http://vocabularies.unesco.org/thesaurus> .

<concepts/c499e66a-8036-4637-929d-0d809177883e/> a dh:Item ;
    sioc:has_container <concepts/> ;
    dh:slug "c499e66a-8036-4637-929d-0d809177883e" ;
    dcterms:title "Right to education"@en ;
    foaf:primaryTopic :concept10 .

:concept1000 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:broader :concept389 ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:prefLabel "Talent"@en , "Talent"@fr , "Talento"@es , "Талант"@ru ;
    skos:related :concept993 , :concept996 , :concept3086 .

<concepts/f41910fa-9077-4656-8f73-752fd923a79b/> a dh:Item ;
    sioc:has_container <concepts/> ;
    dh:slug "f41910fa-9077-4656-8f73-752fd923a79b" ;
    dcterms:title "Talent"@en ;
    foaf:primaryTopic :concept1000 .

:concept10003 a skos:Concept ;
    dcterms:modified "2006-05-23T00:00:00"^^xsd:dateTime ;
    skos:altLabel "Entrevue"@fr ;
    skos:broader :concept4725 ;
    skos:inScheme <http://vocabularies.unesco.org/thesaurus> ;
    skos:prefLabel "Interviews"@en , "Entretien"@fr , "Entrevista"@es , "Интервью"@ru .

<concepts/2afb2e06-5081-4db1-9255-660fcd1b3ec8/> a dh:Item ;
    sioc:has_container <concepts/> ;
    dh:slug "2afb2e06-5081-4db1-9255-660fcd1b3ec8" ;
    dcterms:title "Interviews"@en ;
    foaf:primaryTopic :concept10003 .
```
If you are ready to import some RDF, see our step-by-step tutorial on creating an RDF import.
CSV imports and mapping to RDF using SPARQL CONSTRUCT
CSV is a plain-text format for tabular data.
A CSV import is a combination of multiple resources, including the CSV file and a CONSTRUCT query that produces RDF.
A CSV import in LinkedDataHub consists of 2 steps: a generic conversion of the CSV table to raw RDF, followed by a mapping of the raw RDF to the target vocabulary using the CONSTRUCT query.
The import process runs in the background, i.e. the import item is created before the process completes. Currently the only way to determine when it completes is to refresh the import item and check the import status (completed/failed). Upon successful completion, metadata such as the number of imported RDF triples is attached to the import.
The mapping is done one row at a time, with each row resulting in a newly created document that is attached to the document hierarchy. The documents have to be URI resources. The server will automatically assign URIs to documents constructed in the default graph. Alternatively, it is possible to explicitly specify the document graph using a GRAPH block in the CONSTRUCT template (a Jena-specific extension of SPARQL 1.1).
The resulting RDF data is validated against constraints in the process. Constraint violations, if any, are attached to the import item.
The following sections use a running example of CSV data and its conversion to RDF:

```csv
countryCode,latitude,longitude,name
AD,42.5,1.6,Andorra
AE,23.4,53.8,"United Arab Emirates"
AF,33.9,67.7,Afghanistan
```
The data table is converted to a graph by treating rows as resources, columns as predicates, and cells as xsd:string literals. The approach is the same as CSV on the Web minimal mode.
```turtle
@base <https://localhost:4443/> .

_:8228a149-8efe-448d-b15f-8abf92e7bd17
    <#countryCode> "AD" ;
    <#latitude> "42.5" ;
    <#longitude> "1.6" ;
    <#name> "Andorra" .

_:ec59dcfc-872a-4144-822b-9ad5e2c6149c
    <#countryCode> "AE" ;
    <#latitude> "23.4" ;
    <#longitude> "53.8" ;
    <#name> "United Arab Emirates" .

_:e8f2e8e9-3d02-4bf5-b4f1-4794ba5b52c9
    <#countryCode> "AF" ;
    <#latitude> "33.9" ;
    <#longitude> "67.7" ;
    <#name> "Afghanistan" .
```
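The row-to-resource conversion can be sketched in a few lines of Python. This is a simplification: LinkedDataHub generates UUID-based blank node labels and performs proper literal escaping, whereas the sketch uses row indexes for determinism and assumes cell values need no escaping.

```python
import csv
import io

def csv_to_minimal_rdf(csv_text):
    """Sketch of the minimal-mode conversion: rows become blank-node
    resources, columns become <#column> predicates, and cells become
    plain (xsd:string) literals."""
    reader = csv.DictReader(io.StringIO(csv_text))
    triples = []
    for i, row in enumerate(reader):
        # LinkedDataHub uses UUID-based blank node labels; a row index
        # is used here to keep the sketch deterministic.
        subject = f"_:row{i}"
        props = " ;\n    ".join(
            f'<#{col}> "{value}"' for col, value in row.items()
        )
        triples.append(f"{subject}\n    {props} .")
    return "\n\n".join(triples)

csv_text = """countryCode,latitude,longitude,name
AD,42.5,1.6,Andorra
AE,23.4,53.8,"United Arab Emirates"
"""
print(csv_to_minimal_rdf(csv_text))
```

Note how the quoted CSV value is handled by the csv module, so "United Arab Emirates" comes through as a single cell.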
This step provides a semantic "lift" for the generic RDF output of the previous step by mapping it to classes and properties from specific vocabularies. It also connects instances in the imported data to the documents in LinkedDataHub's dataset.
These are the rules that hold for mapping queries:
- The BASE value is automatically set to the imported file's URI. Do not add an explicit BASE to the query.
- The $base binding is set to the value of the application's base URI.
- Use OPTIONAL for optional cell values.
- Use BIND() to introduce new values and/or cast literals to the appropriate result datatype or URI.
- Use encode_for_uri() when building URIs from literal values.
- Use a GRAPH block in the constructor template to construct triples for a specific document.
- dct:title values are mandatory for documents.
- Pair each document with its main resource using the foaf:primaryTopic property.

We are planning to provide a UI-based mapping tool in the future.
In this example we produce a country resource paired with its item (document) for each row:
```sparql
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX dh: <https://www.w3.org/ns/ldt/document-hierarchy#>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX sioc: <http://rdfs.org/sioc/ns#>

CONSTRUCT
{
    ?item a dh:Item ;
        sioc:has_container ?container ;
        dct:title ?name ;
        dh:slug ?countryCode ;
        foaf:primaryTopic ?country .
    ?country a <http://dbpedia.org/ontology/Country> ;
        dct:identifier ?countryCode ;
        geo:lat ?lat ;
        geo:long ?long ;
        dct:title ?name .
}
WHERE
{
    BIND(bnode() AS ?item)
    BIND(uri(concat(str($base), "countries/")) AS ?container)

    ?country <#countryCode> ?countryCode ;
        <#latitude> ?latString ;
        <#longitude> ?longString ;
        <#name> ?name .

    BIND(xsd:float(?latString) AS ?lat)
    BIND(xsd:float(?longString) AS ?long)
}
```
When the import is complete, you should be able to see the imported documents as children of the ${base}countries/ container.
The result of our mapping:

```turtle
PREFIX geo: <http://www.w3.org/2003/01/geo/wgs84_pos#>
PREFIX dh: <https://www.w3.org/ns/ldt/document-hierarchy#>
PREFIX dct: <http://purl.org/dc/terms/>
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>
PREFIX xsd: <http://www.w3.org/2001/XMLSchema#>
PREFIX foaf: <http://xmlns.com/foaf/0.1/>
PREFIX sioc: <http://rdfs.org/sioc/ns#>

<https://localhost:4443/countries/AD/> a dh:Item ;
    sioc:has_container <https://localhost:4443/countries/> ;
    dct:title "Andorra" ;
    dh:slug "AD" ;
    foaf:primaryTopic <https://localhost:4443/countries/AD/#id459bdd90-a309-49f9-92b2-1b9b5d110471> .

<https://localhost:4443/countries/AD/#id459bdd90-a309-49f9-92b2-1b9b5d110471> a <http://dbpedia.org/ontology/Country> ;
    dct:identifier "AD" ;
    geo:lat 42.5 ;
    geo:long 1.6 ;
    dct:title "Andorra" .

<https://localhost:4443/countries/AE/> a dh:Item ;
    sioc:has_container <https://localhost:4443/countries/> ;
    dct:title "United Arab Emirates" ;
    dh:slug "AE" ;
    foaf:primaryTopic <https://localhost:4443/countries/AE/#id7ad9b80b-8fbf-4696-92fa-61facf6c2066> .

<https://localhost:4443/countries/AE/#id7ad9b80b-8fbf-4696-92fa-61facf6c2066> a <http://dbpedia.org/ontology/Country> ;
    dct:identifier "AE" ;
    geo:lat 23.4 ;
    geo:long 53.8 ;
    dct:title "United Arab Emirates" .

<https://localhost:4443/countries/AF/> a dh:Item ;
    sioc:has_container <https://localhost:4443/countries/> ;
    dct:title "Afghanistan" ;
    dh:slug "AF" ;
    foaf:primaryTopic <https://localhost:4443/countries/AF/#id5de2fd91-158a-47d8-a302-d1af205fe59f> .

<https://localhost:4443/countries/AF/#id5de2fd91-158a-47d8-a302-d1af205fe59f> a <http://dbpedia.org/ontology/Country> ;
    dct:identifier "AF" ;
    geo:lat 33.9 ;
    geo:long 67.7 ;
    dct:title "Afghanistan" .
```
If you are ready to import some CSV, see our step-by-step tutorial on creating a CSV import.
Learn how to read and write RDF data from/to LinkedDataHub applications over HTTP
LinkedDataHub implements a uniform, generic RESTful Linked Data API as defined by the SPARQL 1.1 Graph Store Protocol. It adds a few conventions and constraints on top of it, however.
The LinkedDataHub UI supports 2 authentication methods: WebID-TLS (client certificate) and OpenID Connect (social login).
See how those authentication methods can be configured or how to get an account on LinkedDataHub.
HTTP API access using CLI scripts or curl currently does not support the OIDC method.
All HTTP access to documents is subject to access control. Requesting a document with insufficient access rights will result in a 403 Forbidden response.
Every document is also a named graph in the application's RDF dataset. LinkedDataHub supports the SPARQL Graph Store Protocol's direct graph identification as the HTTP CRUD protocol for managing document data.
GSP indirect graph identification is not supported starting with LinkedDataHub version 5.x.
The API also supports the PATCH HTTP method, which is optional in GSP. It accepts graph-scoped SPARQL updates that modify the requested document. Only the INSERT ... WHERE and DELETE WHERE forms are supported; GRAPH patterns are not allowed.
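As an illustration, a graph-scoped PATCH request could be assembled like this. The document URL is hypothetical, and the application/sparql-update content type is an assumption; check the API reference for the exact media type LinkedDataHub expects.

```python
import urllib.request

def build_patch_request(doc_url, sparql_update):
    """Build (but do not send) a PATCH request carrying a graph-scoped
    SPARQL update. The update is executed against the requested
    document's named graph, so it contains no GRAPH patterns."""
    return urllib.request.Request(
        doc_url,
        data=sparql_update.encode("utf-8"),
        method="PATCH",
        headers={"Content-Type": "application/sparql-update"},
    )

# INSERT ... WHERE form; DELETE WHERE is the other supported form.
update = """PREFIX dct: <http://purl.org/dc/terms/>
INSERT { <> dct:description "Updated description" }
WHERE {}"""

req = build_patch_request("https://localhost:4443/concepts/example/", update)
```

Sending the request (e.g. with urllib.request.urlopen and a client certificate) is omitted here, since authentication setup is instance-specific.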
Trailing slashes in document URIs are enforced using 308 Permanent Redirect responses.
| Method | Description | Success | Failure | Reason |
|---|---|---|---|---|
| GET | Returns the data of a document | 200 OK | 404 Not Found | Document with request URI not found |
| | | | 406 Not Acceptable | Media type not supported |
| POST | Appends data to a named graph | 204 No Content | 400 Bad Request | RDF syntax error |
| | | | 404 Not Found | Document with request URI not found |
| | | | 413 Payload Too Large | Request body too large |
| | | | 415 Unsupported Media Type | Media type not supported |
| | | | 422 Unprocessable Entity | Constraint violation |
| PUT | Upserts a document | 200 OK, 201 Created, 308 Permanent Redirect | 400 Bad Request | RDF syntax error |
| | | | 400 Bad Request | Malformed document URI |
| | | | 413 Payload Too Large | Request body too large |
| | | | 415 Unsupported Media Type | Media type not supported |
| | | | 422 Unprocessable Entity | Constraint violation |
| DELETE | Removes the requested document | 204 No Content | 400 Bad Request | Deleting the root document is not allowed |
| | | | 404 Not Found | Document with request URI not found |
| PATCH | Modifies a document using SPARQL Update | 204 No Content | 422 Unprocessable Entity | SPARQL update string violates syntax constraints |
Unlike earlier versions, LinkedDataHub 5.x manages the document hierarchy automatically.
By default, LinkedDataHub treats an RDF document as an item by giving it the dh:Item type and attaching it to the parent container using sioc:has_container. If the client wants to create a container instead, it has to explicitly add the dh:Container type on the document resource; the new container will be attached to its parent using sioc:has_parent. In either case, the new document's URI will be relative to its parent's.
LinkedDataHub will also manage additional document metadata, such as its owner and creation/modification timestamps.
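The parent-relative URI assignment can be sketched with standard URI resolution. This is an illustration, not the exact server-side algorithm: the slug (compare the dh:slug values in the mapping results earlier) is resolved, with a trailing slash, against the parent's URI.

```python
from urllib.parse import urljoin

def child_document_uri(parent_uri, slug):
    """Resolve a child document URI relative to its parent container:
    the slug plus a trailing slash, resolved against the parent URI."""
    return urljoin(parent_uri, slug + "/")

print(child_document_uri("https://linkeddatahub.com/namedgraph/", "new-container"))
# https://linkeddatahub.com/namedgraph/new-container/
```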
For example, this HTTP request to create a new container (Turtle syntax):

```
PUT /namedgraph/new-container/ HTTP/1.1
Host: linkeddatahub.com
Content-Type: text/turtle

@prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix dct: <http://purl.org/dc/terms/> .

<> a dh:Container ;
    dct:title "New container" .
```

will produce the following document triples:

```turtle
@prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<https://linkeddatahub.com/namedgraph/new-container/>
    a dh:Container ;
    dct:created "2025-03-31T21:46:21.984Z"^^xsd:dateTime ;
    dct:creator <https://linkeddatahub.com/namedgraph/admin/acl/agents/fda0009e-191b-4f07-838c-5daf2a74b35f/#this> ;
    dct:title "New container" ;
    sioc:has_parent <https://linkeddatahub.com/namedgraph/> ;
    acl:owner <https://linkeddatahub.com/namedgraph/admin/acl/agents/fda0009e-191b-4f07-838c-5daf2a74b35f/#this> .
```
The HTTP request to produce a new item can be empty:

```
PUT /namedgraph/new-item/ HTTP/1.1
Host: linkeddatahub.com
Content-Type: text/turtle
```

It will create an item document with the following triples:

```turtle
@prefix dh: <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix dct: <http://purl.org/dc/terms/> .
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix acl: <http://www.w3.org/ns/auth/acl#> .

<https://linkeddatahub.com/namedgraph/new-item/>
    a dh:Item ;
    dct:created "2025-03-31T20:45:42.802Z"^^xsd:dateTime ;
    dct:creator <https://linkeddatahub.com/acl/agents/d47e1f9b-c8d0-4546-840f-5d9fbb479da2/#id9d3814f2-53bc-42e9-b1ab-46cbc9a94263> ;
    sioc:has_container <https://linkeddatahub.com/namedgraph/> ;
    acl:owner <https://linkeddatahub.com/admin/acl/agents/d47e1f9b-c8d0-4546-840f-5d9fbb479da2/#id9d3814f2-53bc-42e9-b1ab-46cbc9a94263> .
```
LinkedDataHub has a few built-in constraints that are not found in the standard Graph Store Protocol. The built-in constraints are similar to, but separate from, the ontology constraints.
Every LinkedDataHub application provides a SPARQL endpoint on the sparql path (relative to the application's base URI). It supports the SPARQL 1.1 Protocol and serves as a proxy for the backend endpoint of the application.
A CONSTRUCT query can also be executed with its result stored into the specified named graph.
LinkedDataHub works as a Linked Data proxy (from the end-user perspective, a Linked Data browser) when a URL is provided using the uri query parameter. All HTTP methods are supported.
If the URL dereferences successfully as RDF, LinkedDataHub forwards its response body (re-serializing it to enable content negotiation). During a write request, the request body is forwarded to the provided URL.
The proxy only accepts external URLs (ones not relative to the current application's base URI); local URLs have to be dereferenced directly.
LinkedDataHub implements proactive content negotiation based on the request Accept header value. The following RDF media types are supported (for requests as well as responses, unless indicated otherwise):
LinkedDataHub provides machine-readable error responses in the requested RDF format. An example of 403 Forbidden:
```turtle
@prefix xsd: <http://www.w3.org/2001/XMLSchema#> .
@prefix http: <http://www.w3.org/2011/http#> .
@prefix sc: <http://www.w3.org/2011/http-statusCodes#> .
@prefix dct: <http://purl.org/dc/terms/> .

[ a http:Response ;
    dct:title "Access not authorized" ;
    http:reasonPhrase "Forbidden" ;
    http:sc sc:Forbidden ;
    http:statusCodeValue "403"^^xsd:long
] .
```
GET and HEAD RDF responses from the backend triplestores (not LinkedDataHub responses) are cached automatically by LinkedDataHub, using Varnish as an HTTP proxy cache. You can check the age of the response by inspecting the Age response header (the value is in seconds).
LinkedDataHub sends ETag response headers that are derived as hashes of the requested document's RDF content. Every serialization format (HTML, RDF/XML, Turtle etc.) gets a distinct ETag value.
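The per-format ETag idea can be illustrated with a small sketch. The hashing scheme here is illustrative, not LinkedDataHub's actual algorithm: it simply shows how hashing the content together with the media type yields a distinct validator per serialization.

```python
import hashlib

def format_etag(rdf_content: bytes, media_type: str) -> str:
    """Derive a distinct ETag per serialization format by hashing the
    document content together with the media type (illustrative only)."""
    digest = hashlib.sha1(media_type.encode("utf-8") + b"\n" + rdf_content).hexdigest()
    return f'"{digest}"'

content = b'<s> <p> "o" .'
# The same document yields different ETags per format:
print(format_etag(content, "text/turtle"))
print(format_etag(content, "application/rdf+xml"))
```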
Caching of LinkedDataHub responses can be enabled on the nginx HTTP proxy server by uncommenting the add_header Cache-Control directives in the platform/nginx.conf.template file. Caching of the /uploads/ and /static/ namespaces is enabled by default (since version 4.0.4).
Ontologies are sets of domain concepts. The domain can span both documents (information resources) and abstract/physical things (non-information resources).
Ontologies can import other ontologies, both user-defined and system ones provided by LinkedDataHub. The imports are retrieved during application initialization, and the application's namespace ontology becomes a transitive union, i.e. it is merged with its imports, the imports of the imports, etc.
Main ontology properties:
| Ontology | Title | Prefix |
|---|---|---|
| https://w3id.org/atomgraph/linkeddatahub/default# | Default | def: |
| https://w3id.org/atomgraph/linkeddatahub/apps# | Applications | lapp: |
| https://w3id.org/atomgraph/linkeddatahub/acl# | Access control | lacl: |
| https://w3id.org/atomgraph/linkeddatahub/admin# | Admin | adm: |
| https://w3id.org/atomgraph/linkeddatahub# | LinkedDataHub | ldh: |
| https://www.w3.org/ns/ldt/document-hierarchy# | Document hierarchy | dh: |
Classes are simply RDFS classes. Usually the application dataset contains class instances.
Main class properties such as constructor and constraint are explained in the sub-sections below.
Constructors are SPARQL CONSTRUCT queries that serve as templates for class instances. They specify the properties (both mandatory and optional) that an instance is supposed to have, as well as the expected datatypes of their values. Constructors are used in the create/edit modes. A class can have multiple constructors.
For example, the constructor of the ldh:XHTML class:
```sparql
PREFIX rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#>

CONSTRUCT {
    $this rdf:value "<div xmlns=\"http://www.w3.org/1999/xhtml\"></div>"^^rdf:XMLLiteral .
}
WHERE {}
```

By default, an instance of this class has a content property (rdf:value) with an XML literal value.
LinkedDataHub reuses SPIN constructors for the implementation, but adds a special syntax convention using blank nodes to indicate the expected resource type ([ a ex:Person ]) or literal datatype ([ a xsd:string ]). The magic variable $this refers to the instance being constructed; it is initially a blank node but gets skolemized to a URI when submitted to the server.
Note that classes inherit constructors from superclasses at runtime. Subclasses do not have to redefine constructor properties already found in superclass constructors, only additional ones.
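The skolemization step can be sketched as follows. The "#id" + UUID fragment scheme mirrors the fragment URIs seen in the CSV mapping results earlier; the exact algorithm LinkedDataHub uses is an assumption here.

```python
import uuid

def skolemize(doc_uri):
    """Sketch: the blank node bound to $this by the constructor is
    replaced with a fragment URI minted inside the created document."""
    return f"{doc_uri}#id{uuid.uuid4()}"

print(skolemize("https://localhost:4443/countries/AD/"))
```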
Constraints are SPARQL queries or SPIN command templates that validate submitted RDF data during document creation and editing. Constraints are enforced for instances of the model classes on which they are defined and are used to check class instances for violations (missing mandatory properties, malformed values etc.). For example, an instance of dh:Item without dct:title will fail validation, because titles are mandatory for LinkedDataHub documents.
LinkedDataHub reuses SPIN constraints. Classes inherit constraints from superclasses.
SHACL constraint validation is supported as well.
LinkedDataHub allows definition of new properties.
Packages are reusable bundles of ontologies and stylesheets that provide vocabulary support with custom rendering for specific RDF vocabularies.
Version: Packages were introduced in LinkedDataHub 5.2.
Note: Packages are declarative only (RDF + XSLT). They contain no Java code and integrate at installation time, not runtime.
A LinkedDataHub package is a reusable component that bundles together a vocabulary ontology and XSLT templates into a single installable unit.
Each package consists of two files:
- ns.ttl - the package ontology. It imports the vocabulary using owl:imports and attaches blocks to properties using ldh:view (forward relationships) or ldh:inverseView (inverse relationships). Blocks are typically ldh:View resources with SPARQL queries that render related data for property values.
- layout.xsl - the XSLT stylesheet. Its templates use system modes such as bs2:* (Bootstrap 2.3.2 components) and xhtml:* (XHTML elements). See the Stylesheets reference for details on XSLT customization.

Package files are organized in the LinkedDataHub-Apps repository:
```
packages/
├── package-name/
│   ├── ns.ttl       # Ontology with views
│   └── layout.xsl   # XSLT stylesheet
```
Package metadata is published as Linked Data that resolves from the package URI (e.g., https://packages.linkeddatahub.com/skos/#this), using standard LinkedDataHub properties:
- lapp:Package - the package type
- rdfs:label - the package title
- dct:description - the package description
- ldt:ontology - the location of the package ontology
- ac:stylesheet - the location of the package XSLT stylesheet

Example package metadata:

```turtle
@prefix lapp: <https://w3id.org/atomgraph/linkeddatahub/apps#> .
@prefix ldt: <https://www.w3.org/ns/ldt#> .
@prefix ac: <https://w3id.org/atomgraph/client#> .
@prefix rdfs: <http://www.w3.org/2000/01/rdf-schema#> .
@prefix dct: <http://purl.org/dc/terms/> .

<https://packages.linkeddatahub.com/skos/#this> a lapp:Package ;
    rdfs:label "SKOS Package" ;
    dct:description "SKOS vocabulary support with custom templates" ;
    ldt:ontology <https://raw.githubusercontent.com/AtomGraph/LinkedDataHub-Apps/master/packages/skos/ns.ttl#> ;
    ac:stylesheet <https://raw.githubusercontent.com/AtomGraph/LinkedDataHub-Apps/master/packages/skos/layout.xsl> .
```
The package ontology file contains two layers.
The first layer imports the external vocabulary using owl:imports. See the Ontologies reference for ontology management details.

```turtle
@prefix owl: <http://www.w3.org/2002/07/owl#> .

<https://raw.githubusercontent.com/AtomGraph/LinkedDataHub-Apps/master/packages/skos/ns.ttl#> a owl:Ontology ;
    owl:imports <http://www.w3.org/2004/02/skos/core> .
```
The second layer consists of SPARQL-based views attached to properties from the imported vocabulary:

```turtle
skos:narrower ldh:view ns:NarrowerConcepts .

ns:NarrowerConcepts a ldh:View ;
    dct:title "Narrower concepts" ;
    spin:query ns:SelectNarrowerConcepts .

ns:SelectNarrowerConcepts a sp:Select ;
    sp:text """SELECT DISTINCT ?narrower
        WHERE { GRAPH ?graph { $about skos:narrower ?narrower } }
        ORDER BY ?narrower""" .
```
- Views are rendered when displaying resources that have the specified property. Use ldh:view for forward relationships (resource has property) or ldh:inverseView for inverse relationships (other resources point to this resource via property).
XSLT templates use system modes to override the default rendering:

```xml
<!-- Hide properties from default property list -->
<xsl:template match="skos:narrower | skos:broader" mode="bs2:PropertyList"/>

<!-- Override XHTML head elements -->
<xsl:template match="*" mode="xhtml:Style">
    <!-- Custom styles -->
</xsl:template>
```
Available system modes include:
- bs2:* - Bootstrap 2.3.2 components (PropertyList, Form, etc.)
- xhtml:* - XHTML elements (Style, Script, etc.)

Installation requires Control access to the administration application. See the step-by-step installation guide for detailed instructions.
Installation will fail if these files do not exist.
When you install a package, the system performs the following steps:
1. Retrieves the package ontology (ns.ttl) and PUTs it as a document to ${admin_base}ontologies/{hash}/, where {hash} is the SHA-1 hash of the ontology URI
2. Imports the package ontology into the application's namespace ontology (${admin_base}ontologies/namespace/)
3. Retrieves the package stylesheet (layout.xsl) and saves it to /static/{package-path}/layout.xsl, where {package-path} is derived from the package URI (e.g., com/linkeddatahub/packages/skos/ for https://packages.linkeddatahub.com/skos/)
4. Updates the master stylesheet /static/xsl/layout.xsl by adding an import:

```xml
<xsl:import href="../com/atomgraph/linkeddatahub/xsl/bootstrap/2.3.2/layout.xsl"/> <!-- System -->
<xsl:import href="../com/linkeddatahub/packages/skos/layout.xsl"/> <!-- Package (added) -->
```
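The path derivations in steps 1 and 3 can be illustrated with a small sketch. The reversed-hostname rule for the package path is inferred from the skos example above, not from a specification, so treat it as an assumption.

```python
import hashlib
from urllib.parse import urlparse

def ontology_doc_path(admin_base, ontology_uri):
    """Ontology document URI: {hash} is the SHA-1 hash of the ontology URI."""
    sha1 = hashlib.sha1(ontology_uri.encode("utf-8")).hexdigest()
    return f"{admin_base}ontologies/{sha1}/"

def package_stylesheet_path(package_uri):
    """Stylesheet path under /static/: the package host name reversed,
    followed by the URI path (inferred from the skos example)."""
    parsed = urlparse(package_uri)
    reversed_host = "/".join(reversed(parsed.hostname.split(".")))
    return f"/static/{reversed_host}{parsed.path}layout.xsl"

print(package_stylesheet_path("https://packages.linkeddatahub.com/skos/"))
# /static/com/linkeddatahub/packages/skos/layout.xsl
```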
Important: After installing or uninstalling a package, you must restart the Docker service for XSLT stylesheet changes to take effect:

```shell
docker-compose restart linkeddatahub
```

Do not use --force-recreate, as that would overwrite the stylesheet file changes.
Packages can be safely uninstalled, which removes:
Note: Uninstalling a package does not remove user-created data that uses the package's vocabulary.
Packages use installation-time composition, NOT runtime composition:
-Package installation and uninstallation is performed via system endpoints on the admin application. See packages/install and packages/uninstall in the HTTP API reference.
After installing the SKOS package:
```
webapp/
├── static/
│   ├── com/
│   │   └── linkeddatahub/
│   │       └── packages/
│   │           └── skos/
│   │               └── layout.xsl   # Package stylesheet
│   └── xsl/
│       ├── layout.xsl               # End-user master stylesheet
│       └── admin/
│           └── layout.xsl           # Admin master stylesheet
```
Developers can create custom packages for their own domain vocabularies. The process involves:
Create ns.ttl with the vocabulary import and views:
```turtle
<https://raw.githubusercontent.com/you/repo/master/packages/schema.org/ns.ttl#> a owl:Ontology ;
    owl:imports <http://schema.org/> .

# Attach view to a property
schema:knows ldh:view :PersonKnows .

:PersonKnows a ldh:View ;
    dct:title "Knows" ;
    spin:query :SelectPersonKnows .

:SelectPersonKnows a sp:Select ;
    sp:text """
        SELECT DISTINCT ?person
        WHERE { GRAPH ?graph { $about schema:knows ?person } }
        ORDER BY ?person
    """ .
```
Create layout.xsl with XSLT templates using system modes like bs2:* and xhtml:*. See the Stylesheets reference for template customization patterns.

```xml
<xsl:template match="schema:knows" mode="bs2:PropertyList"/>
```
Publish package metadata as Linked Data at your package URI:

```turtle
<https://packages.linkeddatahub.com/schema.org/#this> a lapp:Package ;
    rdfs:label "Schema.org Package" ;
    dct:description "Schema.org vocabulary support" ;
    ldt:ontology <https://raw.githubusercontent.com/you/repo/master/packages/schema.org/ns.ttl#> ;
    ac:stylesheet <https://raw.githubusercontent.com/you/repo/master/packages/schema.org/layout.xsl> .
```
Ensure the metadata contains ldt:ontology and ac:stylesheet properties pointing to the package resources.
Use the CLI to test your package installation:
```shell
install-package.sh \
  -b "https://localhost:4443/" \
  -f ssl/owner/cert.pem \
  -p "$cert_password" \
  --package "https://packages.linkeddatahub.com/schema.org/#this"
```
A curated list of available packages can be found in the LinkedDataHub-Apps repository. Each package directory contains the package ontology (ns.ttl) and stylesheet (layout.xsl) files.
LinkedDataHub access control is based on the W3C ACL ontology.
There are 4 access modes (classes of operation) that map to HTTP methods:
| Mode | Those allowed may | HTTP method |
|---|---|---|
| Read | read the contents (including querying it, etc.) | GET |
| Write | overwrite the contents (including deleting it, or modifying part of it) | PUT, DELETE |
| Append | add information to [the end of] it but not remove information | POST |
| Control | set the Access Control List for this themselves | - |
An agent is a person or a software agent that can be authorized to have certain modes of access to certain applications.
A group is a named group of agents to which an authorization can be given. It is a subclass of the foaf:Group class.
There are several default groups:
Only agents that belong to the owners group will have access to the administration application. Note that an agent being a member of one of the above groups does not automatically provide it with an authorization. A valid authorization for the whole group has to be present.
An authorization explicitly grants an agent or a group of agents access to a specific end-user application document or a class of its documents.
An agent has to be authorized using the Control mode to be able to log in to the administration application.
Here are the default authorizations for groups and their respective access modes:
| Group | Read access | Write/append access | Full control |
|---|---|---|---|
| Owners | Read | Write, Append | Control |
| Writers | Read | Write, Append | - |
| Readers | Read | - | - |
A public access authorization allows access for non-authenticated agents.
If access is denied due to a missing authorization, the agent can ask for it by issuing a request to the application's owners. The request indicates the request URI and access mode in question. The owners can then accept the request by creating an authorization with the provided information (possibly extending the requested access to a group of agents or a class of resources), or simply ignore it.
Blocks that embed/transclude any dereferenceable URI
A block other than HTML content is called an object and has to have a dereferenceable URI. Objects are embedded (transcluded) into the HTML page. You can use any RDF resource or uploaded file as an object.
LinkedDataHub will first attempt to load RDF data from the object URI and render it as a block. If that fails, it will simply embed the URI using the HTML <object> element. Object blocks can be used to embed queries, charts, and other LinkedDataHub system resources.
Built-in block types follow a UI convention that splits the block into left, main, and right content areas. The layout of the main content may depend on the active mode of the block. The left and right sections are block-type specific.
```turtle
@prefix ldh: <https://w3id.org/atomgraph/linkeddatahub#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix ac: <https://w3id.org/atomgraph/client#> .

<https://localhost:4443/concepts/example/#object-block>
    a ldh:Object ;
    rdf:value <http://dbpedia.org/resource/Copenhagen> ;
    ac:mode ac:MapMode .
```
| Action | Description |
|---|---|
| Create | Click the Object button at the bottom of the page (in content mode). Enter the object's value URI. Click Save. |
| Update | Click the button in the top-right corner of the block (it only appears when you move the mouse close to that corner). Change the object's value URI. Click Save. |
| Delete | Click the button in the top-right corner of the block (it only appears when you move the mouse close to that corner). Click the button to delete the block. |

| Action | CLI script |
|---|---|
| Create | add-object-block.sh |
| Update | - |
| Delete | - |
Blocks that embed XHTML markup as an RDF literal
An XHTML block is simply a fragment of XHTML, stored as a canonical XML literal (rdf:XMLLiteral) in the RDF document. It can be edited using a WYSIWYG editor.

```turtle
@prefix ldh: <https://w3id.org/atomgraph/linkeddatahub#> .
@prefix rdf: <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .

<https://localhost:4443/concepts/example/#xhtml-block>
    a ldh:XHTML ;
    rdf:value "<div xmlns=\"http://www.w3.org/1999/xhtml\">\n<p>A paragraph</p>\n</div>"^^rdf:XMLLiteral .
```
Content blocks can only be managed when the Content layout mode is active.
| Action | Description |
|---|---|
| Create | Click the XHTML button at the bottom of the page (in content mode). Create the XHTML content in the WYSIWYG editor. Click Save. |
| Update | Click the button in the top-right corner of the block (it only appears when you move the mouse close to that corner). Change the XHTML content in the WYSIWYG editor. Click Save. |
| Delete | Click the button in the top-right corner of the block (it only appears when you move the mouse close to that corner). Click the button to delete the block. |

| Action | CLI script |
|---|---|
| Create | add-xhtml-block.sh |
| Update | - |
| Delete | - |
Content blocks and the basic data content actions.
Content list is a layout mode introduced in LinkedDataHub 3.x. It allows composition of rich, structured web documents from multiple types of content blocks.
The blocks are attached to the document resource using the RDF sequence properties rdf:_1, rdf:_2 etc. The index number in the property URI indicates the position of the content block on the document's content list.
The blocks have 2 main types: XHTML (ldh:XHTML) and object (ldh:Object).
The blocks can be re-arranged by dragging one of them and dropping it in a different position.
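Reading the blocks back in display order amounts to sorting by the numeric suffix of the rdf:_N properties, as in this sketch (the input shape, predicate/object pairs, is an illustration of data you might extract from the document graph):

```python
import re

RDF_NS = "http://www.w3.org/1999/02/22-rdf-syntax-ns#"

def ordered_blocks(properties):
    """Given (predicate URI, block URI) pairs from the document resource,
    return the block URIs ordered by their rdf:_N sequence index."""
    indexed = []
    for pred, block in properties:
        match = re.fullmatch(re.escape(RDF_NS) + r"_(\d+)", pred)
        if match:
            indexed.append((int(match.group(1)), block))
    return [block for _, block in sorted(indexed)]

props = [
    (RDF_NS + "_2", "#object-block"),
    (RDF_NS + "_1", "#xhtml-block"),
]
print(ordered_blocks(props))
# ['#xhtml-block', '#object-block']
```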
Paginated views based on SPARQL queries
Views are interactive, paginated, and optionally ordered results of a SPARQL SELECT result set. What is rendered in the UI is not directly the tabular result, however, but descriptions of the resources selected by the result set. That is achieved by on-the-fly SPARQL query rewriting: the SELECT is wrapped into a DESCRIBE query; the DESCRIBE reuses the same variables as the SELECT projection. This will not work for all SELECT queries.
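The rewriting idea can be sketched with naive string manipulation. LinkedDataHub operates on the parsed query rather than strings, and the function name here is hypothetical; the sketch only shows the shape of the resulting DESCRIBE query.

```python
def wrap_select_into_describe(select_query, projected_vars):
    """Wrap a SELECT query into a DESCRIBE that reuses the projected
    variables, with the original SELECT as a subquery."""
    vars_clause = " ".join(f"?{v}" for v in projected_vars)
    return f"DESCRIBE {vars_clause}\nWHERE {{\n  {{ {select_query} }}\n}}"

q = "SELECT ?concept WHERE { ?concept a <http://www.w3.org/2004/02/skos/core#Concept> }"
print(wrap_select_into_describe(q, ["concept"]))
```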
To render paginated lists of resources, legacy applications would normally have a dedicated API endpoint that supports pagination, ordering etc. In LinkedDataHub, views achieve the same functionality by simply building the SPARQL query string on the client side. Views can therefore be seen as client-side "containers".
Views can be rendered in multiple layout modes: properties, list, grid, table, map, chart etc. They also show the total number of results and allow result ordering by property.
View results can be rendered using the same layout modes as documents.
On the left side, views provide faceted search, which acts as a filter that narrows down the view results.
By default the facets are generic and inferred from the triple patterns of the SPARQL SELECT query used by the view. They can be customized using XSLT.
Parallax navigation is a rather unique navigation approach that lends itself perfectly to graph data. It is enabled for container content and shown as Related results on the right side of the view. Parallax allows "jumping" from a result set to a related result set using the selected RDF property. It works in combination with faceted search, which can be used to filter the initial result set.
For example, facets can be used to filter a set of products that belong to a certain category; parallax can then be used to jump to the set of companies that provide those products, and further on to a set of representatives of those companies.
Interactive SPARQL queries
Queries are SPARQL 1.1 query strings that can be executed interactively. They can be defined with a SPARQL service that they execute against; otherwise they execute against the application's own SPARQL service.
It is only possible to save valid SPARQL 1.1 query strings. SPARQL updates are currently not supported.
Interactive charts based on SPARQL queries
Charts can render both types of SPARQL results:

- SELECT results
- DESCRIBE and CONSTRUCT results

In that sense they are similar to the chart layout mode in views, but charts also store the chart type as well as the category and series information: variable names in the case of tabular results, and property URIs in the case of graph results.
The default chart type is the table. Other chart types might not apply to all result data; for example, a scatter chart needs numeric or datetime values for both the category and the series.
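For instance, a SELECT query that projects a datetime together with a numeric aggregate produces results that a scatter chart can plot directly, using one variable as the category and the other as the series (the pattern below is illustrative):

```sparql
PREFIX dct: <http://purl.org/dc/terms/>

# ?created (xsd:dateTime) can serve as the category,
# ?count (numeric) as the series.
SELECT ?created (COUNT(?doc) AS ?count)
WHERE
{
    ?doc dct:created ?created .
}
GROUP BY ?created
ORDER BY ?created
```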
Items are analogous to files in a filesystem
Document properties such as sioc:has_container, dct:created, dct:modified, acl:owner are automatically managed by LinkedDataHub.
```turtle
@prefix dh:   <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix ldh:  <https://w3id.org/atomgraph/linkeddatahub#> .
@prefix ac:   <https://w3id.org/atomgraph/client#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix acl:  <http://www.w3.org/ns/auth/acl#> .

<https://localhost:4443/concepts/example/>
    a dh:Item ;
    rdf:_1 <https://localhost:4443/concepts/example/#xhtml-block> ;
    rdf:_2 <https://localhost:4443/concepts/example/#object-block> ;
    dct:created "2025-06-02T19:49:48.126Z"^^xsd:dateTime ;
    dct:creator <https://localhost:4443/admin/acl/agents/865c2431-8436-4ae8-b300-2a531a013cd0/#this> ;
    dct:title "Example" ;
    sioc:has_container <https://localhost:4443/concepts/> ;
    acl:owner <https://localhost:4443/admin/acl/agents/865c2431-8436-4ae8-b300-2a531a013cd0/#this> .

<https://localhost:4443/concepts/example/#xhtml-block>
    a ldh:XHTML ;
    rdf:value "<div xmlns=\"http://www.w3.org/1999/xhtml\">\n<p>A paragraph</p>\n</div>"^^rdf:XMLLiteral .

<https://localhost:4443/concepts/example/#object-block>
    a ldh:Object ;
    rdf:value <http://dbpedia.org/resource/Copenhagen> ;
    ac:mode ac:MapMode .
```
| Action | Description |
|---|---|
| Create | Create a new child document by clicking the Create button on the left of the navbar. Fill out the form. Click Save. |
| Update | Open the current document's editing form by clicking the Edit button in the middle section of the navbar. Make changes. Click Save. |
| Delete | Delete the current document by clicking the Delete button in the action bar (the right section of the navbar). |
| Action | CLI script |
|---|---|
| Create | create-item.sh |
| Update | put.sh |
| Delete | delete.sh |
Containers are analogous to folders in a filesystem
Document properties such as sioc:has_parent, dct:created, dct:modified, acl:owner are automatically managed by LinkedDataHub.
```turtle
@prefix dh:   <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix ldh:  <https://w3id.org/atomgraph/linkeddatahub#> .
@prefix rdf:  <http://www.w3.org/1999/02/22-rdf-syntax-ns#> .
@prefix dct:  <http://purl.org/dc/terms/> .
@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .
@prefix acl:  <http://www.w3.org/ns/auth/acl#> .

<https://localhost:4443/concepts/>
    a dh:Container ;
    rdf:_1 <https://localhost:4443/concepts/#select-children> ;
    dct:created "2025-06-02T19:12:26.533Z"^^xsd:dateTime ;
    dct:creator <https://localhost:4443/admin/acl/agents/865c2431-8436-4ae8-b300-2a531a013cd0/#this> ;
    dct:title "Concepts" ;
    sioc:has_parent <https://localhost:4443/> ;
    acl:owner <https://localhost:4443/admin/acl/agents/865c2431-8436-4ae8-b300-2a531a013cd0/#this> .

<https://localhost:4443/concepts/#select-children>
    a ldh:Object ;
    rdf:value ldh:ChildrenView .
```
| Action | Description |
|---|---|
| Create | Create a new child document by clicking the Create button on the left of the navbar. Fill out the form. Click Save. |
| Update | Open the current document's editing form by clicking the Edit button in the middle section of the navbar. Make changes. Click Save. |
| Delete | Delete the current document by clicking the Delete button in the action bar (the right section of the navbar). |
| Action | CLI script |
|---|---|
| Create | create-container.sh |
| Update | put.sh |
| Delete | delete.sh |
Document structure and the basic data management actions.
The basic structure of resources in a LinkedDataHub application is analogous to a file system, but built using RDF resources and the relationships between them instead. There is a hierarchy of containers (folders), which are collections of items (files) as well as sub-containers (sub-folders). Both containers (instances of dh:Container) and items (instances of dh:Item) are documents (instances of foaf:Document). Items cannot contain other documents.
The first level of resources in a container is referred to as its children (of which that container is the parent), while all levels down the hierarchy are collectively referred to as descendants.
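The hierarchy itself is expressed with plain RDF properties. A minimal sketch, using hypothetical document URIs (dh:Container, dh:Item, sioc:has_parent and sioc:has_container are the actual terms LinkedDataHub uses):

```turtle
@prefix dh:   <https://www.w3.org/ns/ldt/document-hierarchy#> .
@prefix sioc: <http://rdfs.org/sioc/ns#> .

# a container that is a child of the root container
<https://localhost:4443/concepts/> a dh:Container ;
    sioc:has_parent <https://localhost:4443/> .

# a sub-container, i.e. a descendant of the root container
<https://localhost:4443/concepts/schemes/> a dh:Container ;
    sioc:has_parent <https://localhost:4443/concepts/> .

# an item (file-like document) contained by the container
<https://localhost:4443/concepts/example/> a dh:Item ;
    sioc:has_container <https://localhost:4443/concepts/> .
```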
When a user logs in, the application loads its root container (unless a specific URI was requested). From there, users can navigate down the resource hierarchy, starting with the children of the root container. At any moment there is only one current document per page, on which management actions can be performed: it can be viewed, edited etc.
RDF resources and their management
Within documents, users can create RDF resources, i.e. instances of both built-in and user-defined RDF classes.
Built-in classes are defined in system ontologies, while user-defined classes are defined in user ontologies.
TBD
LinkedDataHub's built-in classes have pre-defined constructors, constraints, and often a customized UI rendering as well (implemented by overriding generic XSLT templates with type-specific templates), for example a query editor or a chart.
| Type | Class | Description |
|---|---|---|
| ASK | sp:Ask | SPARQL ASK query |
| Application | ldh:Application | Remote LinkedDataHub application |
| CONSTRUCT | sp:Construct | SPARQL CONSTRUCT query |
| CSV import | ldh:CSVImport | Async CSV data import |
| File | nfo:FileDataObject | File to upload |
| Graph chart | ldh:GraphChart | Chart based on CONSTRUCT/DESCRIBE query results |
| RDF import | ldh:RDFImport | Async RDF data import |
| Result set chart | ldh:ResultSetChart | Chart based on SELECT query results |
| Select | sp:Select | SPARQL SELECT query |
| Service | sd:Service | SPARQL service (identified by its endpoint URL) |
| View | ldh:View | View based on SELECT query results |
Resources that have customized UIs (such as queries, views, charts) are documented in more detail.
Resources can only be managed when the Properties layout mode is active.
| Action | Description |
|---|---|
| Create | Click the Create dropdown at the bottom of the page. Fill out the fields in the form that appears. Click Save. |
| Update | Click the button in the top-right corner of the resource header (in the middle column of the content). Make changes in the form that appears. Click Save. |
| Delete | Click the button in the top-right corner of the resource header. Click the button to delete the block. |
The following actions can also be performed using the command line interface.
| Type | Action | CLI script |
|---|---|---|
| ASK | Create | imports/create-query.sh |
| Application | Create | - |
| CONSTRUCT | Create | admin/ontologies/add-construct.sh |
| CSV import | Create | imports/create-csv-import.sh |
| File | Create | imports/create-file.sh |
| Graph chart | Create | - |
| RDF import | Create | imports/create-rdf-import.sh |
| Result set chart | Create | add-result-set-chart.sh |
| Select | Create | add-select.sh |
| Service | Create | add-generic-service.sh |
| View | Create | add-view.sh |
Document and content structure as well as actions that can be performed on them.
This guide describes how to get access to an application.
After you have logged in to LinkedDataHub, by default you do not have access rights to view or edit the documents in the dataspace. The owner of the dataspace has to create an explicit authorization to grant you access.
Fortunately, LinkedDataHub makes it easy to notify the owner by issuing an access request. Navigate to the document you want to gain access to, click the Request access button and submit the form that appears. You can choose the desired access modes.
Are you in? Then continue the get started guide or take a look at the UI overview.
This guide describes how to log in to LinkedDataHub.
In order to authenticate as the owner of a LinkedDataHub instance, you need to use the WebID authentication method.
LinkedDataHub uses WebID as the Single sign-on (SSO) protocol for distributed applications, which is based on authentication using TLS client certificates. Using WebID, you will be able to authenticate with every LinkedDataHub application. Read more about WebID.
There are two ways to get a LinkedDataHub WebID: setup and signup.
Complete the setup and run your own instance of LinkedDataHub.
The ssl/owner/keystore.p12 file is your WebID certificate. The password is the owner_cert_password Docker secret value.
Sign up to an existing instance of LinkedDataHub. Click the Sign up button and fill out the form with your details to get a WebID.
You will get an email with a .p12 file attached, which is your WebID certificate. The certificate's password is the one you entered in the signup form.
You'll need a PEM version of the certificate for use with the command line interface scripts. During setup, it is stored under ssl/owner/cert.pem. If you got the certificate by email, you need to convert the PKCS12 file to PEM using OpenSSL.
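The conversion is a single openssl pkcs12 invocation. The sketch below demonstrates it end to end on a throwaway certificate, since your actual .p12 file and password will differ:

```shell
# Demonstrated on a throwaway certificate; in real usage, point -in at the
# .p12 file you received (e.g. ssl/owner/keystore.p12) and use its password.
# 1. Create a throwaway key and self-signed certificate (stand-in for a WebID cert).
openssl req -x509 -newkey rsa:2048 -keyout key.pem -out crt.pem -days 1 -nodes -subj "/CN=demo"
# 2. Pack them into a PKCS#12 keystore, like the one produced by signup.
openssl pkcs12 -export -inkey key.pem -in crt.pem -out cert.p12 -passout pass:changeit
# 3. The conversion itself: PKCS#12 -> PEM (certificate plus unencrypted private key).
openssl pkcs12 -in cert.p12 -out cert.pem -nodes -passin pass:changeit
```

The resulting cert.pem contains both the certificate and the private key, which is the form the CLI scripts expect.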
Unlike most LinkedDataHub resources, your WebID profile will have public access, as required by the protocol. Your email address will be hidden, however.
The final step is to install the client certificate into your web browser. This is done by importing the .p12 file using the browser's certificate manager and providing the password that you supplied during signup. The manager dialog can be opened by following the steps below, depending on which browser you use:
You need to install the certificate on all devices/browsers that you use to access LinkedDataHub.
Open the URL of the LinkedDataHub instance in the web browser (that you installed the WebID certificate into). Using a local setup, it runs on https://localhost:4443/ by default.
With the certificate installed, there is no login procedure: you are automatically authenticated on all LinkedDataHub applications. This is known as Single sign-on (SSO).
Applications can provide public access to some or all documents, meaning you can freely browse their public resources and perform actions that are allowed for public access. In order to access protected (non-public) resources, as well as the administration application, users have to be both authenticated and authorized (authorizations can be requested).
Authenticated agents are not guaranteed to have access to all resources. Different access levels for different agents can be specified by the application administrators.
Click the Login with Google button in the navbar to authenticate with your Google account.
If the email address of your Google account matches the dataspace owner's email address that was specified during setup, you will be authenticated as the owner with full control access rights.
Login with Google is only enabled if LinkedDataHub was configured with social login.
When you sign in, a unique agent URI is assigned to you and used to authenticate you with the applications on the platform.
Are you in? Then continue the get started guide or take a look at the UI overview.
Set up LinkedDataHub on your local machine.
This section assumes you will be running LinkedDataHub on your local machine, i.e. localhost. If you intend to run it on a different host, change the system base URI.
Prerequisites:
$PATH.

Steps:
```
COMPOSE_CONVERT_WINDOWS_PATHS=1
COMPOSE_PROJECT_NAME=linkeddatahub

PROTOCOL=https
HTTP_PORT=81
HTTPS_PORT=4443
HOST=localhost
ABS_PATH=/

OWNER_MBOX=john@doe.com
OWNER_GIVEN_NAME=John
OWNER_FAMILY_NAME=Doe
OWNER_ORG_UNIT=My unit
OWNER_ORGANIZATION=My org
OWNER_LOCALITY=Copenhagen
OWNER_STATE_OR_PROVINCE=Denmark
OWNER_COUNTRY_NAME=DK
```
```shell
./bin/server-cert-gen.sh .env nginx ssl
```
The script will create an ssl/server sub-folder where the SSL certificate will be stored.
```shell
docker-compose up --build
```
LinkedDataHub will start and mount the following sub-folders:
You are now the owner of this LinkedDataHub instance; ssl/owner/keystore.p12 is your WebID certificate. The password is the owner_cert_password secret value.
After a successful startup you should see periodic healthcheck requests being made to the https://localhost:4443/ns URL.
If you need to start fresh and wipe the existing setup (e.g. after configuring a new base URI), you can do that using:
```shell
sudo rm -rf data datasets uploads ssl && docker-compose down -v
```
This will remove persisted RDF data, SSL keys, and uploaded files as well as the Docker volumes.
Is LinkedDataHub running? Proceed to get an account.
LinkedDataHub Cloud is a managed LinkedDataHub service, meaning that you do not have to do any setup yourself.
Proceed to get an account to see how to log in to LinkedDataHub Cloud.