This repository provides all you need to convert CSV files to RDF. It contains:
- A sample CSV file
- A sample XRM mapping that generates CSVW (CSV on the web) mapping files
- A pipeline that converts the input CSV to RDF
- A default GitHub Action configuration that runs the pipeline and creates an artifact for download
This is a GitHub template repository: it will not be marked as a "fork" once you click the Use this template button above. Simply do that, start adding your data sources, and adjust the XRM mapping accordingly:
- Create/adjust the XRM files in the `mappings` directory.
- Copy source CSVs to the `input` directory.
- Execute one of the run-scripts to convert your data.

Make sure to commit the `input`, `mappings` and `src-gen` directories if you want to build using GitHub Actions.
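Following the steps above, a typical repository layout might look like this (the example file name is illustrative):

```
input/      # source CSV files
mappings/   # XRM mapping files (e.g. mapping.xrm)
src-gen/    # CSVW mapping files generated from XRM
output/     # generated RDF (created by the pipeline)
```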
See Further reading for more information about the XRM mapping language.
The default pipeline can be run with `npm start` or `npm run to-file`. It will:
- Read the CSVW input files
- Convert them to RDF
- Write the result to a file as N-Triples (default: `output/transformed`)
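N-Triples output is one plain subject–predicate–object statement per line; a hypothetical example (the IRIs here are invented, not taken from the sample mapping):

```
<http://example.org/resource/1> <http://schema.org/name> "Alice" .
```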
There are additional pipelines configured in `package.json`:
- `file-to-store`: Uploads the generated output file to an RDF store via the SPARQL Graph Store Protocol
- `to-store(-dev)`: Uploads directly to an RDF store (direct streaming in the pipeline) via the SPARQL Graph Store Protocol
If you want to test the upload to an RDF store, a default Apache Jena Fuseki installation with a database named `data` on port `3030` should work out of the box.
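The Graph Store Protocol upload these pipelines perform can be sketched in plain Node.js. This is a minimal sketch, not the template's actual implementation; the endpoint, graph IRI and helper name are assumptions:

```javascript
// Hypothetical helper: build a SPARQL Graph Store Protocol request
// for a Fuseki database named "data" on port 3030.
function graphStoreRequest (endpoint, graph, ntriples) {
  // PUT replaces the named graph; POST would merge into it instead
  return {
    url: `${endpoint}?graph=${encodeURIComponent(graph)}`,
    method: 'PUT',
    headers: { 'Content-Type': 'application/n-triples' },
    body: ntriples
  }
}

// Usage against a local Fuseki (assumed endpoint and graph IRI):
// const req = graphStoreRequest('http://localhost:3030/data', 'http://example.org/graph', data)
// await fetch(req.url, req)
```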
Pipeline configuration is done via environment variables and/or by adjusting default variables in the pipeline itself. If you want to pass another default, have a look at the `--variable=XYZ` samples in `package.json` or consult the barnard59 documentation. If you want to adjust it in the pipeline, open the file `pipelines/main.ttl` and edit `<defaultVars> ...`.
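For illustration, a script entry that overrides a default variable might look like the fragment below; the script name, CLI shape and variable key are assumptions, so check the existing entries in `package.json` for the exact form:

```json
{
  "scripts": {
    "to-file": "barnard59 run pipelines/main.ttl --variable=targetFile=output/transformed"
  }
}
```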
This template is built on top of our Zazuko barnard59 pipelining system. It is a Node.js based, fully configurable pipeline framework aimed at creating RDF data out of various data sources. Unlike many other data pipelining systems, barnard59 is configured instead of programmed. In case you need to do pre- or post-processing, you can implement additional pipeline steps written in JavaScript.
barnard59 is streaming and can be used to convert very large data sets with a small memory footprint.
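A custom pre- or post-processing step is essentially a function returning a Node.js stream. The following is a minimal sketch under that assumption (the names are illustrative, not a real barnard59 operation); it uppercases one field on each object flowing through the pipeline:

```javascript
import { Transform } from 'node:stream'

// Hypothetical custom step: a plain object-mode Transform that can sit
// between other streaming steps in a pipeline.
function uppercaseName () {
  return new Transform({
    objectMode: true,
    transform (chunk, encoding, callback) {
      // pass the chunk on with one field rewritten
      callback(null, { ...chunk, name: chunk.name.toUpperCase() })
    }
  })
}
```

Because the step is a standard stream, it composes with any other streaming source or sink via `pipeline()` or `.pipe()`.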
We provide additional template repositories:
- xrm-r2rml-workflow: A template repository for converting complete relational databases to RDF using the R2RML specification and Ontop as the mapper.
- xrm-xml-workflow: TODO
- Expressive RDF Mapping Language (XRM) and the documentation for details about the domain-specific language (DSL).
- CSV on the Web: A Primer: Introduction to the CSVW mapping language, which is generated by XRM and consumed by barnard59. This is only for reference; you do not have to learn it, as XRM generates it for you.
- SPARQL 1.1 Graph Store HTTP Protocol: The SPARQL Graph Store specification used to upload data to an RDF store such as Apache Jena Fuseki.