Batch generation of RDF for VIVO using Karma models and the VIVO SPARQL Update API to write the data to VIVO. This is a work in progress.
- Use Karma to map existing data in a variety of formats to RDF for VIVO.
- Export Karma's models for use outside the Karma interface.
- Use the Karma RDF generator API to generate triples and the VIVO SPARQL Update API to write those triples to VIVO in an automated fashion.
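The write-to-VIVO step above can be sketched roughly as follows. The endpoint path (`/api/sparqlUpdate`) and its `email`/`password`/`update` form fields follow VIVO's documented SPARQL Update API; the graph URI, credentials, and function names are illustrative placeholders, not part of karma2vivo itself:

```python
from urllib.parse import urlencode
from urllib.request import Request, urlopen

def build_insert(triples, graph_uri):
    """Wrap N-Triples lines in a SPARQL 1.1 INSERT DATA update."""
    body = "\n".join(triples)
    return "INSERT DATA {{ GRAPH <{0}> {{\n{1}\n}} }}".format(graph_uri, body)

def post_update(vivo_base_url, email, password, update):
    """POST an update to VIVO's SPARQL Update API. Requires a running VIVO."""
    data = urlencode({"email": email,
                      "password": password,
                      "update": update}).encode("utf-8")
    req = Request(vivo_base_url + "/api/sparqlUpdate", data=data)
    with urlopen(req) as resp:
        return resp.status
```

In practice karma2vivo performs this posting for you; the sketch just shows the shape of the API call.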
This is a first pass at installation instructions and will probably require some troubleshooting and modification.
- Install Karma.
- `git clone git@github.com:lawlesst/karma2vivo.git`
- `cd karma2vivo`
- Run `mvn clean install`.
- Copy `sample-batch.sh` to `batch.sh` (or a name of your choice) and change the default values to match your environment.
- Run `./batch.sh sample/ingest.ttl` to run a sample batch. If all goes well, you should see triples printed to your screen.
- Create your own `ingest.ttl` file based on the example in `sample/ingest.ttl`.
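The connection details in your copied script might look something like this. The variable names below are illustrative guesses, not the actual contents of `sample-batch.sh` — use whatever names that file defines:

```shell
# Hypothetical environment values for connecting to VIVO -- replace with
# the values for your own instance, using the names from sample-batch.sh.
export VIVO_URL="http://localhost:8080/vivo"   # base URL of your VIVO instance
export VIVO_EMAIL="vivo_root@school.edu"       # account allowed to use the Update API
export VIVO_PASSWORD="changeme"
```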
- An "ingest" configuration is needed. See the sample in `sample/ingest.ttl`. There can be multiple `ingest:Transform` blocks in one configuration.
- There is a command line option, `--sync`, that will compare triples in the incoming data to those that exist in VIVO and update VIVO to match the incoming data. Warning: this will remove data.
- By removing `ingest:debug "true"`, karma2vivo will connect to your VIVO store (using the environment variables defined in `sample-batch.sh`) and post the generated triples to VIVO.
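An illustrative `ingest.ttl` might look like the following. The `ingest:Transform` type and the `ingest:debug "true"` flag come from this document, but the prefix URI and the source/model property names are hypothetical — consult `sample/ingest.ttl` for the real vocabulary:

```turtle
@prefix ingest: <http://example.org/ingest#> .  # hypothetical prefix URI

<#people> a ingest:Transform ;
    # the two property names below are illustrative placeholders
    ingest:source "data/people.csv" ;
    ingest:model  "models/people-model.ttl" ;
    ingest:debug  "true" .  # remove to post triples to VIVO instead of printing them
```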
More to come...