CoW is a tool that converts CSV files into Linked Data. Specifically, CoW is an integrated CSV-to-RDF converter that uses the W3C CSVW standard for rich semantic table specifications and produces nanopublications as its output RDF model. CoW can convert any CSV file into an RDF dataset.
- Expressive CSVW-compatible schemas based on the Jinja template engine.
- Highly efficient implementation leveraging multithreaded and multicore architectures.
- Available as a Docker image, graphical or command line interface (CLI) tool, and library.
For user documentation, see the basic introduction video and the GitHub wiki. Technical details are provided below. If you encounter an issue, please report it; pull requests are also welcome.
There are two ways to run CoW: the quickest is via Docker, the more flexible via pip.
Several data science tools, including CoW, are available via a Docker image.
First, install the Docker virtualisation engine on your computer. Instructions on how to accomplish this can be found on the official Docker website. Use the following command in the Docker terminal:
# docker pull wxwilcke/datalegend
Here, the #-symbol represents the prompt of a user with administrative privileges on your machine and is not part of the command.
After the image has successfully been downloaded (or 'pulled'), the container can be run as follows:
# docker run --rm -p 3000:3000 -it wxwilcke/datalegend
The virtual system can now be accessed by opening http://localhost:3000/wetty in your preferred browser, and by logging in using username datalegend and password datalegend.
For detailed instructions on this Docker image, see DataLegend Playground. For instructions on how to use the tool, see usage below.
The Command Line Interface (CLI) is the recommended way of installing CoW for most users.
Check whether the latest version of Python is installed on your device. For Windows/MacOS we recommend installing Python via the official distribution page.
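You can verify this from a terminal; the command below is a minimal check and assumes `python3` is on your PATH:

```shell
# Print the interpreter version; CoW requires Python 3
python3 --version
```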
The recommended method of installing CoW on your system is `pip3`:
pip3 install cow-csvw
You can upgrade your currently installed version with:
pip3 install cow-csvw --upgrade
Possible installation issues:
- Permission issues. You can get around them by installing CoW in user space: `pip3 install cow-csvw --user`.
- Cannot find command: make sure your binary user directory (typically something like `/Users/user/Library/Python/3.7/bin` on MacOS or `/home/user/.local/bin` on Linux) is in your PATH (on MacOS: `/etc/paths`).
- Please report your unlisted issue.
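For the "cannot find command" case, the user binary directory can be added to your PATH for the current session; the sketch below assumes the Linux layout mentioned above:

```shell
# Prepend pip's user-install binary directory (Linux layout) to PATH
# for the current shell session only.
export PATH="$HOME/.local/bin:$PATH"
```

To make the change permanent, append the same line to your shell configuration file (e.g. `~/.bashrc`).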
Start the graphical interface by entering the following command:
cow_tool
Select a CSV file and click `build` to generate a file named `myfile.csv-metadata.json` (JSON schema file) with your mappings. Edit this file (optional) and then click `convert` to convert the CSV file to RDF. The output should be a `myfile.csv.nq` RDF file (N-Quads by default).
A straightforward CSV-to-RDF conversion is done by entering the following commands:
cow_tool_cli build myfile.csv
This will create a file named `myfile.csv-metadata.json` (JSON schema file). Next:
cow_tool_cli convert myfile.csv
This command will output a `myfile.csv.nq` RDF file (N-Quads by default).
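N-Quads is a line-based format: each line holds one statement of subject, predicate, object, and graph, terminated by a dot. As a rough illustration (the line below is made up, not actual CoW output), a simple statement can be split with plain Python; real processing should use an RDF library such as rdflib:

```python
# Split one (hypothetical) N-Quads line into its four terms.
# This naive split only works for simple statements without spaces
# inside literals; use a proper RDF parser for real data.
line = ('<http://example.org/resource/row1> '
        '<http://example.org/predicate/name> '
        '"Alice" '
        '<http://example.org/graph/myfile> .')

subject, predicate, obj, graph = line.rstrip(' .').split(' ', 3)
print(subject)  # <http://example.org/resource/row1>
print(obj)      # "Alice"
```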
You don't need to worry about the JSON file, unless you want to change the metadata schema. To control the base URI namespace, the URIs used in predicates, virtual columns, etcetera, edit the `myfile.csv-metadata.json` file and/or use CoW commands. For instance, you can control the output RDF serialization (e.g. with `--format turtle`). Have a look at the options below, the examples in the GitHub wiki, and the technical documentation.
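Since the schema is plain JSON, it can also be edited programmatically. The sketch below uses only the standard library; the field names follow the W3C CSVW vocabulary and are illustrative, not a verbatim CoW-generated file:

```python
import json

# A minimal, illustrative CSVW-style metadata structure (field names
# follow the W3C CSVW vocabulary; an actual CoW-generated file is richer).
metadata = {
    "@context": "http://www.w3.org/ns/csvw",
    "url": "myfile.csv",
    "tableSchema": {
        "columns": [
            {"name": "name", "titles": "name"},
        ]
    },
}

# Example edit: set a base namespace for generated URIs.
metadata["@context"] = [metadata["@context"],
                        {"@base": "http://example.org/my-dataset/"}]

with open("myfile.csv-metadata.json", "w") as f:
    json.dump(metadata, f, indent=2)
```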
Check `--help` for a complete list of options:
usage: cow_tool_cli [-h] [--dataset DATASET] [--delimiter DELIMITER]
[--quotechar QUOTECHAR] [--encoding ENCODING] [--processes PROCESSES]
[--chunksize CHUNKSIZE] [--base BASE]
[--format [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}]]
[--gzip] [--version]
{convert,build} file [file ...]
Not nearly CSVW compliant schema builder and RDF converter
positional arguments:
{convert,build} Use the schema of the `file` specified to convert it
to RDF, or build a schema from scratch.
file Path(s) of the file(s) that should be used for
building or converting. Must be a CSV file.
optional arguments:
-h, --help show this help message and exit
--dataset DATASET A short name (slug) for the name of the dataset (will
use input file name if not specified)
--delimiter DELIMITER
The delimiter used in the CSV file(s)
--quotechar QUOTECHAR
The character used as quotation character in the CSV
file(s)
--encoding ENCODING The character encoding used in the CSV file(s)
--processes PROCESSES
The number of processes the converter should use
--chunksize CHUNKSIZE
The number of rows processed at each time
--base BASE The base for URIs generated with the schema (only
relevant when `build`ing a schema)
--gzip Compress the output file using gzip
--format [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}], -f [{xml,n3,turtle,nt,pretty-xml,trix,trig,nquads}]
RDF serialization format
--version show program's version number and exit
Once installed, CoW can be used as a library as follows:
from cow_csvw.csvw_tool import COW
import os

# Build the JSON schema file (myfile.csv-metadata.json) for the CSV
COW(mode='build', files=[os.path.join(path, filename)], dataset='My dataset', delimiter=';', quotechar='\"')

# Convert the CSV file to RDF using that schema
COW(mode='convert', files=[os.path.join(path, filename)], dataset='My dataset', delimiter=';', quotechar='\"', processes=4, chunksize=100, base='http://example.org/my-dataset', format='turtle', gzipped=False)
The GitHub wiki provides more hands-on examples of transforming CSVs into Linked Data.
Technical documentation for CoW is maintained in this GitHub repository (under ), and published through Read the Docs at http://csvw-converter.readthedocs.io/en/latest/.
To build the documentation from source, change into the `docs` directory and run `make html`. This should produce an HTML version of the documentation in the `_build/html` directory.
MIT License (see license.txt)
Authors: Albert Meroño-Peñuela, Roderick van der Weerdt, Rinke Hoekstra, Kathrin Dentler, Auke Rijpma, Richard Zijdeman, Melvin Roest, Xander Wilcke
Copyright: Vrije Universiteit Amsterdam, Utrecht University, International Institute of Social History
CoW is developed and maintained by the CLARIAH project and funded by NWO.