This page provides supplemental material for our IPAW 2012 paper, "Towards Unified Provenance Granularities". If you'd like to know something that isn't here, feel free to contact Tim Lebo.
Tools mentioned in Section 3
Most of the links on this page refer to materials in https://github.com/timrdf/csv2rdf4lod-automation
- Retrieval provenance: pcurl.sh and pcurl.py
- Preparation provenance: justify.sh
- Conversion provenance: csv2rdf4lod asserts provenance during conversion. It knows where the inputs and outputs will be stored on the web and describes them in RDF using VoID, PML, DC Terms, etc.
- Publishing provenance: pvload.sh uses pcurl.sh to retrieve an RDF file at a URL, loads it into the given named graph, and inserts provenance about both the URL retrieval and the graph load into that same named graph. Files are placed into htdocs to match the locations asserted by the converter. Discussed at Named graphs that know where they came from.
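To make the retrieval step concrete, here is a minimal sketch of the kind of provenance a tool like pcurl.sh records: a statement connecting the downloaded local file back to its source URL, with a timestamp. This is an illustrative shape only (the function name, the use of PROV-O terms, and the exact Turtle layout are our assumptions, not csv2rdf4lod's actual output, which uses PML and related vocabularies).

```python
from datetime import datetime, timezone

def retrieval_provenance(url, local_path, retrieved_at=None):
    """Sketch: emit Turtle linking a retrieved local file to its source URL.
    Illustrative only -- not the actual serialization pcurl.sh produces."""
    when = (retrieved_at or datetime.now(timezone.utc)).isoformat()
    return "\n".join([
        "@prefix prov: <http://www.w3.org/ns/prov#> .",
        "@prefix xsd:  <http://www.w3.org/2001/XMLSchema#> .",
        "",
        f"<file://{local_path}>",
        f"    prov:wasDerivedFrom <{url}> ;",        # where it came from
        f'    prov:generatedAtTime "{when}"^^xsd:dateTime .',  # when it was fetched
    ])

# Hypothetical source URL, for illustration only:
print(retrieval_provenance("http://example.org/data/stations.csv",
                           "/tmp/stations.csv"))
```

Because this provenance travels with the data (pvload.sh loads it into the same named graph as the triples themselves), any consumer of the graph can later ask where its contents were retrieved from.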
URLs mentioned in Section 3
The following URLs were abbreviated for space considerations:
- e.g. http://logd.tw.rpi.edu/source/mta-info/dataset/station-table/version/2010-Dec-20 (a dataset's URI)
- e.g. http://logd.tw.rpi.edu/source/mta-info/dataset/station-table/version/2010-Dec-20/station_2 (a data element's URI, which responds to RDF content negotiation)
- e.g. http://logd.tw.rpi.edu/source/mta-info/dataset/station-table/vocab/ (the namespace for the dataset's vocabulary: its classes and predicates)
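The three example URIs above follow a regular source/dataset/version layout. The sketch below reconstructs that pattern from its parts; the helper name and dict shape are ours, but the URI structure is taken directly from the examples.

```python
def logd_uris(source_id, dataset_id, version_id):
    """Build the three URI forms shown above from their identifier parts.
    The layout mirrors the logd.tw.rpi.edu examples; the helper itself
    is illustrative."""
    base = "http://logd.tw.rpi.edu/source"
    dataset = f"{base}/{source_id}/dataset/{dataset_id}/version/{version_id}"
    return {
        "dataset": dataset,                             # the dataset's URI
        "element": lambda local: f"{dataset}/{local}",  # a data element URI
        "vocab":   f"{base}/{source_id}/dataset/{dataset_id}/vocab/",  # vocabulary namespace
    }

uris = logd_uris("mta-info", "station-table", "2010-Dec-20")
print(uris["dataset"])
print(uris["element"]("station_2"))
print(uris["vocab"])
```

Note that the vocabulary namespace hangs off the dataset, not the version: classes and predicates are shared across a dataset's versions, while data elements live under a specific version.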