grlc builds Web APIs using shared SPARQL queries

grlc, the git repository linked data API constructor, automatically builds Web APIs using SPARQL queries stored in git repositories. http://grlc.io/

Contributors: Albert Meroño, Rinke Hoekstra, Carlos Martínez

Copyright: Albert Meroño, VU University Amsterdam
License: MIT License (see LICENSE.txt)

What is grlc?

grlc is a lightweight server that takes SPARQL queries curated in GitHub repositories, and translates them to Linked Data Web APIs. This enables universal access to Linked Data. Users are not required to know SPARQL to query their data, but instead can access a web API.

Features

  • Request parameter mappings into SPARQL: grlc is compliant with BASIL's convention on how to map GET/POST request parameters into SPARQL
  • Automatic, user customizable population of parameter values in swagger-ui's dropdown menus via SPARQL triple pattern querying
  • [NEW] Parameter values can now also be specified in the query decorators to save endpoint requests
  • URL-based content negotiation: you can request specific content types by appending them to the operation request URL, e.g. http://localhost:8088/CEDAR-project/Queries/residenceStatus_all.csv will return results in CSV
  • Pagination of API results, as per the pagination decorator and GitHub's API Pagination Traversal
  • Docker images in Docker Hub for easy deployment
  • Compatibility with Linked Data Fragments servers, RDF dumps, and HTML+RDFa files
  • Generation of provenance in PROV of both the repo history (via Git2PROV) and grlc's activity additions
  • Commit-based API versioning that is coherent with the repository's git-hash history
  • [NEW] SPARQL endpoint address can be set at the query level, repository level, and now also as a query parameter. This makes your APIs endpoint agnostic, and enables for generic and transposable queries!
  • [NEW] CONSTRUCT queries are now mapped automatically to GET requests, accept parameters in the WHERE clause, and return content in text/turtle or application/ld+json
  • [NEW] INSERT DATA queries are now mapped automatically to POST requests. Support is limited to queries with no WHERE clause, and parameters are always expected to be values for g (named graph where to insert the data) and data (with the triples to insert, in ntriples format). The INSERT query pattern is so far static, as defined in static.py. Only tested with Virtuoso.
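As an illustration of the BASIL-style parameter mapping mentioned above, a query along the following lines is a sketch (the endpoint, names, and values are illustrative): variables starting with ?_ become API parameters, and the _iri suffix marks IRI-typed values.

```sparql
#+ summary: Bands with a given hometown country (illustrative sketch)

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?band ?name WHERE {
  # ?_country_iri becomes a mandatory API parameter, typed as an IRI
  ?band a dbo:Band ;
        dbo:hometown ?_country_iri ;
        rdfs:label ?name .
} LIMIT 100
```

A request such as GET .../bands?country=http://dbpedia.org/resource/France would then substitute the value into the query before execution.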

Install and run

Running via docker is the easiest and preferred way of deploying grlc. You'll need a working installation of docker and docker-compose. To deploy grlc, just pull the latest image from Docker Hub, and run docker-compose with a docker-compose.yml that suits your needs (an example is provided in the root directory):

git clone https://github.com/CLARIAH/grlc
cd grlc
docker pull clariah/grlc
docker-compose -f docker-compose.default.yml up

(You can omit the first two commands if you just copy docker-compose.default.yml somewhere in your filesystem.) If you use the supplied docker-compose.default.yml, your grlc instance will be available at http://localhost:8001

If you want your grlc instance to forward queries to a different service than grlc.io, edit the GRLC_SERVER_NAME variable in your docker-compose.yml or docker-compose.default.yml file.

In order for grlc to communicate with GitHub, you'll need to tell grlc what your access token is:

  1. Get a GitHub personal access token. In your GitHub's profile page, go to Settings, then Developer settings, Personal access tokens, and Generate new token
  2. You'll get an access token string, copy it and save it somewhere safe (GitHub won't let you see it again!)
  3. Edit your docker-compose.yml or docker-compose.default.yml file, and paste this token as value of the environment variable GRLC_GITHUB_ACCESS_TOKEN
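Putting the pieces together, the relevant fragment of your compose file would look roughly like this sketch (the token value is a placeholder, and the image name, server name, and port mapping are illustrative, following the defaults mentioned in this README):

```yaml
version: '2'
services:
  grlc:
    image: clariah/grlc
    environment:
      # Paste your GitHub personal access token here
      - GRLC_GITHUB_ACCESS_TOKEN=your-token-here
      # Service that queries are forwarded to
      - GRLC_SERVER_NAME=grlc.io
    ports:
      # Exposes the instance at http://localhost:8001
      - "8001:80"
```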

If you want to run grlc at system boot as a service, you can find example upstart scripts at upstart/

Alternative install methods

With these methods you'll miss some of the docker-bundled features (like nginx-based caching). We provide these alternatives only for testing, development scenarios, or docker compatibility reasons.

pip

If you want to use grlc as a library, you'll find it useful to install via pip.

pip install grlc
grlc-server

More details can be found at grlc's PyPi page (thanks to c-martinez!).

Flask application

You can run grlc natively as follows:

gunicorn -c gunicorn_config.py src.server:app

You can also find an example here

Usage

grlc assumes a GitHub repository (support for arbitrary git repositories is on the way) where you store your SPARQL queries as .rq files (like in this one). grlc will create one API operation per SPARQL query/.rq file.

Once your grlc instance is up and running, it is ready to build APIs. Assuming you have it running at http://localhost:8088/ and your queries are at https://github.com/CEDAR-project/Queries, just point your browser at the corresponding API locations.

By default grlc will direct your queries to the DBpedia SPARQL endpoint. To change this, either:

  • Add an endpoint parameter to your request: 'http://grlc.io/user/repo/query?endpoint=http://sparql-endpoint/'. You can add a #+ endpoint_in_url: False decorator if you DO NOT want the endpoint parameter to appear in the swagger-ui of your API.
  • Add a #+ endpoint: decorator in the first comment block of the query text (preferred, see below)
  • Add the URL of the endpoint on a single line in an endpoint.txt file within the GitHub repository that contains the queries.
  • Or you can directly modify the grlc source code (but it's nicer if the queries are self-contained)

That's it!

Example APIs

Check these out:

You'll find the sources of these and many more on GitHub

Decorator syntax

Several decorators, embedded in SPARQL comments, are available to make your swagger-ui look nicer (note that all decorator comments start with #+ ):

  • To specify a query-specific endpoint, #+ endpoint: http://example.com/sparql.
  • To indicate the HTTP request method, #+ method: GET.
  • To paginate the results in e.g. groups of 100, #+ pagination: 100.
  • To create a summary of your query/operation, #+ summary: This is the summary of my query/operation
  • To assign tags to your query/operation,
    #+ tags:
    #+   - firstTag
    #+   - secondTag
  • To indicate which parameters of your query/operation should get enumerations (and get dropdown menus in the swagger-ui) using values from the SPARQL endpoint,
    #+ enumerate:
    #+   - var1
    #+   - var2
  • These parameters can also be hard-coded into the query decorators to save endpoint requests and speed up the API generation:
    #+ enumerate:
    #+   - var1:
    #+     - value1
    #+     - value2

Notice that these should be plain variable names, without SPARQL/BASIL conventions (so var1 instead of ?_var1_iri).
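Putting several decorators together, a complete query file might look like this sketch (the endpoint, tags, and variable names are all illustrative):

```sparql
#+ endpoint: http://dbpedia.org/sparql
#+ method: GET
#+ pagination: 100
#+ summary: Musicians of a given genre
#+ tags:
#+   - music
#+ enumerate:
#+   - genre

PREFIX dbo: <http://dbpedia.org/ontology/>
PREFIX rdfs: <http://www.w3.org/2000/01/rdf-schema#>

SELECT ?musician ?name WHERE {
  # ?_genre_iri becomes the API's genre parameter (IRI-typed)
  ?musician a dbo:MusicalArtist ;
            dbo:genre ?_genre_iri ;
            rdfs:label ?name .
}
```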

See examples at https://github.com/albertmeronyo/lodapi.

Use this GitHub search to see examples from other users of grlc.

Contribute!

grlc needs you to continue bringing Semantic Web content to developers, applications, and users. Whether you are a curious user, a developer, or a researcher, there are many ways in which you can contribute:

  • File bug reports
  • Request new features
  • Set up your own environment and start hacking

Check our contributing guidelines for these and more, and join us today!

If you cannot code, that's no problem! There's still plenty you can contribute:

  • Share your experience using grlc on Twitter (mention the handle @grlcldapi)
  • If you are good with HTML/CSS, let us know

Related tools

  • SPARQL2Git is a Web interface for editing SPARQL queries and saving them in GitHub as grlc APIs.
  • grlcR is a package for R that brings Linked Data into your R environment easily through grlc.
  • Hay's tools lists grlc as a Wikimedia-related tool :-)

This is what grlc users are saying

Academic publications

  • Albert Meroño-Peñuela, Rinke Hoekstra. “grlc Makes GitHub Taste Like Linked Data APIs”. The Semantic Web – ESWC 2016 Satellite Events, Heraklion, Crete, Greece, May 29 – June 2, 2016, Revised Selected Papers. LNCS 9989, pp. 342-353 (2016). (PDF)
  • Albert Meroño-Peñuela, Rinke Hoekstra. “SPARQL2Git: Transparent SPARQL and Linked Data API Curation via Git”. In: Proceedings of the 14th Extended Semantic Web Conference (ESWC 2017), Poster and Demo Track. Portoroz, Slovenia, May 28th – June 1st, 2017 (2017). (PDF)
  • Albert Meroño-Peñuela, Rinke Hoekstra. “Automatic Query-centric API for Routine Access to Linked Data”. In: The Semantic Web – ISWC 2017, 16th International Semantic Web Conference. Lecture Notes in Computer Science, vol 10587, pp. 334-339 (2017). (PDF)