SPARQL router 0.4.0

The NodeJS/Express application that serves canned SPARQL queries to the world.



SPARQL is the query language for retrieving data from RDF triple stores. I often had the issue that fellow developers or data fanatics asked for data that was in a triple store, but they didn't know SPARQL.

This server application solves the issue:

  1. You write the query and give it a name (e.g. biggest-asian-cities)
  2. You save it under /tables, /graphs or /update, depending on the query type (SELECT, CONSTRUCT, DESCRIBE, SPARQL Update)
  3. You give the URL to your fellow developer, picking the right format for their usage:
    • http://yourhost/api/tables/biggest-asian-cities.csv for manipulation in a spreadsheet
    • http://yourhost/api/tables/biggest-asian-cities.json as input for a Web app
    • http://yourhost/api/tables/biggest-asian-cities.xml if they are into XML
  4. They get fresh results from the store every time they hit the URL!
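Step 1 boils down to saving a SPARQL file in the right directory. A minimal sketch, assuming the /public/api/tables layout described in the feature list (the file name and the DBpedia query are illustrative only):

```shell
# Sketch of step 1: save a SELECT query as a canned "table" query.
# SELECT queries go under /public/api/tables; the query text is an example.
mkdir -p public/api/tables
cat > public/api/tables/biggest-asian-cities.rq <<'EOF'
PREFIX dbo: <http://dbpedia.org/ontology/>
SELECT ?city ?population WHERE {
  ?city a dbo:City ;
        dbo:populationTotal ?population .
} ORDER BY DESC(?population) LIMIT 10
EOF
# Step 3: hand out e.g. http://yourhost/api/tables/biggest-asian-cities.csv
```

The extension in the URL (.csv, .json, .xml) then picks the result format at request time.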

Create a query

Get query results


  • Exposes SPARQL queries as simple URLs, with a choice of result formats
  • A canned query is a simple file located in /public/api/tables, /public/api/graphs or /public/api/ask, depending on the query type (SELECT, CONSTRUCT/DESCRIBE, ASK)
  • Besides using FTP or SSH, you can POST a new canned query to /api/tables/{query-name}, /api/graphs/{query-name} or /api/ask/{query-name}
  • For more query reuse, variable values in the query can be populated by passing URL parameters
  • Supports content negotiation (via the Accept HTTP header)
  • You can GET or POST a SPARQL query to /api/sparql and get the results, without saving it
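Putting the features above together, client-side usage might look like this. The host, query name and the `country` parameter are placeholders, and the curl calls are left commented out since they need a running instance:

```shell
# Hypothetical usage of the canned-query API; "yourhost", the query name
# and the "country" parameter name are placeholders.
BASE="http://yourhost/api"

# Pick the format via the extension...
CSV_URL="$BASE/tables/biggest-asian-cities.csv"
# ...or via content negotiation:
# curl -H "Accept: text/csv" "$BASE/tables/biggest-asian-cities"

# Populate a query variable by passing a URL parameter:
PARAM_URL="$BASE/tables/biggest-asian-cities.csv?country=Japan"
echo "$PARAM_URL"

# Passthrough query on /api/sparql, without saving it:
# curl -X POST --data-urlencode "query=SELECT * WHERE { ?s ?p ?o } LIMIT 5" \
#      "$BASE/sparql"
```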

A screenshot of the tests gives an overview of the features.

Configuration and detailed usage documentation

Upcoming features

Known issues



  • NodeJS (4.x, 5.x, 6.x) and NPM must be installed. They are also available in most Linux package managers as nodejs and npm.
  • An RDF triple store that supports SPARQL 1.1 and JSON-LD output.
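A quick way to check the first prerequisite before installing:

```shell
# Confirm node and npm are on the PATH; this README targets NodeJS 4.x-6.x.
node --version
npm --version
```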


git clone --depth=1
cd sparql-router
npm install --production

SPARQL router is also available as an NPM package.


On the wiki.

Once it's configured, you must initialize the system queries and test queries:

npm run initialize


I haven't found a proper way to mock a triple store for testing purposes, so I use a remote triple store instead. That means the tests only work if the machine has Internet access.

The configuration used for the tests is stored in config/test.json.

First, make sure you have all the dev dependencies installed:

npm install

Tests rely on mocha and supertest for the API, and on nightwatch for the frontend.


To run the API tests:

npm test

Overview of the API tests.


To run the frontend tests:

# Make sure the dev dependencies are installed
npm install

# Start the server in development mode with the test configuration
NODE_ENV=test npm run dev

# Run the frontend tests
npm run test-ui

Start it

Using config/default.json configuration file:

npm start

Using config/myconfig.json configuration file:

NODE_ENV=myconfig npm start

Start in debug mode:

DEBUG=functions,routes npm start

Resilient deployment

If you want the app to restart automatically after fatal errors, I suggest you use forever.

When forever is installed globally, run the following command in the sparql-router folder:

forever bin/www

Use it

See this wiki page for detailed instructions: Using SPARQL router

The API documentation can be found here (development version) or, if you're running the app, at /api.

Actions that require authentication

Actions on the canned queries or the data that are not read-only require basic authentication:

  • HTTP PUT to create or update a query
  • HTTP DELETE to delete a query
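These authenticated calls could look like the following sketch. The credentials, host and query name are placeholders, and the curl lines are commented out because they need a running, configured instance:

```shell
# Hypothetical authenticated calls; user:password, yourhost and the query
# name are all placeholders.
AUTH="user:password"

# Create or update a canned query (HTTP PUT):
# curl -u "$AUTH" -X PUT --data-binary @biggest-asian-cities.rq \
#      http://yourhost/api/tables/biggest-asian-cities

# Delete it again (HTTP DELETE):
# curl -u "$AUTH" -X DELETE \
#      http://yourhost/api/tables/biggest-asian-cities
```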

Similar software

If SPARQL router doesn't match your requirements, you can have a look at these solutions:

  • The Datatank (PHP5) "The DataTank is open source software, which you can use to transform any dataset into an HTTP API."
  • BASIL (Java) "BASIL is designed as middleware system that mediates between SPARQL endpoints and applications. With BASIL you can build Web APIs on top of SPARQL endpoints."


Change log


  • User interface using Vue.js 1 and Bootstrap 3
    • @OpenTriply's YASQE as the editor
    • Table query results
    • Single page application (= very fast transitions)
  • Possibility to delete a query
  • Requesting .rq or application/sparql-query returns the query text instead of the query results


  • An arbitrary endpoint can be passed with canned queries (upon creation or update) and with passthrough queries
  • Metadata (name, author) can be passed with canned queries (new and updates) and with passthrough queries. Creation and modification dates are added automatically
  • The system endpoint stores the canned queries metadata
  • The default endpoint is the endpoint that is used if no endpoint is provided by the client
  • Added support for ASK queries on /api/ask
  • Started work on UI, using VueJS (just wireframes for now)
  • Updated the API documentation accordingly
  • Added extra info upon app startup (used config, endpoint, app URL, etc.)
  • App authentication can be disabled in configuration
  • README mentions the Datatank and BASIL alternatives
  • Added an npm start command for convenience
  • Improved installation instructions
  • Added pictures to explain how this thing works
  • Improved information about the demo


  • Support for SPARQL Update queries (requires authentication)
  • Possibility to populate query variable values via URL parameters! (#10)
  • Queries created and updated via HTTP POST are tested before creation/update
  • Possibility to set up user:password for the configured endpoint (Basic authentication)
  • The URL of the query is returned when creating or updating a query
  • Tested on Fuseki 2.x, Dydra, Stardog 4.0.5, OpenLink Virtuoso (LOD cache)
  • More useful error messages
  • Applied NodeJS security best practices (with helmet)


  • Enabled canned queries
  • Extension (.csv, .xml, etc.) defines the format returned by the endpoint
  • Passthrough queries via /sparql
  • Create new canned queries by HTTP POST, SSH or FTP
  • Basic auth for POST and DELETE
  • API doc written in Swagger
  • Support for HTTPS endpoints
  • CORS support


MIT license

If you use it, I'd really appreciate a public statement, such as a tweet!


Application to turn SPARQL queries into APIs and use them in a simple Web app (Express + Vue)



