LargeRDFBench: A Billion Triples Benchmark for SPARQL Query Federation


LargeRDFBench is a comprehensive benchmark suite for testing and analyzing both the efficiency and effectiveness of federated query processing over multiple SPARQL endpoints. It encompasses real data and real queries (i.e., typical user requests) of varying complexities. LargeRDFBench has been published in the Journal of Web Semantics; the PDF is available from here. An extension of LargeRDFBench is under review at an ISWC workshop.

Benchmark Datasets Statistics

In the following, we provide information about the datasets used in LargeRDFBench along with download links, both for the data dumps and for Virtuoso 7.10 SPARQL endpoints.

| Dataset | #Triples | #Distinct Subjects | #Distinct Predicates | #Distinct Objects | #Classes | #Links | Structuredness |
|---|---|---|---|---|---|---|---|
| LinkedTCGA-M | 415030327 | 83006609 | 6 | 166106744 | 1 | - | 1 |
| LinkedTCGA-E | 344576146 | 57429904 | 7 | 84403422 | 1 | - | 1 |
| LinkedTCGA-A | 35329868 | 5782962 | 383 | 8329393 | 23 | 251.3k | 0.98 |
| ChEBI | 4772706 | 50477 | 28 | 772138 | 1 | - | 0.340 |
| DBPedia-Subset | 42849609 | 9495865 | 1063 | 13620028 | 248 | 65.8k | 0.196 |
| DrugBank | 517023 | 19693 | 119 | 276142 | 8 | 10.8k | 0.726 |
| Geo Names | 107950085 | 7479714 | 26 | 35799392 | 1 | 118k | 0.518 |
| Jamendo | 1049647 | 335925 | 26 | 440686 | 11 | 1.7k | 0.961 |
| KEGG | 1090830 | 34260 | 21 | 939258 | 4 | 1.3k | 0.919 |
| Linked MDB | 6147996 | 694400 | 222 | 2052959 | 53 | 63.1k | 0.729 |
| New York Times | 335198 | 21666 | 36 | 191538 | 2 | 31.7k | 0.731 |
| Semantic Web Dog Food | 103595 | 11974 | 118 | 37547 | 103 | 2.3k | 0.426 |
| Affymetrix | 44207146 | 1421763 | 105 | 13240270 | 3 | 246.3k | 0.506 |
| Total | 1003960176 | 165785212 | 2160 | 326209517 | 459 | 792.3k | Avg. 0.65 |

Duan et al., in "Apples and Oranges: A Comparison of RDF Benchmarks and Real RDF Datasets", introduced the notion of structuredness (or coherence), which indicates whether the instances in a dataset have only a few or all of the attributes of their types set. They show that artificial datasets are typically highly structured, while "real" datasets are less structured. The complete details, along with type coverages, can be found here. The LargeRDFBench Java utility to calculate dataset structuredness can be found here, along with usage examples.
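To make the intuition concrete, below is a minimal, self-contained sketch (not the LargeRDFBench utility itself) of Duan et al.'s per-type coverage: the fraction of (instance, property) cells of a type that are actually set. The property names used in `main` are placeholders for illustration only.

```java
import java.util.*;

public class Structuredness {

    // instances: for each instance of the type, the set of properties it sets.
    // typeProperties: all properties observed for the type.
    static double coverage(List<Set<String>> instances, Set<String> typeProperties) {
        if (instances.isEmpty() || typeProperties.isEmpty()) return 1.0;
        long setCells = 0;
        for (Set<String> props : instances) {
            for (String p : props) {
                if (typeProperties.contains(p)) setCells++;
            }
        }
        // Divide set cells by the total number of possible cells.
        return (double) setCells / ((double) typeProperties.size() * instances.size());
    }

    public static void main(String[] args) {
        Set<String> props = new HashSet<>(Arrays.asList("name", "age", "email"));
        List<Set<String>> instances = Arrays.asList(
                new HashSet<>(Arrays.asList("name", "age", "email")), // fully typed instance
                new HashSet<>(Arrays.asList("name")));                // sparsely typed instance
        // 4 of 6 possible cells are set -> coverage 2/3.
        System.out.println(coverage(instances, props));
    }
}
```

The full coherence measure additionally weights each type's coverage by its share of properties and instances; see the Duan et al. paper and the linked utility for the exact formulation.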

Datasets Availability

All the datasets and corresponding Virtuoso SPARQL endpoints can be downloaded from the links given below. For SPARQL endpoint federation systems, we strongly recommend downloading the endpoints directly, as some of the data dumps are quite big and require a lot of upload time. You may start a SPARQL endpoint from bin/start.bat (for Windows) and bin/ (for Linux). Please note that LinkedTCGA-M (Methylation), LinkedTCGA-E (Exon), LinkedTCGA-A (all others), and DBpedia-Subset are subsets of the live SPARQL endpoints. Further, the TCGA live SPARQL endpoints are not aligned with Affymetrix, DrugBank, and DBpedia.

| Dataset | Data-dump | Windows Endpoint | Linux Endpoint | Local Endpoint URL | Live Endpoint URL |
|---|---|---|---|---|---|
| LinkedTCGA-M | Download | Download | Download | your.system.ip.address:8887/sparql | - |
| LinkedTCGA-E | Download | Download | Download | your.system.ip.address:8888/sparql | - |
| LinkedTCGA-A | Download | Download | Download | your.system.ip.address:8889/sparql | - |
| ChEBI | Download | Download | Download | your.system.ip.address:8890/sparql | - |
| DBPedia-Subset | Download | Download | Download | your.system.ip.address:8891/sparql | |
| DrugBank | Download | Download | Download | your.system.ip.address:8892/sparql | |
| Geo Names | Download | Download | Download | your.system.ip.address:8893/sparql | |
| Jamendo | Download | Download | Download | your.system.ip.address:8894/sparql | |
| KEGG | Download | Download | Download | your.system.ip.address:8895/sparql | |
| Linked MDB | Download | Download | Download | your.system.ip.address:8896/sparql | |
| New York Times | Download | Download | Download | your.system.ip.address:8897/sparql | - |
| Semantic Web Dog Food | Download | Download | Download | your.system.ip.address:8898/sparql | |
| Affymetrix | Download | Download | Download | your.system.ip.address:8899/sparql | |

Datasets Connectivity

Benchmark Queries

LargeRDFBench comprises a total of 40 queries (in both SPARQL 1.0 and SPARQL 1.1 versions) for SPARQL endpoint federation approaches. The 40 queries are divided into four types: 14 simple queries (S1-S14, from FedBench), 10 complex queries (C1-C10), 8 large data queries (L1-L8), and 8 complex + high data sources queries (CH1-CH8). The details of these queries are given in the table below. All of the queries can be downloaded from (SPARQL 1.0, SPARQL 1.1). The full query results can be downloaded from here.

The complex + high data sources queries (CH1-CH8) are included in the extension of LargeRDFBench.

Further advanced query features can be found here and are discussed in the LargeRDFBench paper. The mean triple pattern selectivities, along with complete details for all of the LargeRDFBench queries, can be found here. The LargeRDFBench Java utility to calculate all these query features can be found here, along with usage examples.

Usage Information

In the following we explain how one can set up the LargeRDFBench evaluation framework and measure the performance of a federation engine.

SPARQL Endpoints Setup

  • The first step is to download the SPARQL endpoints (the portable Virtuoso SPARQL endpoints from the second table above) onto different machines, i.e., computers. One SPARQL endpoint per machine is best, which requires a total of 13 machines. However, you can start more than one SPARQL endpoint per machine.
  • The next step is to start each SPARQL endpoint from bin/start.bat (for Windows) or bin/ (for Linux). Make a list of the 13 SPARQL endpoint URLs (required as input for index-free SPARQL query federation engines, e.g., FedX). It is important to note that index-assisted federation engines (e.g., SPLENDID, DARQ, ANAPSID) usually store the endpoint URLs in their index. The local SPARQL endpoint URLs are given in the second table above.
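Once the endpoints are started, it is worth verifying that all 13 are reachable before running any federation engine. The sketch below builds the local endpoint URLs (ports 8887-8899, as in the second table) and probes each with a trivial `ASK` query over HTTP GET, which Virtuoso accepts via a url-encoded `query=` parameter. The hostname is a placeholder; running `main` assumes the endpoints are actually up.

```java
import java.io.IOException;
import java.net.URI;
import java.net.URLEncoder;
import java.net.http.HttpClient;
import java.net.http.HttpRequest;
import java.net.http.HttpResponse;
import java.nio.charset.StandardCharsets;
import java.util.ArrayList;
import java.util.List;

public class EndpointCheck {

    // The 13 local endpoints from the table above: ports 8887 through 8899.
    static List<String> localEndpoints(String host) {
        List<String> urls = new ArrayList<>();
        for (int port = 8887; port <= 8899; port++) {
            urls.add("http://" + host + ":" + port + "/sparql");
        }
        return urls;
    }

    // Build a GET URL carrying a trivial ASK query as the url-encoded query parameter.
    static String askUrl(String endpoint) {
        String ask = URLEncoder.encode("ASK { ?s ?p ?o }", StandardCharsets.UTF_8);
        return endpoint + "?query=" + ask;
    }

    public static void main(String[] args) throws IOException, InterruptedException {
        HttpClient client = HttpClient.newHttpClient();
        for (String endpoint : localEndpoints("your.system.ip.address")) {
            HttpRequest req = HttpRequest.newBuilder(URI.create(askUrl(endpoint))).GET().build();
            HttpResponse<String> resp = client.send(req, HttpResponse.BodyHandlers.ofString());
            System.out.println(endpoint + " -> HTTP " + resp.statusCode());
        }
    }
}
```

An HTTP 200 for every endpoint means the list of URLs is ready to hand to a federation engine such as FedX.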

Running SPARQL Queries

Provide the list of SPARQL endpoint URLs and a LargeRDFBench query to the underlying federation engine as input, and calculate the LargeRDFBench metrics (explained next). The query evaluation start-up files for the selected systems are given below.


  • package org.aksw.simba.start
  • package org.aksw.simba.fedsum.startup
  • package de.uni_koblenz.west.evaluation
  • package de.uni_koblenz.west.evaluation


Follow the instructions given at to configure the system and then use anapsid/ivan-scripts/ to run a query.

Running SPARQL 1.1 Queries

Both ANAPSID and FedX provide support for SPARQL 1.1 queries, and the procedure for running SPARQL 1.1 queries is the same on both systems. You can also run the SPARQL 1.1 queries of LargeRDFBench directly from a SPARQL endpoint's online interface (see the local endpoint URLs in the second table above).

While running SPARQL 1.1 federation queries in the online interface of a Virtuoso SPARQL endpoint, you may encounter the following error:

Virtuoso 42000 Error SQ200: Must have select privileges on view DB.DBA.SPARQL_SINV_2

You can solve this problem by opening the Virtuoso Conductor at http://your.system.ip.address:portno/conductor/isql.vspx (e.g., http://localhost:8888/conductor/isql.vspx). Type "dba" as both the user id and the password. Once logged in, execute the following two commands:

grant select on "DB.DBA.SPARQL_SINV_2" to "SPARQL";

grant execute on "DB.DBA.SPARQL_SINV_IMP" to "SPARQL";

You should then be able to run all of the benchmark SPARQL 1.1 queries using the Virtuoso online query interface. Please don't set the default named graph in the online query interface; otherwise, you may get no results.

How to calculate LargeRDFBench metrics?

LargeRDFBench makes use of 7 main metrics -- #ASK requests, #TP sources selected, source selection time, query runtime, result completeness, result correctness, and number of endpoint requests -- (see the paper for details). The first four can be computed directly from the source code of the underlying federation engine (check out the selected systems to see how we calculated these four metrics). For result completeness and correctness, we provide a Java tool which computes the precision, recall, and F1-score of the results retrieved by the federation engine for a given benchmark query. To count endpoint requests, we used Virtuoso SPARQL endpoints with HTTP log caching enabled, so all endpoint requests were stored in the query log files; a simple Java program then reads each log file line by line and reports the total number of lines (across the log files of all 13 endpoints) as the total number of endpoint requests.
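As an illustration of the completeness/correctness metrics (a sketch, not the LargeRDFBench tool itself), the snippet below computes precision, recall, and F1 of a retrieved result set against the benchmark's reference results; the `:a`-style rows in `main` are placeholder serialized bindings.

```java
import java.util.HashSet;
import java.util.List;
import java.util.Set;

public class ResultMetrics {

    // Precision: fraction of retrieved results that also appear in the reference set.
    static double precision(Set<String> retrieved, Set<String> reference) {
        if (retrieved.isEmpty()) return 0.0;
        Set<String> hit = new HashSet<>(retrieved);
        hit.retainAll(reference);
        return (double) hit.size() / retrieved.size();
    }

    // Recall: fraction of reference results that were actually retrieved.
    static double recall(Set<String> retrieved, Set<String> reference) {
        if (reference.isEmpty()) return 0.0;
        Set<String> hit = new HashSet<>(reference);
        hit.retainAll(retrieved);
        return (double) hit.size() / reference.size();
    }

    // F1: harmonic mean of precision and recall.
    static double f1(Set<String> retrieved, Set<String> reference) {
        double p = precision(retrieved, reference);
        double r = recall(retrieved, reference);
        return (p + r == 0.0) ? 0.0 : 2 * p * r / (p + r);
    }

    public static void main(String[] args) {
        Set<String> reference = new HashSet<>(List.of(":a", ":b", ":c", ":d"));
        Set<String> retrieved = new HashSet<>(List.of(":a", ":b", ":x"));
        System.out.printf("P=%.3f R=%.3f F1=%.3f%n",
                precision(retrieved, reference),
                recall(retrieved, reference),
                f1(retrieved, reference));
    }
}
```

Result completeness corresponds to recall and result correctness to precision; a timed-out or erroring query is typically scored with whatever partial results it returned.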

Evaluation Results and Runtime Errors

We have compared 5 state-of-the-art SPARQL endpoint federation systems -- FedX, SPLENDID, ANAPSID, FedX+HiBISCuS, and SPLENDID+HiBISCuS -- with LargeRDFBench. Our complete evaluation results can be downloaded from here, and the runtime errors thrown by the federation systems can be downloaded from here.

SPARQL Endpoints Specifications

Following are the specifications of the machines used in the evaluation to host the SPARQL endpoints.

Benchmark Contributors

We are especially thankful to Helena Deus (Foundation Medicine, Cambridge, MA, USA) and Shanmukha Sampath (Democritus University of Thrace, Alexandroupoli, Greece) for providing real use case large data queries and for useful discussions regarding the selection of large datasets. We are also thankful to Jonas S. Almeida (University of Alabama at Birmingham), Bade Iriaboho (University of Alabama at Birmingham), Sarven Capadisli, Maulik Kamdar (Stanford University), and Aftab Iqbal (INSIGHT @ NUI Galway) for their contributions. Finally, we are very thankful to Andreas Schwarte (fluid Operations, Germany), Maria-Esther Vidal (Universidad Simón Bolívar), Olaf Görlitz (University of Koblenz, Germany), Olaf Hartig (HPI, Germany), and Gabriela Montoya (Nantes Métropole) for all their email conversations, feedback, and explanations.