Graph Data Benchmark (GDBench)

A micro-benchmark for graph databases based on a social network data model.


GDBench is a micro-benchmark designed to evaluate the performance of graph database systems in the context of social network applications. The benchmark includes a generator that synthesizes graphs resembling real-world social networks, and defines a set of representative, use-case-oriented graph queries, such as retrieving the friends of a friend, finding people who like the same Web pages, or computing shortest paths between people.

Structure of the benchmark

Data model.

GDBench is based on the data model described in Figure 1. The data model considers two types of entities, Person and Webpage. Persons are linked to other persons by a "friend" relationship, indicating that the two persons are friends. In turn, persons are linked to Webpages by a "like" relationship, indicating that a person likes a Webpage. The attributes of a person are an identifier (pid), a name, and two optional fields: the age and the location. Finally, a Webpage has an identifier (wpid), a URL, and, optionally, a creation date.
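As a mental model, the two entity types can be sketched as plain Java classes; the class and field names mirror the attributes above, but the adjacency-list representation of the friend and like edges is our own illustration, not the benchmark's internal format.

```java
// Sketch of GDBench's data model as plain Java classes (illustrative only).
import java.util.ArrayList;
import java.util.List;

class Person {
    final long pid;          // mandatory identifier
    final String name;       // mandatory
    Integer age;             // optional; null when absent
    String location;         // optional; null when absent
    final List<Person> friends = new ArrayList<>();   // "friend" edges
    final List<Webpage> likes = new ArrayList<>();    // "like" edges

    Person(long pid, String name) { this.pid = pid; this.name = name; }
}

class Webpage {
    final long wpid;         // mandatory identifier
    final String url;        // mandatory
    String creationDate;     // optional; null when absent

    Webpage(long wpid, String url) { this.wpid = wpid; this.url = url; }
}
```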

Figure 1. Data model of GDBench


A micro-benchmark evaluates the individual performance of basic operations (such as joins and aggregations in relational databases) rather than more complex queries. In the context of graph databases, several query types can be considered essential; we group them into adjacency, reachability, pattern-matching, and summarization queries. We also add select queries, which are likewise relevant for graph databases.

Based on a brief analysis of how users interact with a social network system (Facebook), and taking into account the essential graph queries defined above, we defined the representative query mix shown in the following table. Despite the apparent similarity among some of the selected queries, they present small variations that may be implemented, and hence optimized, differently by the database systems.

| Q  | Description                                                      | Type             |
|----|------------------------------------------------------------------|------------------|
| 1  | People having a name N                                           | Select           |
| 2  | The name of people that like a given Web page W                  | Adjacency        |
| 3  | The URL of Web pages that person P likes                         | Adjacency        |
| 4  | The name of the person with a given PID                          | Select           |
| 5  | The name of the friends of the friends of a given person P       | Reachability     |
| 6  | The URL of Web pages liked by the friends of a given person P    | Reachability     |
| 7  | The name of people that like a Web page that a person P likes    | Reachability     |
| 8  | Is there a connection (path) between people P1 and P2?           | Reachability     |
| 9  | The shortest path between people P1 and P2                       | Reachability     |
| 10 | The name of the common friends of people P1 and P2               | Pattern matching |
| 11 | The URL of the common Web pages that people P1 and P2 like       | Pattern matching |
| 12 | The name and the number of friends of a person                   | Summarization    |
| 13 | The friends of the friends of the friends of a given person P    | Reachability     |
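To make the query types concrete, here is what a reachability query such as Q5 (friends of the friends) amounts to over a simple adjacency-list graph. The graph representation and the choice to exclude the starting person are our own; real drivers implement this against their database's API.

```java
// Q5 sketch: names of the friends-of-friends of a person, over a plain
// adjacency-list graph (person name -> list of friend names). Stand-in types.
import java.util.List;
import java.util.Map;
import java.util.Set;
import java.util.TreeSet;

class FriendsOfFriends {
    static Set<String> q5(Map<String, List<String>> friends, String p) {
        Set<String> result = new TreeSet<>();
        for (String f : friends.getOrDefault(p, List.of()))          // hop 1
            for (String fof : friends.getOrDefault(f, List.of()))    // hop 2
                if (!fof.equals(p)) result.add(fof);                 // skip P itself
        return result;
    }
}
```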

Performance metrics.

In the literature on benchmarking database systems, several performance metrics are considered: throughput, the rate at which operations are completed (e.g., transactions per second); response time, the elapsed time for the execution of an operation; and cost, which normalizes the price of a system with respect to its throughput. We concentrate on measuring the response time of load and query operations.

  • Data loading time. This refers to the time a database system needs to load the data from the source file. The metric includes any time spent by the system to build index structures and statistical data. The loading time is measured in milliseconds.
  • Query execution time. This is the central performance metric of the benchmark. It refers to the time spent by a database system to execute a single query. The execution time of a query type Q is given by the average time of executing several instances of Q. The execution time of a query instance is measured in microseconds. In order to obtain measures similar to those of a working server, the benchmark executes a warm-up run before the timed execution. The warm-up runs every query instance once before the timed execution, to populate the database system's cache.
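The warm-up-then-measure scheme described above can be sketched as follows; representing a query instance as a Runnable is our own simplification of whatever the test driver actually executes.

```java
// Sketch of the warm-up + averaged-timing scheme described above.
import java.util.List;

class QueryTimer {
    /** Runs every instance once untimed (warm-up pass, to populate caches),
        then returns the average execution time in nanoseconds over a timed pass. */
    static double averageTimeNanos(List<Runnable> instances) {
        for (Runnable q : instances) q.run();          // warm-up: results discarded
        long total = 0;
        for (Runnable q : instances) {
            long start = System.nanoTime();
            q.run();                                   // timed execution
            total += System.nanoTime() - start;
        }
        return (double) total / instances.size();
    }
}
```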

GDGenerator: A social network data generator

GDBench includes GDGenerator, a Java application that generates synthetic graph data following the social network structure defined above.

The implementation of GDGenerator is based on a general-purpose graph data generator (GraphGenerator) built on the Recursive Matrix (R-MAT) model, but optimized to generate large graphs in a streamed fashion (i.e., reducing memory restrictions). GraphGenerator can create directed or undirected graphs whose edges follow normal or power-law distributions.

To produce synthetic social-network data, we take as reference information published by current social network applications, in particular Facebook. The number of users in Facebook is significantly larger than the number of webpages. By default, we set 50% of the nodes as persons and 50% as webpages. The identifier of a person node (i.e., the attribute pid) is an integer in the range [1, N × 0.5], where N is the number of nodes in the graph. The names and locations of persons are selected randomly from dictionaries containing 5494 first names, 88799 last names, and 656 locations. Hence, the probability of duplicated pairs of these attributes depends on the size of the graph and the distributions used. The attributes age and location occur with probability 0.6 and 0.3, respectively. The age of a person is a random integer between 10 and 73. The identifier of a webpage node (i.e., the attribute wpid) is an integer in the range [(N × 0.8) + 1, N]. The attribute URL follows a fixed pattern in which ID is the wpid of the webpage. The probability of including a random creation date for a webpage is 0.6.
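Under the probabilities stated above, person-attribute generation could look roughly like this; the dictionary contents below are tiny stand-ins for the real 5494-name and 656-location dictionaries.

```java
// Sketch of person-attribute generation with the probabilities stated above:
// age present with probability 0.6 (uniform in [10, 73]),
// location present with probability 0.3.
import java.util.Random;

class PersonAttributes {
    static final String[] FIRST_NAMES = {"Ada", "Alan", "Grace"};  // stand-in dictionary
    static final String[] LOCATIONS = {"Talca", "Santiago"};       // stand-in dictionary

    final String name;
    final Integer age;        // null when the optional attribute is absent
    final String location;    // null when the optional attribute is absent

    PersonAttributes(Random rnd) {
        name = FIRST_NAMES[rnd.nextInt(FIRST_NAMES.length)];
        age = rnd.nextDouble() < 0.6
                ? 10 + rnd.nextInt(64)                             // 64 values: 10..73
                : null;
        location = rnd.nextDouble() < 0.3
                ? LOCATIONS[rnd.nextInt(LOCATIONS.length)]
                : null;
    }
}
```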

Resembling the data generator of LinkBench, the benchmark for the Facebook social graph, GDGenerator can produce power-law distributions for the edges of the friend and like relationships.

The current version of GDGenerator allows the user to specify three parameters:

  • the number of nodes in the graph (by default, 50% of the nodes are people and 50% webpages);
  • the format of the output data file (gd = graph data -default-, gml = GraphML and n3 = Notation3-RDF);
  • the statistical distribution of edges friend/like (1 = powerlaw/powerlaw, 2 = powerlaw/normal, 3 = normal/powerlaw, 4 = normal/normal).

The output of the data generator is a file named sndata.* containing the graph data in a specific format. The default format is a non-standard compressed format called graphdata (*.gd).

The GDBench Test Driver

The GDBench test driver is available as a Java library, GDBench.jar, which supports the execution of data-loading and query tests. It is obtained by compiling the code available in the GDBench directory.

The main class of the library, TestDriver, is an abstract class that defines an interface for operating with a database system. It includes methods for database definition (createDB, openDB, closeDB), data loading (openTransaction, closeTransaction, insertPerson, insertWebpage, insertFriend, insertLike), and query execution (Q1, ..., Q13). Next we describe the steps to use the GDBench.jar library.
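Based on the method list above, the abstract class might look roughly like the sketch below. The argument and return types are our assumptions, not the library's actual signatures; consult GDBench.jar itself before implementing a driver.

```java
// Rough sketch of the abstract test-driver interface described above.
// Method signatures (argument types, return types) are assumptions.
abstract class TestDriver {
    // Database definition
    abstract void createDB(String path);
    abstract void openDB();
    abstract void closeDB();

    // Data loading
    abstract void openTransaction();
    abstract void closeTransaction();
    abstract void insertPerson(long pid, String name, Integer age, String location);
    abstract void insertWebpage(long wpid, String url, String creationDate);
    abstract void insertFriend(long pid1, long pid2);
    abstract void insertLike(long pid, long wpid);

    // Query execution: one method per query type, Q1 ... Q13 (two shown)
    abstract long Q1(String name);            // people having a name N
    abstract long Q9(long pid1, long pid2);   // shortest path between P1 and P2
}
```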

How to evaluate a database system with GDBench

The procedure for evaluating the performance of a database system with the benchmark described above consists of the following steps.

Data generation.

The first step is to generate the data by using the graph data generator GDGenerator to create the data file.

Test driver implementation.

This step consists of developing a Java class that implements the abstract class with the code corresponding to the target database system. MyGDBench is a Java project that can be used as a starting point for implementing your own test driver.
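As an illustration of the idea, here is a self-contained skeleton of a driver backed by a plain in-memory map. BaseDriver is our own stand-in for the library's abstract class, reduced to three methods; a real driver would translate each method into calls against the target database's API.

```java
// Self-contained skeleton of a test-driver implementation over an
// in-memory "database". BaseDriver is a stand-in, not the library's class.
import java.util.HashMap;
import java.util.Map;

abstract class BaseDriver {
    abstract void openDB();
    abstract void insertPerson(long pid, String name);
    abstract String Q4(long pid);   // name of the person with a given PID
}

class InMemoryDriver extends BaseDriver {
    private final Map<Long, String> people = new HashMap<>();

    @Override void openDB() { /* nothing to open for an in-memory map */ }

    @Override void insertPerson(long pid, String name) { people.put(pid, name); }

    @Override String Q4(long pid) { return people.get(pid); }  // null if absent
}
```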

The execution of the benchmark is defined by a configuration file called gdbench.conf. If this file does not exist, it is created with the following content:


--- Parameters ---
DataLoading=true|false : Run data loading test.
QueryTest=true|false : Run query test for all queries.
Query=# : Run query test for query type # (1 <= # <= 13). If # = 0 then run query test for all queries.
Instances=# : Defines the number # of instances to be executed per query.
ExecutionOrder=s|r : Defines the query execution order (s = sequential, r = random).
Repetitions=# : Defines the number # of times that each instance query will be executed.
TestData=r|i|m : Defines the method for test data selection (r = random, i = interval, m = media-based).
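For instance, a gdbench.conf that runs both tests with 100 randomly ordered instances per query might read as follows (the values are illustrative):

```
DataLoading=true
QueryTest=true
Query=0
Instances=100
ExecutionOrder=r
Repetitions=1
TestData=r
```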

Data loading test.

The benchmark allows transactional or bulk data loading.

Transactional data loading is supported by calling the method runDataLoading() provided by the class MyTestDriver (and implemented in the class TestDriver). The application reads the data file and uses transactions to insert the data into the target database, according to the data-loading methods defined in the class MyTestDriver (recall that these methods must be implemented by the user).

Bulk data loading depends on the support of a database system for reading specific data formats. For example, RDF databases are able to read N3 and RDF-XML files. Unfortunately, current graph database systems use different, non-standard data formats, so a separate data loader must be developed for each.

Query execution test.

The query execution test (which must be run after the data is stored in the database) executes a collection of queries called a "query mix". A query mix is constructed by creating a given number of instances of each query. By default, the benchmark creates 100 instances, but this can be changed with the parameter "Instances" in the configuration file. The test data used to create the query instances can be generated by one of three methods (random, interval, and media-based). The parameter "ExecutionOrder" selects between a sequential and a random order of the query instances in the query mix. Additionally, the user can run the test for a specific query type or run the complete query mix.
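The construction of a query mix as described above can be sketched as follows; query types are reduced to plain integers for brevity, and the seed parameter is our own addition for reproducibility.

```java
// Sketch of query-mix construction: `instances` entries for each of
// `queryTypes` query types, optionally shuffled (ExecutionOrder=r).
import java.util.ArrayList;
import java.util.Collections;
import java.util.List;
import java.util.Random;

class QueryMix {
    static List<Integer> build(int queryTypes, int instances,
                               boolean randomOrder, long seed) {
        List<Integer> mix = new ArrayList<>();
        for (int q = 1; q <= queryTypes; q++)           // sequential order: all Q1,
            for (int i = 0; i < instances; i++)          // then all Q2, and so on
                mix.add(q);
        if (randomOrder)
            Collections.shuffle(mix, new Random(seed));  // random order
        return mix;
    }
}
```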

The execution of the query test consists of two steps: the first prepares the database system (warm-up), and the second measures the performance of the system. The same query mix is executed in both steps.

Execution rules

The implementation of a test driver must comply with the following rules:

  • The user is able to choose between transactional or bulk data loading.
  • The queries must be implemented in the corresponding methods defined by the TestDriver java class.
  • The number of results for a query must be calculated by traversing the result set. This restriction does not apply to query Q12, where an aggregate operator can be used.
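The third rule can be illustrated as follows: the count is obtained by walking the result set element by element, never by asking the store for a precomputed size. Iterable stands in here for whatever result-set type a concrete driver returns.

```java
// Counting results by traversing the result set, as the execution rules require.
import java.util.Iterator;

class ResultCounter {
    static long count(Iterable<?> resultSet) {
        long n = 0;
        for (Iterator<?> it = resultSet.iterator(); it.hasNext(); it.next())
            n++;                     // one increment per traversed result
        return n;
    }
}
```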

Benchmarking results.

The benchmark provides reports, in HTML and CSV formats, for the transactional loading test and the query execution test. The data loading report includes information about the data loaded (e.g., number of nodes and edges) and the total loading time. The query execution report contains general and detailed information about the queries executed. The first part of the report presents, for each query type, the number of instances executed plus the total, maximum, minimum, average, variance, and standard deviation of the execution times. The second part lists the query instances executed during the test, including the query type, the input data, the output data, and the execution time of each query instance.

Implementation samples and experiments

The source code of Java projects implementing test drivers for several database systems (including Dex and Neo4j) is available here. Experimental results are also available.


Creator: Renzo Angles


  • Roberto García (DCC, Univ. de Talca): Design and development of the current version of GDBench.
  • Sebastián Arancibia (DCC, Univ. de Talca): Design and development of initial versions of the data generator.
  • Sergio Silva (DCC, Univ. de Talca): Implementation of test drivers for graph database systems and RDF stores.
  • Josep-Lluis Larriba-Pey (DAMA-UPC): Design of the benchmark.
  • David Dominguez (Sparsity Technologies): Design of the benchmark.
  • Arnau Prat (DAMA-UPC): Implementation of the data generator and the test drivers.


Last update: Oct 16, 2014

