Vert.x DataLoader


Note: a pure Java 8 (non-Vert.x) port of this project is now an official graphql-java project: java-dataloader

*(Diagram: vertx-dataloader concepts)*

This small and simple utility library is a port of Facebook DataLoader to Java 8 for use with Vert.x. It can serve as an integral part of your application's data layer, providing a consistent API over various back-ends and reducing message communication overhead through batching and caching.

An important use case for DataLoader is improving the efficiency of GraphQL query execution, but there are many other use cases where you can benefit from using this utility.

Most of the code is ported directly from Facebook's reference implementation, with one IMPORTANT adaptation to make it work for Java 8 and Vert.x (more on this below).

But before reading on, be sure to take a short dive into the original documentation provided by Lee Byron (@leebyron) and Nicholas Schrock (@schrockn) from Facebook, the creators of the original data loader.


Features

Vert.x DataLoader is a feature-complete port of the Facebook reference implementation, with one major difference that is described below. Its features are:

  • Simple, intuitive API, using generics and fluent coding
  • Define batch load function with lambda expression
  • Schedule a load request in queue for batching
  • Add load requests from anywhere in code
  • Request returns Future<V> of requested value
  • Can create multiple requests at once, returns CompositeFuture
  • Caches load requests, so data is only fetched once
  • Can clear individual cache keys, so data is fetched on next batch queue dispatch
  • Can prime the cache with key/values, to avoid data being fetched needlessly
  • Can configure cache key function with lambda expression to extract cache key from complex data loader key types
  • Dispatch load request queue after batch is prepared, also returns CompositeFuture
  • Individual batch futures complete / resolve as batch is processed
  • CompositeFuture results are ordered according to insertion order of load requests
  • Deals with partial errors when a batch future fails
  • Can disable batching and/or caching in configuration
  • Can supply your own CacheMap<K, V> implementations
  • Has very high test coverage (see Acknowledgements)
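One of the behaviours listed above, insertion-order results, can be illustrated in plain Java. The sketch below is a conceptual stand-in, not the library's API: it uses `CompletableFuture` rather than Vert.x futures, and the class and method names are purely illustrative. It shows how a combined result can preserve the insertion order of the load requests even when the individual futures complete out of order, mirroring the described CompositeFuture ordering:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;
import java.util.stream.Collectors;

public class InsertionOrderDemo {
    // Combine per-key futures into one future whose result list preserves
    // the insertion order of the load requests, not their completion order.
    static <T> CompletableFuture<List<T>> allInOrder(List<CompletableFuture<T>> futures) {
        return CompletableFuture
                .allOf(futures.toArray(new CompletableFuture[0]))
                .thenApply(v -> futures.stream()
                        .map(CompletableFuture::join)   // safe: all are complete here
                        .collect(Collectors.toList()));
    }

    public static void main(String[] args) {
        CompletableFuture<String> f1 = new CompletableFuture<>();
        CompletableFuture<String> f2 = new CompletableFuture<>();
        CompletableFuture<String> f3 = new CompletableFuture<>();

        CompletableFuture<List<String>> all = allInOrder(List.of(f1, f2, f3));

        // Complete out of order, as a batched back-end call might.
        f3.complete("c");
        f1.complete("a");
        f2.complete("b");

        System.out.println(all.join());  // [a, b, c] -- insertion order preserved
    }
}
```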

Differences to reference implementation

Manual dispatching

The original data loader was written in Javascript for NodeJS. NodeJS is single-threaded by nature, and achieves asynchronous behaviour by queuing callbacks for later execution on its event loop, as explained in this post on StackOverflow.

Vert.x, on the other hand, also uses an event loop (which you should never block!), but comes with actor-like Verticles and a distributed EventBus that make it inherently asynchronous and non-blocking.

The NodeJS event loop generates so-called 'ticks' in which queued functions are dispatched for execution, and Facebook DataLoader uses the NodeJS nextTick() function to automatically dequeue load requests and send them to the batch execution function for processing.

And here lies an IMPORTANT DIFFERENCE in how this data loader operates!

In NodeJS the batch preparation will not affect the asynchronous processing behaviour in any way. It will just prepare batches in 'spare time' as it were.

This is different in Vert.x: execution of your load requests is delayed until the moment you call dataLoader.dispatch(), whereas handling the futures directly would start them immediately.

Does this make Vert.x DataLoader any less useful than the reference implementation? I would argue it does not, and there are also gains to this different mode of operation:

  • In contrast to the NodeJS implementation you as developer are in full control of when batches are dispatched
  • You can attach any logic that determines when a dispatch takes place
  • You still retain all other features, full caching support and batching (e.g. to optimize message bus traffic, GraphQL query execution time, etc.)

However, with batch execution control comes responsibility! If you forget to make the call to dispatch() then the futures in the load request queue will never be batched, and thus will never complete! So be careful when crafting your loader designs.
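To make the manual-dispatch contract concrete, here is a minimal, self-contained sketch of the pattern in plain Java. This is not the library's implementation (which uses Vert.x futures and keeps a cache across dispatches); the `ManualBatcher` class and everything beyond the load/dispatch vocabulary are illustrative assumptions:

```java
import java.util.ArrayList;
import java.util.LinkedHashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// Conceptual stand-in for the library: queues keys until dispatch() is called.
class ManualBatcher<K, V> {
    private final Function<List<K>, List<V>> batchFn;
    // LinkedHashMap keeps insertion order and dedupes repeated keys.
    private final Map<K, CompletableFuture<V>> queue = new LinkedHashMap<>();

    ManualBatcher(Function<List<K>, List<V>> batchFn) {
        this.batchFn = batchFn;
    }

    CompletableFuture<V> load(K key) {
        return queue.computeIfAbsent(key, k -> new CompletableFuture<>());
    }

    void dispatch() {
        List<K> keys = new ArrayList<>(queue.keySet());
        if (keys.isEmpty()) return;
        List<V> values = batchFn.apply(keys);  // one batched back-end call
        for (int i = 0; i < keys.size(); i++) {
            queue.get(keys.get(i)).complete(values.get(i));
        }
        queue.clear();  // note: the real library also caches across dispatches
    }
}

public class ManualDispatchDemo {
    public static void main(String[] args) {
        ManualBatcher<Integer, String> loader = new ManualBatcher<>(keys -> {
            List<String> out = new ArrayList<>();
            for (Integer k : keys) out.add("user-" + k);
            return out;
        });

        CompletableFuture<String> a = loader.load(1);
        CompletableFuture<String> b = loader.load(2);

        // Nothing completes until dispatch() is called; forget the call
        // and these futures hang forever.
        System.out.println(a.isDone());  // false

        loader.dispatch();
        System.out.println(a.join());    // user-1
        System.out.println(b.join());    // user-2
    }
}
```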

Let's get started!

Installing

Gradle users configure the vertx-dataloader dependency in build.gradle:

repositories {
    jcenter()
}

dependencies {
    compile 'io.engagingspaces:vertx-dataloader:1.0.0'
}

Building

To build from source use the Gradle wrapper:

./gradlew clean build

Or when using Maven add the following repository to your pom.xml:

<repositories>
    <repository>
        <snapshots>
            <enabled>false</enabled>
        </snapshots>
        <id>central</id>
        <name>bintray</name>
        <url>http://jcenter.bintray.com</url>
    </repository>
</repositories>

And add the dependency to vertx-dataloader:

<dependency>
    <groupId>io.engagingspaces</groupId>
    <artifactId>vertx-dataloader</artifactId>
    <version>1.0.0</version>
    <type>pom</type>
</dependency>

Using

Please take a look at the example project vertx-graphql-example created by Bruno Santos.

Project plans

Current releases

  • 1.0.0 Initial release

Known issues

  • Tests on job queue ordering need refactoring to Futures, one test currently omitted

Future ideas

  • CompletableFuture implementation


Contributing

All your feedback and help to improve this project is very welcome. Please create issues for your bugs, ideas and enhancement requests, or better yet, contribute directly by creating a PR.

When reporting an issue, please add a detailed instruction, and if possible a code snippet or test that can be used as a reproducer of your problem.

When creating a pull request, please adhere to the Vert.x coding style where possible, and create tests with your code so the project keeps its excellent level of test coverage. PRs without tests may not be accepted unless they only deal with minor changes.

Acknowledgements

This library is entirely inspired by the great work of Lee Byron and Nicholas Schrock from Facebook, whom I would like to thank, and especially @leebyron for taking the time and effort to provide 100% test coverage on the codebase, a test suite which I also ported.

Licensing

This project, vertx-dataloader, is licensed under the Apache License, Version 2.0.

Copyright © 2016 Arnold Schrijver and other contributors
