content/documentation/master/batching.md (41 additions & 49 deletions)
---

# Using Dataloader

If you are using `graphql`, you are likely to be making queries on a graph of data (no surprises there). However, it's easy
to implement inefficient code with naive loading of a graph of data.

Using `java-dataloader` will help you to make this a more efficient process by both caching and batching requests for that graph of data items. If `dataloader`
has seen a data item before, it will have cached the value and will return it without having to ask for it again.
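The caching-and-batching mechanics can be sketched in plain Java. This is a toy illustration of the idea only, not the `java-dataloader` API itself; the class and method names below are invented:

```java
import java.util.ArrayList;
import java.util.HashMap;
import java.util.List;
import java.util.Map;
import java.util.concurrent.CompletableFuture;
import java.util.function.Function;

// A toy loader that queues keys and only invokes the (expensive) batch
// function once per dispatch(), returning cached futures for repeated keys.
class SimpleBatchingLoader<K, V> {
    private final Function<List<K>, List<V>> batchFunction;
    private final Map<K, CompletableFuture<V>> cache = new HashMap<>();
    private final List<K> queue = new ArrayList<>();

    SimpleBatchingLoader(Function<List<K>, List<V>> batchFunction) {
        this.batchFunction = batchFunction;
    }

    CompletableFuture<V> load(K key) {
        return cache.computeIfAbsent(key, k -> {
            queue.add(k); // only previously unseen keys join the next batch
            return new CompletableFuture<>();
        });
    }

    void dispatch() {
        if (queue.isEmpty()) return;
        List<K> keys = new ArrayList<>(queue);
        queue.clear();
        // one call for the whole batch, however many keys were queued
        List<V> values = batchFunction.apply(keys);
        for (int i = 0; i < keys.size(); i++) {
            cache.get(keys.get(i)).complete(values.get(i));
        }
    }
}
```

The key design point is that a repeated `load()` of an already-seen key returns the cached future immediately and never re-joins the queue, so the batch function is only ever asked about each key once.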

Imagine we have the StarWars query outlined below. It asks us to find a hero, and their friends' names, and their friends' friends'
names. It is likely that many of these people will be friends in common.

{{< highlight graphql "linenos=table" >}}
}
{{< / highlight >}}
The result of this query is displayed below. You can see that Han, Leia, Luke and R2-D2 are a tight-knit bunch of friends and
share many friends in common.
{{< highlight json "linenos=table" >}}
{{< / highlight >}}

A naive implementation would call a `DataFetcher` to retrieve a person object every time it was invoked.
In this case it would be *15* calls over the network, even though the group of people have a lot of common friends.
With `dataloader` you can make the `graphql` query much more efficient.
As `graphql` descends each level of the query (e.g., as it processes `hero` and then `friends` and then for each of their `friends`),
the data loader is called to "promise" to deliver a person object. At each level `dataloader.dispatch()` will be
called to fire off the batch requests for that part of the query. With caching turned on (the default) then
any previously returned person will be returned as-is for no cost.
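To make the per-level dispatch arithmetic concrete, here is a self-contained toy walk of a three-level query that counts batch calls. The friend lists are invented stand-ins for the query result above, and `batchLoad` is a hypothetical stand-in for one batched network call:

```java
import java.util.ArrayList;
import java.util.HashSet;
import java.util.List;
import java.util.Map;
import java.util.Set;

class DispatchWalk {
    static int batchCalls = 0;

    // Hypothetical stand-in for one batched network call.
    static List<String> batchLoad(List<String> keys) {
        batchCalls++;
        return keys; // pretend each key resolves to a person object
    }

    public static void main(String[] args) {
        // Invented friend graph: 5 unique people with many friends in common.
        Map<String, List<String>> friends = Map.of(
                "r2d2", List.of("luke", "han", "leia"),
                "luke", List.of("han", "leia", "c3po", "r2d2"),
                "han", List.of("luke", "leia", "r2d2"),
                "leia", List.of("luke", "han", "c3po", "r2d2"),
                "c3po", List.of("luke", "han", "leia", "r2d2"));

        Set<String> cache = new HashSet<>();
        List<String> level = List.of("r2d2"); // level 1: the hero
        for (int depth = 0; depth < 3; depth++) {
            // only keys never seen before join this level's batch
            List<String> batch = new ArrayList<>();
            for (String key : level)
                if (cache.add(key)) batch.add(key);
            if (!batch.isEmpty()) batchLoad(batch); // one dispatch per level
            List<String> next = new ArrayList<>();
            for (String key : level)
                next.addAll(friends.getOrDefault(key, List.of()));
            level = next;
        }
        // A naive fetch per key would be 1 + 3 + 11 = 15 calls.
        System.out.println(batchCalls); // prints 3
    }
}
```

With caching and batching in place, the three levels collapse into one batch call each, and the third batch contains only the single person not already seen.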

In the above example there are only *5* unique people mentioned but with caching
*3* calls to the batch loader function. *3* calls over the network or to a database is much better than *15* calls, you will agree.
If you use capabilities like `java.util.concurrent.CompletableFuture.supplyAsync()` then you can make it even more efficient by making the
remote calls asynchronous to the rest of the query. This will make it even more timely since multiple calls can happen at once
if need be.
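A minimal sketch of that `supplyAsync()` idea, assuming a made-up `slowFetch` method standing in for a remote batch call. Both batches are submitted before either is joined, so they are free to overlap in time on the common pool rather than run strictly one after the other:

```java
import java.util.List;
import java.util.concurrent.CompletableFuture;

class AsyncBatches {
    // Made-up stand-in for a slow remote batch call.
    static List<String> slowFetch(List<String> keys) {
        try {
            Thread.sleep(50); // pretend network latency
        } catch (InterruptedException e) {
            throw new RuntimeException(e);
        }
        return keys; // pretend each key resolves to itself
    }

    public static void main(String[] args) {
        // Kick off both batch calls before waiting on either of them.
        CompletableFuture<List<String>> heroes =
                CompletableFuture.supplyAsync(() -> slowFetch(List.of("luke", "leia")));
        CompletableFuture<List<String>> droids =
                CompletableFuture.supplyAsync(() -> slowFetch(List.of("r2d2", "c3po")));
        System.out.println(heroes.join()); // prints [luke, leia]
        System.out.println(droids.join()); // prints [r2d2, c3po]
    }
}
```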
Here is how you might put this in place:

@Override
public CompletionStage<List<Object>> load(List<String> keys) {
    //
    // we use supplyAsync() of values here for maximum parallelisation
    //
    // (loadPeopleViaBatchCall is a hypothetical stand-in for a batched data-access call)
    return CompletableFuture.supplyAsync(() -> loadPeopleViaBatchCall(keys));
}