[BREAKING] Resolve cache access when values are dispatched #113
Conversation
When cached keys are loaded along with new keys, make sure the promises returned for the cached values are resolved in the same micro-task as those for the newly loaded values. This means calling code will not (easily) be able to observe whether a key was cached or not.

If calling code uses the values to perform more loads (perhaps using different data loaders), this ensures that those loads are combined into a single batch. In the following example the same loader is used. Without this commit, once `userLoader.load(1)` is cached, the subsequent loads unexpectedly result in three requests, not two:

```js
var DataLoader = require('dataloader')

var userLoader = new DataLoader(keys => myBatchGetUsers(keys));

userLoader.load(1)
  .then(firstUser => {
    // Later…
    userLoader.load(1)
      .then(user => userLoader.load(user.invitedByID))
      .then(invitedBy => console.log(`User 1 was invited by ${invitedBy}`));

    // Elsewhere in your application
    userLoader.load(2)
      .then(user => userLoader.load(user.lastInvitedID))
      .then(lastInvited => console.log(`User 2 last invited ${lastInvited}`));
  })
```

This is because `userLoader.load(1)` resolves nearly instantaneously, while `userLoader.load(2)` requires a backend round-trip. This means the subsequent loads for `user.invitedByID` and `user.lastInvitedID` are not batched together. *With* this commit, however, both promises resolve in the same micro-task, allowing the subsequent loads to be batched.

Fixes #97.
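For illustration only, here is a minimal sketch of the idea (the class and names are hypothetical, not this PR's actual diff): every `load()`, cached or not, goes through the dispatch queue, so all returned promises settle together when the batch runs.

```js
// Sketch: cached hits settle in the same micro-task as fresh loads.
class CacheAwareLoader {
  constructor(batchFn) {
    this.batchFn = batchFn;
    this.cache = new Map();
    this.queue = [];
  }

  load(key) {
    const cached = this.cache.get(key);
    const promise = new Promise((resolve, reject) => {
      // Cached keys are queued too; their promises only settle at dispatch.
      this.queue.push({ key, cached, resolve, reject });
      if (this.queue.length === 1) {
        process.nextTick(() => this.dispatch());
      }
    });
    if (!cached) this.cache.set(key, promise);
    return promise;
  }

  dispatch() {
    const queue = this.queue;
    this.queue = [];
    const fresh = queue.filter(item => !item.cached);

    const settle = values => {
      fresh.forEach((item, i) => item.resolve(values[i]));
      // Cached hits resolve alongside the freshly loaded values, so
      // callers can't (easily) observe whether a key was cached.
      for (const item of queue) {
        if (item.cached) item.cached.then(item.resolve, item.reject);
      }
    };

    if (fresh.length === 0) {
      settle([]);
    } else {
      Promise.resolve(this.batchFn(fresh.map(item => item.key))).then(settle);
    }
  }
}
```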
This is pretty interesting, thanks for fleshing out an implementation. It's definitely breaking, seems more like a shift in tradeoffs than an explicit improvement, and adds a lot of implementation complexity. So I'm a bit nervous about proceeding directly.
@leebyron currently DataLoader leaks that a value has been retrieved previously. This makes it harder to reason about its behavior. In my example above it leads to decreased efficiency in subsequent requests. But yes, this is a trade-off; there might be other circumstances where the current behavior leads to better performance. I'd argue it violates the principle of least astonishment, though.
@novemberborn sorry to disturb you, but I don't see where it does 3 requests instead of 2: https://runkit.com/caub/5ab2bdc1a964eb001276b778 (I see one…)
@caub in your example the loads aren't racing each other. I think you'll see different behavior if you do…
@novemberborn Ah OK, I updated the runkit example, but still can't reproduce it.
@caub have a look at this one: https://runkit.com/novemberborn/5ab9142935659d0012049e22

When used in GraphQL this can prevent requests from being properly batched. Arguably the current behavior is also valid, though.
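A runnable illustration of the race being discussed (the batch function and IDs are made up for this sketch). It records which keys land in which batch; per the PR description, on versions without this change the follow-up load triggered by the cached `load(1)` can end up in a different batch than the one triggered by `load(2)`.

```js
const DataLoader = require('dataloader');

const batches = [];
const userLoader = new DataLoader(async keys => {
  batches.push(keys); // record each dispatched batch
  return keys.map(id => ({ id, friendID: id * 10 }));
});

userLoader.load(1).then(() => {
  // The cache for key 1 is now warm; race a cached load against a fresh one.
  Promise.all([
    userLoader.load(1).then(user => userLoader.load(user.friendID)),
    userLoader.load(2).then(user => userLoader.load(user.friendID)),
  ]).then(() => {
    // Inspect how the follow-up loads were batched; the grouping
    // depends on whether cached promises resolve with the dispatch.
    console.log(batches);
  });
});
```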
@novemberborn OK, so it behaves as expected; there are no extra requests: https://runkit.com/caub/5ab9186d567b6f0012824a7d (you talked about "subsequent loads unexpectedly result in three requests, not two"). I've read #97 as well. I'd be curious to see if you can reproduce this case, it'd help.
It can impact batching in GraphQL, in unexpected ways. It's really hard to reason about though. |
This PR is meant to give context to #97.

- I've used `const` to avoid some Flow ambiguity. Since I don't see it used in the codebase, please let me know if that's an issue.
- I'm using `Map`, I hope that's OK.
- The documentation implies that `DataLoader#load()` returns the same promise when caching is enabled. With this PR that changes: promises are only reused while the batch is still enqueued. This is kinda the point of the PR, but it might be considered a breaking change. There are no tests that explicitly guarantee this behavior, though.
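A hypothetical snippet showing the observable difference, assuming the semantics this PR describes (the identity batch function is made up for illustration):

```js
const DataLoader = require('dataloader');

const loader = new DataLoader(async keys => keys);

const first = loader.load(1);
const second = loader.load(1); // same batch, still enqueued
console.log(first === second); // true both before and after this PR

first.then(() => {
  const third = loader.load(1);
  // Before this PR: true — the cached promise itself is returned.
  // After this PR: may be false — a new promise that resolves with the
  // cached value when the next batch dispatches.
  console.log(first === third);
});
```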