
Improve optimistic update performance by limiting cache key diversity. #5648

Merged: benjamn merged 2 commits into release-3.0 from reduce-result-caching-key-diversity on Dec 3, 2019

Conversation

@benjamn (Member) commented on Dec 3, 2019

This PR restores the full benefits of result caching (#3394) when optimistic updates are pending, which should significantly improve the performance of optimistic updates. Apollo Client 2.x did not attempt to apply result caching to optimistic data, and the EntityStore (fka EntityCache; see #5618) introduced by Apollo Client 3.0 has been getting the cache key logic slightly wrong since it was first implemented, so the problem addressed by this PR falls somewhere between a bug and an optimization opportunity.

Details can be found in the commit messages, reproduced here:

Divide EntityStore into optimistic and non-optimistic CacheGroups

The result caching system introduced by #3394 gained the ability to cache optimistic results (rather than just non-optimistic results) in #5197, but since then has suffered from unnecessary cache key diversity during optimistic updates, because every EntityStore.Layer object (corresponding to a single optimistic update) counts as a distinct cache key, which prevents cached results from being reused if they were originally read from a different Layer object.

This commit introduces the concept of a CacheGroup, store.group, which manages dependency tracking and also serves as a source of keys for the result caching system. While the Root object has its own CacheGroup, Layer objects share a CacheGroup object, which is the key to limiting diversity of cache keys when more than one optimistic update is pending.

This separation allows the InMemoryCache to enjoy the full benefits of result caching for both optimistic (Layer) and non-optimistic (Root) data, separately.
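
To make the layering concrete, here is a minimal TypeScript sketch of the idea (not Apollo Client's actual source; names like `sharedLayerGroup` and the simplified dependency set are illustrative assumptions). The Root constructs a private CacheGroup for itself, while every Layer receives and forwards a single shared group:

```typescript
class CacheGroup {
  // Dependency tracking, radically simplified: the real implementation
  // wires these keys into the optimism result-caching library.
  private readonly deps = new Set<string>();

  depend(dataId: string, storeFieldName: string) {
    this.deps.add(`${dataId}.${storeFieldName}`);
  }
}

abstract class EntityStore {
  constructor(public readonly group: CacheGroup) {}
}

class Root extends EntityStore {
  // Hypothetical name: one group handed to every optimistic Layer.
  private readonly sharedLayerGroup = new CacheGroup();

  constructor() {
    // The Root gets a CacheGroup of its own for non-optimistic data.
    super(new CacheGroup());
  }

  addLayer(layerId: string): Layer {
    return new Layer(layerId, this, this.sharedLayerGroup);
  }
}

class Layer extends EntityStore {
  constructor(
    public readonly id: string,
    public readonly parent: EntityStore,
    group: CacheGroup,
  ) {
    super(group);
  }

  addLayer(layerId: string): Layer {
    // Stacked Layers keep passing the same group along, so however many
    // optimistic updates are pending, they share one set of cache keys.
    return new Layer(layerId, this, this.group);
  }
}
```

Because the shared group is what feeds the result cache its keys, stacking a second or third optimistic update no longer fragments the cache the way per-Layer keys did.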

Implement EntityStore#makeCacheKey to simplify result caching

Although all the components of a key returned by makeCacheKey are important, there's usually one that "anchors" the rest, because it survives the longest and changes the least. In the case of the result caching system, the current EntityStore object is that anchor.

This commit formalizes that anchoring role by making the EntityStore literally responsible for generating cache keys based on query AST objects, variables, and other frequently changing inputs.

Especially given the recent introduction of CacheGroup logic, it's easier to reason about the makeCacheKey functions if we put part of their implementation behind an abstraction. This abstraction also means we no longer need to pass a KeyTrie into the StoreReader and StoreWriter constructors, as the KeyTrie (context.store.group.keyMaker) now resides within the EntityStore, which is provided as part of the ExecContext (context.store).
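
As a rough sketch of what that anchoring might look like in TypeScript (simplified and assumed, not the actual implementation; the real KeyTrie is imported from the optimism library, and dependency tracking is omitted):

```typescript
class KeyTrie {
  // Simplified stand-in for optimism's KeyTrie: identical lookup
  // sequences always map to the same (===) key object.
  private readonly children = new Map<any, KeyTrie>();
  private key?: object;

  // Walks the trie along the argument sequence, creating nodes as
  // needed; the same sequence always reaches the same node.
  lookupArray(array: any[]): object {
    let node: KeyTrie = this;
    for (const item of array) {
      let child = node.children.get(item);
      if (!child) node.children.set(item, (child = new KeyTrie()));
      node = child;
    }
    return node.key || (node.key = Object.create(null));
  }
}

class CacheGroup {
  public readonly keyMaker = new KeyTrie();
}

class EntityStore {
  constructor(public readonly group: CacheGroup) {}

  // Because all Layers share one CacheGroup (and hence one keyMaker),
  // the same query/variables combination produces the same key object
  // no matter which Layer performs the read, so cached results can be
  // reused across optimistic updates.
  makeCacheKey(...args: any[]): object {
    return this.group.keyMaker.lookupArray(args);
  }
}
```

With this shape, a StoreReader no longer needs its own KeyTrie: wherever it has an ExecContext, calling something like `context.store.makeCacheKey(query, variables)` yields a stable key scoped to the appropriate CacheGroup.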

@hwillson (Member) left a comment:


Very interesting solution @benjamn (and thanks for explaining the cache key diversity issue so clearly!).

@benjamn force-pushed the ignore-previousResult-when-reading-from-cache branch from f9c6d60 to 8c622bb on December 3, 2019 at 20:13.
@benjamn changed the base branch from ignore-previousResult-when-reading-from-cache to release-3.0 on December 3, 2019 at 20:16.
@benjamn force-pushed the reduce-result-caching-key-diversity branch from ec6ab66 to 0c26be9 on December 3, 2019 at 20:21.
@benjamn mentioned this pull request on Dec 3, 2019.
@benjamn merged commit 8416f62 into release-3.0 on Dec 3, 2019.
@benjamn deleted the reduce-result-caching-key-diversity branch on December 3, 2019 at 20:24.
@benjamn changed the title from "Improve optimistic result caching by limiting cache key diversity" to "Improve optimistic update performance by limiting cache key diversity" on Dec 5, 2019.
@github-actions bot locked this conversation as resolved and limited it to collaborators on Feb 16, 2023.