
Various cache read and write performance optimizations. #5948

Merged
merged 11 commits into master from read-and-write-performance-optimizations on Feb 16, 2020

Conversation

benjamn
Member

@benjamn benjamn commented Feb 14, 2020

Although we've been building Apollo Client 3.0 with performance in mind, making sure (for example) that you only pay for the features you use, we haven't focused on optimizing raw performance per se, until now.

Using an especially large query and response object provided by a customer/contributor, I was able to reduce initial (cold) execution times for the following write/read round-trip from ~150ms (measured using the latest published beta, @apollo/client@3.0.0-beta.34) to ~95ms, a 37% improvement, and average (warm) execution times from ~105ms to ~25ms, a 76% improvement:

const start = Date.now();

cache.writeQuery({
  query,
  data: result.data,
  variables,
});

cache.watch({
  query,
  variables,
  optimistic: true,
  immediate: true,
  callback(data) {
    console.log(Date.now() - start, "ms");
  },
});

For this benchmark, I created a new InMemoryCache object for each run, so the results are not skewed by the benefits of result caching (see #3394), though result caching can have huge benefits for actual applications.

As always with performance, exact numbers will vary from query to query and machine to machine, but I used a 2014 dual-core 3GHz MacBook Pro, a 36KB query with lots of fragments, and a 500KB JSON-encoded result.

It's worth reviewing each of these commits separately, as they do not really follow a common theme (except for speeeed :feelsgood:).

Member

@hwillson hwillson left a comment


Really incredible work @benjamn!


When I wrote this code, I thought this array would usually be so short that indexOf would be faster than Set.prototype.has, but of course the pathological cases are what end up mattering, and I've recently seen some result objects that cause thousands of shallow copies to be made over a series of many merges using one DeepMerger instance.

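The switch described above can be sketched roughly as follows. This is a simplified stand-in for Apollo's actual DeepMerger, with a hypothetical method name, just to show why the Set makes the membership test constant-time:

```javascript
// Simplified sketch (not Apollo's actual implementation): track objects that
// have already been shallow-copied with a Set instead of an Array, so the
// membership test is O(1) via Set.prototype.has rather than O(n) via indexOf.
class DeepMerger {
  constructor() {
    // Previously an Array checked with indexOf on every call.
    this.pastCopies = new Set();
  }

  // Return a shallow copy of `value`, but copy each object at most once per
  // merge pass: objects we already produced are returned as-is.
  shallowCopyForMerge(value) {
    if (value !== null && typeof value === "object" && !this.pastCopies.has(value)) {
      value = Array.isArray(value) ? value.slice(0) : { ...value };
      this.pastCopies.add(value);
    }
    return value;
  }
}
```

With thousands of copies made across many merges, the O(1) lookup is what keeps the pathological cases cheap.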
Since shouldInclude gets called for every single field in any read or write operation, it's important that it takes any available shortcuts to handle the common case (no directives) as cheaply as possible.

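A minimal sketch of that fast path, using a deliberately simplified selection shape (the real function works on GraphQL AST nodes, whose directive arguments are more structured than the plain `args` object assumed here):

```javascript
// Sketch of the shortcut: bail out immediately when a field has no
// @skip/@include directives, which is the overwhelmingly common case.
function shouldInclude(selection, variables) {
  // Fast path: no directives (or an empty array) means always include.
  if (!selection.directives || !selection.directives.length) {
    return true;
  }
  // Slow path: evaluate @skip/@include, where args.if may be a literal
  // boolean or (in this simplified model) the name of a variable.
  return selection.directives.every(({ name, args }) => {
    const ifValue = typeof args.if === "string" ? variables[args.if] : args.if;
    if (name === "skip") return !ifValue;
    if (name === "include") return !!ifValue;
    return true;
  });
}
```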
Since any of the provided variables could be consumed by any of the fields in a selection set that we're reading, all variables are potentially relevant as part of the result object cache key, so we don't make any attempt to stringify just a subset of the variables. However, since we use the same stringified variables in every cache key, there's no need to perform that encoding repeatedly. JSON.stringify may be fast, but the variables object can be arbitrarily large.

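The idea can be illustrated like this (the factory function and key format here are hypothetical, not Apollo's internals):

```javascript
// Sketch: encode the variables object exactly once per operation, then reuse
// the resulting string in every per-field cache key, instead of calling
// JSON.stringify for each field against a potentially very large object.
function makeFieldKeyFactory(variables) {
  const varString = JSON.stringify(variables); // encoded once
  return (fieldName) => fieldName + ":" + varString;
}
```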
Believe it or not, iterating over the values of policies.rootTypenamesById was noticeably expensive according to Chrome devtools profiling. Since this information almost never changes, we might as well maintain it in the format that's most convenient.

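An illustrative sketch of that pattern (the inverse-map name is hypothetical): precompute the derived structure once, rather than iterating the values on every lookup.

```javascript
// Since the root typename data almost never changes, maintain the inverse
// mapping up front instead of scanning Object.values() repeatedly.
const rootTypenamesById = {
  ROOT_QUERY: "Query",
  ROOT_MUTATION: "Mutation",
  ROOT_SUBSCRIPTION: "Subscription",
};
const rootIdsByTypename = {};
for (const id of Object.keys(rootTypenamesById)) {
  rootIdsByTypename[rootTypenamesById[id]] = id;
}
// Each lookup is now a single property access, e.g. rootIdsByTypename.Query
```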
Creating a throwaway array just to call JSON.stringify was much more expensive than string concatenation. The exact format of these cache keys is an invisible implementation detail, so I picked something that seemed unlikely ever to be ambiguous, though we can easily change it later.

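Side by side, the before/after might look like this (function names and the exact separator characters are illustrative, since the PR treats the key format as an implementation detail):

```javascript
// Before: allocate an array per call just to serialize it.
function makeCacheKeyOld(typename, fieldName, varString) {
  return JSON.stringify([typename, fieldName, varString]);
}

// After: plain string concatenation with separators chosen to avoid ambiguity.
function makeCacheKeyNew(typename, fieldName, varString) {
  return typename + "." + fieldName + "|" + varString;
}
```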
Since policies.applyMerges doesn't change anything unless there are custom merge functions to process, we can skip calling it if no merge functions were found while processing the current entity.

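The shape of that optimization, in a hypothetical sketch (the function names and callback parameters here are stand-ins, not Apollo's real signatures):

```javascript
// While processing an entity's fields, record whether any field policy
// supplied a custom merge function; if none did, skip the merge pass
// entirely, since it would be a no-op.
function writeEntity(fields, getMergeFunction, applyMerges) {
  let hasMergeFunctions = false;
  for (const field of fields) {
    if (getMergeFunction(field)) hasMergeFunctions = true;
  }
  return hasMergeFunctions ? applyMerges(fields) : fields;
}
```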
Instead of recursively calling processSelectionSet to handle fragments, we can simply treat their fields as fields of the current selection set. This change means fragment results will no longer be cached separately from normal selection set results, which is potentially a loss of caching granularity, but there's also a reduction in caching overhead because we're caching fewer result objects, and we don't have to merge them all together, and (most importantly) the result caching system still tracks dependencies the same way as before.

It's as if we transformed the query by inlining fragment selections, except without doing any work!

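A toy illustration of the inlining idea, using a deliberately simplified selection shape rather than real GraphQL AST nodes: a fragment spread's fields are folded into the parent selection set instead of being processed (and cached) as a separate unit.

```javascript
// Flatten fragment spreads into the enclosing selection set, recursively,
// so every selection is treated as a field of the current selection set.
function inlineSelections(selections, fragments) {
  const fields = [];
  for (const sel of selections) {
    if (sel.kind === "FragmentSpread") {
      fields.push(...inlineSelections(fragments[sel.name].selections, fragments));
    } else {
      fields.push(sel);
    }
  }
  return fields;
}
```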
Although this may seem like a reversion to forEach instead of a for loop, the for loop had an unexpectedly negative impact on minification, and a Set has the ability to deduplicate selection objects, so we never re-process the same field multiple times through different fragments.

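The deduplication point can be sketched like this (a minimal stand-in, not the actual processSelectionSet code): a field object reached through several fragments appears only once in the Set, so it is processed only once.

```javascript
// Collect selections into a Set so duplicate object references collapse,
// then iterate with Set.prototype.forEach (which also minifies compactly).
function processOnce(selections) {
  const seen = new Set(selections); // duplicates collapse here
  const processed = [];
  seen.forEach((sel) => processed.push(sel));
  return processed;
}
```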
@benjamn benjamn force-pushed the read-and-write-performance-optimizations branch from 54d6ed2 to dc61865 on February 16, 2020 17:22
@benjamn benjamn merged commit b094e30 into master Feb 16, 2020
@benjamn benjamn deleted the read-and-write-performance-optimizations branch February 16, 2020 19:58
@benjamn
Member Author

benjamn commented Feb 17, 2020

These changes have been released in @apollo/client@3.0.0-beta.35, fyi.

@github-actions github-actions bot locked as resolved and limited conversation to collaborators Feb 16, 2023