
Avoid copying entire cache on each optimistic read. #4319

Merged: 6 commits into master from avoid-copying-entire-cache-for-optimistic-reads, Jan 17, 2019

Conversation

@benjamn benjamn (Member) commented Jan 16, 2019

Previously, the InMemoryCache maintained an array of recorded optimistic updates, which it would merge together into an entirely new ObjectCache whenever performing any single optimistic read.

This merging behavior was wasteful, but the real performance killer was calling this.extract(true) each time, which copied the entire underlying (non-optimistic) cache, just to create a throw-away ObjectCache for the duration of the optimistic read. Worse still, this.extract(true) was also called in recordOptimisticTransaction, to create a throw-away RecordingCache object.

Instead of creating temporary copies of the entire cache, InMemoryCache now maintains a linked list of OptimisticCacheLayer objects, which extend ObjectCache and implement the NormalizedCache interface, but cleverly delegate to a parent cache for any missing properties. This delegation happens simply by calling this.parent.get(dataId), so there is no need to extract/copy the parent cache.
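
In code, the layering idea looks roughly like the sketch below. This is a simplified reconstruction for illustration only: the inline types stand in for the package's NormalizedCache interfaces, and the real classes carry more state (such as the recorded transaction) than shown here.

```ts
// Simplified, self-contained sketch of the layering idea -- not the exact
// code in this PR. Inline types stand in for the package's own interfaces.
type StoreObject = Record<string, any>;
type NormalizedCacheObject = { [dataId: string]: StoreObject | undefined };

class ObjectCache {
  constructor(protected data: NormalizedCacheObject = Object.create(null)) {}
  public get(dataId: string): StoreObject | undefined {
    return this.data[dataId];
  }
  public set(dataId: string, value: StoreObject) {
    this.data[dataId] = value;
  }
  public toObject(): NormalizedCacheObject {
    return this.data;
  }
}

class OptimisticCacheLayer extends ObjectCache {
  constructor(
    public readonly optimisticId: string,
    public readonly parent: ObjectCache,
  ) {
    // Each layer starts out empty and records only the entities that its
    // optimistic transaction actually writes.
    super(Object.create(null));
  }

  public get(dataId: string): StoreObject | undefined {
    // Delegate to the parent (which may itself be another layer) for any
    // entity this layer has not written -- no copying of the parent cache.
    return dataId in this.data ? this.data[dataId] : this.parent.get(dataId);
  }
}
```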

When no optimistic data are currently active, cache.optimisticData === cache.data, which means there are no additional layers on top of the actual data. Lookup time is proportional to the number of OptimisticCacheLayer objects in the linked list, so it's best if that list remains reasonably short, but at least that's something the application developer can control.

Calling removeOptimistic(id) removes all OptimisticCacheLayer objects with the given id, and then reapplies the remaining layers by re-running their transaction functions.
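
A sketch of that removal-and-replay strategy, using deliberately simplified stand-in shapes; the reapply callback below stands in for whatever the cache actually uses to re-record a transaction, and none of these names are taken from the diff.

```ts
// Illustrative sketch of removeOptimistic(id): drop every layer recorded
// under the given id, then replay the surviving layers' transactions
// (oldest first) on top of the underlying data.
interface LayerEntry {
  optimisticId: string;
  transaction: () => void;   // the recorded optimistic update
  parent: LayerEntry | null; // null marks the underlying non-optimistic data
}

function removeOptimistic(
  top: LayerEntry | null,
  idToRemove: string,
  // Stand-in for re-recording a transaction: pushes a fresh layer and runs
  // the transaction against it.
  reapply: (transaction: () => void, optimisticId: string) => void,
): void {
  // Walk the linked list from the newest layer down to the root, keeping
  // every layer that is NOT being removed.
  const survivors: LayerEntry[] = [];
  let layer = top;
  while (layer !== null) {
    if (layer.optimisticId !== idToRemove) survivors.push(layer);
    layer = layer.parent;
  }
  // The cache resets its optimistic pointer back to the plain data, then the
  // surviving transactions are re-recorded oldest-first so layer order is
  // preserved.
  for (let i = survivors.length - 1; i >= 0; --i) {
    reapply(survivors[i].transaction, survivors[i].optimisticId);
  }
}
```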

These improvements probably would have made the excessive memory usage reported in #4210 much less severe, though disabling dependency tracking for optimistic reads (the previous solution) still seems like a good idea.

@benjamn benjamn self-assigned this Jan 16, 2019
@benjamn benjamn requested a review from hwillson January 16, 2019 23:08
return this;
}

public toObject(): NormalizedCacheObject {

benjamn (Member, Author) commented:

It seemed important to implement the toObject method correctly, but one big benefit of this PR is that toObject should no longer be called in the course of normal InMemoryCache usage. Maybe if you happen to call cache.extract(true) while optimistic updates are active, but only then.
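
For reference, a correct layered toObject presumably amounts to spreading the layer's own entries over its parent's snapshot, along these lines (an illustrative sketch, not the code from this PR):

```ts
// Illustrative: a layer's extracted snapshot is its parent's snapshot with
// the layer's own optimistic entries written on top. Only reached via
// explicit extraction, e.g. cache.extract(true).
function layerToObject(
  parent: { toObject(): Record<string, any> },
  ownData: Record<string, any>,
): Record<string, any> {
  return { ...parent.toObject(), ...ownData };
}
```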

return null;
}

const store =
query.optimistic && this.optimistic.length
? new ObjectCache(this.extract(true))

benjamn (Member, Author) commented:

This line was previously really expensive.

@@ -174,13 +203,8 @@ export class InMemoryCache extends ApolloCache<NormalizedCacheObject> {
}

public diff<T>(query: Cache.DiffOptions): Cache.DiffResult<T> {
const store =
query.optimistic && this.optimistic.length
? new ObjectCache(this.extract(true))

benjamn (Member, Author) commented:

This line was previously really expensive, too.


// Re-run all of our optimistic data actions on top of one another.
toPerform.forEach(change => {
this.recordOptimisticTransaction(change.transaction, change.id);

benjamn (Member, Author) commented:

Every single time we called this.recordOptimisticTransaction here, the entire cache was copied.
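
By contrast, with the layered design, recording an optimistic transaction should reduce to pushing one new, initially empty layer and running the transaction against it. A rough, self-contained sketch follows (made-up shapes and helper signature, not the PR's actual API):

```ts
// Simplified sketch: recording an optimistic transaction pushes one new,
// initially empty layer and runs the transaction against it. The underlying
// cache is never copied. Shapes and names here are illustrative.
type Transaction = (write: (dataId: string, value: object) => void) => void;

interface Layer {
  optimisticId: string;
  data: Record<string, object>;
  parent: Layer | null; // null marks the underlying non-optimistic data
}

function recordOptimisticTransaction(
  top: Layer | null,
  transaction: Transaction,
  optimisticId: string,
): Layer {
  const layer: Layer = {
    optimisticId,
    data: Object.create(null),
    parent: top,
  };
  // Every write performed by the transaction lands in this layer only;
  // reads that miss the layer fall through to `parent`.
  transaction((dataId, value) => {
    layer.data[dataId] = value;
  });
  return layer; // new top of the optimistic layer stack
}
```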

@@ -1,23 +1,28 @@
import { NormalizedCache, NormalizedCacheObject, StoreObject } from './types';

export class ObjectCache implements NormalizedCache {
constructor(private data: NormalizedCacheObject = Object.create(null)) {}
constructor(protected data: NormalizedCacheObject = Object.create(null)) {}
public toObject(): NormalizedCacheObject {

benjamn (Member, Author) commented:

The only functional change in this module was making data a protected rather than private member, since we need to access it in the OptimisticCacheLayer subclass.

import { NormalizedCacheObject } from '../types';

describe('RecordingCache', () => {
describe('OptimisticCacheLayer', () => {

benjamn (Member, Author) commented:

I was tempted just to git rm src/__tests__/recordingCache.ts, since src/recordingCache.ts no longer exists, but then I realized these tests could be adapted to use the new OptimisticCacheLayer class instead. Similar in spirit, though very different in implementation.

@@ -8,5 +8,4 @@ export * from './readFromStore';
export * from './writeToStore';
export * from './fragmentMatcher';
export * from './objectCache';
export * from './recordingCache';

benjamn (Member, Author) commented:

I very much doubt anyone was using the RecordingCache outside of the tests in this package, though I suppose this could be considered a minor breaking change.

@mxstbr mxstbr (Contributor) commented Jan 17, 2019

SO PUMPED ABOUT THIS!!!!

…thods.

Once we stopped calling this.transformDocument in these method
implementations, they became character-for-character identical to the
implementations already provided by the ApolloCache superclass, so we can
simply inherit them as-is.
@benjamn benjamn force-pushed the avoid-copying-entire-cache-for-optimistic-reads branch from 2658796 to 4a87022 on January 17, 2019 19:32

@hwillson hwillson (Member) left a comment:

👍 👍 👍 @benjamn - so great! L(immediately)GTM!

@benjamn benjamn merged commit c2295e6 into master Jan 17, 2019
@benjamn benjamn deleted the avoid-copying-entire-cache-for-optimistic-reads branch January 17, 2019 19:57

@benjamn benjamn (Member, Author) commented Jan 17, 2019

This has been released in apollo-cache-inmemory@1.4.0. The minor version bump is just because RecordingCache is no longer exported. These changes should be backwards-compatible otherwise.
