Feat/improved cache #4589
Conversation
Any perf #'s?
```js
for (i=0, l=path.length; i<l; i++) {
  state = state[path[i]];
let path = splitOnDot(name);
```
this is also likely something we can cache/precompute
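A minimal sketch of that cache/precompute idea: memoize the split result keyed by the dotted name so repeated state-machine walks reuse the same array instead of re-splitting and re-allocating. The cache shape here is an assumption for illustration, not the PR's actual code.

```javascript
// Prototype-free map so keys like "constructor" can't collide.
const splitCache = Object.create(null);

// Split a dotted path like "root.loaded.saved", caching the result
// so repeated lookups return the same array with no new allocation.
function splitOnDot(name) {
  let parts = splitCache[name];
  if (parts === undefined) {
    parts = splitCache[name] = name.split('.');
  }
  return parts;
}
```

Repeated calls with the same key return the identical cached array, which is what makes the hot path cheap.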
```diff
@@ -241,7 +244,7 @@ var DirtyState = {
   //TODO(Igor) reloading now triggers a
   //loadingData event, though it seems fine?
-  loadingData: Ember.K,
+  loadingData: K,
```
can likely just use concise functions
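To illustrate the "concise functions" suggestion: rather than assigning a shared no-op like `K` to each hook, the hook can be declared as an empty ES2015 concise method on the state object. The surrounding names here are illustrative, not the file's real contents.

```javascript
// Concise-method form of a no-op state hook: no shared K needed.
const DirtyState = {
  // before: loadingData: K,
  loadingData() {},

  // any other no-op hook can use the same shorthand
  didSetProperty() {}
};

// Invoking the hook is a harmless no-op either way.
DirtyState.loadingData();
```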
```js
return this.retrieveManagedInstance('adapter', name, this.get('_adapterFallbacks'));
deprecate(`Use of lookupAdapter is deprecated, use adapterFor instead.`, {
  id: 'ember-data:lookupAdapter',
  since: '2.10'
```
This should be `since: '3.0'`
...
`until: '3.0'`
Arrg, I am sure I wrote `until: "3.0"`. Thanks for the 👀 @bmac 👍
To clarify, both `since` and `until` should be set, correct?
I believe `until` is the only option (http://emberjs.com/api/#method_deprecate). Setting `since` should have no impact on the deprecation message.
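To make the API shape under discussion concrete, here is a toy shim (not Ember's implementation) of a `deprecate(message, test, options)` call: `id` and `until` are the documented option keys, and an extra `since` key is simply never read, which is why it has no effect on the message.

```javascript
// Toy stand-in for Ember's deprecate: only `id` and `until` are consumed.
function deprecate(message, test, options) {
  if (test) { return ''; } // truthy test means "not deprecated here"
  const { id, until } = options; // a stray `since` key is ignored
  return `DEPRECATION [${id}]: ${message} (will be removed in ${until})`;
}

const msg = deprecate(
  'Use of lookupAdapter is deprecated, use adapterFor instead.',
  false,
  { id: 'ds.store.lookupAdapter', until: '3.0', since: '2.10' }
);
```

Note the `since: '2.10'` value never reaches the output, matching the point made above.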
Force-pushed from `69436ca` to `187225b`.
A sidenote on this caching: in our tests we hit the cache 5400 times for 41 total transition paths (instead of allocating and building the transition map each time).
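A hedged sketch of the transition caching described above: key the cache on the state/event pair and reuse the computed path, so only the 41 distinct paths ever pay the build cost while the other ~5400 hits are map lookups. All names here are illustrative assumptions, not the PR's code.

```javascript
// Cache of computed transition paths, keyed by "fromState:event".
const transitionCache = new Map();

function getTransitionPath(fromState, event, computePath) {
  const key = `${fromState}:${event}`;
  let path = transitionCache.get(key);
  if (path === undefined) {
    path = computePath(fromState, event); // expensive build, done once per key
    transitionCache.set(key, path);
  }
  return path;
}

// Usage: count how often the expensive compute actually runs.
let computations = 0;
const compute = () => { computations++; return ['root', 'loaded', 'updated']; };
getTransitionPath('loaded.saved', 'didSetProperty', compute);
getTransitionPath('loaded.saved', 'didSetProperty', compute); // cache hit
```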
Force-pushed from `d2d3104` to `793883f`.
Eyeballing the production build benchmarks (no filtering of outliers, but I am looking at a box plot), we moved from roughly 28ms spent in store.push for 238 records of a single type to a bit less than 27ms. However, the overall mass shifted left significantly, and if I spend the time to filter the outliers it looks likely this was closer to a 2-2.5ms win over 481 adapter and serializer lookups. This is a bit less than I was hoping for.
Why 238 records in the numbers?
@fivetanley it's a common multiple of the various record distribution scenarios.
Force-pushed from `793883f` to `90c8b81`.
```js
} = heimdall.registerMonitor('system.store.container-instance-cache',
  '__get',
  '_instanceFor'
);

/*
 * The `ContainerInstanceCache` serves as a lazy cache for looking up
```
Please update the comments
updated
Force-pushed from `90c8b81` to `8a4876c`.
@igorT has brought up that this approach entangles the cache and the store via closing over the fallbacks. I will make this change later today before we merge this.
Force-pushed from `8a4876c` to `ffe4e14`.
@igor fallbacks are now in the cache
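A sketch of what "fallbacks are now in the cache" could look like: the cache owns its fallback list as instance state, so lookups no longer need to close over anything held by the store. The class shape and names below are assumptions for illustration, not the actual `ContainerInstanceCache` implementation.

```javascript
// Lazy per-(type, name) instance cache that owns its own fallback list,
// instead of closing over store state.
class ContainerInstanceCache {
  constructor(fallbacks) {
    this._cache = Object.create(null);
    this._fallbacks = fallbacks; // e.g. ['application', '-default']
  }

  instanceFor(type, name) {
    const key = `${type}:${name}`;
    let instance = this._cache[key];
    if (instance === undefined) {
      instance = this._cache[key] = this._create(type, name);
    }
    return instance;
  }

  _create(type, name) {
    // Real code would resolve `name`, then each fallback, via the owner;
    // here we just record the resolution order for illustration.
    return { type, candidates: [name, ...this._fallbacks] };
  }
}

const cache = new ContainerInstanceCache(['application', '-default']);
const adapter = cache.instanceFor('adapter', 'user');
```

Because the fallbacks live on the cache, tearing down or swapping the store no longer reaches into cached closures.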
Any perf #'s?
Force-pushed from `ffe4e14` to `8578c74`.
@igorT rebased and green
1 similar comment
```js
return this.retrieveManagedInstance('adapter', name, this.get('_adapterFallbacks'));
deprecate(`Use of lookupAdapter is deprecated, use adapterFor instead.`, {
  id: 'ember-data:lookupAdapter',
  since: '2.10',
```
unused
```js
},

lookupAdapter(name) {
  return this.retrieveManagedInstance('adapter', name, this.get('_adapterFallbacks'));
  deprecate(`Use of lookupAdapter is deprecated, use adapterFor instead.`, {
    id: 'ember-data:lookupAdapter',
```
The format for all the deprecations/warnings/assertions is `ds.x.y`, so I would suggest `ds.store.lookupAdapter` here...
```js
lookupSerializer(name) {
  deprecate(`Use of lookupSerializer is deprecated, use serializerFor instead.`, {
    id: 'ember-data:lookupSerializer',
    since: '2.10',
```
see above
@runspired sorry for being a spoilsport here... 🏃
Force-pushed from `8578c74` to `36bc08a`.
…ter. Reduces the number of lookups, sets, expense of detection, and allocations for fallbacks. Also includes additional internalModel cleanup and backburner/state-machine optimizations, and deprecates store.lookupAdapter and store.lookupSerializer
This PR branches off of the internal-model work in #4562
Perf numbers for both this and that PR are in the works. I don't expect them to be overly substantial, since in the initial benchmarking these two areas were good chunks but not where the bulk of our costs were; still, we should see nice gains from both.