perf(engine-core): reduce fragment cache objects #4431
Conversation
// Mapping of cacheKeys to token array (assumed to come from a tagged template literal) to an Element
const fragmentCache: Map<number, WeakMap<string[], Element>> = new Map();
I considered just creating 4 WeakMaps, but it didn't seem to offer a perf improvement.
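To make the shape of the two-level cache concrete, here is a minimal sketch of how it can be read and written. The helper names are illustrative (not necessarily the PR's exact helpers), and Element is stubbed so the sketch runs without a DOM:

```typescript
// Sketch of a two-level fragment cache: a small Map keyed by the numeric
// cacheKey (only 4 possible values), each holding a WeakMap keyed by the
// tagged template literal's strings array. `Element` is stubbed here so
// this runs outside a DOM.
type Element = { outerHTML: string };

const fragmentCache: Map<number, WeakMap<string[], Element>> = new Map();

function getFromFragmentCache(cacheKey: number, strings: string[]): Element | undefined {
  return fragmentCache.get(cacheKey)?.get(strings);
}

function setInFragmentCache(cacheKey: number, strings: string[], element: Element): void {
  let cache = fragmentCache.get(cacheKey);
  if (cache === undefined) {
    // Lazily create the per-cacheKey WeakMap on first write.
    cache = new WeakMap();
    fragmentCache.set(cacheKey, cache);
  }
  cache.set(strings, element);
}
```

Because the WeakMap holds the strings array weakly, entries can be collected once the template itself is no longer reachable.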
function buildParseFragmentFn(
    createFragmentFn: (html: string, renderer: RendererAPI) => Element
): (strings: string[], ...keys: (string | number)[]) => () => Element {
    return (strings: string[], ...keys: (string | number)[]) => {
        const cache = create(null);
Note this cache was effectively tied to every tagged template literal. Each tagged template literal has a unique string[] object, so using the string[] as the cache key in a WeakMap effectively gives us a unique-per-tagged-template-literal cache.
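A small illustrative sketch (not LWC source) of why the strings array works as a WeakMap key: the engine passes the same cached array on every evaluation of a given call site, so it uniquely identifies that template literal:

```typescript
// Per the ECMAScript spec, each tagged template call site gets one frozen
// template object, reused on every evaluation. That object can therefore
// serve as a per-call-site WeakMap key.
const hits = new WeakMap<TemplateStringsArray, number>();

function tag(strings: TemplateStringsArray, ..._keys: unknown[]): number {
  const count = (hits.get(strings) ?? 0) + 1;
  hits.set(strings, count);
  return count;
}

function render(): number {
  return tag`<div>${1}</div>`; // same strings array on every call
}
```

Calling render() repeatedly hits the same cache entry, while a textually identical template at a different call site gets its own entry.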
return function (parts?: VStaticPart[]): Element {
return function parseFragment(strings: string[], ...keys: (string | number)[]) {
return function applyFragmentParts(parts?: VStaticPart[]): Element {
I decided to name these functions because it's easier to read in a perf trace.
// Cache is only here to prevent calling innerHTML multiple times which doesn't happen on the server.
if (process.env.IS_BROWSER) {
    setInFragmentCache(cacheKey, strings, element);
}
Small bug I noticed – the fragment cache is only ever used in the browser, but we were still (uselessly) setting it here even on the server.
🤔 Would it make sense to put the browser check within the function so that we don't forget about it in other places?
Done: 3fdcaf4
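A sketch of the fix discussed above: the environment guard lives inside the cache setter itself, so call sites can't forget it. In LWC, process.env.IS_BROWSER is replaced at build time; here it is modeled as a plain mutable flag so the sketch runs anywhere, and the cache shape is simplified:

```typescript
// Simplified sketch: the browser check moves inside the setter. On the
// server the cache buys nothing (there is no repeated innerHTML parsing
// to avoid), so writing to it there is wasted work.
let IS_BROWSER = false; // stands in for LWC's build-time process.env.IS_BROWSER

const cache = new WeakMap<string[], string>();

function setInFragmentCache(strings: string[], html: string): void {
  if (!IS_BROWSER) {
    return; // no-op on the server
  }
  cache.set(strings, html);
}
```

With the guard internalized, call sites can drop their own environment checks.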
/nucleus test
Well that's strange! A workflow could not be found for this PR. Please try running the
/nucleus start
    new WeakMap(),
    new WeakMap(),
    new WeakMap(),
];
A bit embarrassing, but using ArrayFill/ArrayFrom is really gnarly given our @lwc/shared conventions. This is a lot more readable than those options.
ArrayFrom({length: 4}, () => new WeakMap())
Doesn't seem too gnarly to me. 🤷
Ah, I didn't know about that shorthand where you pass in the function as the second arg! You're right, that's much nicer.
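For reference, the shorthand being discussed (shown here with the plain built-in rather than the @lwc/shared ArrayFrom alias): Array.from's second argument is a map function invoked once per index, so every slot gets its own WeakMap, whereas fill would share a single instance:

```typescript
// Array.from({ length: n }, mapFn) calls mapFn once per index, producing
// a distinct WeakMap per slot.
const caches = Array.from({ length: 4 }, () => new WeakMap<string[], object>());

// By contrast, fill evaluates its argument once and reuses that single
// instance for every slot -- usually not what you want for caches.
const shared = new Array(4).fill(new WeakMap<string[], object>());
```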
Details
This is a small perf improvement to how we handle fragment caches. The main benefit is in using less memory and creating fewer objects.
Fragment caches are used in compiled templates, wherever the compiled output invokes parseFragment as a tagged template literal.
Under the hood, parseFragment will cache based on 1) the tagged template literal and 2) a cache key based on synthetic-vs-native shadow and whether scoped styles are present. The reason for the second cache key is that the HTML string will be different in each of those cases. (The main reasons for this are 1) historical – synthetic shadow scoping uses attributes whereas scoped styles use classes, and 2) performance – we avoid rendering attributes/classes unless absolutely necessary.)
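The two boolean conditions above combine into a small numeric key. A hypothetical sketch of how such a key can be computed (constant and function names are illustrative, not necessarily LWC's):

```typescript
// One bit for scoped styles and one bit for synthetic-vs-native shadow
// yields the four distinct HTML variants described above.
const HAS_SCOPED_STYLE = 1 << 0;
const SHADOW_MODE_SYNTHETIC = 1 << 1;

function computeFragmentCacheKey(hasScopedStyle: boolean, isSyntheticShadow: boolean): number {
  let key = 0;
  if (hasScopedStyle) {
    key |= HAS_SCOPED_STYLE;
  }
  if (isSyntheticShadow) {
    key |= SHADOW_MODE_SYNTHETIC;
  }
  return key; // 0, 1, 2, or 3
}
```

Since the key is always in 0..3, a fixed set of four caches covers every case.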
The problem with the current fragment cache is that it creates one cache object per tagged template literal. Each cache object is just a mapping from number to Element. This is wasteful because there are only 4 numbers we have to worry about (corresponding to the 4 cases above), so we only really need 4 maps total, shared across all templates.
Benchmark
This PR is perf-neutral for our current benchmarks, but I did write a new benchmark to demonstrate the improvement. This benchmark works because it measures the time to evaluate the compiled LWC template, rather than the time to render it. The improvement is 2-6%.
This number is actually an undercount, since the benchmark measures the time to download the .js file as well as the time to evaluate it. (Sadly I could not get this benchmark to work in Best, which is why I'm not committing it to master. Also it's not very good due to the above issue with network vs evaluation time. Deferred import evaluation would help a lot here.)
Tests
I also added a bunch of new Karma tests to ensure I didn't break anything. The tests pass with and without this PR.
Does this pull request introduce a breaking change?
Does this pull request introduce an observable change?