deterministic memory metrics #916
The memory benchmark (tries to) run the GC after initBenchmark and each runBenchmark loop: https://github.com/krausest/js-framework-benchmark/blob/master/webdriver-ts/src/forkedBenchmarkRunner.ts#L271
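For context, here is a minimal sketch of one common way a webdriver-based benchmark can request a GC; this is an illustration under assumptions, not the actual forkedBenchmarkRunner.ts code. It relies on launching Chrome with --js-flags=--expose-gc so that window.gc() exists in the page:

```ts
import { Builder, WebDriver } from "selenium-webdriver";
import * as chrome from "selenium-webdriver/chrome";

// Launch Chrome with --expose-gc so that window.gc() exists in the page.
async function buildDriverWithExposedGC(): Promise<WebDriver> {
  const options = new chrome.Options();
  options.addArguments("--js-flags=--expose-gc");
  return new Builder().forBrowser("chrome").setChromeOptions(options).build();
}

// Best-effort GC request; window.gc() is only defined when --expose-gc was passed.
async function forceGC(driver: WebDriver): Promise<void> {
  await driver.executeScript("window.gc && window.gc();");
}
```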
that mem metric was in line with other adjacent libs for a long time and changed suddenly without any lib or bench impl updates, so idk 🤷
here are 3 cycles of "create 1k" x 5, with a few seconds in between, then a forced GC from devtools at about 17500ms: you can see the GC tends to collect under pressure (e.g. on the first or middle run of the next 5x set); simply being idle after each 5x set is not always sufficient, even though the memory can technically be reclaimed. same story for create/clear.
Over time two memory metrics were implemented. The one currently used extracts MajorGC events from the timeline and uses their reported memory value (usedHeapSizeAfter).
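A hedged sketch of that extraction idea; the event shape is assumed from Chrome's trace format, and the actual benchmark code may differ:

```ts
// Shape of a trace event as assumed from Chrome's trace format;
// only the fields relevant here are modeled.
interface TraceEvent {
  name: string;
  args?: { usedHeapSizeAfter?: number };
}

// Filter MajorGC events and take the heap size after the last one
// as the memory metric for the run.
function lastMajorGCHeapSize(events: TraceEvent[]): number | undefined {
  const gcs = events.filter(
    (e) => e.name === "MajorGC" && e.args?.usedHeapSizeAfter !== undefined
  );
  return gcs.length ? gcs[gcs.length - 1].args!.usedHeapSizeAfter : undefined;
}
```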
do you get similar profiles to mine in devtools on your machine when manually/continuously running these metrics? the GC strategy likely changed between Chrome 90 and 91, but it's unlikely that the memory used is now 3x for the same code. so either the original results were wonky, the new ones are wonky, or both.
that looks consistent with mine. there's no leak happening AFAICT (a forced GC at any time reclaims everything), and the peak usage should be pretty much identical to pre-91. hence this issue. i don't have a solution, but just wanted to get some more eyeballs on the situation.
Here's where I stand:
All in all I'm pretty sure this test really consumes 9 MB of memory in chromedriver. But when running in a normal chrome instance, most of the time I see that memory consumption drops from about 9 MB to about 3.1 MB. I'm really close to giving up on this one, which would mean temporarily disabling most memory benchmarks. Does anyone have an idea what else to try?
Last resort was to try puppeteer. Here's a crude and small hack to emulate the memory benchmark with puppeteer. I added some sleep and a page.queryObjects call right before the measurement, which should cause a GC.
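A rough reconstruction of what such a hack might look like; the URL, selectors, and timings here are assumptions, not the original snippet:

```ts
import puppeteer from "puppeteer";

(async () => {
  const browser = await puppeteer.launch();
  const page = await browser.newPage();
  // Benchmark URL and selectors are assumptions.
  await page.goto("http://localhost:8080/frameworks/keyed/domvm/index.html");

  for (let i = 0; i < 5; i++) {
    await page.click("#run"); // "create 1,000 rows"
    await page.waitForXPath("//tbody/tr[1000]/td[2]/a");
  }

  // Runtime.queryObjects (what page.queryObjects uses) performs a GC before
  // walking the heap, so it can be (ab)used to force a collection.
  const proto = await page.evaluateHandle(() => Object.prototype);
  await (await page.queryObjects(proto)).dispose();
  await proto.dispose();
  await new Promise((r) => setTimeout(r, 1000)); // give the GC some slack

  const { JSHeapUsedSize } = await page.metrics();
  console.log("used heap:", JSHeapUsedSize);
  await browser.close();
})();
```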
So puppeteer also reports the high memory results for domvm.
similar results for the other libs with increased mem (e.g. preact)? sorry, i don't have much time to help with testing this ATM :(
A picture says more... I updated the code above. It's now closer to the chromedriver tests.
Asking the puppeteer guys was a great help. It turned out that for puppeteer it's important to dispose any element you get via waitForXPath (which I didn't do). Adding the dispose call made the memory increase between runs disappear. Now to the next important question: is there any way to dispose an element in selenium / chromedriver? I've never read about that...
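A minimal sketch of that fix; the xpath and wrapper function are illustrative:

```ts
import { Page } from "puppeteer";

// Wait for a row to appear, then immediately release the remote reference
// the ElementHandle holds; without dispose() the browser-side object stays
// pinned and inflates the measured heap.
async function waitForRowAndRelease(page: Page, xpath: string): Promise<void> {
  const handle = await page.waitForXPath(xpath);
  await handle?.dispose();
}
```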
nice find! 💯 as to why this only affects some frameworks but not others -- and i kind of suspected this previously -- it's perhaps because they store back-references into the vdom as ad-hoc properties on the dom elements (el._node in domvm's case), which creates a difficult GC barrier. this maybe explains why the GC is lazier than i would expect. i can try switching to an internal Map or WeakMap for this now that both are widely supported; it will likely fix the GC pressure but add overhead for maintaining the extra Map & doing lookups. might be a net win tho. 🤷
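A sketch of the WeakMap alternative being described; the names are illustrative, not domvm's actual internals:

```ts
interface VNode {
  tag: string;
}

// DOM -> vnode back-references kept out-of-band. Because WeakMap keys are
// weakly held, the mapping never keeps a detached element (or the vnode
// graph hanging off it) alive.
const nodeMap = new WeakMap<Element, VNode>();

function attach(el: Element, vnode: VNode): void {
  nodeMap.set(el, vnode); // instead of (el as any)._node = vnode
}

function vnodeOf(el: Element): VNode | undefined {
  return nodeMap.get(el); // costs a lookup vs. a plain property read
}
```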
@leeoniya I'll check how to work around this issue. Has anyone read about a way to dispose the element handles from findElement in webdriver? It sounds like such a generic issue (there's even an API method for it in puppeteer) that I'd really be surprised if there were no solution in webdriver / selenium. If nothing else works I'll consider moving to puppeteer.
no worries. just glad it's been root-caused.
And now I migrated the memory benchmark to puppeteer, and the results look as expected again. So in summary this means that the PR mentioned above was actually okay, I just didn't realize it, and porting to puppeteer wouldn't have been necessary.
I pushed the code that uses puppeteer for the memory benchmarks. |
sorry for the slow reply. i think if it also fixes preact (which is almost certainly leak-free) and some others, then this is 👍. might also be interesting to track peak mem as well, where template/block-based frameworks should shine vs traditional vdoms. i'll add my usual criticism that when you have 80k dom nodes, the gap between slow and fast frameworks shrinks, since most of the time is dominated by browser layout and dom-related tasks that are not framework specific.
i've been investigating whether domvm has a real memory leak (in domvm/domvm#229) or if this was simply a recent change in chrome's GC behavior. right now it's looking like the latter (see profiles in that issue).
wonder if chromedriver or lighthouse has an explicit api to collect garbage like puppeteer does [1], which could be invoked prior to taking mem measurements (see the sketch below). would be curious to see if this makes a difference for some libs.
cc @krausest @ryansolid
[1] puppeteer/puppeteer#1394 (comment)
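A hedged sketch of the explicit-GC approach from [1]: drive HeapProfiler.collectGarbage over a raw CDP session before measuring. The wrapper function here is hypothetical:

```ts
import { Page } from "puppeteer";

// Ask V8 for a full collection via the DevTools protocol, then detach.
async function collectGarbage(page: Page): Promise<void> {
  const client = await page.target().createCDPSession();
  await client.send("HeapProfiler.collectGarbage");
  await client.detach();
}
```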