Recycle DOM nodes #8783
After meeting with the Chrome team, we found that our GC times were quite severe when many DOM nodes were appended to and removed from the tree (a typical scenario where you can see this in practice is scrolling).
This is a chart that shows how master behaves when scrolling down for a while:
As you can see, the browser is triggering minor garbage collections during the scroll; at some point (which, I think, coincides with some Chrome threshold), a full GC is triggered and it takes quite a long time:
With this PR we are introducing a DOM elements pool, which efficiently allocates and deallocates nodes to minimize GC times. The result is the following:
Moreover, we were able to ditch all the code that dealt with strings that contained null bytes (e.g. #8660) because now those characters are retained in the actual text nodes.
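To make the idea concrete, here is a minimal sketch of the kind of element pool this PR introduces. The names (`ElementPool`, `acquire`, `release`) and the structure are illustrative assumptions, not the PR's actual identifiers: freed nodes go on a free list instead of being dropped for the GC, and the next acquisition reuses one of them.

```javascript
// Hypothetical sketch of a DOM element pool; the real PR's API may differ.
// A factory creates nodes on demand; release() returns them to a free list
// so the next acquire() reuses an existing node instead of allocating one.
class ElementPool {
  constructor(factory) {
    this.factory = factory   // called only when the free list is empty
    this.freeList = []
    this.liveCount = 0
  }

  acquire() {
    this.liveCount++
    // Reuse a recycled node when possible to avoid GC pressure.
    return this.freeList.pop() || this.factory()
  }

  release(node) {
    this.liveCount--
    this.freeList.push(node)  // keep the node around for later reuse
  }
}

// Usage with plain objects standing in for DOM nodes; in the editor the
// factory would be something like () => document.createElement("div").
const pool = new ElementPool(() => ({}))
const a = pool.acquire()
pool.release(a)
const b = pool.acquire()
console.log(a === b)  // true: the node was recycled, not reallocated
```

Because the pool holds on to freed nodes, allocation churn during scrolling drops sharply, which is what shrinks the minor-GC pressure described above.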
Another pair of eyes on this would be appreciated.
It appears that the memory usage of the elements pool will never go down. It should reach an upper bound, of course, but I'm still concerned that someone could accidentally hit an edge case and then their editor is stuck holding far more memory than it needs.
It appears that this pool will reach an upper bound based on the most complex UI that has been displayed so far. So what are some things to watch out for?
Some minor style suggestions:
Love the way you structured this with the elements pool. I'm inclined to accept a tiny slowdown of line construction to avoid the big pauses. I'm going to build this and try it out, but in general I'm more worried about janky performance than slightly slower performance. But if you have ideas about how we can have the best of both worlds that would be good too.
@lee-dohm: that's a really good concern but I think this won't be worse than what we have currently. Let me explain why:
What do you think? Am I missing something there? On the other hand, I realized we need to make sure that the pool gets disposed properly so that its nodes can be garbage collected when the editor is destroyed.
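One hedged sketch of what that disposal could look like (the `dispose` name and shape are assumptions, not the PR's actual cleanup hook): clearing the free list drops the pool's last references to its recycled nodes, so the GC can reclaim them once the editor is destroyed.

```javascript
// Hypothetical pool with an explicit cleanup step; the real PR's
// teardown may differ. dispose() drops the pool's only references to
// its recycled nodes, making them eligible for garbage collection.
class DisposableElementPool {
  constructor(factory) {
    this.factory = factory
    this.freeList = []
  }
  acquire() { return this.freeList.pop() || this.factory() }
  release(node) { this.freeList.push(node) }
  dispose() {
    // Without this, nodes in the free list would stay reachable
    // (and thus uncollectable) for as long as the pool itself lives.
    this.freeList.length = 0
  }
}
```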
Thanks for the suggestions guys!
Overall I feel pretty good about this, except for the line construction routines, which seem a bit slower. I'd like to hear your feedback as well after you try this out; in the meantime, I am going to research how we can minimize those times when/if we resort to a background thread.
Hi there, nice work! Object pools have always been an efficient way to fix performance problems and have proven very effective (I'm using them in both pigments and my csv editor package, with great results).
I think, in the case of text editor tiles and lines, I can imagine a much worse scenario, even if it won't happen on every run:
When changing the font size from the settings view, if you don't type the size quickly enough you can end up with lines that are 1px tall for a short moment. In that case your pool will reach a completely meaningless number of elements after you revert to a proper font size. It'll also occur if you unintentionally change the font size through the other available means, but I believe that will have less impact, as the change should be less radical than going from 14 to 1 and then to 12, for instance.
I frequently faced these kinds of sudden bursts of instances back when I was working in the Flash game industry. You can have special events in a game that trigger a massive amount of particles (and we all love that).
Thank you for sharing this super useful insight! That's an edge case I hadn't thought of, and one we should definitely address.
I think we have a couple of ways in which this could be hypothetically fixed:
I went ahead and implemented 1) in daf4316. @abe33: your experience with games would be super helpful here, so please let me know if you think there could be anything wrong with this solution. Thanks again!
@as-cii Solution 1 seems fine too.
Solution 2 would require some metrics to be really accurate, and solution 3 seems more complex to me without good metrics to analyze.
For instance, if I remove all the code in a file we'll end up with a used/freed ratio of 1/N, where N is the number of lines used before the deletion. We should not rebalance anything there, because that's the expected result in this case; yet the same ratio could be produced by the scenario I described above, where a rebalance would be required.
added a commit to this pull request on Sep 17, 2015
If pool rebalancing becomes an issue, I'll throw in one idea:
At each generation, free a small percent of the pool.
Suppose you free 1% of the pool per generation. Then suppose some event drastically reduces the number of nodes required (say it snaps to half the screen). After about 300 generations you'll be within about 5% of the new operating condition, whatever that condition is (0.99^300 ≈ 0.05).
If you sync those generations with the screen refresh (say about 60 per second), it means you'll adapt to the new condition in about 5 seconds. That's relatively short in terms of wasted memory for the user, but at the same time an eternity compared to the 30ms timescale of the garbage collection screenshot.
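The arithmetic above checks out directly (the 1% rate and 60 Hz cadence are this comment's own illustrative numbers, not anything measured in the PR):

```javascript
// Freeing a fixed fraction of the pool per generation is exponential
// decay: after n generations, (1 - rate)^n of the excess remains.
const retainedAfter = (rate, generations) => Math.pow(1 - rate, generations)

const after300 = retainedAfter(0.01, 300)
console.log(after300.toFixed(3))  // ≈ 0.049, i.e. about 5% of the excess left

// At 60 generations per second, 300 generations take 5 seconds:
console.log(300 / 60)             // 5
```

A nice property of this scheme is that it needs no metrics at all: the pool drifts toward whatever the current workload requires, at a rate you tune with a single constant.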