Attribute maps #117
Conversation
I created an issue, #118, to create some performance tests. The reason I have not done so yet is that creating truly accurate tests for this is really hard. In any case, for this specific CL, I'm seeing a ~30% drop in perf. I can at least add those tests somewhere, but they should be taken with a giant grain of salt.
Bumping this again. I'm only seeing marginal drops in performance:
For selection on Chrome, it looks like it takes 50% more time for maps vs explore. I don't think that is really marginal. I think that by splitting out core, we can let people make that determination for themselves, though. One thing to note: in order to get a consistent value for the time, I filter outliers, which will filter out long runs due to GC. Capturing that sort of information in a test is pretty hard to do. There are other spikes, caused by the process being swapped off the CPU for another process to run, that should also be ignored; any test needs to hog the CPU in order to get enough samples, so it is bound to get bumped off at some point.
50% yes, but 0.124ms is marginal. We're selecting from 200 rows, meaning we're spending ~0.00062ms longer per row. Even at the slower speed, we could be updating over 8000 rows and still hitting 60fps. I think the simplified API more than makes up for that.
Core being DOM alignment vs DOM+attribute alignment?
I ran each test 4 or 5 times, taking my perceived average. I'm not actually averaging them, but taking the number I see most often.
So if you have 16ms for a frame and need to set aside ~4ms for browser layout, you effectively have only 12ms. If you assume a low-end phone needs something like 20x as long as desktop, that means you only have 0.6ms total to perform a diff. Now if you assume the application itself wants to perform some logic, the amount of time the library can use is even smaller. The tests also have a somewhat simplified DOM structure and number of attributes compared to some use cases. To compensate, I bumped up the number of rows to more than a typical application would use.
Correct. There isn't really a need to force people to buy into both at the same time. I'm not sure yet how the attribute caching can be done in a good way while keeping it completely separate from core. For now, the core part would need to allocate the object holding
The code used to generate that number is doing some averaging and filtering already. So if a GC is triggered on 1 in 100 runs, I just drop that run. Each of the tests does a different number of runs, selection doing 400 and creation doing 200. The threshold I use for dropping a run is whether it falls more than 1.5 times the interquartile range outside the 1st or 3rd quartile.
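A minimal sketch of that filtering step, reading the threshold as the standard 1.5×IQR fence around the quartiles (the helper names are hypothetical; the actual benchmark code may differ):

```javascript
// Compute the 1st and 3rd quartiles by linear interpolation.
function quartiles(samples) {
  const sorted = [...samples].sort((a, b) => a - b);
  const q = (p) => {
    const idx = (sorted.length - 1) * p;
    const lo = Math.floor(idx);
    const hi = Math.ceil(idx);
    return sorted[lo] + (sorted[hi] - sorted[lo]) * (idx - lo);
  };
  return [q(0.25), q(0.75)];
}

// Drop runs outside the 1.5×IQR fences (e.g. long runs caused by a GC pause).
function filterOutliers(samples) {
  const [q1, q3] = quartiles(samples);
  const iqr = q3 - q1;
  return samples.filter((s) => s >= q1 - 1.5 * iqr && s <= q3 + 1.5 * iqr);
}
```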
Incremental DOM will not be making this change. With a little more work the current core can be pulled out into a separate repo, and you would definitely be free to build this as a separate project if you are interested.
Following the discussion in #114, here's iDOM reworked to accept a map of attribute name/values.
Fixes #115, #23. It also opens up the possibility of exposing `updateAttributes` publicly, allowing Glimmer-like rendering without patching everything in the containing element. I'd love some outside performance testing. I'm showing an improvement over master, even with the object allocations.
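The kind of work `updateAttributes` implies can be sketched as diffing the new attribute map against a cached copy held per element. This is an illustrative sketch only, not the PR's actual implementation; `diffAttributes` and `apply` are hypothetical names:

```javascript
// Illustrative only: diff a new attribute map against the cached one,
// invoking `apply` for each attribute that changed or disappeared.
function diffAttributes(cached, next, apply) {
  for (const name in next) {
    if (cached[name] !== next[name]) {
      apply(name, next[name]);  // new or changed attribute
      cached[name] = next[name];
    }
  }
  for (const name in cached) {
    if (!(name in next)) {
      apply(name, undefined);   // attribute was removed
      delete cached[name];
    }
  }
}
```

Under this shape, the cached map is the per-element object allocation discussed above, and the diff itself stays proportional to the number of attributes.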