Optimize events, support objects with `handleEvent`
Changed/optimized event handling, and added support for objects with a `handleEvent` method.
Each of the two main commits could be taken standalone, and if the second is dropped, I can redo the included fixes in a separate PR (they're all fairly trivial).
In addition, the first commit can be backported with little modification to v1 without breakage (unless people actually rely on being able to wrap Mithril-created event handlers).
Motivation and Context
See #1939 for context.
As a result, I had to switch to using `addEventListener`/`removeEventListener`.
How Has This Been Tested?
Reused @spacejack's tests for `handleEvent` support.
And yes, I've run the tests locally.
Types of changes
- `handleEvent` is a very useful tool.
- Always use `addEventListener`/`removeEventListener`, since it's required for this optimization.
- Change log updated.
- Drive-by: make DOM mock work with both event listener types.
- Drive-by: eliminate possibility of `Object.prototype` interference.
- `handleEvent` is checked on dispatch, like in the DOM.
- Had to reorder attribute key checking so `undefined` events still got removed.
- Drive-by: Optimize the initial attribute key checking a little.
- Drive-by: Fix changelog v2.0.0 link in TOC.
@tivac That's odd... I was seeing a gain, so I guess we can assume it's within the margin of error. One pretty important detail: adding/removing event listeners is much more expensive in browsers than in our mock. Did you run the benchmarks in an actual browser, or just in Node? (If you're benchmarking against the mock, I'd expect a mild slowdown from the slightly increased overhead.)
Another item of note: it no longer fast-paths things like
Either way, the diff was pretty sketchy due to not recording them in the old vnode.
@spacejack It's actually almost as cheap as normal property assignment in practice to modify the
Okay... so here are some numbers from benchmarking in Chrome:
Before this patch:
After this patch:
I'll come back with memory numbers later.
From running the memory profiler (to create a timeline), it appears that memory usage is slightly increased in the faster benchmark (1.3MB to 1.4MB over 25s, or roughly 25KB to 28KB per sample) due to more strings being retained. So my memory claims are invalid (at least for 1-2 listeners).
Most of the increase is somehow system-related, though, not JS-land: the profiler says total allocation went from ~40MB to ~50MB, but only about 2MB of that difference was directly JS-related, so I don't know what's going on (maybe the profiler itself is poisoning the report?).
I added the "backport to v1" label. We can still keep most of the same code, just if it returns
Edit: I should've done that in the first place.