
♻️ improves merge proxies #3722

Merged
merged 7 commits into alpinejs:main on Sep 11, 2023

Conversation

Contributor

@ekwoka ekwoka commented Aug 24, 2023

This PR refactors mergeProxies to increase performance and reduce memory costs; a rough sketch of the resulting pattern follows the list below.

  • 🐛 Allows JSON stringifying $data (solving an issue raised on Discord)
  • ♻️ Uses Reflect (to reduce code and memory)
  • ⚡ Short-circuits on unscopables (to increase performance on every access of data in expressions)
  • ♻️ Extract proxyTraps (to reduce memory)
  • ✅ Uses legacy syntax (to pass tests)

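For context, here is a minimal sketch of the pattern those bullets describe: a single shared trap object, Reflect-based lookups, an early answer for Symbol.unscopables probes, and a toJSON hook so $data can be JSON-stringified. It is illustrative only, not the exact merged Alpine source; the names follow the PR description but details may differ.

```js
// Illustrative sketch only; names and details follow the PR description,
// not necessarily the merged Alpine source.
const mergeProxyTrap = {
    has({ objects }, name) {
        // Short-circuit: answer Symbol.unscopables probes immediately
        // instead of scanning every scope object.
        if (name === Symbol.unscopables) return false
        return objects.some((obj) => Reflect.has(obj, name))
    },
    get({ objects }, name, thisProxy) {
        // `with (proxy)` evaluation also reads Symbol.unscopables on each
        // lookup, so skip the scope scan for that key as well.
        if (name === Symbol.unscopables) return undefined
        // A toJSON that collapses the layered scopes lets
        // JSON.stringify($data) produce a plain object.
        if (name === 'toJSON') {
            return () => Object.assign({}, ...[...objects].reverse())
        }
        return Reflect.get(
            objects.find((obj) => Reflect.has(obj, name)) ?? {},
            name,
            thisProxy
        )
    },
    set({ objects }, name, value) {
        // Write to the first scope that already owns the property,
        // otherwise fall back to the last object in the chain.
        const target =
            objects.find((obj) => Object.prototype.hasOwnProperty.call(obj, name)) ??
            objects[objects.length - 1]
        return Reflect.set(target, name, value)
    },
}

function mergeProxies(objects) {
    // Every merged proxy shares the same trap object; only the small
    // `{ objects }` target is allocated per proxy, which is where the
    // memory saving comes from.
    return new Proxy({ objects }, mergeProxyTrap)
}
```
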
@ekwoka ekwoka changed the title merge proxies ♻️ improves merge proxies Aug 24, 2023
Co-authored-by: Christian Taylor <christianbtaylor@gmail.com>
@calebporzio
Collaborator

Thank you for this @ekwoka - do you have any benchmarks to go with it to more concretely demonstrate the benefits? Thanks!

@ekwoka
Contributor Author

ekwoka commented Aug 29, 2023

@calebporzio Sure thing. Here are some tests that compare both mergeProxies implementations accessing properties at different depths, across a few different runtimes.

Naturally, old and new are self-explanatory; then the first number is the number of repetitions and the second is the depth of the test.

The depth is used to generate a merged proxy out of that many objects, and the accessing step then reads a property from each of those objects inside a with statement.

So it's 100k loops at a depth of 26, and 1M loops at a depth of 10.
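
(The exact benchmark script isn't included in the thread; a rough reconstruction of the setup described above, with illustrative names, could look like the following. `mergeProxies` is whichever implementation, old or new, is being timed.)

```js
// Rough reconstruction of the benchmark described above; names are illustrative.
function makeScopes(depth) {
    // One plain object per depth level, each exposing a distinct property,
    // mimicking a chain of nested Alpine scopes.
    return Array.from({ length: depth }, (_, i) => ({ [`prop${i}`]: i }))
}

function buildAccessor(depth) {
    // `with (scope) { ... }` mirrors how Alpine evaluates expressions:
    // every bare identifier goes through the proxy's has/get traps.
    // (Bodies created via `new Function` are sloppy mode, so `with` is allowed.)
    const body = Array.from({ length: depth }, (_, i) => `prop${i}`).join(' + ')
    return new Function('scope', `with (scope) { return ${body} }`)
}

function bench(mergeProxies, reps, depth) {
    const scope = mergeProxies(makeScopes(depth))
    const access = buildAccessor(depth)
    const start = performance.now()
    for (let i = 0; i < reps; i++) access(scope)
    return performance.now() - start
}

// The two configurations mentioned above, using whichever mergeProxies
// implementation is under test (e.g. the sketch earlier in the thread):
console.log('100k reps, depth 26:', bench(mergeProxies, 100_000, 26), 'ms')
console.log('1M reps, depth 10:', bench(mergeProxies, 1_000_000, 10), 'ms')
```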

[Screenshot 2023-08-29 at 10:51:31 AM: benchmark timings for the old vs. new mergeProxies across the test configurations]

You can see the new one is consistently about 30% faster than the old in these test configurations. So every time a property is accessed in an expression, this could make that access up to 30% faster; deeper component trees will benefit more, while for smaller trees the gain is much more modest (around 3-5%).

For memory, this script just puts that number of merged proxies into an array so we can actually measure the memory size (otherwise it's basically impossible to tell), along with the time in ms. Less memory can also have a performance impact (less garbage to collect).
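
(The memory script isn't shown in the thread either; a hedged sketch of the measurement described, assuming Node run with --expose-gc so global.gc() is available, might look like this.)

```js
// Hedged sketch of the memory measurement described above; not the original script.
// Run under Node with --expose-gc so global.gc() exists.
function measure(mergeProxies, count, depth) {
    const scopes = Array.from({ length: depth }, (_, i) => ({ [`prop${i}`]: i }))

    global.gc()
    const before = process.memoryUsage().heapUsed

    const start = performance.now()
    const kept = []
    // Keep every merged proxy alive in an array so the heap growth
    // actually reflects the per-proxy cost.
    for (let i = 0; i < count; i++) kept.push(mergeProxies(scopes))
    const elapsed = performance.now() - start

    global.gc()
    const after = process.memoryUsage().heapUsed

    return { bytes: after - before, ms: elapsed, proxies: kept.length }
}
```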

[Screenshot 2023-08-29 at 10:56:14 AM: memory usage and timings for the old vs. new mergeProxies]

About 75% less memory per merged proxy (baseline runtime memory was about 3 MB, so not meaningful to these measurements).

@calebporzio
Collaborator

Great, thank you for the extra context!

@calebporzio calebporzio merged commit e1a6bf0 into alpinejs:main Sep 11, 2023
1 check passed