feat(lifecycle): new callbacks, async tasks, and updated template controllers #171
Conversation
Code Climate has analyzed commit fb8fbd2 and detected 21 issues on this pull request.
The test coverage on the diff in this pull request is 75.4% (50% is the threshold). This pull request will bring the total coverage in the repository to 86.1% (-0.9% change). View more on Code Climate.
This pull request introduces 1 alert when merging 1c288e0 into 978c2f0 - view on LGTM.com.
Comment posted by LGTM.com
Does this have any implication on the granularity of each View that doesn't own the
Can you clarify the question or provide an example? I'm not quite sure.
For views with behaviors, callback life cycle
Oh, man, that's probably a problem. I'll have to think about that. It might be worth it to just remove caching; this new behavior is probably more important. I'm not sure.
Well, for starters, when we return a view to cache, we would need to:
However, there's a trick to this, since we don't want to remove the nodes or return to cache at the wrong time WRT the lifecycle itself. So, we may need to tweak these APIs more.
This stinks... massive complication, I think. Ugh...
Maybe we can make it work with a requirement that you must own the lifecycle object to return things to the cache? Thoughts?
Caching requires the lowest amount of LOC and effort for the same amount of perf gain, so I think it will be very hard to justify removing it. I remember repeat used to have its own caching mechanism that was later merged into templating. Maybe you can give some rationale behind that move so we have a stronger base for discussion?
From what it looks like, at any point there is only one owner. This seems limited, and I think it's inappropriate to impose that restriction.
I'll probably need to rethink the entire detach lifecycle again then 😭
For your inspiration, there is this plugin from @huochunpeng: https://github.com/buttonwoodcx/bcx-aurelia-reorderable-repeat . It may or may not employ caching, but it shows the strength of having granularity in control: the ability to implement the behavior in any arbitrary way that fits your need.
I'll explore the link... but I think it actually makes sense that something should only return to cache when its owner controls the lifecycle. Otherwise, you get tons of really inefficient behavior for what basically amounts to an alternate form of caching. So, the idea being that if Repeat is explicitly adding and removing views, then it's going to own the lifecycle and be able to determine that certain views should go back to the cache; but if there's a view several layers above that Repeat, and it owns the lifecycle (or a Repeat that's calling it), then only it can be put in/out of the cache. There's no need to do anything with the children; that would just be extra work. Does that make sense?
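The ownership rule described here could be sketched roughly like this. This is a hypothetical illustration only: `IView`, `lifecycleOwner`, `release()`, and the simplified `Repeat` are made-up names for this sketch, not actual Aurelia APIs.

```typescript
// Hypothetical sketch of "only the lifecycle owner returns views to cache".
// IView, lifecycleOwner, and release() are illustrative names.
interface IView {
  lifecycleOwner: object | null;
  release(): void;
}

class Repeat {
  private readonly cache: IView[] = [];

  removeView(view: IView): void {
    if (view.lifecycleOwner === this) {
      // This Repeat owns the lifecycle, so it alone decides that the
      // view goes back to its cache for later reuse.
      this.cache.push(view);
    } else {
      // An ancestor owns the lifecycle; leave caching decisions to it
      // and simply release the view. No extra work on the children.
      view.release();
    }
  }

  cachedCount(): number {
    return this.cache.length;
  }
}
```

The point of the sketch is that the cache check is a single ownership comparison, so non-owning components never do redundant caching work on descendants.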
Problem is that efficient caching means components often need to know something about their ancestors and descendants. This is something that I managed to partly solve in binding with the flags, because an ancestor can tell a descendant what it's doing and the descendant can, based on that, decide whether or not to actually remove the views. I've got a few ideas on how to improve this, but I need to sit down for a bit on that. For now, it's important that we get a few meaningful benchmarks first so we can play around with different things and compare the impact rather than theorize about it. Perfect caching is impossible anyway, as some assumptions need to be made.
@EisenbergEffect Yes, that makes perfect sense. It seems to me only views from template controllers will behave differently compared to normal views, which means it could very much be the responsibility of the template controller implementation to decide what to do with those views. I think it's safe to assume that there shouldn't be any issue; my previous concern was probably unnecessary. Besides that, I think another potential issue could be a memory leak where view nodes hold references to the view model, either directly or through a model object. But maybe I'm just over-worrying.
I think caching should be done first and foremost at the template string level, preferably via some globally accessible view factory, much like the expression parser cache. The key difference being, of course, that you can have multiple cached elements per template string. A repeater creates 10,000 of some view and returns them to cache; then there are 10,000 cached views for that particular string available for any other component to grab.

This becomes a memory problem when you have dynamically generated templates that differ and are never reused: you'll have an ever-expanding cache. This is another reason to have caching done globally - a cache size limit will be able to keep dynamic templates in check, not just large amounts of the same single template.
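A minimal sketch of the globally keyed, size-limited cache proposed above. `GlobalViewCache` and its methods are hypothetical names for illustration; a real design would live alongside the view factory and hold actual view instances rather than the generic placeholder used here.

```typescript
// Hypothetical global view cache keyed by template string, with a size
// cap so dynamically generated, never-reused templates can't grow it
// without bound. Names are illustrative, not Aurelia's actual API.
class GlobalViewCache<TView> {
  private readonly pools = new Map<string, TView[]>();
  private size = 0;

  constructor(private readonly maxSize: number) {}

  // Return a view to the shared pool for its template string.
  tryReturn(template: string, view: TView): boolean {
    if (this.size >= this.maxSize) {
      // Cache full: drop the view instead of growing unboundedly.
      return false;
    }
    let pool = this.pools.get(template);
    if (pool === undefined) {
      pool = [];
      this.pools.set(template, pool);
    }
    pool.push(view);
    this.size++;
    return true;
  }

  // Any component rendering the same template string can grab a view.
  tryGet(template: string): TView | undefined {
    const pool = this.pools.get(template);
    if (pool === undefined || pool.length === 0) {
      return undefined;
    }
    this.size--;
    return pool.pop();
  }
}
```

Because the cap counts views across all template strings, a flood of one-off dynamic templates evicts nothing but simply stops being cached, which matches the memory concern raised above.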
This is a valid concern and it should be the responsibility of
I can give a rationale for that: "premature optimization is the root of all evil" (and yes, I'm very guilty of it myself). I think if we remove caching for the time being (and take the animator along with it, please) we'll significantly simplify the templating code base, allowing us to focus on getting everything to work correctly first. When everything is 100% tested, proven, and beautiful, we can build caching back in (and have the tests tell us exactly when we're doing something wrong). Please let's stop trying to get the full vCurrent feature/perf surface all working in one go; let's do things a bit more incrementally. It will take less time if we toss a few things out and add them back in later, because we can all see more clearly in the meantime.
Edit 2: just to add some proof to the above assertion, I didn't rework observation from vCurrent either. I started completely from scratch with 95% of features missing. I would have lost my sanity if I had tried to do it any other way.
I think it's possible that you are missing out on some of the conversations (and painful experiences) we had with vCurrent on the core team in the past, @fkleuver. One of the biggest issues with vCurrent is that we waited too long to integrate caching... and as a result created a lot of problems. Animation was also integrated in a poor way and caused complexity and oddity throughout. The vNext template system, while lacking many tests, has been rebuilt incrementally and is technically almost feature-complete, with the exception of animation. For me, the next step is fixing a core issue of vCurrent around how nodes are added/removed in hierarchies. This is an issue that has bothered me for over 3 years and one I've literally waited on for 9 months since the vNext effort was started. I believe the time is right to revisit it. After that, things will be in place to handle animation. Some of the work to set that up is part of this PR. Some of it was part of the template controller rewrites I did, which established the patterns that would enable this. So this isn't out of nowhere for me; it's been on my radar for a while, and I've been strategically doing other incremental work to prepare the way for it.
I understand how you've been working towards this, and it makes perfect sense for the most part. I'm also trying to understand and help ensure that all design decisions are coherent with some of the fundamental changes that have been made in vNext. All that said, whenever you work on something it always comes out a lot better, so I'll wait this one out :)
I think I probably need to write a little spec for this, to think through some of the scenarios. The PR here (once it has tests) is not a bad first crack. It's definitely an improvement, but not quite what it needs to be.
Codecov Report
@@ Coverage Diff @@
## master #171 +/- ##
==========================================
- Coverage 86.19% 85.28% -0.92%
==========================================
Files 78 78
Lines 6020 6257 +237
Branches 1066 1109 +43
==========================================
+ Hits 5189 5336 +147
- Misses 831 921 +90
Continue to review full report at Codecov.
This pull request fixes 1 alert when merging 63959e7 into 57c07e8 - view on LGTM.com.
With improvements to the lifecycle, it's now possible to implement an entire animation system through a custom attribute or custom element. The core runtime no longer requires a specific animation interface.
Force-pushed 63959e7 to 039309a.
This pull request fixes 1 alert when merging 039309a into 57c07e8 - view on LGTM.com.
@EisenbergEffect From the commit, I understand that if we want to implement a list with a staggering enter animation, we need to scope the lifecycle to the list and create a child for each list item for both
For a staggering enter animation, you don't need to do anything with the lifecycle. The lifecycle is now generalized to simply allow a task to block node add and the attached callback, or to block node remove and the detached callback. It's a general hook that an animation system could use to prevent nodes from being removed before an animation finishes (it also enables synchronous compose and other scenarios). I don't think an animation system would use that for attach, since you don't want to block adding nodes. This removes the need for the animation system to exist in the core and allows it to exist entirely as a plugin. It's possible we'll still have a standard mechanism like before, but that doesn't need to live inside the runtime now.
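The task-blocking idea can be illustrated with a small sketch. All of the names here (`ILifecycleTask`, `AnimationTask`, `DetachLifecycle`) are hypothetical stand-ins for the concept being described, not the actual vNext types: an animation plugin registers a task with the detach lifecycle, and queued node removals only run once every task has settled.

```typescript
// Illustrative sketch of a detach lifecycle that async tasks can block.
// An animation plugin registers a task wrapping its animation promise;
// node removal waits until all registered tasks are done.
interface ILifecycleTask {
  done: boolean;
  wait(): Promise<void>;
}

class AnimationTask implements ILifecycleTask {
  public done = false;
  constructor(private readonly animation: Promise<void>) {}
  wait(): Promise<void> {
    return this.animation.then(() => { this.done = true; });
  }
}

class DetachLifecycle {
  private readonly tasks: ILifecycleTask[] = [];
  private readonly pendingRemovals: Array<() => void> = [];

  registerTask(task: ILifecycleTask): void {
    this.tasks.push(task);
  }

  queueNodeRemoval(remove: () => void): void {
    this.pendingRemovals.push(remove);
  }

  // Nodes are only removed after every registered task has finished.
  async end(): Promise<void> {
    await Promise.all(
      this.tasks.map(t => (t.done ? Promise.resolve() : t.wait()))
    );
    for (const remove of this.pendingRemovals) {
      remove();
    }
  }
}
```

This is why the animation system no longer needs to live in the core: any plugin that can produce a promise can delay removal through the same generic hook.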
Tests broken for now. Fixups coming soon.
This pull request introduces 4 alerts and fixes 1 when merging 14bbed6 into 57c07e8 - view on LGTM.com.
Only if the flag is set.
This pull request introduces 3 alerts and fixes 1 when merging 3e1c98e into 57c07e8 - view on LGTM.com.
This pull request introduces 4 alerts and fixes 1 when merging d540ea0 into 57c07e8 - view on LGTM.com.
This pull request introduces 6 alerts and fixes 1 when merging b2fef29 into 57c07e8 - view on LGTM.com.
This pull request introduces 4 alerts and fixes 1 when merging f120c74 into 57c07e8 - view on LGTM.com.
This pull request introduces 1 alert and fixes 1 when merging fb8fbd2 into 9eb0764 - view on LGTM.com.
Background
Previously, when a renderable (custom element or view) was told to detach, it always removed its DOM nodes, even if its parent had already been removed from the DOM. This behavior is a carryover from vCurrent and is highly inefficient. This PR attempts to address the issue.
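The inefficiency described above can be sketched with a flag-based approach similar to the one discussed earlier in the thread. This is an assumption-laden illustration, not the actual vNext implementation: a flag tells descendants that an ancestor's nodes are already gone, so they skip their own DOM removal.

```typescript
// Illustrative sketch (not the actual vNext API) of skipping redundant
// DOM removal: when an ancestor is already being removed, descendants
// only run their callbacks and skip removing their own nodes.
const PARENT_IS_REMOVED = 1;

class Renderable {
  public nodeRemovals = 0; // counts stand-in "DOM removal" operations
  constructor(private readonly children: Renderable[] = []) {}

  detach(flags: number): void {
    if ((flags & PARENT_IS_REMOVED) === 0) {
      // Only the topmost detaching renderable touches the DOM.
      this.removeNodes();
    }
    // Descendants are told their parent's nodes are already gone.
    for (const child of this.children) {
      child.detach(flags | PARENT_IS_REMOVED);
    }
  }

  private removeNodes(): void {
    this.nodeRemovals++;
  }
}
```

With this shape, detaching a deep hierarchy performs exactly one DOM removal at the root instead of one per renderable.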
Solution
There are several pieces to this:
- A new mount API that replaces the previous onRender callback.
Related Issues
TODO
All existing tests pass but new tests that describe lifecycle behavior need to be added. Once those are added, this PR will be ready.