
Add solid-js v0.0.8 #398

Merged 1 commit into krausest:master on May 29, 2018

Conversation

ryansolid
Contributor

Solid.js is a new render library built on ES2015 proxies and precompiled JSX native DOM expressions.

Please consider adding it.

@leeoniya
Contributor

hey @ryansolid

i remember that we had the rAF clearing convo a while back [1] and more-or-less decided that it was an optimization that was extremely micro, chrome-specific and if implemented in userland was specifically there to win benchmarks. @adamhaile removed it back then, but it appears that it has crept back in again :(

[1] #168

@ryansolid
Contributor Author

ryansolid commented May 27, 2018

I had no idea. And from the looks of it, the conversation continued into the ivi post. It looks like it was suggested they were to be removed, but library writers brought them back or included them in their library in a way that made it a special case, only more hidden. The relevant conversation is here. It sounds like Surplus, ivi, and domvm all use some variation, and no one was willing to pull back on it if anyone could leverage it.

I just saw it in a couple of places (I think) and thought I'd show how elegant it was in my library (computations as async functions). I suppose I could remove it if that's what we are enforcing, but I was actually looking to show it off.

I suspect trying to keep it out will just drive certain libraries to hide it in their internals. So I don't see any reason to prevent these tests from being as performant as they can be. If the hack to get that performance is complicated or unintuitive, people looking at the implementation can see that. If the solution in the given library is intuitive and elegant, it showcases the library in a different way.

EDIT:

Actually, thinking about it a bit more, I don't think banning it makes any sense. If someone wants it badly enough they will make it happen. Rather than getting in a certain meta scenario, let every library use whatever tools are at their disposal. In some libraries this optimization makes no sense or is hard; that's a tradeoff of using their library. In others it's incidental. All in all, arbitrary restrictions will get bent. I could make a library that always delays when emptying arrays, or have an option that is passed into the computation, or any number of ways to claim this is normal behavior. People will always game benchmarks and find small edges. The only restriction has to be the universal, shared truth that can't be denied: the browser and the harness running the tests. Anything less is showing bias.
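For readers unfamiliar with the trick being debated, here is a minimal sketch (hypothetical, not any library's actual code) of how an implementation can defer the "clear rows" DOM work to the next animation frame, so a benchmark's timer stops before the removal actually runs. The scheduler is injected so the batching behavior can be shown outside a browser; in a browser it would be `requestAnimationFrame`.

```javascript
// Hypothetical sketch of the rAF "clearing" optimization under discussion.
// `schedule` would be requestAnimationFrame in a browser; it is injected
// here so the deferral can be demonstrated and tested anywhere.
function makeDeferredClear(schedule) {
  return function clearRows(container, done) {
    schedule(() => {
      // Bulk-remove all children in one operation on the next frame.
      container.textContent = "";
      if (done) done();
    });
  };
}

// In a browser this would look like:
//   const clearRows = makeDeferredClear(requestAnimationFrame);
//   clearRows(document.querySelector("tbody"));
```

The point of contention is exactly this deferral: the synchronous part of the operation finishes almost instantly, while the real DOM work runs after the measurement window closes.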

@localvoid
Contributor

Rather than getting in a certain meta scenario, let every library use whatever tools are at their disposal.

How many implementations are there using explicit event delegation to slightly reduce overhead? It is possible to do with any library in this benchmark.
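For context, explicit event delegation in these implementations usually amounts to a single listener on the table body that walks up from the event target to the matching row. A generic sketch (assumed names, not any particular implementation):

```javascript
// Generic event-delegation sketch: one listener on a root element instead
// of one listener per row. `matches` is a predicate checked while walking
// up from the event target toward the root.
function delegate(root, type, matches, handler) {
  root.addEventListener(type, (ev) => {
    let node = ev.target;
    while (node && node !== root) {
      if (matches(node)) return handler(node, ev);
      node = node.parentNode;
    }
  });
}
```

The overhead saving comes from registering one listener instead of thousands, at the cost of the container knowing something about the DOM structure beneath it.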

I've implemented the ivi benchmark specifically to demonstrate how it will perform in a traditional redux-like application: every row is a stateful component wrapped into a connector. It is not the fastest way, but it shows that even such abstractions have almost zero overhead.

I guess that I think about performance of libraries in a slightly different way. Library overhead can become a bottleneck only in large applications; large applications require different kinds of abstractions to deal with complexity; a library that doesn't provide tools to deal with such complexity and only focuses on performance of low-level primitives won't be able to compete in real applications.

@ryansolid
Contributor Author

ryansolid commented May 27, 2018

I mean more that the position thus far has been a little ambiguous or arbitrary about where it's acceptable, and that makes a restriction like this awkward to enforce. Seeing the Surplus implementation actually challenged my thinking about how a library could be open to handling optimizations like that: how to provide a general mechanism to address timing.

I feel as a library writer that's all you can do at a certain point. React didn't ship with Redux, but it allowed for it. The idea of what the standard approach should be changes rapidly, so looking at the primitives and the potential for interoperability is a thing.

So I do think it comes down to what the library's goal is. I couldn't care less about frameworks and full solutions outside the ability to put them together as one would see fit. It's reflected in how I approach the renderer I've been working on, as I designed it to work with almost any fine-grained library. I've only submitted my library and Knockout versions, but I could whip up several more. So demonstrating the capability, and the ability to work in the situation required, is of the utmost importance. I'm interested in what patterns and abstractions have what tradeoffs, and in developing towards better ways to do stuff.

So while in real-world applications the library won't be the bottleneck, how do we improve our performance or our APIs unless we try? It's special considerations like the ones rAF invokes that ask the question and influence changes to how libraries approach it. Just because different benchmarks here do it different ways (implicit vs explicit) doesn't necessarily make approaching the problem that way wrong. More clunky perhaps, but that's a tradeoff as well.

In any case, I'm most interested in having this library added so whatever needs to be done.

@adamhaile
Contributor

Hey @leeoniya -- Yeah, I thought we'd decided not to use rAF, then ivi and domvm brought it back, so ... ?

FWIW, a global sync/async flag, like I believe ivi and domvm use, strikes me as a benchmark-only feature. I'd be very anxious about adjusting it in a medium-to-large app, because it would break any code that relied on the relationship between data and dom. Also, in my testing, rAF is generally slower, by exactly the amount of time it delays the rendering work. So the only context where tuning it globally makes sense is a small app with specifically defined performance metrics that happen to fall into the actions that benefit collectively from rAF -- aka this benchmark.

So my take was that any use of rAF buffered rendering should be targeted and explicit. That's what the Surplus implementation does. Since Surplus does its rendering out in the open, and since it's built on top of S's graph of signals, it buffers the signal of trs being inserted.
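The "buffer the signal" idea can be illustrated with a toy observable (an illustration of the concept only, not Surplus's or S's actual API): writes within a frame coalesce, and subscribers are notified once, with the final value, when the scheduled flush runs.

```javascript
// Toy buffered signal: writes coalesce until `schedule` fires the flush.
// In a browser, `schedule` would be requestAnimationFrame.
function bufferedSignal(schedule) {
  let value;
  let pending = false;
  const subs = [];
  return {
    read: () => value,
    write(next) {
      value = next;
      if (!pending) {
        pending = true;
        schedule(() => {
          pending = false;
          subs.forEach((fn) => fn(value)); // notify once with the last value
        });
      }
    },
    subscribe(fn) { subs.push(fn); },
  };
}
```

Targeted buffering like this keeps the deferral explicit at the one signal that benefits, rather than flipping a global sync/async switch for the whole app.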

@ryansolid Solid.js looks interesting! I think I see a couple influences :). I think we come from similar backgrounds, too, in that I've built (and still have to maintain) a few large knockout apps. I'd love to discuss design tradeoffs if you're interested.

@localvoid
Contributor

@adamhaile

a global sync/async flag, like I believe ivi and domvm use, strikes me as a benchmark-only feature. I'd be very anxious about adjusting it in a medium-to-large app, because it would break any code that relied on the relationship between data and dom.

Any example?

I know about some edge cases like race conditions when you trigger input->submit->input events during one frame. Most of the time it happens because app state management solutions are adding non-determinism, and many web applications just ignore such problems because it is really hard to trigger this edge case. But it is also really easy to fix when you know that you have such a problem.

But with Surplus you just completely ignore that you need to deal with such situations, and you've chosen to ignore lifecycle hooks for components. The primary use case for lifecycle hooks is dealing with legacy DOM APIs, integrating proprietary components and encapsulating this behavior, so that users of such components don't need to worry about such problems.

So the only context where tuning it globally makes sense is a small app with specifically defined performance metrics that happen to fall into the actions that benefit collectively from rAF -- aka this benchmark.

It is the default behavior in ivi because it has a scheduler that controls how it updates the DOM and when you can read from and write to the DOM. Scheduling DOM updates is useful when you build complex applications because you need to measure DOM elements and trigger relayout and rerender during one frame. With sync rendering you won't have any guarantees that you won't trigger relayout, even when you handle DOM events (use cases like DnD).
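The kind of scheduler being described — batching DOM reads before DOM writes within one frame to avoid layout thrashing — can be sketched roughly like this (an illustration of the concept, not ivi's implementation):

```javascript
// Frame scheduler sketch: queued reads (measure) run before queued writes
// (mutate) in a single flush, so one frame never interleaves the two phases.
function createFrameScheduler(raf) {
  const reads = [];
  const writes = [];
  let scheduled = false;
  function flush() {
    scheduled = false;
    while (reads.length) reads.shift()();   // measure phase
    while (writes.length) writes.shift()(); // mutate phase
  }
  function request() {
    if (!scheduled) { scheduled = true; raf(flush); }
  }
  return {
    read(fn)  { reads.push(fn);  request(); },
    write(fn) { writes.push(fn); request(); },
  };
}
```

With this shape, a drag-and-drop handler can queue a measurement and a mutation from anywhere and still be guaranteed that all measurements in the frame happen before any mutation triggers relayout.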

It is a feature that was designed to deal with problems in complex applications; I had no idea that it would behave in such a bizarre way in this benchmark. And I'd like to remove it from this benchmark if someone goes through all the other implementations and makes them more consistent; unfortunately, there are many other libraries that also have such behavior by default.

Since Surplus does its rendering out in the open, and since it's built on top of S's graph of signals, it buffers the signal of trs being inserted.

If you have a scheduler that guarantees serializability, then I don't see any problem with rAF in Surplus. But if you can't control how your updates will behave when you have concurrent sync and async operations that mutate the DOM, then you have a fundamentally broken system.

@ryansolid
Contributor Author

ryansolid commented May 28, 2018

@localvoid Lifecycle functions have tradeoffs. It's a design decision. They produce tighter coupling between the modularity of the code and the data, and tend to drive large multi-cycle conditionals which obscure the data path. Not unmanageable, but existent. Not every framework uses lifecycle functions, especially ones driven off observable data. In those libraries it's natural to express the transformation with data, including time transformations. It's different paradigms drawing different boundaries around the responsibility of the library.

I have an idea to address concurrency in these sorts of cases in Solid, but I've debated whether it makes sense to include in the core library since it feels a bit forceful to take the control out of users' hands. One of the benefits of Surplus or Solid is the transparency. You get a simple modern interface while still getting down to the metal. Part of the attraction is how easily it could address an optimization like this. When things get complicated, I prefer that to massaging the abstraction. But again, tradeoffs.

I get that you'd remove it if you could or if it made sense, but as you know, it won't and you won't, which is precisely why restrictions like this don't make sense to enforce. If any library could leverage it, how could it justifiably be banned? It becomes a syntax thing.

@localvoid
Contributor

localvoid commented May 28, 2018

@ryansolid

Are there any libraries like React Material UI, Angular Material UI, Microsoft Fabric, Vue Element, or Polymer Elements that were built on top of a library that doesn't provide a component model with lifecycle hooks? I'd really like to see what such an API would look like, how much internal state you'd expose to the users of your components, and how you'd deal with the many browser quirks.

but I've debated whether it makes sense to include in the core library since it feels a bit forceful to take the control out of users' hands.

There were many failed experiments in the past on non-web platforms where developers introduced concurrency to the main UI thread and pushed all this complexity onto users. It doesn't work in practice; the average UI developer doesn't know how to solve concurrency issues.

EDIT: P.S. I am not against your benchmark implementation, I understand your point :) I am just really curious how you'd solve all these kinds of problems, because I am starting to think that maybe I don't understand something :)

@ryansolid
Contributor Author

ryansolid commented May 28, 2018

@localvoid

Definitely. You just have to look back a little further in time. I never really used those sorts of libraries then, as I don't use them now, so I might not remember them well. There was KendoJS, which was originally made completely from Knockout. I mean, to a certain degree, Bootstrap.js or JQueryUI. There were tons before. Some worse than others.

But that being said, it's not like you aren't dealing with certain hooks; it's just that the library's mentality is the declarative style of data transformation. So if a lifecycle hook is essentially an event, and your library handles events, you don't need it to be part of the container. There still have to be fairly global mechanisms for creation and teardown. I guess I'm saying it's not exactly that the library is trying to be oblivious, more that its choice of abstraction lends itself to favoring events and describing data transformation versus using containers to manage time-sliced imperative chunks.

I've seen lifecycle libraries come and go and come back again. In my early professional career as a .NET WebForms developer in the early 2000s, we used components (Pages and Controls) that had declarative markup and lifecycle functions. It was replaced by MVC, since state shared between client and server was a mess. I will say it's been much nicer this time around, but as someone who has worked through trying to eke out every bit of performance from React Native, learning along the way, I will say that at a certain point the specifics that come from the abstraction are not unlike learning the specifics of a different platform (like browsers). And often they don't mask you from that platform-specific stuff completely anyway. I will say onboarding new developers is very nice in React, and I wouldn't be looking in this direction if I didn't see that.

So a lot of what I've been doing is figuring out if there are other options. This comes from the fact that I work mostly with Web Components, and the separation of the container from rendering, or even the update cycle, really helps keep things light. I rely mostly on the standards for interop, which differs greatly from, say, Polymer. It's a deep investment in DOM technologies. So I find transparency important and the abstraction less valuable, and while Web Components have lifecycle methods, representing them as event hooks plays very nicely, as they are often fine-grained (like attributeChangedCallback). Mind you, one of my other projects, Component Register, is based on trying to make it so you can just plug any library or framework into Web Components without having to be aware of their lifecycle functions, so you can use your library components exactly how you are used to. That essentially goes against what I was saying for Solid, so it's not that I've figured it all out.

So I don't think you are necessarily missing something here. It's more that this is similar to approaches that fell out of favor maybe 5-6 years ago, approaches that were perfectly viable at the time. While there have been various experiments (I absolutely love the concept of CycleJS), there hasn't been a real successful approach. When I saw Surplus it showed the potential for this type of paradigm to be performant again, and I'm very excited to explore the potential here with the new knowledge and techniques we've gained from working with React and the slew of Virtual DOM libraries that have followed. I mean, if React hadn't embraced JSX and precompilation this wouldn't even be a thing. But it was the same precompilation that helped these other approaches fall out of favor. Now there is potential to use similar techniques here.

@leeoniya
Contributor

leeoniya commented May 28, 2018

@adamhaile

Hey @leeoniya -- Yeah, I thought we'd decided not to use rAF, then ivi and domvm brought it back, so ... ?

domvm's rAF has been there as default (in the lib, not the bench impl) from v0. it's used to debounce vm.redraw() calls. the reason i forced it to be synchronous in the initial impl was to allow the console timers to be somewhat truthy. when i removed the timers - because they lie anyhow - i also removed the forced sync non-debounced redraw - thus simplifying the impl. to say that i brought it back is not really accurate, nor is it there because i discovered the benefits of raf clearing as a result of discussions here. even if domvm benefits from rAF clearing unintentionally, it equally suffers from it when rAF is slower (as you point out). as i've said before, i have no issues with frameworks that internally choose to optimize row clearing via rAF, because it would be a general(ish) optimization that's not "this benchmark"-specific.

in general, i'm with @localvoid in aiming to simplify implementations as much as possible and ensure that any optimizations are general and lib-level rather than deferred to the avg user and app-space. the raf clearing optimization is extremely esoteric; literally no one will ever know to use it in just one specific corner case. i think it should be removed from vanillajs as well, but that impl is meant to be ugly and obscure, heh.

previously, domvm's impl had event delegation. i considered this okay because it had been a very very common/understood pattern in the jQuery days which benefits all browsers in all cases, and not some obscure Chrome-only, one-off thing. however, i have removed event delegation in the last update [1], taking a slight perf and mem hit, again aiming to simplify the impl rather than squeeze the last drop of perf out of an awkward impl. i intend to gain the perf back via internal improvements in the future.

some libs require that the impls reach into the dom to perform certain actions, like leaving it up to the user to know to set textContent on update, when in fact firstChild.nodeValue is much faster. these are details that IMO should not be exposed to the impl because they require the user knowing these intricacies and other "tricks". redom's updates, for instance, are completely imperative while initial render is declarative; it's certainly odd.

i think that sums up my view on this whole situation, do with it what you will :)

[1] #396

@localvoid
Contributor

localvoid commented May 29, 2018

@ryansolid

There was KendoJS, which was originally made completely from Knockout. I mean, to a certain degree, Bootstrap.js or JQueryUI. There were tons before. Some worse than others.

Looked at the KendoJS sources; it has a component model with lifecycle hooks that have to be invoked explicitly. In the pre-React era I worked with JQuery, MooTools and YUI3, and looked at many other libraries at the time, and I really don't get why anyone would want to bring back this hell. I've worked on really small projects (less than 50 components), and even on small projects it was a painful experience wiring everything up and invoking all the lifecycles explicitly.

Even a basic material design button needs some additional state for ripple animations, and a "destroy"/"dispose"/"unmount"/"detached" lifecycle should be invoked to stop ripple animations and remove the mouseup listener from the window if the button had a "pressed" state.

@adamhaile
Contributor

"destroy"/"dispose"/"unmount"/"detached"

Surplus has S.cleanup() for this.

I haven't used Material UI, but I know that at least one Surplus user wrote bindings for it: https://github.com/ismail-codar/surplus-material .

In general, I find legacy libs (JQueryUI, etc) pretty easy to work with in Surplus, b/c of real DOM nodes.

If you have a scheduler that guarantees serializability, then I don't see any problem with rAF in Surplus.

Yes, S's dependency graph is all about making sure that things run in the proper order.
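To make the comparison concrete, the "teardown as a registered callback" style (modeled loosely on what `S.cleanup()` provides; the names here are hypothetical) might look like this — disposal work is registered where the resource is created, instead of being collected into a destroy lifecycle method:

```javascript
// Hypothetical cleanup scope: components register disposal work where a
// resource is created; dispose() runs the callbacks in LIFO order.
function createScope() {
  const cleanups = [];
  return {
    onCleanup(fn) { cleanups.push(fn); },
    dispose() { while (cleanups.length) cleanups.pop()(); },
  };
}

// e.g. the ripple-button teardown from the comment above could be written:
//   scope.onCleanup(() => window.removeEventListener("mouseup", onUp));
//   scope.onCleanup(() => stopRippleAnimation());
```

The design tradeoff is locality: teardown sits next to setup rather than in a separate lifecycle method, at the cost of needing a scope that knows when disposal happens.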

@ryansolid
Contributor Author

ryansolid commented May 29, 2018

I know Kendo has changed a fair bit in the past few years, but who knows; as I said, I never really used it, so there might have been some lifecycle functions all along. Knockout itself, though, didn't have lifecycle functions. And I'm not suggesting there isn't create or teardown event handling. Every library has a way of doing that. I've used a class-instance "onDispose" method on several Component/ViewModel implementations I've done in the past. Surplus has cleanup; RxJS has complete.

It's the experiences I've had that probably give me this perspective, since around the time React came out I was working for a company that had been building its own V0 Web Component library based around Knockout (as well as a full-stack ORM, solving similar problems to what later prompted Falcor/GraphQL). We used CoffeeScript, which back in 2013 was like having ES6 already. It felt like developing in the future. We used component patterns to the point that there was no particular advantage for us in using React at the time. The most influential thing React did was introduce us to Flux, since while the data directionality was a clear path, we hadn't figured out a great way to handle dependency injection.

Of course, as React grew and the ecosystem developed, times moved on, to the point that Knockout showed its age; but it was never React's render loop or lifecycle methods that made the difference. Tooling improved, new developers were unfamiliar with the patterns, and more so, new developers were no longer familiar with core DOM manipulation. While KO does a great job of abstracting the DOM interaction out of the components, it has its quirks and is generally just dated. But never once was it the paradigm that left a bad taste. This is just my perspective, but using React was a great experience; yet it was nothing special, and in more complicated scenarios I found the abstraction more annoying and actually more difficult to explain.

I think I've already pointed out a lot of the tradeoffs. To me this approach is just natural. I think a lot of this difference stems from Virtual DOM-centric approaches versus not. To me, the Virtual DOM consolidates updates into a single stream. This requires the render loop to go through specific stages as it gathers and transforms the data. If this gets too large or needs to be optimized, you break out more components. With observable data, each piece is its own thing, responsible for its own specific transformations. This just lends itself to more fine-grained resolution, independent of how you package your components.

I do see, looking at some old posts on here, that there has been some friction over the position on Surplus, and it's crazy to me. Data-driven declarative approaches in client-side JS existed more than 5 years before the advent of the Virtual DOM. Not all of it was good, but it's not like this stuff didn't exist before. In some ways this approach is more declarative than most VDOM implementations, since even the data is declarative, based on pure function transformations, and the mental model isn't a series of steps that run imperatively over and over. I respect that you guys probably feel as strongly about the VDOM as I feel about not using the VDOM, and even if you all don't recognize it, this is a real thing.

@adamhaile Love to chat sometime about implementation choices. I sent you a PM.

@krausest
Owner

I removed the RAF hack from vanillajs. Maybe that inspires the other implementations.

@localvoid
Contributor

@adamhaile @ryansolid

Thanks, now I am getting it. I actually don't care if it is a virtual DOM, fine-grained KVO bindings or some hybrid solution, as long as it has primitives to deal with complexity. And it seems that I was wrong about Surplus :)

One of the reasons why I am stuck with the Virtual DOM is because I need a parallel tree with additional information to implement synthetic events, and on top of them implement a gesture recognition system with gesture disambiguation that can resolve conflicts even with native gestures. I've been thinking about this problem for quite some time, and recently found a solution, with some ugly hacks, for how to implement it so that it can resolve conflicts with native gestures and doesn't wait for responses from the main UI thread when a native pan is recognized.

Also, I am worried about use cases like rich text editors, where it would be insanely hard to track changes, so it is usually implemented with several passes that rebuild all data structures on each input; then it becomes very natural to generate vdom and use diffing to find minimal changes.

And of course there are traditional client-server use cases, when you don't have any information about data changes and the server just sends data snapshots.

Tradeoffs...

@localvoid
Contributor

Changed ivi to sync rendering #399. Explicit event delegation should be next :)

@ryansolid
Contributor Author

ryansolid commented May 29, 2018

Yeah, using synthetic events is probably more difficult for these fine-grained libraries and doesn't particularly make sense. And while a VDOM library could expose per-element bindings but use event delegation under the hood, I think with these libraries doing it in the open makes more sense. Unifying touch events has been a problem, so much so that there is a new DOM spec (Pointer Events) for that (of course not fully supported). I live in a world of using the native DOM and polyfills (I've been using Web Components since 2013; that's the reality of it). It is a very browser/DOM-forward approach, which has clear tradeoffs when you consider stuff like React Native. Not to say NativeScript isn't a possibility here, but that's a whole other thing.

Event delegation brings up a good point, as I can see VDOM libraries are trying to mask it in most cases, while in fine-grained libraries I think you may want to embrace it. The design goals and developer experience expectations are different here. From this perspective when you guys are on one hand saying you shouldn't do this, don't encourage it, but then on the other hand doing it yourself behind the scenes, it feels like discrimination. I recognize that in some cases, because of how holistic VDOM approaches are, the tradeoff is considerable, so to have another library one-liner something to get the effect without the cost seems like gaming it. It sort of is, but the library is built to work well in these sorts of situations, especially in the case of rAF. The fact that it is so trivial to implement the hack (as natural as functional composition or chain mapping) says something about the library. The thing is Surplus or I could easily identify this situation and make the backend only do rAF in that condition, behind the scenes. But it's a bit counter-intuitive to the design goal of the library, is my current thinking, but we could. Does that mean one would not use it if it were available to them? As I said in my first response, I don't see how we could possibly limit or restrict something that other libraries are using. If it looks like a wart that's a thing, but at this point I actually think it showcases the flexibility of the library.

I have to admit rich text editing is an area I haven't hit, and one that I am going to be particularly interested to try in the future. But I'm glad you bring up the client/server snapshot thing. This is actually one of the big reasons for my approach with Solid. My state objects, using Proxies, address some of the issues that libraries like MobX have classically faced (more pronounced there since it interacts with the React ecosystem so much). I internally have diffing mechanisms to handle data coming in from snapshot sources like localStorage, and for pulling data from immutable observables like Redux or Apollo. I've been doing a lot of work to make sure that Solid plays nice with these systems. Solid state objects can be accessed like POJOs, and while optimized for fine-grained changes, they are designed to handle tree diffing. The process is generally that you write a map transformation on your store's observable (or Apollo watchQuery) and pass that into state.select to sync to a key in the local state object. As the observable fires, only the data you care about is pulled out via the map and diffed against the current state before fine-grained change detection updates are passed to the DOM. In doing so it removes a lot of the boilerplate around the Connect middleware. I've been exploring ways to make a more streamlined HOC out of it, but I'm still evaluating the Provider/Consumer pattern in my Web Component library, as this whole thing is very much outside the scope of Solid.js, which only aims to provide the primitive.
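The snapshot-diffing step being described can be sketched generically (a toy illustration, not Solid's actual implementation): compare an incoming plain-object snapshot against the current state and report only the keys that changed, which is the set of updates a fine-grained system would forward to the DOM.

```javascript
// Toy shallow diff: returns [key, newValue] pairs for keys whose value
// differs between the current state and the incoming snapshot.
function diffSnapshot(current, snapshot) {
  const changes = [];
  for (const key of Object.keys(snapshot)) {
    if (current[key] !== snapshot[key]) changes.push([key, snapshot[key]]);
  }
  return changes;
}
```

A real implementation would recurse into nested objects and handle removed keys; the point is that only the changed paths reach the DOM bindings, however coarse the incoming snapshot is.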

Just so you all know, so you understand my position here: I could have written my benchmark using just observables, and it would look a lot like Surplus and would have been a bit more performant (better on memory and build size), but I've submitted it using Proxies and my state object because I want to demonstrate a fine-grained system that looks on the surface like a VDOM system with simple POJO data. I actually hope that users of my library view manually using observables as an advanced topic and think about state similarly to how they would in React or ImmutableJS. I'm attempting to demonstrate something on the API side, and this isn't just a performance grab. I want to show you can have it all.

@krausest It's pretty clear to me that the rAF thing is at a bit of a standoff. You have one side stating that its hackiness means it should not be included in an intentional way, yet still internally taking advantage of it in their libraries. You have the other side saying this is how you address these sorts of optimizations in a library like this, and that it demonstrates how a library can elegantly include such optimizations while simultaneously providing a data-driven abstraction. In all cases the most vocal parties are using this optimization. As I'm sure you realize, it's pretty impossible to enforce this, so I'm gathering this stays? I'm just wondering: is the merging of this PR hinging on the resolution of this issue?

@leeoniya
Contributor

leeoniya commented May 29, 2018

@ryansolid

If it looks like a wart that's a thing, but at this point I actually think it showcases the flexibility of the library.

is there an example of a lib in these benchmarks that is unable to be this "flexible"? it appears that most (all?) impls can be trivially modified to wrap the setState or data store mutation & redraw in a rAF call to get this effect. in your opinion, what lib & impl combo demonstrates inflexibility?

for the record, even with the rAF side-effect, domvm is not close in terms of perf to ivi, surplus, inferno or solid; i'm not here to improve my own situation by insisting that everyone else shouldn't be using this obscure hack. my beef with it is just that: it's a hack that's only here, but would never be in a real codebase. fwiw, event delegation has some overlap in this regard too, in that probably every impl can do it, but some choose not to for either impl clarity or component isolation guarantees. the crucial difference being that event delegation is well known, well understood, and is cross-browser.

@localvoid
Contributor

Unifying touch events has been a problem, so much so that there is a new DOM spec (Pointer Events) for that (of course not fully supported).

I am working on a slightly different problem, and pointer events are completely useless in the scenarios that I am trying to solve. Imagine a use case with long-press DnD inside a container with native scrolling; use cases like this can't be solved with pointer events because of limitations of the API, and it is a known issue. I am working on something similar to Flutter's gesture disambiguation, but on top of a bunch of ugly hacks and workarounds to deal with native browser gestures. It is just such a pain to solve all these problems again and again on a case-by-case basis.

From this perspective when you guys are on one hand saying you shouldn't do this, don't encourage it, but then on the other hand doing it yourself behind the scenes, it feels like discrimination.

Can you at least create some ugly declarative API with selectors :) And then tell everyone that this is how they should register event handlers. That is how I've tried to rationalize this when I also used explicit event delegation in benchmarks :) But do you seriously believe that this is how it should be used in a complex application where there are many layers of components below? Imagine how painful it will be to do code reviews for changes in components when for some reason there is a container that depends on the internal DOM structure of such components to handle events.

The thing is Surplus or I could easily identify this situation and make the backend only do rAF in that condition, behind the scenes.

I am actually doing something similar because of the different reasons, by default I am using rAF and in some scenarios I am triggering sync redraw in trusted event handlers, because some DOM mutations wouldn't be allowed otherwise.

I have to admit Rich Text editing is an area I haven't hit, and one that I am going to be particularly interested to try in the future.

Imagine an even simpler scenario that I've been working on recently: selecting text regions and adding annotations. I used a simple data structure with the text and an array of annotations (overlapping regions); it is easy to add and remove annotations in such a data structure. Then I just wrote a simple algorithm that converts each annotated region into two objects that represent the region's edges, sorted all these edges by their position and time, and then wrote another simple function that generated a hierarchical (overlapping regions can generate deep trees) structure that I mapped to a virtual DOM tree. And I think that with a library built on top of fine-grained bindings this problem will be significantly harder to solve, unless you try to flatten the DOM structure, but maybe I am missing something :)
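A rough sketch of that edge-sorting idea — this is a flattened variant with illustrative names, not the actual code; it emits a flat list of styled segments that a renderer could then nest into span trees.

```javascript
// Flatten overlapping annotation regions into sorted edges, then walk the
// text once, emitting a segment whenever the set of active annotations changes.
function annotate(text, regions) { // regions: [{ start, end, id }]
  // 1. every region contributes an "open" edge and a "close" edge
  const edges = [];
  for (const r of regions) {
    edges.push({ pos: r.start, open: true, id: r.id });
    edges.push({ pos: r.end, open: false, id: r.id });
  }
  // sort by position; at equal positions, closes come before opens
  edges.sort((a, b) => a.pos - b.pos || (a.open ? 1 : 0) - (b.open ? 1 : 0));
  // 2. single pass: cut the text at every edge, tracking active annotations
  const out = [];
  const active = new Set();
  let last = 0;
  for (const e of edges) {
    if (e.pos > last) out.push({ text: text.slice(last, e.pos), ids: [...active] });
    last = e.pos;
    e.open ? active.add(e.id) : active.delete(e.id);
  }
  if (last < text.length) out.push({ text: text.slice(last), ids: [] });
  return out; // flat list of segments, each tagged with its annotation ids
}
```

For example, annotating "hello world" with overlapping regions [0,5) and [3,8) yields segments "hel" (one annotation), "lo" (both), " wo" (one), "rld" (none).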

@krausest krausest merged commit f2f9c01 into krausest:master May 29, 2018
@krausest
Owner

Pretty fast - nice work. Results are here
I'm really not sure how I should handle that rAFgate thing. I'd prefer to see it removed in user code (or at least be used in all cases i.e. have the if v.length removed). Remove rows scored ~194 msecs with the rAF hack removed.

But please keep in mind that results are really close. The compare-with function (here against solid) marks significantly better or worse results, and most cells are white...
[screenshot: result comparison table, 2018-05-29]

@ryansolid
Contributor Author

ryansolid commented May 30, 2018

@leeoniya You are absolutely right. I was pretty happy with being able to represent it as a pure async function with a conditional promise, which is not typical in the Knockout school of things where you need to create side effects. It's mostly a syntax thing, and when I came across the opportunity to show it off I was excited to. But wrapping a state update in the test runner would do it too. It removes the ability to have it cancelled behind the scenes in the same way, mind you. But there are explicit ways to write that out as well. (I'm talking about the clear-then-add issue presented in the other thread.)

@localvoid Your example seems interesting. Classically the issue with Observable nested data is the overhead of wrapping and unwrapping it, so I have been exploring proxies to see if that can help with more complicated structures. Admittedly I don't think I'm fully following your example, but the idea would be to handle the majority of the work in the data before considering the rendering. Diff the data, and what renders just reflects those changes. In general I find the real strength of fine-grained is partial updates, but in places with lots of additions or replacements VDOM performs a bit better. Although not exactly parallel, I see a lot of similarities with the whole immutable/mutable debate or MobX vs Redux. I'm interested to see if there are places where both could be leveraged. You can always have an immutable atom inside mutable state.

I'm also starting to realize it's possible that I haven't really given the whole picture, since for an apples-to-apples comparison with something like React, Solid.js is only concerned with 2 of the 3 things React does. This is the same stance Knockout took. It is truly just the View from the MVC standpoint, whereas I view the container/component part of React as akin to what KO called a ViewModel, which belongs to the Controller side of things. A lot of the concerns that have been coming up on the container side of things are left up to the implementor, but I'm definitely releasing a Web Component module that adds some of that missing modularity/encapsulation as well as patterns for event delegation. I mean, you could write a whole application using Solid just like the benchmark, and I didn't want to lock it in, but using something like webcomponents gives complete render and local state isolation. The reason I don't see it as applicable to this benchmark is that the webcomponent boundaries sit at the implementor's discretion. A whole app could be 1 component if you were crazy enough to do so, or perhaps a page, or just a button. But those choices have minimal performance implications. Fewer components is probably more performant since it removes overhead.

In any case, some good feedback here. One thing that I'm going to bring back is ensuring cancellation of async selectors when they recalc before the async value comes back. In this way the whole clear-then-add issue discussed before is a non-issue. It would be more expensive in that scenario, but essentially it would be as if the clear was cancelled and the new list reconciled against the old.

In terms of rAF, I will look at what I can do here. I was considering taking the RxJS path where you can specify a scheduler as an argument in observable creation. Instead of writing the conditional, it would apply to all add/remove row changes, so it's no longer considering length, essentially. So instead of:

Solid(() => state.data).mapS((row) => ...render stuff).map(...conditional promise)

It'd be something like:

Solid(() => state.data, {notify: 'nextAnimationFrame'}).mapS((row) => ... render stuff)

I'm gathering that would be adequate. However I feel it has a different purpose. Like right now, in the clear-then-immediate-add scenario, I essentially cancel the requestAnimationFrame, similar to how you'd handle an as-you-type search field. However here I feel the correct behavior is to defer everything and never cancel. This is more for if you wanted to animate something at a set interval, essentially queuing up the changes. The best way still feels like the explicit mapping, at which point it's already there, so knowing what we do, why wouldn't we check on length? I guess for the same reasons we don't in the VanillaJS benchmark.

Anyway, that's where I'm at now. I'm aware of this, so I will continue thinking along this path.

@krausest Thanks, honestly that was better than I thought. I develop on a Core i5 laptop and I find different libraries do better there. I do realize the delta on the rAF is a large part of the good score, since it appears my library is particularly good on the clear even in comparison to the other libraries that use it. I definitely want to spend some time on my thinking on rAF and see if there is a position I feel makes sense here.

@adamhaile
Contributor

Hey all --
Sorry to be out of the conversation -- a client is having a crisis and I've gotten pulled onto their project.

@leeoniya

is there an example of a lib in these benchmarks that is unable to be this "flexible"? it appears that most (all?) impls can be trivially modified to wrap the setState or data store mutation & redraw in a rAF call to get this effect. in your opinion, what lib & impl combo demonstrates inflexibility?

That would probably get the same perf gain but it's not the same as what Surplus (and I think Solid) are doing. Plus it could have bad behavior if there were two data updates in the same frame and the rAF switched their order. Surplus's use of rAF is a lot more targeted. The data is still modified synchronously, the <tr/>s are still created/deleted/updated synchronously, and it's only their insertion/deletion into the <tbody/> that is buffered behind an rAF call, and only if we expect it to be a clear. (If there are indeed two events in the frame, it might become something else by the time it runs.)

The cool part about all that is that none of it requires special hooks or flags into the renderer. It's a natural capability opened up by the approach. As @ryansolid said, it's as simple as function composition.

@ryansolid
Sorry to not get back to your DM yet. Very cool and I definitely want to talk, I've just been swamped.

Since you mention scheduling, I'll just throw out that I've had two different versions of scheduling in S in the past, but don't have them currently, for reasons I'll explain at the end. S used to have a few chainable prefix operators to modify a computation's behavior. The scheduling one was .defer(). It first worked like:

S.defer(requestAnimationFrame).S(() => ... code ...);

With that form, the supplied function was called when it was time to update the computation, and it was passed the "real" update function. It could then call the actual update right away or schedule it as it saw fit.

The second form took another signal that functioned like a traffic light:

const data = S.data("foo"),
    signal = S.data(null);
S.defer(signal).S(() => console.log(data()));

When data() changed, the computation wouldn't run until signal() fired. So like a traffic light it accumulated pending computations until it fired and they all went ahead and updated.

The second form was actually how the first was implemented internally, so the change was to expose that.
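A toy version of that "traffic light" gating might look something like this — purely illustrative, not S's internals:

```javascript
// Deferred computations behind a gate: invalidation only queues the
// computation; nothing reruns until the gate "fires" (the light turns green).
function createGate() {
  const pending = new Set(); // computations waiting on the signal
  return {
    // wrap a computation; the returned function is its invalidator
    defer(computation) {
      return () => pending.add(computation); // Set dedupes repeat invalidations
    },
    // the traffic light turning green: run everything accumulated, once each
    fire() {
      const queued = [...pending];
      pending.clear();
      queued.forEach((c) => c());
    },
  };
}
```

One nice property of accumulating into a set is that multiple invalidations of the same computation collapse into a single run when the signal finally fires.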

I removed it for a few reasons:

  • I initially designed it with rAF rendering in mind, but as explained in these threads, that turned out to be not so hot an idea

  • it added more complexity to the internal implementation than I felt the benefit warranted

  • it wasn't clear how it composed with other features. If a computation is disposed, but it has a deferred update still pending, should that update be able to run? Were child computations disposed when the old run was invalidated or when the update finally ran? Even worse, usage in actual apps said that sometimes we wanted one behavior, sometimes the other. So now we needed a bunch of options. Ugh.

  • in actual apps, the places where we wanted this feature were very few, like 5 places in a 30kloc app.
    There are good and clear ways to get the same effect within the body of the computation for those few cases.

  • the feature is very data-flowy, and S is not a data flow framework. Its big idea is the global timeline of discrete, consistent, immutable instants, aka synchronous reactive programming.

So S sticks to the kind of time it knows and leaves other kinds of scheduling to libraries that specialize in it.

That was a lot longer than expected! Sorry to others in this thread, Ryan and I can take it offline :).
-- Adam

@adamhaile
Contributor

Finally had a chance to pull and look at solid.

I have to ask, is this considered ok?

  onSelect(newSelected) {
    var selected;
    if (newSelected === (selected = this.state.selected)) {
      return;
    }
    if (selected) {
      selected.className = '';
    }
    newSelected.className = 'danger';
    this.state.set({
      selected: newSelected
    });
  }

The 'danger' classNames on the tr's aren't set from data, they're set from the target of the click event.
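For contrast, a data-driven version would keep the selection in state and derive each row's className from it, so the view is recreatable from the data alone. A minimal sketch (illustrative names, not Solid's actual API):

```javascript
// Selection lives in state; className is a pure function of that state.
function selectRow(state, id) {
  if (state.selected === id) return state; // re-selecting is a no-op
  return { ...state, selected: id };       // otherwise store the selected id
}

// Each row derives its class from the data, not from a cached DOM node.
function rowClass(state, rowId) {
  return rowId === state.selected ? "danger" : "";
}
```

The cost Adam describes is exactly this: every row now needs a binding watching `state.selected`, instead of two imperative className writes in the click handler.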

@krausest
Owner

krausest commented Jun 5, 2018

@adamhaile vanillajs and re-dom actually do it in the same way https://github.com/krausest/js-framework-benchmark/blob/master/frameworks/redom-v3.10.1-keyed/src/app.js#L159-L163
Anyways, taking a shortcut in select row doesn't give too much of an advantage.

@adamhaile
Contributor

Hey @krausest - It's a fairly significant optimization for create and delete. The way that fine-grained libs like Solid and Surplus work, the main cost they encounter over vanillajs is the cost of creating and deleting edges in the dependency graph when they add or delete rows. Surplus creates two edges per row, one to handle updating the row's label and one to update the className:

    <tr className={row.id === store.selected() ? 'danger' : ''}>

Solid's approach doesn't create a dependency for the className, because there's no data to connect it to -- select state is stored purely as a cached <tr> from the view. That cuts the cost of bookkeeping in half.

Just to put some numbers on it, here's how Surplus comes out with the same approach.
[screenshot: Surplus results with the same approach, 2018-06-05]

@ryansolid your thoughts? I believe that Surplus is faster than Solid given equal approaches. We can both opt to do it or not, but I think the benchmark is a better benchmark if we both don't.

@ryansolid
Contributor Author

ryansolid commented Jun 6, 2018

Yeah I see. I actually hadn't drawn the connection to create/delete when I did it months ago, before I had even thrown it into the test harness (I was thinking it would optimize select), but that makes a lot of sense. As I said in the private message, Surplus is generally faster across the board; I wasn't expecting Solid to do so well. This was something I did really early on, when I had to do work on the Proxy Object to not break on DOM Elements (for refs), so I just threw it in. It isn't unlike a ref.

This sort of "optimization" I do a reasonable amount at work with Knockout, so I didn't think much of it. In those situations it isn't even a consideration of an optimization but a convenience. Although it breaks the convention, it's in event handlers. You already have the native DOM element. Since the move to Webcomponents and the retirement of most custom bindings, it becomes natural to do more custom DOM behavior there, especially if the DOM is essentially handing it to you. It doesn't take any more consideration than Event Delegation. I didn't consider it breaking the data-driven approach, from the perspective that I'm not relying on the DOM element to store the state but rather storing the element itself as the state. The className setting is incidental compared to setting the element as the selected state. It's still driven by data logic, but it's no more consequential than any other side effect.

I'm not opposed to removing it if everyone believes it should go for the sake of the benchmark, even if to me I see zero issues with it (I'd argue it's much less controversial than rAF). But I will say it's really cool seeing that with this optimization Surplus and VanillaJS are essentially the same. I see this in Surplus and I'm excited. Although this is the transparency escape hatch again to a degree, it shows how effortlessly you can get all the benefits of a data-driven, well-architected, modular modern library, with unmeasurable overhead over VanillaJS. I mean, you did it. That might actually be the most dangerous thing for the benchmark: where do you go when you've reached the baseline? Really exciting stuff.

This is buried now on a closed thread, but I'd love to hear what @leeoniya or @localvoid think. I gather, given their position on Event Delegation, they'd agree with you. But unlike, say, rAF, I will go to work tomorrow and do this and not think anything of it. Not that I wouldn't do rAF using this sort of library, but I'd only entertain it if I was looking to optimize.

@localvoid
Contributor

I think that it is quite sad that even when benchmark test cases are ideal for libraries with direct data bindings, you are still trying to use direct DOM manipulation, rAF hack that is useless in real apps and event delegation. It is like you are trying to hide deficiencies of the library.

@ryansolid
Contributor Author

It's not about deficiencies, as we all know it could be written the other way. It could be that the approach here is so close to the metal that breaking the abstraction as desired is way too tempting. That when dealing with things as primitive as Observable streams, controlling some of these aspects directly is natural. It's like using inline Assembly in a C program and talking about how great C is since it can do that without executing another binary. In one way that showcases C, but it isn't an indicator of C's performance, even if it were essential to choosing C as your platform from a performance standpoint. So it's more like fast-forwarding a few years to the Java vs C++ conversation around memory allocation and garbage collection.

So maybe I don't belong here. I've always been in the "No Framework", open-source-everything camp. From that perspective a framework is the combination of libraries to create a given solution. At baseline I'm going to do VanillaJS, and to manage complexity I'm going to look for tools that provide the right level of abstraction for the scenario. The key difference here is that VanillaJS is not ASM x86; it's something that is constantly improving and is completely human readable. I write libraries that attempt to approach the standard in a way that perhaps one day will make them obsolete: Webcomponents, ES Observables, Promises, ES Proxies. I try to use the available tools the best I can. Over time I've watched my libraries get smaller as polyfills have gone and standards have been accepted.

So what is interesting to me is where the logical separation of those libraries lies in terms of being able to opt in to different techniques and technologies, and what the overhead and tradeoffs of the abstraction are. Last week I may have been making my Web Components with a VDOM library; this week I only opt into a different renderer. Maybe techniques in change detection or VDOM have improved; I just use what I see fit. Professionally, obviously it isn't that rapid. But in maintaining long-lived large applications, the last thing I want is an Angular 2 or even a React 16.3. The frameworks are going to improve and change how they do things, and for all the benefits you can still get stuck. I'd much rather patch the gaps between libraries with a little bit of custom work, beefing up the abstraction here or there to prevent confusion/direct developers, than be at the mercy of a larger framework.

So maybe I'm just the C++ developer who is like: I'm OO, I can do what you can do, Java. I like that I manage my own memory and can easily opt into ASM if I desire. Or perhaps I'm experimenting with what the future of VanillaJS could look like. When I see results like what Adam posted above I'm almost giddy. I wasn't trying to cheat the system; I don't need to be fastest (Solid isn't). What I see is the gap between the abstraction and the baseline further shrunk, with what I see as minimal tradeoff. And I learned more about the cost of certain abstractions. Honestly, it's because a row has no meaning in this benchmark that I probably thought nothing of this. If you were selecting Users in a list you're right, you'd never hold the DOM element; however, when managing complex transitions/animations through onAnimationStart/End events you might.

@leeoniya @localvoid @krausest @adamhaile
So to the relevant stuff:

I just released a new version of Solid, so I'm going to submit a new PR. I think the stance I'm going to take here is that I won't use any techniques that other libraries aren't using, and if the application seems specialized I will do my best to do it in a generalizable way unless it's something I'm comfortable abstracting in the library. Ultimately we want our best attempt at an apples-to-apples comparison here. Personally I would way prefer if it was no holds barred, since that is the only absolute truth of the matter. But I understand it biases libraries designed to give you lower-level control. I will go with a "gentlemen's agreement" for the time being. More specifically, I think the stance should be:

  1. rAF should only be applied in a general way. If you are using it on clear, it should be used for all adding/removing/moving of items in the list. For fine-grained implementations that means no length === 0 clause. That means if VanillaJS so chooses it can use rAF, but only if it's for all of run, runLots, swapRows, remove, and clear operations. I'm gathering that is true of the ivi or domvm approach.

  2. Storing the selected row in the state and toggling its class should be forbidden. We should consider having VanillaJS do something different there (storing the id or row data and doing a lookup), even at the cost of performance. I think it comes down to the purpose of VanillaJS here a bit, mind you, since this is exactly the type of optimization I might consider to keep Vanilla out front.

  3. Event Delegation is a real thing. I mean, events bubble by design and it's a tool at our disposal. I realize it's more DOM-related than a VDOM library would like, but on the other hand Web Components are just DOM elements, so on that side there is no escaping it. Also, if there are Virtual DOM libraries that do event delegation in the background in their synthetic event systems, this can't be off the table and we shouldn't look down on Event Delegation.

If that seems agreeable I will submit a new PR shortly. @adamhaile I'm especially interested in whether you think that is a fair way to approach rAF.
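For reference, explicit event delegation in the sense of point 3 amounts to just a few lines: one listener on the table body that walks up from the event target to find the row. This is a sketch; the data-attribute convention and names are illustrative, not any particular implementation.

```javascript
// One delegated listener for the whole table. `isRoot` stops the walk at the
// delegation root; rows are identified by a data attribute on the <tr>.
function makeDelegatedHandler(isRoot, onRow) {
  return function handle(event) {
    let node = event.target;
    while (node && !isRoot(node)) {
      if (node.dataset && node.dataset.rowId !== undefined) {
        return onRow(node.dataset.rowId, event); // found the enclosing row
      }
      node = node.parentNode; // keep climbing toward the root
    }
    // clicks that land outside any row fall through and are ignored
  };
}

// In the browser it would be wired up roughly like:
//   tbody.addEventListener('click',
//     makeDelegatedHandler((n) => n === tbody, (id) => select(id)));
```

This is the part VDOM synthetic event systems do implicitly; done explicitly, it trades one listener per row for one per table, at the cost of the handler knowing the DOM structure.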

@leeoniya
Contributor

leeoniya commented Jun 6, 2018

@adamhaile

The 'danger' classNames on the tr's aren't set from data, they're set from the target of the click event.

whoa, i didn't even realize any impl other than vanilla was doing this 👎

@ryansolid

It's like using inline Assembly in a C program and talking about how great C is since it can do that without executing another binary. In one way that showcases C, but it isn't an indicator of C's performance, even if it were essential to choosing C as your platform from a performance standpoint.

This is a good way of putting it, IMO. There are plenty of opportunities for many impls to pop out of their slow cases and drop down to just hacking together a fast vanilla-style "solution" as a workaround just for this bench. To me, any necessary breaks in uniformity or additional complexity in an implementation has a strong smell of just gaming this benchmark.

I'm on board with your "gentlemen's agreement" points 1-3, since i've been trying to follow (or move towards) them myself and provide a good faith, idiomatic, data-driven implementation - even if it's somewhat slower. We'd probably want to double-check & update other impls as well to make sure everyone's on the same page.

@localvoid
Contributor

localvoid commented Jun 6, 2018

Also if there are Virtual DOM libraries that do event delegation in the background in their synthetic event system this can't be off the table and we shouldn't look down on Event delegation.

And this "event delegation in the background" also requires that each event handler be registered in the synthetic events system, and then there is additional overhead to dispatch events. With synthetic events I can also use explicit event delegation and reduce memory allocations by 3 per row.

@leeoniya
Contributor

leeoniya commented Jun 6, 2018

@localvoid

i don't follow your logic. implicit (lib-level) and explicit (impl-level) delegation are pretty different, aside from conserving resources.

with implicit delegation, to the user it will appear as if they are binding 3 listeners when in effect they can be binding 0 listeners and only storing contextual data on a vnode to be passed along to the handler when a single top-level listener triggers the callback on the vnode closest to the event target. it's 100% transparent to the user unless the app relies on some decidedly poor practices like multiple listeners for the same event type on the same dom branch, possibly at different levels and some convoluted preventDefault/stopPropagation logic.

explicit delegation is different in that it likely breaks component isolation, relies on specific dom structures or classNames being present on dom nodes etc. domvm's impl had a lib-assisted form of this delegation, but it has been removed and now attaches listeners directly. however i'm evaluating moving to implicit delegation, since it is transparent to the user in 99% of use-cases and has no impact on the impl semantics.

maybe we're talking about different things, i dunno.
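to illustrate what i mean by implicit delegation, here's a bare-bones sketch (not domvm's actual code, and the names are made up): the user-facing `on()` looks like a normal listener binding, but only one real listener per event type is installed at the root.

```javascript
// Lib-level (implicit) delegation: user code "binds" handlers per node, but
// the library stores them in a map and installs a single native listener per
// event type at the root, dispatching to the registered node closest to the
// event target.
function createImplicitDelegation(root, addNativeListener) {
  const handlers = new Map();  // event type -> Map(node -> callback)
  const installed = new Set(); // types that already have a real listener
  return {
    on(node, type, cb) {
      if (!handlers.has(type)) handlers.set(type, new Map());
      handlers.get(type).set(node, cb); // looks like a normal binding to the user
      if (!installed.has(type)) {
        installed.add(type);
        addNativeListener(type, (event) => {
          const byNode = handlers.get(type);
          // walk up from the target to find the closest registered node
          for (let n = event.target; n; n = n.parentNode) {
            const found = byNode.get(n);
            if (found) return found(event);
            if (n === root) break; // don't escape the delegation root
          }
        });
      }
    },
  };
}
```

the 100%-transparency caveat applies here too: this sketch ignores capture phase, stopPropagation, and multiple handlers per node, which is where the "decidedly poor practices" edge cases bite.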

@ryansolid
Contributor Author

The interesting thing to me is that with Webcomponents, communicating up with events is standard. There aren't render props. You can't guarantee at all how a child component's internals work, so you rely on fairly standard things. Even something as commonplace as trying to pass in more complicated templates that don't fit the slot-replacement mentality has several challenges cross-framework. I use custom events a reasonable amount. I'm gathering implicit delegation can be standardized, but my point is that these events could be anything, so the decision of how they are used can vary.

This implicit vs explicit thing seems to be the sticking point here, since I'm gathering the goal with many libraries is to almost ignore that the actual DOM exists and abstract from it completely. The thing with that is internally they could be doing any of these things. I'm gathering you guys are fine with that as long as it doesn't end up in the end-user domain.

Is it the fact that it requires explicit knowledge of delegation, or is the manual manipulation more of the problem? Like, if on a binding you indicated that it should be delegated some way (like those crazy symbols Angular uses), but wrote the expression as normal on the element representation so the end user was not looking for classNames on elements, would that seem reasonable?

Given the realization of how critical each individual hook in a list is for fine-grained libraries, I'm going to eventually explore the potential of writing the template mostly as normal but indicating that something should be delegated. If a computation is going to need to fire on every row whenever the value changes anyway, you don't need 1000 events, you need 1. So I'm looking at behind-the-scenes not only event delegation but binding delegation, to see if there are any potential wins there. At that point though, it seems unfair to set any restrictions on VanillaJS.

So I don't know. I don't think VanillaJS can be held to the same standard, although I did look at it to decide the best way to do stuff in the benchmark. I think this is an easy trap for anyone looking to submit an implementation. But if we figure out ways to generalize or incorporate optimizations into our libraries in meaningful ways, it's unfair if VanillaJS can't benefit.

@localvoid
Contributor

localvoid commented Jun 7, 2018

@leeoniya

maybe we're talking about different things, i dunno.

I guess so :) I am talking about the overhead involved in registering synthetic event handlers. Yes, I am registering just one native event handler, but I still need to register synthetic event handlers, each synthetic event handler also requires a wrapper object (ivi impl), and when nodes are removed from the document I also need to unregister synthetic event handlers. It is not as cheap as explicit event delegation.

EDIT: I forgot that there are two event handlers per row, so it is actually 5 additional memory allocations per row.

@ryansolid

I don't think that vanilla should follow the same rules.

I thought that this benchmark is about measuring the cost of abstractions, because we already know how fast the vanilla implementation is. So why hide these abstractions? Use web components to encapsulate the internal DOM structure and behavior of rows :)

I'm gathering you guys are fine with that as long as it doesn't end up in the end user domain.

Encapsulation. If a benchmark implementation avoids it everywhere, it is a red flag that either the library doesn't have tools to build complex apps, or someone is just trying to hide something.

@adamhaile
Contributor

Lots of discussion, so just to throw in my 2c:

  • I'm a big fan of a "gentleman's agreement" approach. @krausest has done the community a big service with this benchmark, and the worst way we can repay him is by giving him heartburn from our griping in his issues. Let's try to solve this on our own.

  • I've said this before, but I don't think vanillajs should have to play by the same rules as the other benchmarks. I think a) it should be renamed to something like 'baseline', and b) it should be full-on optimized to be the fastest known way to drive the DOM through the tasks. It's the rabbit we greyhounds chase, not the fastest greyhound. I'm happy to take a first crack at this if the idea has general appeal.

  • I like @ryansolid's points 1-3 but would generalize #2 a bit to say that being "data-driven" means that it should be possible to recreate the view from the data. Storing select state with a tr is just one way that principle can be violated.

  • I think it's unfortunate that rAF is an advantage in this benchmark when, so far as I've seen from my testing, it's overall slower and raises complications. I'm happy to hear counterarguments if someone disagrees or has seen otherwise. It seems a bit weird to say that it's OK to use rAF if you're willing to (IMHO) damage your framework for the sake of this benchmark by making it a framework-level capability. I'm still curious why it speeds up clears, but would argue that, in general, we should just agree not to use it.

@leeoniya
Contributor

leeoniya commented Jun 7, 2018

I like @ryansolid's points 1-3 but would generalize #2 a bit to say that being "data-driven" means that it should be possible to recreate the view from the data. Storing select state with a tr is just one way that principle can be violated

👍

i think vanilla can do whatever it needs. i still believe the edge case rAF thing to be ultra obscure and chrome specific even for vanilla.

fwiw, i'd be fine forcing domvm into pure synchronous redraw mode if everyone thinks it'll be more fair, though i find it awkward from an impl perspective to avoid the default framework behavior.

@ryansolid
Contributor Author

ryansolid commented Jun 7, 2018

Yeah from what I've seen only large DOM operations benefit from rAF. I was testing with more rows and it improved things almost across the board when I got to 20k or 30k rows. But below that when it starts becoming the sweet spot (which is probably dependent on the processing of the client) the results were more variable even if they were occasionally better. Something like remove one row in this benchmark is noticeably worse with rAF. The large inserts are variable, and the replace is the tiniest bit faster.

The problem is that the effect it has on the clear outweighs all the other negatives. My experience is that on average you will score better on this benchmark if you use it, even for everything, versus not using it, just because the clear result is such a big deal. So the second people opt into it in any capacity, it skews results. I don't see how you prevent libraries from unintentionally hitting this landmine if, from a convention-over-configuration perspective, their intention is to prioritize smoothness or something like that. Since there is a performance hit for using it across the board, I would expect libraries that care about performance not to use it by default, but we've seen this not to be the case. I suggested using it for all operations to try to find some common ground, but the second someone in their implementation decides to variably apply it behind the scenes based on some heuristic (like the delta of row changes), things aren't the same again.

I think that makes it really hard to have any lasting middle ground. On the extremes: either allow it, at which point we have implementations just doing the chrome hack straight up, or disallow it universally, which means that libraries offering rAF updates would need a way to disable them. The only middle ground we have today is based on having a good idea of each other's implementations, so we can do something seemingly reasonable. But the next time someone comes in like myself, this will happen again.
