Proposal: DOMChangeList #270
Comments
This introduces quite a few concepts for manipulating the DOM for something that's ultimately an optimization. This seems like an optimization that, with a slightly improved DOM API, could accomplish the exact same thing with only minor additions to the API (perhaps ~~a suspend and resume layout plus~~ a way to make elements so you can generate them external to the current thread and have the rendering engine apply them at once, similar to your proposal but significantly less complex).
What would the existing APIs that trigger a layout flush do while layout was paused? Would pausing layout force an immediate layout flush to update all layout caches? If not, how would they compute the "old" layout once mutations have begun? One of my concerns with proposals to augment the existing DOM APIs with some sort of transaction is that you have to decide what happens to all of the DOM's APIs while the transaction is in flight. In contrast, the proposed API is a closed set of operations, disallowing any other interleaved operations, making it possible to understand the complexity of the entire API.
Great to see this! I'm coming at this in comparison to Dru's proposal at https://github.com/drufball/async-append/blob/master/EXPLAINER.md, so let me share some thoughts:

As far as I can tell this proposal focuses mostly on trying to minimize allocations (but see below) and on providing a proxy API that can be used in a worker, instead of focusing on allowing chunking and scheduling of parsing/style/layout/paint as Dru's proposal does. I can't tell at first glance whether it can serve both purposes, or whether its design prevents that. There's a brief paragraph under "Is this actually faster?" number 3 that indicates it is intended to be compatible, but I'll defer to the Blink engineers.

I think the proposal might have a misapprehension about how DOM nodes and wrappers are related. Remember that wrappers are not created until they are accessed. With this in mind it seems like the proposal just substitutes one set of wrapper allocations (NodeTokens and DOMTreeConstructions) for another (Nodes and DocumentFragments). Looking at the example code seems to bear this out; counting wrapper allocations, it seems to have the same number as normal DOM manipulation code. (And remember a wrapper is just a pointer; a wrapper to a NodeToken and a wrapper to a Node are both the same cost.) If you assume the DOM nodes will eventually exist anyway, as Yehuda discusses, Yehuda's proposal actually seems to have more allocations: NodeToken backing + NodeToken wrapper + Node backing, vs. Dru's proposal which has Node wrapper + Node backing.

One concern I have about the idea of running these things in a worker is that the IPC involved in getting back to the main thread (in the case of hundreds or thousands of nodes/nodetokens) would probably be a significant cost. I'm not sure whether it would be greater than or less than the cost of just doing the work on the main thread; similarly I am not sure whether the IPC would be significant enough to cause jank. But maybe all you have to IPC is a single memory location of the queue, in which case it is presumably fine.

So, going through the three high-level motivations Yehuda lists:
Also going through the "Applying a Change" bullets:
So it seems to me like the proposals are very similar, with the main divergence being the desire to allow building the queue of mutations in a worker in Yehuda's proposal, which necessitates creating a parallel object hierarchy in order to build the queue over in the worker, before then transferring the queue to the main thread before (or during?) a commit.

To me it feels like an empirical question whether the gains from building the parallel DOM-like tree in the worker instead of building a DOM tree in the main thread outweigh the costs of transferring it. There's also the empirical question of whether building a DOM tree in the main thread actually takes long enough to cause jank; if it takes e.g. 1-2 ms for thousands of nodes, then it'd probably be best to just eat the cost and avoid the conceptual burden of introducing a parallel object hierarchy, or at least save it for a future extension once we have gotten rid of all the more low-hanging perf fruit and are grubbing for a few milliseconds.
No, a pause wouldn't flush; only a resume would. This is how other frameworks operate. Regardless, I struck the suspend/resume from my comment as it unnecessarily complicated my position.
Transaction capability typically takes on a begin then rollback or commit pattern. This proposal doesn't do that. This proposal creates multiple new APIs to do what the existing DOM APIs can already do today, except that it applies them in bulk when you're finished. This proposal, in my opinion, could be significantly simplified by simply making it: the destination is the HTML node to apply all your changes to and the source is your separately built DOM. No reason the underlying engine couldn't handle all the complexity you introduced.

The DOM API is inelegant; I don't think it's a good idea to incorporate increased complexity for a theoretical optimization. If possible, re-use or work towards a better designed API that lends itself better to transactions. Transactions would be pretty cool.
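For illustration, a minimal sketch of the simpler shape being suggested here, assuming a hypothetical two-argument `document.applyChanges(destination, source)` that is not part of the actual proposal:

```js
// Build the new subtree off-document using the existing DOM APIs...
const source = document.createDocumentFragment();
const heading = document.createElement("h1");
heading.textContent = "Hello world";
source.appendChild(heading);

const destination = document.getElementById("main-article");
// ...then hand it to the engine to apply in one go.
// Hypothetical bulk-apply call; today this would simply be destination.appendChild(source).
document.applyChanges(destination, source);
```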
@domenic most of the "allocations" in this proposal are not actually allocations. Integers are created, which should be much more lightweight than actual objects and don't impact GC and such. (And also, I think it might be a good idea to define

@KrisSiegel the API is not meant to be elegant. It's meant to be fast. Having discussed this with implementers, it seems unlikely we'll get proper transactions. Doing that efficiently would be very hard and unlikely to materialize anytime soon.
@annevk I don't think the WHATWG should consider an API only for speed. The DOM APIs are public for web developers to use, so usability of APIs shouldn't be forgotten or tossed away (this is how we ended up with jQuery and a new front-end framework every two weeks). The DOM API is inelegant enough as is. I would like to see more reuse of existing APIs where possible (and I see no reason why you couldn't do so while still accomplishing the goals of the proposal).

I'm curious how efficient the passing of node objects to and from web workers would be. I could see that being useful in more scenarios than just this one.

Yeah, I wouldn't be expecting transactions for any type of GUI state. It would be cool, but implementing it efficiently while maintaining generic capabilities is very difficult.
Transactions are pretty much impossible due to how
cc/ @drufball @esprehn @n8schloss

@domenic has summarized some of my thoughts pretty well, but I'd just like to add my 2c. :) This proposal has three parts that are potential performance benefits.

(1) Worker Construction of Tree. It may be worth thinking about this part of the problem as a separate API which integrates nicely with this proposal and the regular DOM APIs?

(2) Tree Construction. @wycats, you mentioned some experiments with

It looks like this is a pretty simple API to actually implement (if you are just worried about the bindings-type cost); getting numbers from someone wouldn't be too hard? If there is a significant performance difference, this should then be compared to a polyfill of the current DOM API on top of this API, i.e. when a sync DOM API is called, perform the mutation list synchronously. Blink's implementation has the ability to create IDL APIs that live in JavaScript instead of C++. If this is a huge performance win, then theoretically engines could change their implementations to mitigate this. Again, we really need numbers to be convinced here.

(3) We think it's really valuable for engines to be able to resolve

One thing that I think is missing in this API is the "DoItNow" button, i.e. calling this API would perform all the changes synchronously like the current DOM APIs. As an example, say your UI can tolerate 100ms (~6 frames) of the async DOM not appearing, but after that it would prefer to jank the main thread, as opposed to delaying further. A sync "commit" function would do this. One way to do this with the current API would be:

```js
changes = new DOMChangeList();
/* build up changes here */
changes.execute().then(() => {
  // changes aren't actually performed until you call "commit" here?
  // e.g. all the style/layout/etc. work is performed; "commit" just makes it appear in the DOM tree.
  changes.commit();
});
requestAnimationFrame(() => {
  changes.commit(); // Promise above is now not resolved, as "commit" has already been done.
});
```

We see really large benefits from this part. ^-^
That was my intent. Getting many of the benefits people are after with a simple-to-implement API should be valuable.
That is definitely the intent of this proposal. By making
Good request! I'd have no problem adding it if engines thought it wouldn't introduce new problems 😄
Great!
Hmm, can you expand on this? I didn't see any integers in this proposal. Going through the sample code:

```js
// Usual DOM API:
let article = document.getElementById('main-article'); // Node wrapper (1)
let text = article.firstChild; // Node wrapper (2)
text.data = "Hello world"; // no allocations
article.setAttribute("data-active", "yes"); // no allocations
article.removeAttribute("data-pending"); // no allocations

// ChangeList API:
let articleToken = document.getToken(document.getElementById('main-article')); // Node wrapper + Token backing + Token wrapper (3)
let changes = new DOMChangeList(); // ChangeList backing + ChangeList wrapper (5)
let textToken = changes.firstChild(articleToken); // Token backing + Token wrapper (7)
changes.updateTextData(textToken, "Hello world"); // no allocations
changes.setAttribute(articleToken, "data-active", "yes"); // no allocations
changes.removeAttribute(articleToken, "data-pending"); // no allocations
document.applyChanges(changes); // promise (8)
```

I see 2 allocations for normal DOM APIs (both simple wrapper allocations, which are "an integer" (pointer) size each) and 8 allocations for the ChangeList example. And that isn't including the transfer cost.
I also reviewed that in detail before I began work on this proposal.
That is certainly an important characteristic of this proposal. It also has several other goals, both related to efficiency and expanding capabilities. (one example is supporting the superset of the HTML syntax and the DOM API, which I talk about in the proposal; another is allowing the end-developer to directly control the staging of work across threads by using workers).
I'll reply inline.
I don't think you should see ChangeList as a "proxy", but rather as a collection of instructions that can be freely transferred until it's applied.
Since the application of a
This is something I spent a lot of time talking to @lukewagner, @smaug--- and @annevk about, and our intent was for this API to serve both purposes. As I said in the proposal: "If there is some reason that a concrete implementation of this design might not be able to accomplish these goals, it's almost certainly something we should discuss."
I'm very eager to hear from them about this. I'd be very happy to have a call with anyone interested in this area as well.
This particular constraint was one of the most important aspects of this design, and I spent a lot of time thinking about it and discussing the details with others. The intent of
It looks like Dru's proposal creates several new async APIs for existing operations. It also postulates a "wrapper" that could be used with the regular APIs. I understood Dru's proposal as an early exploration with several different possible directions; can you flesh out which of those directions your comments are talking about?
From my original statement about the union of trees: "the HTML parser supports more tag names, while the DOM API supports more trees, such as custom elements nested inside a table". DOMChangeList also supports using the HTML parser, but the HTML parser limits which nodes can be made children of certain elements. We could perhaps fix
I have heard this thinking before and I think it might misapprehend the reasons people want to build DOM trees and calculate DOM mutations in a worker. It's not so much about shrinking the cost of the DOM work, but rather about moving all of the JavaScript work that is necessary to calculate the needed mutations off the main thread. In practice, a lot of application work is done on the UI thread simply because of the close proximity of the DOM APIs.
There are many small paper-cuts that the ChangeList proposal addresses in one, fairly contained proposal. It attempts to create an API that, when used, has noticeably fewer common gotchas for web developers, and does not require library, framework or application developers to reverse engineer which APIs produce hazards, and keep that knowledge up to date as implementations change.

I am especially interested in feedback from implementors that there is something about this proposal that is difficult to implement, or that would make reasonable initial implementations slower than the equivalent use of DOM APIs. Any suggested changes to this proposal that would improve
@wycats We really need numbers for point (2), i.e. whether bindings-related costs are really the problem, before going too deeply into API design, however. Without numbers on whether this will actually get us a significant performance benefit, it's impossible to tell if this is the right design or not. I.e. we could get a simpler API if this wasn't a significant problem. For example just the

Are you able to get numbers for (2)? Or work with someone to get numbers for this?
Well, one issue with the above proposal is that it uses a type notation (TypeScript?) that implies type checking. This would necessitate creating Web IDL interfaces in order to get type checking so that you couldn't pass arbitrary values. It in fact uses a generics notation that implies we'd need (when converted to Web IDL) TextNodeToken, CommentNodeToken, ElementNodeToken, etc. So even if it's conceptually a simple internal value, it would need to be implemented as several wrappers + backing classes where the backing classes have those simple internal values.
What I meant is that both of Dru's proposed APIs have minimal extra surface. For example they allow you to create Nodes and e.g. set their text data or attributes using the usual DOM APIs, whereas yours involves creating NodeTokens and using the parallel interface for such operations. In other words, instead of creating a parallel DOM API with a full parallel object hierarchy of NodeTokens and APIs that operate on them, Dru's proposal only duplicates the few APIs necessary for queue-building.
This is a great point; it's good to bring it up. It certainly makes the empirical question harder, but still worth investigating.
I've given some fairly concrete feedback about the increased allocation cost, which hopefully can make this clear. I'll defer to those more involved on the implementation side to say more.
The intent of the TypeScript notation was just to show clearly what kind of NodeToken I expect each part of the API to produce. The only time I expect any kind of checking to be mandatory is when applying the ChangeList, because for example it would be a bug to
Thanks for the clarification. One of the problems I'm hoping to address is that not all operations are safe or reasonable to use inside a transaction, and certainly many operations are not safe if you want the resulting change list to be transferrable. By starting with a minimal API that nonetheless covers mutations of the DOM as a data structure, we can build up an API that doesn't have surprising or problematic behavior. I don't expect to need a much larger set of lowest-level APIs; much of DOM 1-4 is described in terms of a lower-level set of primitive DOM operations, and my intent is to flesh out the base layer of those APIs in a way that works across workers, supports asynchronous application of the change list and smart scheduling by the engines, and avoids user-space interleaving of dangerous operations.
I've spoken with Bill McCloskey at Mozilla, who has an intern looking into building an initial implementation of this proposal. Since I don't work at Mozilla, I obviously can't hold them to this, but hopefully we'll see some useful results from their work. But still, it doesn't seem like it should be necessary to implement a feature to discuss general performance implications of the design. Generally when we find ourselves getting close to a broad-strokes design that people find promising, that's when implementation work starts and we start getting more fine-grained performance feedback.
I can give more information later, but the TL;DR is: instead of building up DOM and interleaving dynamic work, Ember has experimented with (and shipped, as part of Glimmer 1) an optimization that uses

The entire benefit of this optimization is to avoid reifying static nodes in JavaScript, and we saw very significant improvements to templates with minimal dynamic content (and therefore shipped the optimization). We still need to walk through some static nodes to reach the dynamic ones, and the
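A rough sketch of the kind of cloneNode-based optimization described here (the template markup and the `render` helper are made up for illustration):

```js
// Static structure is parsed once into a template...
const template = document.createElement("template");
template.innerHTML = '<li class="entry"><span></span></li>';

function render(text) {
  // ...and cloning it avoids reifying each static node in JavaScript.
  const fragment = template.content.cloneNode(true);
  // But we still have to walk through static nodes to reach the dynamic ones,
  // which allocates a wrapper for every node on that path.
  const span = fragment.firstChild.firstChild;
  span.textContent = text;
  return fragment;
}
```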
Sorry, have to disagree with you here. For most APIs, proposing a broad-strokes design first is great. However, when explicitly proposing a performance API, it should come with the burden of proof (or be obvious) that it will solve the performance problem. Again, if bindings aren't really the problem, we could potentially get away with a simpler asyncAppend + WorkerNode design, which is a significantly different API surface.
Yes please!
Something like this has been mentioned in very broad strokes a few times. I would be very interested to see what such an API would look like in more detail.
I generally agree with the principle that there should be some reason to believe that the API will improve performance. In this case, there are multiple good reasons to believe that, in addition to the reduced cost of bindings:
These are all "paper cuts", but they're precisely the kind of paper cuts that we see adding up in real applications, and have a lot of difficulty avoiding at scale in practice (indeed, I learned about the impact of many of them by watching great talks by Google engineers at Google I/O!). For what it's worth, Ember applications (for example, LinkedIn's Mobile app, which is where I've been focusing much of my performance effort lately) can easily perform thousands or tens of thousands of discrete operations during initial render, so small costs (especially associated with garbage collection) can quickly add up.
Wouldn't custom element callbacks run inside the operation, making all the intermediate states visible? I don't think we can skip running the constructors or attached callbacks and do a layout; most elements mutate their state in there, so we'd be doing a wasted layout. This API doesn't handle the creation of ShadowRoots either.

I've been profiling many JS applications recently and wrapper creation is rarely the place where costs add up, certainly not enough to justify this style of API. Specifically, today I looked at Amazon, Facebook, Polymer iron-list, and several AMP pages. None of them would see much of a performance improvement from this API.

Note that we're moving to much wrapper-heavier APIs all over the platform as well; see for example the Typed OM in CSS, custom paint, layout, etc. I'd really hesitate to attempt this kind of optimization around Nodes when so much of the rest of the platform is moving to create more objects anyway. We need to just make wrapper creation faster if that's an issue you observe.

We certainly can think about what it means to construct nodes inside a worker, perhaps with a stream interface, or some kind of WorkerNode interface where you build a tree and then transfer it. Perhaps we need to start by untangling the various problems you're trying to solve here.
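A small sketch of the concern about custom element callbacks (a hypothetical element, for illustration only):

```js
class MyWidget extends HTMLElement {
  connectedCallback() {
    // If reactions like this run while a change list is being applied, script can
    // observe a half-applied tree: siblings from later instructions don't exist yet,
    // and reading layout here would force a flush of that intermediate state.
    this.dataset.siblings = String(this.parentElement.childElementCount);
  }
}
customElements.define("my-widget", MyWidget);
```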
It seems like there's general agreement that some kind of transactionality is desirable, but with a key unresolved question being whether we should strive to avoid exposing intermediate states. I find the idea of exposing those states pretty concerning, and I think it's worth trying to achieve an API that avoids it, which AIUI is what this proposal is aiming at.

I also find the idea of transactional change records an appealing primitive for modeling the DOM -- it seems like a good "explain the platform" approach: an execution trace is a sequence of DOM changes. So where other APIs @esprehn mentions might use wrappers to talk about existing nodes, it makes sense for transactional change records to operate at a lower conceptual layer.

Also, I'd be pretty worried about arguments saying that absent empirical data, we should have our primitives do more things and bet on our optimizations getting smarter. I think our default orientation should be the reverse: our primitives should be simpler and try to do less.
I think Ian was saying that we need to see numbers supporting the idea that this specific incarnation of the API will lead to more performance benefits than a different incarnation. You're right that this proposal would have a lot of performance wins. But looking at the improvements you mentioned, it seems like they would also be received from any general
To make sure I am clear, the point here is that by proposing a separate API surface, we could exclude any operations that would be invalid coming from a Worker because they depend on a relayout on the UI thread? This reasoning is compelling to me, but is there a way we could introduce similar benefits without introducing an entirely new API surface?
It sounds like this point is still a bit unclear: 1) this proposal may require more bindings? 2) it's unclear whether reducing bindings would lead to significant performance improvements.
This benefit could be achieved through any
This benefit could be achieved through any

In short, I don't think there's any disagreement that we'll see performance wins by allowing DOM in a worker and asynchronous DOM appending. But we need to see compelling numbers that this version of the API leads to more wins than other versions before committing to it.
Before I do that, I'd be very interested to read a detailed
More than that, the API gives developers a way to express a batch of DOM operations that by definition cannot accidentally interleave accesses that would force a layout flush. Because the batch is applied without interleaving JavaScript code, the API creates a natural staging between blocks of pure DOM mutation and blocks of CSS-dependent changes.
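To make the staging point concrete, a sketch contrasting the two styles (the `items`, `list`, and `render` names are placeholders; the `DOMChangeList` usage follows the proposal's example):

```js
// Interleaving a layout-dependent read between mutations forces a layout flush
// on every iteration:
for (const item of items) {
  list.appendChild(render(item));
  console.log(list.offsetHeight); // synchronous layout flush each time
}

// A change list is pure mutation by construction: nothing on the object can read
// layout, so the engine can apply the whole batch before the next flush.
const changes = new DOMChangeList();
// ...append the equivalent mutations to `changes` here...
document.applyChanges(changes); // applied without interleaved script
```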
If it's impossible to implement the
As long as the only mutations we're talking about are various forms of
I really want to see a detailed version of a WorkerNode-style proposal. I must have missed previous discussions fleshing it out; it's very hard for me to compare the impact of such a proposal against this proposal. I would really love a pointer to more details that could help me do a careful cost accounting of the different proposals.
The TypeScript might have been too confusing. To reiterate, it's just there to illustrate the conceptual model. In an implementation, all

So basically, we're just creating an

Also, the handwaving about

(Furthermore, having new primitives helps address the mismatch between the HTML parser and DOM APIs in terms of name validation.)
Can you give more detail, perhaps IDL? Are you actually using

Remember that the in-JS allocation cost of a wrapper is the same as the allocation cost of an integer (on a 64-bit system); both are 64-bit integers.
```webidl
interface DOMTreeConstruction {
  unsigned long openElement(DOMString name, optional NamespacePrefix ns = "html");
  void closeElement();
  void setAttribute(DOMString name, optional DOMString value = "", optional NamespacePrefix ns = "");
  unsigned long appendText(DOMString text);
  unsigned long appendComment(DOMString text);
  Bounds appendHTML(DOMString html);
};

enum NamespacePrefix { "", "html", ... };
```

(Maybe

Spoofing does not matter. That would simply result in an error when the "C++ side" uses the instruction set to create a tree. And since we cannot support rollback and proper transactions anyway due to

I'm not entirely sure what you're saying. The cost to create a promise or a node is equal to creating a number?
Not a promise, but for a Node, the JavaScript-side cost is equal, yes. (It is just a pointer to the C++ backing object.)
Sure, but you have to allocate the backing object too. Are you not counting that? This approach doesn't require a backing object since it's just an instruction set.
You have to allocate the backing Node in both approaches, as Yehuda pointed out in his OP.
No, you don't, and I don't think he did. All we're doing is building up an instruction set. Actual node creation happens during commit.
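A speculative sketch of what "just an instruction set" could look like under the hood: tokens are plain integers and each operation appends to a flat buffer, so building the list allocates no per-node backing objects (all names here are invented for illustration):

```js
const OP_OPEN_ELEMENT = 1;
const OP_SET_ATTRIBUTE = 2;
const OP_CLOSE_ELEMENT = 3;

class SketchTreeConstruction {
  constructor() {
    this.ops = [];       // opcodes plus string-table indices
    this.strings = [];   // interned strings referenced by index
    this.nextToken = 0;
  }

  intern(s) {
    const i = this.strings.indexOf(s);
    return i === -1 ? this.strings.push(s) - 1 : i;
  }

  openElement(name) {
    this.ops.push(OP_OPEN_ELEMENT, this.intern(name));
    return this.nextToken++; // the "NodeToken" is just this integer
  }

  setAttribute(name, value = "") {
    this.ops.push(OP_SET_ATTRIBUTE, this.intern(name), this.intern(value));
  }

  closeElement() {
    this.ops.push(OP_CLOSE_ELEMENT);
  }
}
```

In such a scheme, actual Node creation would only happen when the engine walks the instruction buffer during commit, which matches the point above.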
Hi, I have been working on something similar. I created a format called vDOMSON that is a subset of JSON. My dream is to standardize how vDOM nodes are represented, so we can completely omit

My first attempt is to convince the snabbdom project to adopt vDOMSON. What is cool about this approach is that the Web Workers do not need to import whatever vDOM lib is used, since vDOMSON is just a subset of JSON. Furthermore, the vDOM representation is fully decoupled from the patching algorithm. I am transmitting these vDOMSON trees through the postMessage API; depending on the browser and message size, stringify or structured copy is fastest.

What I would like to do to improve performance further: instead of messaging the entire vDOMSON tree, I would run the patch in the web worker and only transmit the necessary changes to the main thread, where the actual DOM is touched. I don't know if transferring objects from a sub-thread to the main thread could beat a really fast structured copy; the structured copy is a simpler approach and I haven't hit any noticeable performance bottlenecks.
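I don't know vDOMSON's actual shape, so purely as an illustration of the idea (a JSON-only tree built in a worker and handed to the main thread for patching):

```js
// worker.js -- no vDOM library needed; the tree is plain JSON
const tree = ["div", { class: "counter" }, [
  ["span", {}, ["Count: "]],
  ["span", { "data-value": "0" }, ["0"]],
]];
postMessage(tree); // structured clone (or JSON.stringify, whichever profiles faster)

// main.js -- the patching library turns the JSON tree into vDOM nodes and real DOM
// worker.onmessage = (e) => patch(container, toVNode(e.data));
```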
With the arrival of shared array buffers and atomics, this IPC overhead should be little to none now, since the instruction backing buffer can be transferred via a SAB.
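A tiny sketch of that idea, assuming the instructions are already encoded as integers (as in the sketch above; `applyInstructions` is a made-up name):

```js
// worker.js
const sab = new SharedArrayBuffer(1024 * 1024);
const ops = new Int32Array(sab);
Atomics.store(ops, 0, 42); // write instructions into shared memory
postMessage(sab);          // only the reference crosses threads; the buffer is not copied

// main.js
// worker.onmessage = (e) => applyInstructions(new Int32Array(e.data));
```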
If they actually use numbers, though, we run into the forgeability problem pointed out in the first part of #270 (comment)
@domenic How much can existing implementations create new value types internally? I think it's important that these values aren't real pointers, but any kind of unforgeable (but copyable) value would be fine.
For those following along, I've started an implementation of the tree construction part of this proposal for use in Glimmer: https://github.com/glimmerjs/glimmer-vm/tree/master/packages/%40glimmer/dom-change-list. If there's interest in helping to work on this outside of Glimmer, I'd be happy to extract it into its own repository. I'll also be publishing regular npm snapshots pretty soon.
@wycats I have been implementing a library for encoding and applying
@wycats To be specific, one thing that I'm really interested in, which this proposal does not cover, is event handling, and I could really use your input on that if you have any thoughts.
@Gozala what's the easiest way to get in touch?
I looked into the concerns around
(Given the nature of integers in

So in conclusion I think both designs are viable.
I think it's worth pointing out this experimental library I just put together: https://github.com/AshleyScirra/via.js

This allows full access to all DOM APIs using Proxies. It effectively builds an internal list of DOM commands and posts them to the main thread to be carried out. It happens entirely in JS land. I think its main advantage over DOMChangeList is it allows you to use all APIs the same way you do in the DOM, rather than a limited tree-construction subset with its own API. For example you can do

The main problem is that it leaks memory. GC is not observable, so it's not possible to identify when the placeholder Proxy objects on the worker get collected and drop their corresponding references on the main thread. I think this can be solved with a special

If I read this proposal right, it sounds like this WeakKey idea is like NodeToken but generalised to represent any DOM object at all. This significantly broadens the use cases it covers. For example we want to move our entire HTML5 game engine into a worker, but the lack of APIs like Web Audio is a much bigger problem than whether or not we can create nodes. Via.js lets you do both: build up lists of DOM commands, and access arbitrary APIs only present on the main thread.
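Not via.js's actual implementation, but a minimal sketch of the Proxy-recording technique it describes: property accesses and calls on worker-side placeholders are appended to a command list that the main thread later replays against the real objects.

```js
const commands = [];
let nextId = 0;

function makePlaceholder(id, path = []) {
  return new Proxy(function () {}, {
    get(_target, prop) {
      // Record nothing yet; just extend the property path.
      return makePlaceholder(id, [...path, prop]);
    },
    apply(_target, _thisArg, args) {
      // A call is a command; its result is itself a placeholder.
      const resultId = nextId++;
      commands.push({ target: id, path, args, resultId });
      return makePlaceholder(resultId);
    },
  });
}

const doc = makePlaceholder(nextId++);
const div = doc.createElement("div");
doc.body.appendChild(div);
// Later: serialize `commands` (resolving placeholder arguments to their ids) and
// postMessage them to the main thread, which replays them against the real document.
```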
Could I get some sort of status update on this concept in general? (Evolutions, meeting notes, etc.?) Haven't really heard or seen anything about it anywhere, either here or on the WICG discourse. (I'm not active on any of the W3C/WHATWG mailing lists, though.)
There's no active mailing list. There was a meeting at a Mozilla All Hands, but that wasn't really conclusive one way or another. The main problem here is lack of implementer interest.
I wrote a blog post that covers some of the options to better support implementing this entirely in a library: https://www.scirra.com/blog/ashley/38/why-javascript-needs-cross-heap-collection
@annevk Sorry, I meant any of the W3C/WHATWG mailing lists. (I edited my comment accordingly.)
Also, I'm not a huge fan of the API proposed - it's a bit too verbose and too much of a shotgun for something that's really just performing transactions on nodes. I'd personally prefer there was a way to just force browsers to batch something without having changes exposed to the main thread until the next animation frame, and a way to run changes in the layout pass to reduce the impact of style recalcs (which account for 90% of the perf issues with DOM frameworks).

Edit: Here's my thought on such an API, one that's a bit less ambitious:

```webidl
// `execute` runs after animation frames, after all previous transactions have been
// executed, but before intersection observers have been run. Any transactions started
// here are deferred to the *next* animation frame.
//
// When `execute` is called, if for any property of a node all of these are true, then an
// `InvalidStateError` should be thrown:
// 1. The node being mutated is in the calling document.
// 2. The node is live, or would have been made live in the transaction.
// 3. UI-visible attributes of that node are read/modified, including ones like
//    `parentNode` and even if indirectly (like in a transaction).
// 4. The node *is not* in `read` (if read) or `write` (if modified).
dictionary LayoutOptions {
  AbortSignal? signal;
  sequence<Node> read = [];
  sequence<Node> write = [];
  // I know this is invalid syntax, but `this` is supposed to be the layout options object
  Promise<any>? execute();
};

// Transactions assume the state right after the animation frames are run *is* the
// initial state - they don't actually save anything first, and they act as if the
// operations work on the state after animation frame callbacks run.
[Exposed=Window, Constructor]
interface DOMTransaction {
  void insertBefore(ParentNode parent, ChildNode? node, ChildNode next);
  void insertAfter(ParentNode parent, ChildNode? node, ChildNode prev);
  void replace(ChildNode node, ChildNode? other);
  void setAttribute(Element elem, DOMString key, DOMString? value, DOMString? ns);
  void toggleClass(Element elem, DOMString classes, boolean force);
  void setStyle(HTMLElement elem, DOMString key, DOMString? value);
  void setNodeValue(Node node, DOMString value);
  // The tree is locked between when animation frames and intersection observers
  // would be run. The fulfillment value is that of `callback.execute()` or
  // `undefined` if the method wasn't passed.
  Promise<any> end(optional LayoutOptions? options);
};
```

There are a few specific omissions and design points with my proposal here that I thought I'd note:
It's not transactions. We cannot do transactions given
Okay. 👍
DOMChangeList and DOMTreeConstruction
Motivation
There are three high-level motivations for this API:
Elements and Nodes.
Concepts
NodeToken
A NodeToken is an opaque value that represents a Node. It can be efficiently transferred from one worker to another. It serves two purposes:
- It can represent a Node that already exists in another worker, and can serve as the starting point of a series of mutations expressed in a DOMChangeList.
- It can represent a Node that was produced as part of the process of building a DOMChangeList.
Applying a Change
The intent of this API is to have these performance characteristics:
- Creating the changes (with a DOMTreeConstruction or DOMChangeList) significantly reduces GC-managed allocations compared to the current DOM APIs.
If there is some reason that a concrete implementation of this design might not be able to accomplish these goals, it's almost certainly something we should discuss.
Details
DOMTreeConstruction
DOMChangeList
Example
Using the usual DOM API:
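A sketch of this example, following the sample code quoted earlier in the thread:

```js
let article = document.getElementById('main-article');
let text = article.firstChild;
text.data = "Hello world";
article.setAttribute("data-active", "yes");
article.removeAttribute("data-pending");
```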
Using the ChangeList API:
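And the ChangeList version, again following the sample code quoted earlier in the thread:

```js
let articleToken = document.getToken(document.getElementById('main-article'));
let changes = new DOMChangeList();
let textToken = changes.firstChild(articleToken);
changes.updateTextData(textToken, "Hello world");
changes.setAttribute(articleToken, "data-active", "yes");
changes.removeAttribute(articleToken, "data-pending");
document.applyChanges(changes);
```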
FAQ
Is this actually faster?
The intent of this API is to create a low-level interface that is as close as possible to the underlying implementations. It attempts to avoid introducing new costs while reducing a number of paper-cuts that exist in today's usage.
Experiments with cloneNode show that skipping those wrappers provides a performance benefit, but cloneNode() can't satisfy as many use-cases as this API.
Isn't the real issue that people are interleaving DOM manipulation and layout?
That is certainly a major issue, and this API puts developers on the path to success by encouraging them to stage DOM manipulation work separately from APIs that can trigger painting or layout.
Because the API guarantees that no user script can interleave during the application of changes, there is no way to "mess up" and trigger an immediate flush of any deferred work.
Unresolved Questions
Should we support manipulating classList and the style property through this API? It may be difficult to represent these kinds of changes with the operations already proposed (since this API does not allow direct imperative access to the DOM), and a few additional APIs probably wouldn't do damage to the constraints.
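Purely as a speculative sketch of what such additional operations might look like; none of these methods (addClass, removeClass, setStyleProperty) are part of the proposal as written:

```js
const changes = new DOMChangeList();
const token = document.getToken(document.getElementById('main-article'));
changes.addClass(token, "active");               // hypothetical
changes.removeClass(token, "pending");           // hypothetical
changes.setStyleProperty(token, "color", "red"); // hypothetical
document.applyChanges(changes);
```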