
JavaScript bundlers, HMR and customElements.define() #829

Closed
kyr0 opened this issue Aug 21, 2019 · 21 comments

Comments

@kyr0

kyr0 commented Aug 21, 2019

Hi,

I'm working on SpringType, a TypeScript-based pure web component framework.

An important part of the developer experience is HMR, hot module replacement, so that code changes in web components are detected, re-transpiled and injected in the current developers browser context on the fly.

It's all shiny and just a few LoC away from working, until it comes to re-defining the web component constructor in the browser context. Under the current spec and implementations this seems to be impossible, because calling window.customElements.define() more than once with the same element name but a different constructor function isn't allowed.

That is kind of sad, because it would be amazing to be able to patch a whole DOM tree on the fly. It seems I'm not the only one having this issue. In fact, even the webpack folks use a hack to get around it:

https://github.com/aruntk/wc-loader/blob/master/src/index.js#L19

They seem to just register the web component under a different element name each time HMR kicks in. But this comes with nasty side effects: CSS selectors, for example, would return "wrong" / different results, because they no longer match the renamed elements as one would probably expect.

Basically, I could just do a full page reload each time I detect a code change in a web component, but that is an argument against web components in general. Being forced into page reloads in the age of HMR and virtual DOM algorithms isn't very cool :)

I just wanted to ask: what is the status of the debate on this?

Thank you! :)

@kyr0 kyr0 changed the title JavaScript bundlers, HMR and customElement.define() JavaScript bundlers, HMR and customElements.define() Aug 21, 2019
@rniwa
Collaborator

rniwa commented Aug 22, 2019

I think you can reasonably fake it using a Proxy to override construction and define your own callbacks for each type, but there is no way to update the list of observed attributes, which seems like a serious oversight.

We should probably provide a way to update the list of observed attributes for a given custom element.

@annevk @domenic
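The Proxy/delegation idea can be sketched roughly like this, with a toy registry standing in for window.customElements (ToyRegistry, hmrDefine, and latest are invented names for illustration, not real APIs): a stable shell class is registered exactly once, and its lifecycle callbacks forward to whatever class was registered last.

```typescript
// Toy sketch of callback delegation; nothing here is a real browser API.
type Instance = { connectedCallback?(): void };
type Ctor = { new (): Instance; prototype: Instance };

class ToyRegistry {
  private defs = new Map<string, Ctor>();
  define(name: string, ctor: Ctor): void {
    // Mimics the real registry's "define once" rule.
    if (this.defs.has(name)) throw new Error(`'${name}' is already defined`);
    this.defs.set(name, ctor);
  }
  get(name: string): Ctor | undefined { return this.defs.get(name); }
}

const registry = new ToyRegistry();
const latest = new Map<string, Ctor>(); // newest class per tag name

function hmrDefine(name: string, ctor: Ctor): void {
  latest.set(name, ctor);
  if (registry.get(name)) return; // re-definition: only swap the delegate
  // First definition: register a stable shell whose lifecycle callbacks
  // forward to whichever class is currently the latest one.
  class Shell {
    connectedCallback(): void {
      latest.get(name)!.prototype.connectedCallback?.call(this);
    }
  }
  registry.define(name, Shell);
}

const log: string[] = [];
class V1 { connectedCallback() { log.push("v1"); } }
class V2 { connectedCallback() { log.push("v2"); } }

hmrDefine("foo-bar", V1);
hmrDefine("foo-bar", V2); // no "already defined" error this time

const el = new (registry.get("foo-bar")!)();
el.connectedCallback?.(); // delegates to V2, so log becomes ["v2"]
```

As rniwa notes, this covers the callbacks but not observedAttributes, which the real registry snapshots at definition time.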

@annevk
Collaborator

annevk commented Aug 26, 2019

So the observed attributes are stored in the registry, right? So this would mean some kind of way to push updates to the registry for a given name? Should you only be able to update the attributes or also which callbacks are important?

@justinfagnani
Contributor

For the HMR use case, callbacks can be dealt with by defining all of them and having them delegate to the new definition. It certainly would be nice if that weren't necessary, though. For attributes, this could also be addressed with a wildcard option.

What are the hazards of allowing re-definition of elements? Could we add an option to define() that allows replacement? This would help a little, though it doesn't actually give anyone HMR by itself; that still needs to be implemented on a custom basis, IMO.

We recently got a limited form of HMR working within Google by patching customElements.define to pass a duplicate definition back to the originally defined class, which can then patch itself as it sees fit. LitElement's implementation then updates styles and re-renders templates. That's the custom part. I do wonder if there could be a standard class-level callback for redefinition, i.e.:

```js
class MyElement extends HTMLElement {
  static observedAttributes = [...];

  static redefine(newClass) {
    ...
  }
}
```
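The define-patching approach described above could be sketched like this (a toy `defs` map stands in for the browser's internal registry, and the static `redefine()` hook is hypothetical, not a platform API):

```typescript
// Hedged sketch: duplicate definitions are routed to a static redefine()
// hook on the first-registered class instead of throwing.
type Ctor = { new (): object; readonly name: string; redefine?(newClass: Ctor): void };

const defs = new Map<string, Ctor>(); // stand-in for the internal registry

function patchedDefine(name: string, ctor: Ctor): void {
  const existing = defs.get(name);
  if (existing) {
    // Duplicate definition: let the original class patch itself.
    existing.redefine?.(ctor);
    return;
  }
  defs.set(name, ctor);
}

const events: string[] = [];

class MyElement {
  static redefine(newClass: Ctor): void {
    // e.g. a LitElement-style implementation could adopt new styles
    // and re-render templates here.
    events.push(`redefined with ${newClass.name}`);
  }
}

class MyElementV2 {}

patchedDefine("my-element", MyElement);
patchedDefine("my-element", MyElementV2); // no throw: routed to redefine()
// events now holds ["redefined with MyElementV2"]
```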

@kyr0
Author

kyr0 commented Aug 26, 2019

Thank you for the many responses. Based on @rniwa's suggestion of using a Proxy, @vegarringdal and I developed a general-purpose polyfill for HMR with custom elements. It's not perfect yet, but it works without noticeable side effects for now.

You can try it out by installing the npm package: custom-elements-hmr-polyfill

The essential implementation details are:

Here: customElements.define runtime override

and here: lifecycle function call delegation

Apart from that we created an interactive example:
https://codesandbox.io/s/custom-elements-hmr-polyfill-4vd3o

If you clone the repo and execute npm run bootstrap && npm start you will see the actual polyfill working.

[Two screenshots (2019-08-26 and 2019-08-28) showing the polyfill in action]

In this example, fuse-box v4 does the HMR part.

But exactly as @rniwa noted, the limitation is observedAttributes: if the code of a custom element changes and returns different attribute names to observe, that change is not reflected.

I'd suggest a function like customElements.update(name: string, constructor: Function, options?: ElementDefinitionOptions | undefined). Calling this function would invalidate the native internal cache and also call the static getter observedAttributes again.

Doing so would imply no regression and would simply enable a new feature safely.

Alternatively, customElements.define could simply allow re-definition. Thinking about it, I can only find very obscure cases where such a change could lead to a regression.

One might also think about an event being fired when this happens, like a customelementdefined event on window (to trigger a (selective) DOM re-flow), but I feel that could be a bit much for such a limited use case. On the other hand, without such an event it might be tricky to find out when a re-definition happened (again, only polyfills could solve that).
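The proposed update() call could behave roughly like this toy simulation (SimRegistry and onRedefined are invented names; no browser implements any of this): a second registration path that invalidates the cached definition, re-reads the static observedAttributes getter, and notifies listeners the way a customelementdefined event might.

```typescript
// Toy simulation of the proposed customElements.update() semantics.
type Ctor = { new (): object; observedAttributes?: string[] };

class SimRegistry {
  private defs = new Map<string, { ctor: Ctor; observed: string[] }>();
  private listeners: Array<(name: string) => void> = [];

  define(name: string, ctor: Ctor): void {
    if (this.defs.has(name)) throw new Error(`'${name}' is already defined`);
    this.defs.set(name, { ctor, observed: ctor.observedAttributes ?? [] });
  }

  // Proposed: replace the definition and re-read observedAttributes.
  update(name: string, ctor: Ctor): void {
    this.defs.set(name, { ctor, observed: ctor.observedAttributes ?? [] });
    for (const fn of this.listeners) fn(name); // "customelementdefined"
  }

  observed(name: string): string[] { return this.defs.get(name)?.observed ?? []; }
  onRedefined(fn: (name: string) => void): void { this.listeners.push(fn); }
}

const reg = new SimRegistry();
const notified: string[] = [];
reg.onRedefined((name) => notified.push(name));

class V1 { static observedAttributes = ["a"]; }
class V2 { static observedAttributes = ["a", "b"]; }

reg.define("foo-bar", V1);
reg.update("foo-bar", V2);
// reg.observed("foo-bar") is now ["a", "b"], and notified is ["foo-bar"]
```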

@rniwa
Collaborator

rniwa commented Aug 27, 2019

It would be very strange & problematic to allow re-definition of the same element name.

I think the least problematic solution here might be to allow wild-card (*) for the observed attributes since the only use case here is during development and not really in production.

Another somewhat exotic solution is to make use of an author-defined custom element registry since we can just create a new registry each time you need to load scripts.

Both solutions are useful outside of this use case as well, so it's probably worth pursuing either or both.

@annevk
Collaborator

annevk commented Aug 27, 2019

We explicitly did not allow * as it's an easy performance mistake to make that's not necessarily always noticeable.

@rniwa
Collaborator

rniwa commented Aug 27, 2019

We explicitly did not allow * as it's an easy performance mistake to make that's not necessarily always noticeable.

Right, that was the past discussion, due to some folks from Google objecting to it. However, I'd prefer allowing * over allowing re-definition of custom elements, because the latter leads to weird things to consider, like what happens when the constructor of the old custom element definition gets called after re-definition, etc. In fact, I object to any proposal that involves re-definition / update of the custom element constructor itself.

@kyr0
Author

kyr0 commented Aug 27, 2019

@rniwa Yes, allowing * in observedAttributes to listen to all attribute changes sounds like an elegant quick fix. @annevk It might be an option to write a warning to the console about the performance implications? As already said, it's meant to be used in development only, so the warning would remind the developer to replace the * with a specific list at some point.
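The suggested dev-only warning could live in a small define wrapper, sketched here against a toy registry (no browser supports "*" in observedAttributes; devDefine and the warning text are purely illustrative):

```typescript
// Toy sketch: a dev-mode define wrapper that tolerates "*" in
// observedAttributes but emits a performance warning.
type Ctor = { new (): object; observedAttributes?: string[] };

const warnings: string[] = [];
const warn = (msg: string) => warnings.push(msg); // stand-in for console.warn

const defs = new Map<string, Ctor>();

function devDefine(name: string, ctor: Ctor): void {
  if (ctor.observedAttributes?.includes("*")) {
    warn(
      `<${name}>: observing "*" watches every attribute and can be slow; ` +
      `list specific attributes before shipping to production.`
    );
  }
  defs.set(name, ctor);
}

class DevElement {
  static observedAttributes = ["*"];
}

devDefine("dev-el", DevElement);
// warnings now holds one performance warning for <dev-el>
```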

@kyr0
Author

kyr0 commented Aug 31, 2019

We've also polyfilled observedAttributes so that they're mutable too now (on re-definition):

https://github.com/vegarringdal/custom-elements-hmr-polyfill

Two distinct ReflowStrategy options allow either atomic or full re-flow of the DOM to re-create elements that have changed. Buffering is implemented to limit the round trips.

Anyway, it would be nice to see the standard support HMR for web components in a way that doesn't require any polyfills, but the issue is pragmatically solved for now.

@Leedehai

Leedehai commented Nov 6, 2019

A big shout-out to @kyr0 and @vegarringdal's polyfill effort! This would've been a headache for me just months ago.

So I guess there'll be no changes on the W3C front? This inconvenience is better addressed by the JS bundlers via polyfills than by the standards (unless the register-only-once principle is deemed meritless).

@kyr0
Author

kyr0 commented Nov 6, 2019

@Leedehai Thank you :)

is better addressed by the JS bundlers via polyfills than by the standards

However, the polyfill cannot cover all the nifty details. We've managed to solve a lot of issues, but there is still one we can't address:

If a custom element class definition is mutated at runtime, the (JS) developer expects the instances to mutate as well, reflecting the change, just like JS object instances change when you change their prototype. With the polyfill, however, the developer currently has to choose from a list of "refresh" strategies. One of the major trade-offs is that it is incredibly hard to correctly re-create the state of a component as it was before.

For example:

  • We've detected a re-definition of FooBar.
  • So every <foo-bar> needs to be replaced in the DOM.
  • So we create new instances. One of them had an attribute called "w3c" set to "cool", so we call setAttribute("w3c", "cool") and every possible state-loss issue is covered. Is it?
  • No, because there is JS state as well. You can have arbitrary property changes in your custom element class, which means the DOM it generates may depend on class member properties that are hidden from the outside.
  • This is a Pandora's box, because we have no easy way to say: this and that property is "state" that has to be re-created as is.
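The state-loss problem in the list above can be made concrete with a tiny sketch (plain classes stand in for DOM elements; OldFooBar/NewFooBar are invented names): copying attributes across works, but internal JS state held in fields or closures is silently reset.

```typescript
// Minimal sketch of HMR state loss: attributes survive a naive
// replace-and-copy, internal JS state does not.
class OldFooBar {
  attributes = new Map<string, string>(); // stand-in for DOM attributes
  private clicks = 0;                     // internal state, invisible outside
  click(): void { this.clicks += 1; }
  getClicks(): number { return this.clicks; }
}

// The "re-defined" version of the same component.
class NewFooBar {
  attributes = new Map<string, string>();
  private clicks = 0;
  getClicks(): number { return this.clicks; }
}

const old = new OldFooBar();
old.attributes.set("w3c", "cool");
old.click();
old.click();

// Naive HMR replacement: create a new instance and copy attributes across.
const replacement = new NewFooBar();
for (const [k, v] of old.attributes) replacement.attributes.set(k, v);

// The attribute survives (replacement.attributes.get("w3c") === "cool"),
// but the click count does not: old.getClicks() is 2,
// replacement.getClicks() is 0.
```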

So, the standard doesn't define this as of now, and the effect is that we can't completely solve the issue with a polyfill. Even if I looped through a list of attributes defined by the HTML standard, so that I could find custom properties added at the instance level before the clone happens, I could still run into issues when closures are involved. I really cannot solve all cases safely, and it has a huge performance impact as well (all these loops at the per-instance level).

After all these findings, I went back to a page refresh. Later, I switched my framework's implementation away from web components. I now use an approach to components that is much like React in theory, but more efficient. For style encapsulation I use CSS modules, and I'm working on a bundler that does the necessary magic at compile time.

On the other hand, it would have been nice if I could have stuck with web components. But there are so many limitations, and I need so many polyfills :/ And the "custom element" / component issue has been solved in user-land for years. If we look, for example, at React: there is a defined "state", so in the event of a class definition mutation they know which state to clone and re-apply and which not. I do the same in SpringType, but allow class properties to be decorated with @state so that more than one property can be known as state, which avoids the deep-tree state mess.

@bathos

bathos commented Nov 6, 2019

If a custom elements class definition is mutated at runtime, the (JS) developer expects the instances to mutate as well, reflecting the change. Just like JS object instances change, when you change its prototype.

The contract of define() treating constructor / constructor.prototype as dictionary input is pretty misleading, yeah, but its current behavior is important. If the element's public API were used for live lookup, invariants which are achievable today would cease to be, and production code would break, because today it's possible (and in certain usages essential) to delete the dictionary-members-disguised-as-methods immediately after definition.

However, if classes are wrapped in an instrumentation layer in advance of registration, it seems like the instrumented wrapper callbacks could delegate to what the tool sees as the 'current' callback definitions, no?

If redefinition were permitted, I would hope this would at least be done in a way that doesn’t take away any of the invariants which are achievable today — e.g. through an explicit { redefineable: true } opt.

@jhnns

jhnns commented Nov 6, 2019

Hi from the webpack team 👋. First of all: hats off to @kyr0 for this awesome PoC.

Using Proxies sounds perfectly fine to me. In fact, that's how most HMR implementations work (like https://github.com/gaearon/react-hot-loader). Not allowing the custom element to be redefined makes perfect sense from the browser's point of view.

Without having thought through all the small details, I would expect an HMR implementation to call disconnectedCallback on the old component, remove all the markup from the DOM, and then manually call the constructor and connectedCallback on the new component. If the HMR implementation knows it can't replace the web component, it's OK to reject the update and trigger a hard reload.

I also think it's safe to assume that the developer did not do anything weird. Unlike with CSS, it's impossible to do safe HMR with arbitrary JavaScript. It's OK to make HMR work in 90% of cases. When in doubt, most developers will hit reload anyway.
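The accept-or-reload flow described here could be sketched roughly as follows (ToyElement, hotSwap, and the stubbed reload() are stand-ins, not DOM or webpack APIs): tear down the old component, construct and connect the new one, and fall back to a hard reload if anything throws.

```typescript
// Toy sketch of the accept-or-reload HMR flow.
interface ToyElement {
  connectedCallback?(): void;
  disconnectedCallback?(): void;
}

const trace: string[] = [];
let reloaded = false;
const reload = () => { reloaded = true; }; // stand-in for location.reload()

function hotSwap(oldEl: ToyElement, NewCtor: new () => ToyElement): ToyElement | null {
  try {
    oldEl.disconnectedCallback?.(); // 1. tear down the old component
    const next = new NewCtor();     // 2. construct the new one
    next.connectedCallback?.();     // 3. and connect it
    return next;
  } catch {
    reload();                       // can't apply the update: hard reload
    return null;
  }
}

class OldWidget implements ToyElement {
  disconnectedCallback() { trace.push("old disconnected"); }
}
class NewWidget implements ToyElement {
  connectedCallback() { trace.push("new connected"); }
}

hotSwap(new OldWidget(), NewWidget);
// trace is ["old disconnected", "new connected"]; reloaded stays false
```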

@trusktr

trusktr commented Nov 30, 2019

standard supporting HMR for Web Components in a way that we wouldn't need any polyfills

I think I like the idea of re-definitions, but not sure what the complexities are.

Userland code would be clean, and nothing in a custom element class definition would need to be HMR-specific: on re-definition, the current elements' disconnectedCallbacks would simply be called, and the elements would clean themselves up before being downgraded and then upgraded into the new elements (carrying along existing own properties for simplicity's sake).

HMR code would only need to re-define the element, outside of the class definition.

This inconvenience is better to be addressed by the JS bundlers via polyfills

This puts a ton of complexity in the userland code, as @kyr0 describes.

Not allowing the custom element to be redefined makes perfect sense from the browser's point of view.

I'm trying to understand why. It seems like it was just easier to do it that way from an implementation perspective. What are the invariants, @bathos?

I would expect an HMR implementation to call disconnectedCallback on the old component, remove all the markup from the DOM, and then manually call the constructor and connectedCallback on the new component.

That's basically what I'm imagining a native redefinition feature would do.

@trusktr

trusktr commented Nov 30, 2019

Thinking about features like ReflowStrategy.RERENDER_INNER_HTML from custom-elements-hmr-polyfill: why is it needed? Wouldn't disconnectedCallback on downgrade and constructor/connectedCallback on upgrade during redefinition handle that?

The reference to the DOM element that got redefined (downgraded, then re-upgraded) would remain the same, and in theory this should keep parent state intact.

For example, a parent element may stick children into an element that will be redefined later, and may even track the to-be-redefined element in arrays, Maps, and other places in memory. Those children will remain children of the redefined element, because it is the same JS object reference after redefinition, and all arrays, Maps, etc. will continue to reference the same object. When said element gets redefined, it can re-handle any children on upgrade (constructor/connectedCallback) just like it did before.

Code using jQuery, having the same reference to any redefined element, should continue to work the same, for example.

There may be cases where re-upgrade causes some property values to reset, but that's OK. Maybe a bubbling redefined event would be suitable here, so code that owns a reference to a redefined element can re-inspect properties, etc., if needed.

@bathos

bathos commented Nov 30, 2019

What are the invariants

The first set I was referring to concerns storing the reaction functions rather than performing live lookups. If one deletes the callback hooks from the public interface, it can be guaranteed that any invocation of them is legitimate (e.g. connectedCallback is known to be invoked only when the element connects). This is one part of the API design which has made it possible to author elements whose internals are actually private.
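This capture-at-definition invariant can be sketched with a toy registry (toyDefine and the reactions map are illustrative, not real APIs): the callback is snapshotted when the element is defined, so deleting it from the prototype afterwards hides it from outside code without breaking the registry's reaction.

```typescript
// Sketch of reaction capture: define() snapshots connectedCallback at
// definition time, so deleting it from the public interface afterwards
// keeps the element's internals private.
type Reaction = (el: object) => void;

const reactions = new Map<string, Reaction | undefined>();

function toyDefine(name: string, ctor: { prototype: any }): void {
  // Snapshot the callback now, like the real upgrade machinery does.
  const cb = ctor.prototype.connectedCallback as (() => void) | undefined;
  reactions.set(name, cb && ((el) => cb.call(el)));
}

const log: string[] = [];
class Secretive {
  connectedCallback() { log.push("connected"); }
}

toyDefine("x-secret", Secretive);
// Now hide the hook from the public interface:
delete (Secretive.prototype as any).connectedCallback;

const el = new Secretive();
// Outside code can no longer reach the hook via the instance, but the
// registry's captured reaction still fires when the element "connects":
reactions.get("x-secret")?.(el);
// log is ["connected"], while (el as any).connectedCallback is undefined
```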

The latter is that elements which instantiate other elements generally have expectations about what those elements will be. Currently those expectations can be relied on: either the definition would throw, or, if it succeeded, you know what you're getting.

However from a backwards compat perspective, both alternative behaviors could be introduced safely if they were opt-in via the second argument.

@bathos

bathos commented Nov 30, 2019

The reference to the DOM element that got redefined (downgraded then re-upgraded) would remain the same.

The concept of downgrading isn’t sound. State exists in association with objects by identity, whether as keys in weak or strong collections, as private fields, as own properties, as internal slots of platform objects, or just as bindings within closures. This sort of thing works only when adhering to certain idioms that deliberately constrain JS — it doesn’t generalize.

@trusktr

trusktr commented Nov 30, 2019

If one deletes the callback hooks from the public interface, it can be guaranteed that any invocation of them is legitimate (e.g. connectedCallback is known to only be invoked when the element connects).

I imagine that with a redefinition feature, the engine could very well drop the references to the old hooks and store the new ones, thus maintaining the privacy you want. (Maybe it'd be better to have some way to expose #private fields to the engine?)

elements generally have expectations about what those elements will be. Currently those can be relied on

That's true. Obviously programs are going to break outright if people randomly change the types of defined elements. I don't think people will have success doing that anyway; it'll just be something everyone knows not to do. It'll be like a type system (e.g. TypeScript): you can swap out an item only for another one that is assignable to the same base type.

introduced safely if they were opt-in via the second argument.

That's a good idea. In production mode the application could simply not pass it in, to lock things in place. If someone wants to abuse it and totally change the types of elements, they'll get what's coming to them; it won't be long before they realize they simply shouldn't do that. And with TypeScript in place, type errors can be triggered once the ambient element type definitions are mapped.

The concept of downgrading isn’t sound. State exists in association with objects by identity, whether as keys in weak or strong collections, as private fields, as own properties, as internal slots of platform objects, or just as bindings within closures

I'm imagining that it would be fine, because on downgrade the referenced object (element) remains the same.

After downgrade, the element will still be the same referenced object in a WeakMap key, the same object at some position in an array, etc. It doesn't matter where the reference is, it'd be the same reference, and all those places would continue to rely on that same reference.

It's similar to the current situation: we can get references to non-upgraded elements and manipulate them before ever calling customElements.define, and once they are defined we can use the upgraded behavior of those same referenced elements, grab them from any Maps, arrays, etc. It's the same concept that already exists, just happening more than once.

@bathos

bathos commented Nov 30, 2019

After downgrade, the element will still be the same referenced object in a WeakMap key, the same object at some position in an array, etc. It doesn't matter where the reference is, it'd be the same reference, and all those places would continue to rely on that same reference.

Known truths about the value would be invalidated. The assumption inherent in these assertions is that one would only ever be running code in an environment where all other code is one’s own / trusted. What you’re describing would invert ownership in a way that prevents maintaining a boundary between trusted and untrusted code.

Also note that an element's own properties could have been defined as non-configurable, or the object could have [[IsExtensible]] set to false, and that attempted re-initialization of private fields throws a TypeError. 'Reconstructing' the same object is therefore guaranteed to throw if it has private fields.

It's similar to currently: we can get references to non-upgraded elements, manipulate them before [...]

There’s a parallel, yeah. But that reversed scenario doesn’t create the same problems: actions which would prevent the constructor from succeeding would, well, prevent it from succeeding. This is different from actions which can (in part silently) invalidate the internal state of existing elements from any source.

@trusktr

trusktr commented Nov 30, 2019

Aaargh, Custom Elements are too complicated.

Here's an idea: we could introduce a new type of modern Document that ShadowRoots can opt into, one that drops all the bad parts of the traditional DOM, keeps only the good parts, and works with Proxies, etc.

@rniwa
Collaborator

rniwa commented Apr 20, 2023

F2F resolution: This is a duplicate of #820

@rniwa rniwa closed this as completed Apr 20, 2023