
30 November, 2022 Meeting Notes


Attendees:

| Name                 | Abbreviation   | Organization       | Location  |
| -------------------- | -------------- | ------------------ | --------- |
| Frank Yung-Fong Tang | FYT            | Google             | Remote    |
| Waldemar Horwat      | WH             | Google             | Remote    |
| Bradford C. Smith    | BSH            | Google             | Remote    |
| Shu-yu Guo           | SYG            | Google             | Remote    |
| Shane Carr           | SFC            | Google             | In-person |
| Michael Saboff       | MLS            | Apple              | Remote    |
| Mathieu Hofman       | MAH            | Agoric             | In-person |
| Richard Gibson       | RGN            | Agoric             | Remote    |
| Ashley Claymore      | ACE            | Bloomberg          | In-person |
| Robert Pamely        | RPY            | Bloomberg          | In-person |
| Rob Palmer           | RPR            | Bloomberg          | In-person |
| Philipp Dunkel       | PDL            | Bloomberg          | In-person |
| Jason Williams       | JWS            | Bloomberg          | In-person |
| Andrew Paprocki      | AMP            | Bloomberg          | In-person |
| Luca Casonato        | LCA            | Deno               | In-person |
| Andreu Botella       | ABO            | Igalia             | In-person |
| Joyee Cheung         | JCG            | Igalia             | In-person |
| Nicolò Ribaudo       | NRO            | Igalia             | In-person |
| Romulo Cintra        | RCA            | Igalia             | In-person |
| Ujjwal Sharma        | USA            | Igalia             | In-person |
| Linus Groh           | LGH            | SerenityOS         | In-person |
| Yulia Startsev       | YSV            | Mozilla            | In-person |
| Eemeli Aro           | EAO            | Mozilla            | In-person |
| Daniel Minor         | DLM            | Mozilla            | In-person |
| Tom Kopp             | TKP            | Zalari             | In-person |
| Christian Ulbrich    | CHU            | Zalari             | In-person |
| Patrick Soquet       | PST            | Moddable           | In-person |
| Alex Vincent         | AVT            | es-membrane        | Remote    |
| Kris Kowal           | KKL            | Agoric             | Remote    |
| Justin Ridgewell     | JRL            | Vercel             | Remote    |
| Daniel Rosenwasser   | DRR            | Microsoft          | Remote    |
| Caridy Patiño        | CP             | Salesforce         | Remote    |
| Jordan Harband       | JHD            | Invited Expert     | Remote    |
| Philip Chimento      | PFC            | Igalia             | Remote    |
| Istvan Sebestyen     | IS             | Ecma               | Remote    |

Intl Enumeration for Stage 4

Presenter: Frank Yung-Fong Tang (FYT)

FYT: Okay, hi everyone. My name is Frank, I work for Google on V8 internationalization, and last year I also spent a lot of time working on Temporal. Today I'm going to talk about two different proposals. The first one is asking for stage 4 advancement: the Intl Enumeration API. The charter of this API is to let Intl, which has existed for about 10 years, return to the caller the list of supported values of certain options that already pre-exist in the ECMA-402 API, including calendar, collation, currency, numbering system, time zone, and unit.

FYT: So it went through several designs in stage 1 and stage 2, and what eventually happened is that it became one method with one argument, a string key, and the key can have the following six values. In the future maybe it will be extended to include more values, but currently this is the only ability in scope. So you pass in a string key; if it is not one of those six strings it will throw a RangeError, and otherwise it will return an array of the values available for that particular type.
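
For illustration, usage looks roughly like this (the exact arrays returned vary by implementation; the values shown are examples):

```js
// Each call returns a sorted array of canonical identifiers that the
// implementation supports for the given key.
Intl.supportedValuesOf("calendar");        // e.g. ["buddhist", ..., "gregory", ...]
Intl.supportedValuesOf("collation");       // e.g. ["compat", "emoji", ...]
Intl.supportedValuesOf("currency");        // e.g. ["AED", ..., "USD", ...]
Intl.supportedValuesOf("numberingSystem"); // e.g. ["arab", ..., "latn", ...]
Intl.supportedValuesOf("timeZone");        // e.g. ["Africa/Abidjan", ...]
Intl.supportedValuesOf("unit");            // e.g. ["acre", "bit", ...]

// Any other key throws a RangeError.
Intl.supportedValuesOf("someOtherKey");    // RangeError
```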

FYT: The history is that we reached stage 1 in June ???, stage 2 in September 2020, and then in July 2021 it reached stage 3. I gave an update last December, about a year ago, and during this period several browsers implemented it, the tests passed, and issues were discussed and resolved. In the spec text I made some editorial changes to the name of the abstract operation, but the rest is basically the same; we renamed it to make clear that whatever is returned is the canonical value. So this is the spec text.

FYT: We did make a normative change here, which is a very simple one. To address feedback from André from Mozilla, we made it clear that the numbering system values returned are unique canonical numbering systems. That's the only normative change: just adding the word "canonical". The same applies to two other abstract operations; the only thing that really changed from last year is adding "canonical" there.

FYT: Basically, this is the spec text, and this is what the function looks like: Intl.supportedValuesOf, which takes the string and just delegates to those abstract operations. In terms of other supporting information and documentation for the web, MDN documentation is there; you can click on the link to get it. Here is a screenshot summarizing which browsers already support it, in which version, plus a link to the test262 reports showing that several of the engines pass. I took the screenshot early on, so maybe other engines or polyfills have improved since then; I apologize if there have been updates. Even this early screenshot already shows very complete support.

FYT: So let's review what is required to advance from stage 3 to stage 4. My intent is that we can reach stage 4 today, if there are no objections, so that this API will be officially included in the 2023 version of ECMA-402. The purpose of stage 4, again, is that the addition is ready for inclusion in the formal ECMAScript standard and should be included in the soonest practical revision of the standard, which will be the 2023 version, and that will be the final version. We also have a pull request ready; it will need review from the editors before it can be merged into ECMA-402. So here is the pull request. Before I ask for consensus for stage 4, are there any questions? I cannot see the queue, so chairs, please help me coordinate the queue.

BT: Sure. The queue is currently empty, but we'd like to wait a few seconds to see if anyone pops in. Any questions or feedback? Shane's on the queue with an explicit +1.

SFC: Yeah, I just want to recognize all the time that Frank has put into this proposal over the last couple of years. and, you know, working on reaching consensus about many different topics on it and I'm really excited to see it shipping in browsers and going to stage 4.

BT: Thank you. A few more folks in the queue.

USA: Yeah, I want to echo that. This proposal has involved a lot of good work, and thank you Frank for pulling it through to the finish line.

DLM: The SpiderMonkey team explicitly supports this.

BT: Awesome, thank you. All right, that is the extent of the queue.

FYT: So if there are no other questions or feedback, I would like to formally ask the committee for approval of advancement to stage 4.

BT: Right, Frank is asking for stage 4. Do we have any objections? I hear a lot of explicit support and no objections, so I think that is stage 4. Congratulations!

Conclusion/Resolution

  • Stage 4

Intl Locale Info stage 3 update

Presenter: FYT

FYT: The next one is the Intl Locale Info API. Originally, when I put this on the agenda about a month ago, I was thinking about asking for stage 4 advancement, but around that time we found an issue, so we are not asking for stage 4 advancement today. We do, however, need your advice about how to resolve the issue together.

FYT: So this API is the Intl Locale Info API. Again, here is the motivation. This proposal tries to expose information about a locale once a Locale object has been constructed, including, for that particular locale, week data: which day is the first day of the week (in the UK, Monday is the first day of the week in the calendar, but in the US it is Sunday); which days the weekend falls on (in a number of cultures, for example Jewish or Islamic cultures, the weekend is not Saturday and Sunday; some weekends actually include Friday, and there are even cases where Friday and Sunday form the weekend); and also the hour cycle, 24-hour versus 12-hour.
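
As a rough sketch of the API shape as it stood at this meeting (getters; the getter-vs-method question is discussed below, and the values shown are illustrative):

```js
const gb = new Intl.Locale("en-GB");
gb.weekInfo;   // e.g. { firstDay: 1, weekend: [6, 7], minimalDays: 4 }

const ar = new Intl.Locale("ar-EG");
ar.textInfo;   // e.g. { direction: "rtl" }
ar.hourCycles; // e.g. ["h12"]
```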

FYT: So again, the history: we advanced to stage 1 in September 2020, and in January 2021 we reached stage 2. One and a half years ago we reached stage 3, and I have come back here to give updates. During stage 3 we received a lot of very good feedback from several folks, including André and a couple of other people. Most of those issues were not huge, but for example we had to tweak how we represent the weekend for locales with discontinuous weekends, so we changed the API slightly there. That's why this API has been at stage 3 for a while.

FYT: Basically, this API adds seven getters to the Intl.Locale object. Here is one of the issues we'd like your help with: currently those things are getters. But first, let me talk about a recent change which I think we got resolved. When we return which collations are available for a particular locale, we return a list, an array. We used to say that array is sorted in the order of preferred usage of that collation in the locale. André pointed out that the CLDR data does not actually make such preference information available to us: there is an order, but the order is not guaranteed to be the preference order for the locale. After a lot of discussion in the committee, we reached agreement that in that particular case the array will be sorted in alphabetical order. We made that change, and it was resolved in pull request 63. That's one of the reasons this proposal has stayed at this stage for quite a long time.

FYT: But then very recently we found another issue, which I think is a blocking issue. The issue is this: those seven functions are currently implemented as getters, and the problem is that the return values are arrays or objects. Every time the getter is called we create a new object; we don't cache it. These are getters, not functions, yet each access creates and returns a new object, even though the contents shouldn't change. The issue that got filed is issue 62. I'm not 100% sure how important that issue is, but I feel it could be important, so I don't want to rush it. There seem to be two solutions, and I'm not sure either is good. One solution is that instead of a getter, we change each of these to a function, so every call creates and returns a new object. The other solution is to freeze the returned object so nobody can change it; but that alone feels like not enough, since you would still be returning different objects each time, just frozen ones. A variant of that is to cache internally, so you always return the same thing, and it can never change because it's frozen; but that is something I really don't want to do, because it means the engine has to remember the already-created object in a cache, which wastes memory for something most callers never use. I really try to avoid that. I don't have a good solution, so I want TC39's advice: first, is issue 62 reasonable to address, or is a getter which returns a new object okay? That's the first question I want advice on.

FYT: The second thing is that if the status quo is not acceptable, we need to change it, and the question is which route is better. We talked about this in TG2, and the conclusion was that we should bring it here and ask TC39 for guidance on which we feel is better. There are two options so far; maybe there are others. The first option is to change all the getters to functions, so every call creates and returns a new object. You don't need to remember anything, and the user cannot expect to get back exactly the same object, only the same contents. The second option is to freeze the returned object, for which I have a pull request, but I don't really think that by itself solves the issue, and I suspect it could be a showstopper for going to stage 4 because it mandates frozen objects. Maybe there is a surgical move for this in a limited way, but I want to ask for feedback.
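
A minimal sketch of the behavior at issue, and of the proposed fixes (the method spelling is illustrative of the function option, not settled text):

```js
const locale = new Intl.Locale("en-GB");

// As specified at the time: an accessor that allocates a fresh object
// on every access, which is surprising for something that reads like
// a data property.
locale.weekInfo === locale.weekInfo; // false

// Option 1: make it a method, where a fresh object per call is the
// normal expectation, e.g. locale.getWeekInfo() returning an
// equivalent but distinct object on each call.

// Option 2: freeze (and possibly cache) the returned object, so
// callers cannot mutate whatever the engine hands back.
```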

FYT: But before I discuss that, let me show you what the spec text currently looks like. So this is the current spec text. Basically we have a few abstract operations doing very basic work to create an array. For the calendars of a locale, we return a list of the calendars used in that locale: for example, in an Islamic country the calendars could include the Islamic calendar, and in Japan there would be the Japanese calendar and the Gregorian calendar. Collations are the same thing: we return an array. And hour cycles likewise return an array of hour cycle codes. But remember, arrays are also objects. There are also the time zones of the locale, which depend on the region code, and then the text direction of the locale: for Arabic, for example, the default will be right-to-left instead of left-to-right. We also have the week info of the locale: which day is the first day, the weekend days, and the minimal days used in week-of-year calculation. Here's the tricky part: those things are currently getters, but they create an array or object, and I think that becomes a problem, as raised by Darren. The more obvious cases are the ones that are arrays, but even more obvious is textInfo: we call this a getter, but internally we call ObjectCreate and set the information there. Same with weekInfo: we create an object containing some strings and numbers, but every time it's called we return a different object.

FYT: So this is the first thing I need some discussion about: how would you deal with this issue? The first question is whether this is really a problem: is it okay for a getter to return a different object every time it's called? And second, if it is an issue, what's the right way to resolve it?

BT: All right, Dan is on the queue.

DE: So, I agree with the comment that was made that getters should act like data properties at a high level, in terms of returning the same object from multiple calls. I am not totally convinced that a caching option would be too expensive: we're talking about, for each Locale, allocating maybe ten words of extra space for the caching, and that doesn't sound like too much. But it would also be fine to change these to functions, although I guess that's a bit ugly. I don't see how freezing an object relates to solving the problem at all; it just seems unrelated. So that's my feedback on that particular, somewhat superficial, issue. Overall I'm very happy with this API and with the work you've done championing it, and I support moving to stage 4 once we have this issue resolved.

FYT: Let me respond to Daniel about the freezing part, because you mentioned that. I think the issue is that you could cache it and return it to the caller, but if it's not frozen, the caller can change it, and then the question is what kind of thing you get back from your cache, right?

DE: So yeah, that's fine.

FYT: Yeah. I mean, I do want to point out that's the comment Darren made there about freezing.

DE(?): I don't understand. Ah, okay, then it could be okay to freeze it.

DE: Sorry, I somehow didn't follow that the freezing was in conjunction with caching. So yeah, it could be okay for it to be frozen; I would be fine with that. I would also be fine with changing to a function, either way. But I'm also fine with just not freezing it.

MAH: I was confused about the allocation concern. Couldn't you lazily allocate the objects when they're accessed from that specific Locale? And also, in general, I'm slightly confused why this is a getter and not an implementation detail of how that object is allocated.

FYT: Sorry, I don't get your second question.

MAH: I mean, what would be the difference in having it as a data property that points to another object, without going through the getter hoops?

FYT: Sorry, I still don't get it. Are you asking why it was originally written as a getter? I don't quite understand your question.

MAH: Yeah, I guess I don't understand why it's written as a getter.

FYT: All right, that's my mistake. When I designed this API, I didn't realize the issue: if it returned just a number or a string, it could be a getter, right? But when it returns an object, the getter becomes problematic, because the result is not a primitive string or number. So it's my fault; I didn't think about the fact that an object shouldn't be returned that way, and that's why I'm asking whether we should change it. In terms of the lazy approach, the issue is that no matter how you save it, you at least need to keep a pointer, and most of the time nobody is going to use that pointer unless someone calls this API. So you have to have space to save those pointers, and that's something I try to avoid if possible. We should assume most users of Locale may not need to call these methods, and if our design requires every Locale object, even for other usages, to spend that pointer storage, then I think that's an unnecessary burden on memory usage. That's something I try to avoid.

BT: Dan has another item on this topic as well.

DE: Yeah, I want to speak to both of those questions that Matthew raised. On the first one, why use a getter instead of just having a data property that's secretly lazy: I think such an implementation would work better in some engines than others, at least historically. People can correct me if I'm wrong, but SpiderMonkey, I think, had machinery for such secretly lazy properties, and V8 tended not to do that, at least not in cases like this. We want something that's going to be efficiently implementable across engines, so that's a reason to avoid expecting that engines will make a property secretly lazy. About the lazy allocation of these properties: I think there are two different kinds of memory usage that we should analyze a little differently from each other. One kind is the memory usage of the ten words or so of storage that are initially all undefined. It seems really important to me to lazily allocate the actual arrays that hold the information, because that's a whole lot of allocations. But on the other hand, oversizing the allocated object a bit to hold these initially null pointers is a lot cheaper, and I think that's the kind of thing we could afford; I'm not sure we have to be completely optimized with respect to that. Finally, Frank suggested that getters should only return primitives and not objects. I'm not convinced that we should adopt this invariant in general. We have a pretty small standard library right now, so there aren't a lot of getters we can look at for reference, but on the web platform there's lots of usage of getters that return objects, and I think that's a useful pattern.

FYT: I do want to clarify: I'm not suggesting that as an invariant. I just currently can't find any precedent for it, and I think it has some additional issues, as was just raised, compared to a getter returning a primitive type. I'm not saying we can't do it; I just think there's a lack of precedent to follow.

DE: Yeah, right. So I'm just saying we shouldn't let that hold us back. That's my opinion.

MAH: Yeah, I want to make a quick comment on something I heard: the fact that the spec lays out some steps shouldn't strongly constrain implementations. If the spec says it's a data property, an implementation should still be able to do something lazily. We saw the same thing with registered symbols and how they're implemented. Implementations can deviate; the written steps are not gospel, as long as the deviation is not observable.

DE: I agree that implementations can deviate. I also think we should try to organize the specification so that it's not especially inconvenient to implement things efficiently, if we can figure out a way to do so.

MAH: I would rather keep simple spec steps and maybe have editor's notes clarifying that you don't have to follow these steps literally, with a suggested implementation.

DE: Yeah, I don't think that getters are especially complex. I think they're a great pattern to use, and they make sense here.

?: Sorry, I didn't see the queue. Anyone still on the queue?

BT: Yeah, there are a couple of folks on the queue still. We've got EAO.

EAO: Yeah, just wanted to point out that if we had Records and Tuples, that would be a great use case for this. But as we don't, it rather sounds like we'd avoid a lot of difficulty here if we just used something like getWeekInfo as a function, so that it's really a method rather than a getter.

BT: All right. thank you.

SFC: Yeah, I just wanted to point out the similarity of this to Temporal issue 1808, which we've discussed a lot among the Temporal champions: should we freeze objects when they're exposed via a getter? The current path we're moving toward, which PFC presented yesterday in the Temporal update, is to basically just have a function, because that makes it super clear to everyone that whenever you call the function, you get a new object back each time. I just wanted to provide some background on that. We discussed four options in the TG2 meeting: making it a function, leaving it the way it is, making it return a frozen object, or waiting a year and making it return a Record/Tuple. I don't think people want to wait a year, and it's probably more than a year, but we could wait for Record and Tuple; I really would like Records and Tuples, since they would solve so many problems, and I would like to be able to depend on them. But I think using a function is fine; that's my personal preference. I just wanted to provide some context, that's all.

BT: Awesome, thank you, Shane. Brad?

BSH: Yes, I'm just wondering what the argument against making these functions would be. Why shouldn't these just be functions? I don't see the benefit of them being getters instead of functions.

FYT: Let me respond to that. I think the only reason is that's how I originally wrote it; that's the status quo. Unless there's a reason to change it, there's a reason not to change it, right? But I'm not opposed to changing it; I'm just saying that's the only reason we haven't changed them to functions.

BSH: I would have a preference for them being functions because I've been working on a JavaScript compiler, and we really, really like to be able to assume that if you say a.b, that does not do anything surprising; it just gives you a value. If it acts like a function, I'd like it to be a function. But that's my opinion. Thank you.

?: Sorry, anyone still on the queue? Just wondering if there's anything else.

PFC (from queue): No need to speak; strong preference for function.

BT: And that exhausts the queue.

FYT: Okay. So let me propose this: a lot of people seem to prefer functions, and I think Daniel mentioned both are okay, so I'll go change the getters to functions for all those methods. Would anyone object to me doing that? Okay, thank you for your advice; that's exactly what I needed from you. I'll go make a pull request and ask a couple of people to review it, but basically it's just changing from getters to functions now.

FYT: Still, I want to go through the stage 3 activity. Around March 2022, which is over half a year ago, Chrome 99, Edge, Opera, and Safari all shipped it. Of course, now that we're changing from getters to functions, they'll have to make that change. Mozilla has a bug open, and we haven't seen clearly what is holding them back, so I would like to hear from Mozilla whether anything is blocking and whether we need to help them in order to ship. MDN has been edited, and test262 has the feature flag for this already. Here are some of the test262 results; the MDN table shows that basically only Firefox hasn't launched it yet. There's no single place showing all of this, so I did a bit of Photoshop work to put the different columns together, but you can click through to test262 to see the results. I think it would be nice to extend the testing there further.

FYT: So I'm not going to ask for stage 4 today, but I will go through what is needed for stage 4, because I plan to come back next year, in January, to ask for stage 4 advancement. My ask for the committee, and I'd also like to hear feedback from the Mozilla folks here: is there anything I can do to help you ship this? And does anyone have concerns about anything I've missed for stage 4? I would like to know now instead of waiting for next time. Again, these are the stage 4 requirements. I have begun the pull request, which will now have to be updated for the getter-to-function change. We do have two compatible implementations already, JSC and V8, but it would be really nice, before going to stage 4, to also have Mozilla shipping. So we'll come back here.

BT: Dan has an update from Mozilla.

DLM: Thank you. I checked with André before this meeting about his plans for the implementation on our side, and he would like to get responses to the two issues that he opened last year. One is related to how the calendar tag is being handled, I believe, and the other was a forward-compatibility issue. I think that's all that's blocking the implementation on our side.

FYT: I did respond to him. On the calendar issue I responded, but I haven't seen him reply for about half a year, so I don't know what else I could do. About the forward-compatibility issue: I think the issue concerns the time zone, and he just made a note that there could be something in the future we may need to worry about. So I don't know how to resolve that, and I don't think it's blocking your shipping, right?

DLM: Well, André would like a response on issue number 30. I guess he doesn't realize you were waiting for a response from him, because when I asked him a month ago, he still thought issue number 30 was open and he wasn't sure how to proceed. For the forward-compatibility issue, just having it resolved one way or another is probably fine from our point of view.

FYT: Okay.

DLM: Should I follow up with André, then, to see what information you're looking for regarding issue 30, and bring that up with him?

FYT: I mean, he asked whether we should consider the calendar, and I already addressed that in the reply: the Locale already has the extension key to address those issues. All the functions we have take the extension key into account, and whatever we construct is usually resolved at constructor time, so whenever the Locale is constructed we can resolve that there.

DLM: Okay, I'll bring that up offline.

FYT: All right, thank you. Okay.

FYT: Okay, I think my 30 minutes for this proposal are up. Thank you.

YSV: I think you can find a clarification of his concern about issue 30 in the Bugzilla bug, in his response to Dan; it specifically addresses your last comment. So if you want to reach out to him, I recommend doing that.

BT: All right, thank you, YSV. Awesome. Thank you, Frank.

Conclusion/Resolution

  • Getters to become functions
  • DLM to follow up with André offline about remaining blockers for Mozilla

Records and Tuples

ACE: Hello everyone. I'll be talking about Records and Tuples today, starting off with an anecdote. I like the fact that while I'm here right now talking about Records and Tuples, I'm also, in a way, here because of Records and Tuples: this is the proposal that, many years ago, got me looking at Bloomberg, which got me applying to Bloomberg, then working at Bloomberg, then nudging over to the TC39 folks, then to the Records and Tuples folks, and now to sitting here and talking about them. So they're dear to my heart for getting me here.

ACE: Records and Tuples go back a long way. Obviously they're in other languages, but in JavaScript they go back at least to Brendan Eich's "harmony of my dreams" post back in 2011, and possibly earlier. That post had lots of things we could add to JavaScript in the future, things like arrow functions, iterators, spread, and destructuring, and it also talked about records and tuples: you could put a hash in front of your object literal or array literal and get these frozen things that can also be compared with equals. There are other little things in that post, like the slice notation that there's now a proposal for. Fast-forward to what the proposal looks like now: at a very high level it looks exactly the same. I can put a hash in front of my array literal and I can compare these things. And with things we didn't have back in 2011, like Sets and iterators and destructuring, they just work: I can put my edges in a Set, check whether a tuple is in the Set, and destructure them out. That was an example using tuples; I could do the same thing with records, for instance when I want to be clearer and give names to things rather than use a pair.
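
A sketch of the syntax and the Set interaction just described (the edge data is illustrative):

```js
// Tuple and record literals: array/object literals prefixed with #.
const edge = #[1, 2];
const namedEdge = #{ from: 1, to: 2 };

// === compares contents, not identity.
#[1, 2] === #[1, 2];   // true

// They work as Set members (and Map keys) out of the box.
const edges = new Set([edge]);
edges.has(#[1, 2]);    // true

// Destructuring just works.
const [from, to] = edge;
```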

ACE: And we've got a playground that RBU made, which means you can play with these things.

ACE: So, some details about the proposal that make up its core: records and tuples are primitives. That wasn't the case in the Brendan Eich post, where they were objects, but the proposal models them as primitives; they have their own typeof. When you coerce one to an object, it appears as a frozen object, and Object.isFrozen on it is true. They are deeply immutable, and the way they achieve that is that they ban all objects: the proposal treats holding objects as a risky thing, so records and tuples are only allowed to contain other records and tuples and the other primitives. One note: when you're using the playground, typeof will be "object", because the polyfill uses objects.
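
Roughly, the primitive semantics being described, as specified at the time:

```js
typeof #[1, 2];   // "tuple"
typeof #{ x: 1 }; // "record"

// Only primitives (including other records/tuples) may be contained;
// plain objects are rejected.
#{ pos: #[0, 0], label: "a" }; // ok
// #{ el: {} };                // TypeError: objects are not allowed

// Coercing to an object yields a frozen wrapper.
Object.isFrozen(Object(#[1])); // true
```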

ACE: This is a slide for MF: the history of the proposal. I won't go through this, but for people who are maybe new to this proposal and want to go back and track the history, here's a handy slide of links. We've also got a bunch of things here; again I won't go through them, but if people want to look at the videos of conference talks that my fellow champions and people from the community have given, and some blog posts, there's lots of stuff out there. We also have a monthly Record and Tuple community call with notes, and we've talked about this quite a lot in SES meetings, and SES does a really diligent job of recording all their sessions. So there's something like 20 hours of YouTube content of us discussing this; if you're bored over Christmas, this could be a really great way to spend your time.

ACE: The core goals that thread through all those conversations, conference talks, and blog posts are that this proposal is really trying to give people a way of modeling their state and their data with deep immutability, being able to compare those things with deep equality, and being able to do all of that as ergonomically as possible, so that it's something they can easily reach for and use throughout their programs.

ACE: And then, if we look at the current proposal, these are the things that we as champions are proud of: we think what we've packaged up uses a lot of familiar things. The syntax is very familiar, and they just work with triple equals, Maps, and Sets. When people in the community talk about these things, in some ways they're limited, in that you can only put other primitives in them; but for people used to modeling their data with JSON, passing around JSON and serializing things to JSON, that model maps directly onto this, and you also get things like BigInt, null, and symbols. They're also very reliable: === equality has no side effects; they're immutable, so you can't have cycles; they can't contain getters or setters or proxies; they can't be infinitely deep or anything like that. There's also a degree of ecosystem compatibility, due to the fact that they work with triple equals and Object.is, and if you're just reading code you can't really tell they're not objects: optional chaining, indexing, checking whether a property is in these things, all of those operations just work. There are also other things we've tried to do. With JavaScript we can only ever move forwards, so we've used this proposal as an opportunity to fix a few things in JavaScript that bother some people: they have a nice toString of their contents, you can't have holes in them, and we've tried to avoid the prototype-pollution problem. But we're very aware that where the proposal currently sits is just one point in the design spectrum. For simplicity, I'm going to pretend that when you're designing a proposal you can imagine a single line of design; really it's an n-dimensional design space, and I'm not going to try to model that in a presentation. So for simplicity, proposals sit on this single line, and along that line there are other designs. So let me point them out on my line.

BT: I think maybe this would be a good time to answer YSV's question on the queue.

YSV: I've been thinking about this proposal a lot, and one way that I ground myself when I look at TC39 proposals is understanding what our motivation is. I think that will also help us here with the questions we're about to ask. What is the fundamental problem that we're trying to solve in the language by introducing these new data structures?

ACE: That's a great question. I would say that in some ways there isn't one fundamental problem we're trying to solve; it's a myriad of problems that could be solved independently, and can also be solved holistically. From Bloomberg's perspective, some of the problems we're seeing are people not using immutability enough, so they're passing around objects that are being mutated and encountering race conditions on things they're trying to use immutably; and it's not as ergonomic to use libraries to achieve this, because they find libraries give them the immutability they want, but at quite a high cost. So that's one of the problems this can solve. At the same time, Maps and Sets are great for certain problems, but you quickly run up against the fact that you can really only use things like strings and numbers as keys, which is fine for certain problems but limited, and when you want to do something more, it's quite a big cliff to climb, as the sketch below shows. The other set of problems is equality. What we see there is that while you can implement your own equality, say by installing a deep-equality library, that's only helpful when you're in control of the equality. Sometimes, when you're interacting with frameworks or other code, it's that other code doing the equality comparisons, and there's no option to pass in your own predicate. In that case you're limited by what that library offers; you can fork the library or raise an issue, but you're still limited by what it can offer. So I wouldn't say there's one fundamental problem; there's a whole collection of problems.
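
For example, the key limitation described is today's standard Map behavior (nothing proposal-specific in this sketch):

```js
const m = new Map();
m.set([1, 2], "edge data");
m.get([1, 2]);  // undefined: arrays are keyed by identity, not contents

// Common workaround: flatten the composite key to a string,
// losing structure and type information.
m.set("1,2", "edge data");
m.get("1,2");   // "edge data"
```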

YSV: That's been my observation as well, and I think we will get into this a bit more, but I think being able to concisely define what problem we are solving and why this feature is being introduced is an exercise that we should go through and have very clearly defined, because that will help us make decisions. For example, when we discuss immutability as a goal: immutability is the feature, and the problem that we are solving is something else, which is, for example, being able to be certain that the data structure you're currently operating on has the same shape as when you saw it last. So by nailing down what the goal really is, it will help us answer the questions we're about to come to.

ACE: Yeah, I agree with that 100%. Thanks, YSV. We can carry on draining the queue while there are things on it.

RRD: Yeah, YSV, I wanted to give you a short answer: it's mostly that multiple aspects are interlinked, namely deep immutability and deep equality. We think, and this presentation will show it, because ACE has done an excellent job, that those two aspects are interlinked, and we don't have an answer for solving those two problems separately.

PDL: So we started talking about this a couple of years back, before Robin introduced the proposal. The problem I was encountering was that I was really looking for a structured primitive: something that I can request as a parameter to a function and know that, no matter where it goes after that, it would never be mutated. We were doing a lot of copying as part of passing things into functions as parameters, because there was never a guarantee that the callback we were calling with that data wouldn't alter it. So, you know, I was writing a library, I was getting a client call, and I was returning data in a callback, and I would have no guarantee that it wouldn't be altered. I would have to do a copy every time, because my big data tree shouldn't be at risk from a user of my library altering it. That was the problem I was trying to solve, and it goes further, because it's the same problem as returning multiple data points from a getter. Yes, I can do that by combining them into a string, but then I have to parse that string again. So to me it was the structured-primitive aspect that was relevant, and a structured primitive, when you think about it, leads to consequences: immutability is a consequence of structured primitives, since all our primitives are immutable, and it leads to accurate triple equals, because all our primitives work with accurate triple equals. And it doesn't matter whether it's deep or not, because a primitive can't have deep structure growing out of it. So those were the core goals: an easy way to avoid action at a distance. That was the immediate need I was trying to fill.

BT: To Robin's point, I just want to break in real quick, sorry. ACE, I want to make sure you're watching your timebox here. If you feel like this discussion is going to come up later in the presentation, then let's definitely prefer to go through the presentation. So feel free to aggressively postpone the discussion items until we're through the slides.

ACE: Yeah, yeah, let's do that. Thanks BT.

ACE: So as I was saying, there are other points on this design spectrum that we're very aware of. We could take similar problems and imagine solving them, instead of by adding new types that interact with the existing APIs, by going along the other axis of the Cartesian product of what's possible in the language: instead of adding new types, you add new APIs. So we could have APIs for doing deep equality; this could be called anything, but imagine it's Object.deepEquals. And deep immutability could be Object.deepFreeze. And for things like Sets, perhaps there are new versions of Maps and Sets, or option bags you can pass to Maps and Sets, where you can control their equality. This would let you write some of the same kinds of code. Or maybe there's a point somewhere else on this spectrum that's a mix of the two, where we're not adding new types, so records and tuples are objects and arrays, but you can still triple-equal them. So there's a whole spectrum of design here that in some ways solves the same goals but has different trade-offs. As we've explored these designs, there are lots of things that come up, like prototypes, array-likes, indexing, and the toString behavior, but we want to classify those as second-order decisions and not focus too much on them. Not because they're not important, they are very important and we have our preferences, but they're very much second order, after what YSV was saying: focusing on what the core goals are that we're trying to solve, really nailing those and getting agreement on them, before going too deep into the second-order things.
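
A hypothetical sketch of that API-based point on the spectrum; Object.deepEquals, Object.deepFreeze, and the Set equality option as spelled here are all illustrative names, not proposal text:

```js
// Deep operations instead of new primitive types (hypothetical):
Object.deepEquals({ x: [1] }, { x: [1] });       // true
const config = Object.deepFreeze({ a: { b: 1 } });

// Collections with pluggable equality (hypothetical option bag):
const edges = new Set([], { equals: (a, b) => Object.deepEquals(a, b) });
```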

ACE: Out of those things, the one we really want to focus on, because it seems to drive so much of the proposal, is the deep-equality aspect. We did some research years ago into what libraries currently do for their equality operations. In the language there are various different kinds of equality you can get: sameValue, sameValueZero, strict equality. The thing I found most interesting when reading through the source code of these libraries is that the vast majority of them implement all of these things using triple equals as their building block, with React and Angular standing out as the ones using Object.is. A consequence of that: take Immutable.js, for example, which uses triple equals as its equality operation under the hood. With the current proposal, if I put a plain object in an Immutable.js set and ask whether that set equals another Immutable.js set containing a structurally identical object, it's going to be false, because they're two different objects; whereas when I put a record in those sets, it says the two sets are equal, using Immutable.js's own equality operation. So records work out of the box with such libraries for this reason; and note that Immutable.js, as I hinted earlier, doesn't let you customize its equality operation, which is hard-coded into the library. Also, Sebastian Lorber, who most recently is working on Docusaurus, wrote a really great blog post and did a conference talk about how these things work with React; I pulled one example out of his post. React, again, is a library that for the majority of its APIs doesn't let you customize the equality; it uses Object.is. So if you want to work with its APIs, you need things that are Object.is-equal, and records being Object.is-equal really helps them just work with React.
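
A sketch of the Immutable.js behavior being described (using its real Set.equals API; the record line assumes the proposal):

```js
import { Set as ImmutableSet } from "immutable";

// Plain objects: members end up compared by identity, so two
// structurally identical sets are not equal.
ImmutableSet([{ x: 1 }]).equals(ImmutableSet([{ x: 1 }]));   // false

// Records: === is content-based, so the same library code reports
// equality with no changes.
ImmutableSet([#{ x: 1 }]).equals(ImmutableSet([#{ x: 1 }])); // true
```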

ACE: Some of the research we did was into what people currently do for deep equality, and it's very easy, doing a search across GitHub, to see that our good friend JSON.stringify is very popular when you want to just use the standard library to compare things, and people also use this tool to get around the Set problem. But then you hit the problems of stringify: things like null and undefined, and the fact that properties in a different order give you a different string; you have to be careful of those things. And then, if I have a Set, I end up with a set of strings; I can't destructure my thing back out; I'd have to parse it back out from the string to actually get the contents.
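
The stringify pitfalls mentioned are standard behavior today, for instance:

```js
// Property order changes the string, so equivalent data can compare unequal.
JSON.stringify({ a: 1, b: 2 }) === JSON.stringify({ b: 2, a: 1 }); // false

// Some values don't survive the round trip.
JSON.stringify({ x: undefined, y: NaN }); // '{"y":null}'

// And the Set workaround stores strings, so the contents must be
// parsed back out before use.
const seen = new Set([JSON.stringify({ a: 1 })]);
const first = JSON.parse([...seen][0]); // { a: 1 }
```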

ACE: So when we talk about equality, these are some of the attributes that we find useful when discussing it, and I'll quickly read through, defining these terms. When we say an equality operation is consistent, we're saying that for the same inputs it will always give the same answer; the slide shows an example of an equality operation that isn't consistent. Symmetric: if A equals B, does B equal A? I looked at the Temporal proposal, and Temporal has equality that is not symmetric; it's based on the receiver. That's great in that you can compare different types, like a date against something that only has a day and a month in it, but as a result it means the equality isn't always symmetric.

ACE: And then, does it have side effects? If you have an equality operation that's going to trigger proxy hooks or getters, then that's an equality that could potentially have side effects.

ACE: And then, does an equality operation preserve encapsulation? If an equals operation says that two things aren't equal, not because they're two different objects under referential comparison, but because it's somehow seeing that a private field is 1 in one and 2 in the other, then that equality operation doesn't preserve encapsulation. And then, is it terminating? Again, if it's going to trigger getters or proxy hooks or things like that, then potentially, say it's comparing iterators, one of them could be infinite, so it would never terminate.
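
A sketch of the hazards listed, in terms of a hypothetical deepEquals that reads properties while comparing:

```js
// Side effects and non-termination: reading properties can run getters
// (or proxy traps), and a self-referencing value can recurse forever.
const noisy = {
  get x() { console.log("observed!"); return 1; },
};
const cyclic = {};
cyclic.self = cyclic;
// deepEquals(noisy, { x: 1 })  -> logs "observed!" while comparing
// deepEquals(cyclic, cyclic)   -> naive recursion never terminates

// Encapsulation: an equality that inspected private state would leak it.
class Wallet { #balance; constructor(b) { this.#balance = b; } }
// deepEquals(new Wallet(1), new Wallet(2)) === false would reveal
// that the hidden balances differ.
```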

ACE: And when we look at Maps and Sets, if you want your Maps and Sets to be well behaved, what we'd imagine you'd want from the default way a Map or Set works is an operation that is consistent over time, is symmetric, doesn't have side effects, preserves encapsulation, and terminates. That's why, when we were designing the equality in this proposal, we focused on an operation that has those attributes, and then additionally overloaded the existing operators, because those operators align with those attributes; it gives you the benefits I mentioned earlier of working with the existing ecosystem that uses those operators. One way I like to think about this proposal is that there are two directions you can go: you could either say that because these things are so immutable, that gives us a great opportunity to also define their equality this way; or you could say we're defining something that has this consistent equality, and because of that, they need to be immutable.

ACE: So, in summary, we think there is a lot of benefit to the fact that we've preserved that core all the way through from 2011: overloading triple equals and Object.is works with existing libraries out there, and it also works with things already in the language, like Array.prototype.includes, Map and Set lookups, or a switch statement. And that meets our ergonomic goals. However, what we're really...

RRD: Sorry to interrupt. Actually there is a question (from WH) that NRO might be able to answer.

NRO: Yes, I'm not sure exactly what the question is, but equality by definition means being reflexive, symmetric, and transitive, and we are trying to preserve those three properties.

WH: Okay; I wasn't sure why you didn't mention reflexivity and transitivity.

ACE: Yeah, it's mostly because I can't pronounce the middle one; those are definitely the more formal terms for what I said.

DE: Yeah. so it's an equivalence relation. That's the goal.

ACE: Yeah, exactly. It creates an equivalence relation.

ACE: So yeah, what's been really good about being in stage 2 is implementer feedback on the proposal. You think it works really well for end users, but then you see what actually happens when implementers try to implement the thing. I've done my best to summarize a lot of feedback onto one slide, so there's going to be some loss of resolution, and I'm really happy for implementers to correct and add to this. From SpiderMonkey and V8: there's a lot of complexity here. That was something mentioned when we went to stage 2: this looks like it's going to be complex, so it has to be worth that complexity. Naturally the follow-up question is: can we simplify this, can we reduce the complexity? And a thing I found during stage 2, when Igalia worked on implementing this in SpiderMonkey, is that one of our assumptions going in was that using objects under the hood would perhaps simplify things; but it actually turned out, with the way SpiderMonkey has implemented things (which is a really good way of implementing things), that there were performance implications of doing this. I'll let YSV expand on that; she'll explain it way better than me. And then also, maybe as a +1 from representatives of JavaScriptCore: there are also concerns about how we're making a very fundamental change to the language with these new primitives, and how that impacts the overall JavaScript programming model.

ACE: Going a bit more into the conversations we've had more recently with Moddable. For people who aren't aware, a refresher: the Moddable XS engine runs on resource-constrained hardware like microcontrollers. One of the ways they achieve that is their preload feature, and PST in the room can explain this way better than me, but the way I understand it is that you can configure your build to execute certain modules at build time; they then get stored in ROM, and that allows the program, when you actually run it, to start much quicker and use a lot less RAM. Part of that means that XS already has quite a lot of machinery around deep immutability, which goes beyond records and tuples, because they have this "petrify" operation: all the built-ins can be petrified, functions, promises, dates, ArrayBuffers, all these things can be deeply frozen, and that kind of immutability extends across all objects. So the feedback we got, from our understanding, is that the R&T proposal is limited in that it's just talking about primitives; it's expanding immutability by adding more primitives, and the deep equality we've added is then again limited to that realm. In their view that introduces some serious ecosystem challenges, and there's quite a bit of subtlety in how these things are added. They also echoed the implementation-complexity feedback given by V8 and SpiderMonkey.

ACE: So we're in this place where, while we were hoping to get ready for stage 3, and we think the proposal in terms of spec text is in really good shape and has a lot of good things to it, it's clear we're not ready for stage 3. So we really want to keep exploring these alternative designs to work out what we can do here. And this is a call to action: the more people that help give us feedback and engage with this, the better. If people are interested in this and have ideas or opinions, please do get in touch, or get on the queue. We also want to work out how we can get more feedback from more developers. One of the things we're saying is that we like how these things work with existing libraries, and while we've been talking to some library authors, like the author of Immutable.js, there are more libraries we could be talking with. So even though we have our own next steps, we want as much feedback and as many suggestions and ideas as possible on what people would also like to see in addition to this, and what other things the committee would like to see us doing. The more input the better.

ACE: So yeah, those are the slides, and now hopefully I've left time so we can actually chat about this proposal.

MLS: JSC has the same complexity concerns as the other browser engines.

BT: All right. Thank you.

SYG: Yes, I want to give some more color on the complexity concerns from V8, from when we chatted about this internally. This is strictly scoped to complexity and performance concerns, not to the language-design concerns. Speaking to the complexity concerns only: we think the main source of complexity is the equality operator overloading, and specifically the things you might do to speed it up if there were performance pressure, like interning. The more we looked at it, the more it became a fractal of complexity all stemming from that. Outside of operator overloading and interning, the complexity is probably not too bad. But we have operator overloading for === equality, and because it has syntax support, people will naturally assume that triple equals will be magically fast. We have worked with the champions to temper performance expectations there, and they have taken that to heart, and we're very happy about it. But because there is syntax for it, suppose there were naturally arising ecosystem pressure to make triple equals fast: the way we'd do that is interning, and interning something like records and tuples is, the more we looked at it, incredibly complicated. One recent thing we learned, that we hadn't thought about before, relates to the immutable-data use cases for templates, where the champions moved from boxes to symbols as WeakMap keys. If you can have something in the record or tuple that can participate in a weak edge via WeakMap, then the intern table has to participate in the fixed-point ephemeron marking that engines have to do for weak tables. No other intern table, like the string intern table, has to do that. This would be a new kind of complexity whose ramifications are unclear as yet. This is just to say that basically all of the complexity that we identified in V8 when chatting internally comes out of ===.

BT: All right. Thank you, Shu. Michael, I wonder if it would be possible for you to comment with more detail on your concerns here today. Do they align with Shu's, or do you have more color to add?

MLS: Yeah, sure, they align with Shu's. There are some other concerns, and it comes down to the weight of the implementation complexity versus the value gained by the proposal. We've talked about it internally. Yes, we can implement this capability, but you're basically adding primitives throughout the whole engine. We have multiple tiers and you need to go through all those tiers. It's not just implementing the C++; it's the JITs, the inline caching, the speculation checks, and things like that. It's a lot of work and we're not sure it's worth the complexity.

BT: All right. Thank you, YSV. Same question to you.

YSV: I'm also on the queue to share what our experience has been with reviewing Igalia's implementation in our engine and seeing how it fits in. For those who don't know, our implementation is based on objects: it implements records and tuples, which are specified as primitives, as objects in our engine. This was initially thought to be a good idea because we didn't anticipate quite so much variance, or that the complexity would be what we'd end up finding. What has happened is that primitives, as they're currently specified, require different sorts of optimizations than objects currently have. For example, we have shape optimizations on objects; primitives do not need this. Records need sorted keys, and they don't need the ability to add an additional property over time, modifying the shape of the object. As a result, when we're going through the JITs or any other part of the engine with our current implementation strategy, we never know whether what we have is an object or a record or tuple. So we have to add a number of if statements to check: where are we, what are we dealing with, and are we doing the right behavior for this thing? Because of the variance between our implementation strategy and the specification, it's difficult for us to audit and ensure that we've implemented the correct behavior for both, because we now have a specification that deviates greatly from our implementation. There is a potential penalty to object use because of all the additional checks and branching we need to do, and our recommendation, generally for implementations, is that the best solution here is to add a new primitive, which is a more expensive solution than building this off of objects. As a result of our implementation experience we've also been revisiting our initial feedback, which brings us back to the original problem statement. I think we all agreed that introducing immutability, as a solution for being able to trust the objects you're passing around through your code, was a good goal, but the proposal grew out from that. I should stop here and give other people a chance to talk, because this gets more into philosophical concerns, and I think we also want to hear from Moddable. So I'll pause here.

BT: All right, Moddable. Do you have any thoughts on this complexity issue that you would like to share, either in addition to or in support of what has already been raised?

PST: I think the presentation explained our concerns very well. As an implementer myself, I would agree with YSV that the way to go would be new primitives, which is a significant addition to the language. Look at the time it took everybody to implement BigInt. It's doable, but will it be worth it? To emphasize what has been said: we love immutability; we don't dispute that goal at all. And as Peter said when this moved to stage 2, the main objection there was about the overall design: it could go further, and not box us into the current shape, especially considering the work it would take to implement. I think there should be a better path forward; that would make sense. That's it.

BT: All right, thank you so much. Are there any other implementations that would like to raise implementation complexity at this point? Feel free to speak up or add it to the queue. All right then, let's go ahead. Oh wait, no, we just did Shu's issue. Sorry, let's go ahead with WH.

WH: I happen to like this proposal but I understand the implementation complexity challenges. The question I would like to find an answer to sooner rather than later is whether the intersection of everybody's constraints here is nonempty. How would we go about figuring out if there is any notion of value types that implementations might be happy with? Or perhaps the answer is that there is no form of value types that will work?

YSV: WH, this is a question that I also have. For our part, if we come to a conclusion with the committee today that the set of problems we are trying to solve precisely requires, and is best solved by, the introduction of a value type, then I think there will be a way forward for this. But I think we first need to establish precisely what we're fixing and why the shape of the constraints is what it currently is, because I have some concerns that a number of the problems we're solving with records and tuples are broader issues within the language, and they may be worth solving at a broader level rather than restricting them to value types.

WH: If we're not doing value types, then this is a very different proposal and we're not in stage 2 any more. I'd like some urgency behind this discussion because this entered stage 2 in July 2020 and we've been chatting about it since then and I’m not sure how much has changed lately.

RRD: We explored many different alternatives and different designs in that time. We tend to go back to this design.

WH: Yeah, I agree. It's a good one. And if we go off to a different design, folks who support this now might change their mind.

SYG: I'm going to try to clarify what YSV was saying; I don't know if I misheard. Yulia, did you say that if we do value types, you believe there is a way forward on the complexity concerns for SpiderMonkey?

YSV: No. In fact, what will happen instead is that SpiderMonkey will swallow the cost. I believe each implementation will need to make that decision for themselves, but only in the case that we are fully convinced that value types are the right solution here. Sorry, DE, I'm going to reference your queue item here: the argument is that value types are a result of the goal, not the goal itself. However, I have noticed that value types have been frequently cited as the goal, and certain properties of value types have been used as justification for the current shape of the proposal, which has put us into a situation where we cannot consider other shapes of the proposal for solving immutability. This is why I want to make sure that we're absolutely clear about what problem we are solving, because solving for value types is, in my opinion, a different problem than solving the problem of immutability for the language.

SYG: I don't want to get too much into the weeds of debating value types. I just don't understand what you mean by value types. Do you mean, like, a proposal generalizing user-defined value types?

YSV: In a sense, but I believe it would look very much like what we have now. With value types, though, comes a risk of bifurcating the ecosystem of the language. For example, there are some well-established needs for a deep equality operation for all objects. There's also reason to believe that users will still fall back onto objects for performance while needing immutable characteristics. And I had one more; ah, yes, cross-realm behavior may also be necessary. And instead of calling them objects, to make the parallelism clear: we have value types and reference types, that is, compound value types and compound reference types. The problems of immutability can be solved with reference types, and what I believe is currently a very interesting direction would be to build an immutability proposal out of the shared structs proposal. Shared structs are already seeking to preserve some behavior between different threads, and in addition they already have a fixed shape. If, on top of that, we introduce immutability, so that the values in that fixed shape cannot be changed, that may be an interesting addition to that proposal or a follow-on to it. In that case, we are not introducing both shared structs and records and tuples, which have a certain overlap in what they're doing. What we lose in this case, however, is the equality behavior, which has now been cited as a top-level goal of this proposal. That equality behavior is there so that we can key maps and sets more ergonomically; I would call it an ergonomic concern. However, that is arguably still necessary for objects that are held in maps and sets, and it would be very beneficial if, for example, the get method on such maps took a function, and we had, for example, a static method called Object.deepEqual. I'm going to stop there because I'm talking a lot.
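
As a rough userland illustration of the alternative YSV floats here: `deepEqual` and the predicate-based lookup below are hypothetical stand-ins, not proposed APIs, and this is a sketch rather than a design:

```js
// A naive structural equality check, in the spirit of a hypothetical
// Object.deepEqual (no cycle handling, for brevity).
function deepEqual(a, b) {
  if (Object.is(a, b)) return true;
  if (typeof a !== "object" || a === null ||
      typeof b !== "object" || b === null) return false;
  const ka = Reflect.ownKeys(a);
  const kb = Reflect.ownKeys(b);
  if (ka.length !== kb.length) return false;
  return ka.every((k) => deepEqual(a[k], b[k]));
}

// A Map lookup driven by a predicate instead of key identity, approximating
// "the get method taking a function". Note the linear scan: how to hash
// user-defined keys efficiently is exactly the open question with this
// approach.
function getByKey(map, predicate) {
  for (const [key, value] of map) {
    if (predicate(key)) return value;
  }
  return undefined;
}

const cache = new Map([[{ lang: "en", region: "US" }, "bundle"]]);
getByKey(cache, (k) => deepEqual(k, { lang: "en", region: "US" })); // "bundle"
```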

SYG: Thanks for the clarification. From a language design point of view I would welcome that direction, but I believe it would run counter to the main value that Waldemar would seek to get out of the proposal, which is value types. And so, shooting from the hip here, which was my original reply: the complexity that V8 has identified, which makes the current shape of the proposal unsavory for us to want to ship to the ecosystem, comes out of the value type part of it, not the immutability part. The equality operator, the triple equals overloading, the interning issues we talked about: those seem like they would only be compounded if value types were generalized to have general operator overloading, for example. So, barring cleverness that we haven't yet figured out, I daresay that shipping value types with operator overloading across the board, which may pressure us into designing intern tables that are fairly complicated, doesn't look good right now. I don't want to say there's no way it will happen, but currently we don't think it's worth it.

WH: It’s a matter of where you put the complexity. Value types have simpler semantics for users, but they're more complicated for implementations. On the other hand, if they make implementations too complex or slow things down too much, they may not be worth it. But value types do have a simplicity benefit for users. To give an example, it's great that we cannot distinguish two strings with the same contents in the language; it would be horrific if strings each had their own identity and you had to track that identity across functions.

DE: WH, I completely agree with you here. The idea is that this design is what it is because we think it meets users' goals. I can understand suspicion that this proposal, rather than trying to meet JavaScript developers' goals, is trying to fulfill the longtime dream of value types. I think we shouldn't let that distract us too much. There's been a lot of research by TC39, going back before I joined the committee, by Waldemar and others: how could we make it so that we could have more kinds of primitives? How could we make more identity-less values, and what would that look like? I think we should consider this neutrally. The research was an investigation into logical implications and the ways things could fit together; it influenced the proposal because it was thinking about fundamentals. The reason this proposal adopts some of the results of that research is that they fit into the core goals around deep immutability and equality. I disagree with what YSV said about how equality can simply be separated. As WH asked: what are the properties of this equality, and is it an equivalence relation? The equality here, because it's on immutable values, is an equivalence relation. If we invoke user code, it will not be an equivalence relation; if we compare objects that have mutable properties, or anything mutable about them, it's not an equivalence relation. So that doesn't provide a basis for well-behaved maps and sets; that's a trade-off. We could decide we want maps and sets that aren't necessarily well-behaved, but it's a fundamental difference. We can make other trade-offs here, both about whether we want operator overloading and whether we want to take these as goals. But I do think immutability and deep equality are inherently linked, because only immutable things can have an equivalence relation defined on them for equality, unless you don't recurse into the object and use an identity-based comparison. So if we want to drop some of these goals, then maybe we shouldn't have value types in the language. I'm very happy that various implementers have spoken up about their concerns; as Waldemar said, we've been discussing this for a couple of years, so it's great to get on the table that maybe we shouldn't go in this direction and should do something else. It's not clear to me; I hope the future queue items and further discussion show it, because this really is a cost-benefit analysis. I want to hear more from developers who might benefit from this so that we can ascertain that trade-off.

BT: Thank you, DE. I want to point out that the time box is running relatively short here, and the queue is growing. Concision is important, but I also think the champions would probably get more value out of a breadth-first exploration of the topics than a depth-first one. I am willing to help, but the champions are in a better place to time-box individual topics. So feel free to move the discussion along to another topic, and I can advance the queue when you think it's appropriate. We can circle back if there's time.

?: Maybe, Shu, you can make your point quickly, because it's in answer to DE.

SYG: Right, this is in response to DE saying that deep immutability and deep equality are linked. I believe the link only goes one way: if you really want deep equality, then yes, you do want deep immutability. But I don't see the link the other way. That's all.

DE: I agree.

MAH: This was originally motivated by something YSV said. For equality there are really two different places where equality matters. One is in maps and sets, where we could imagine ways of making that work with a slightly expanded API for maps and sets. However, as ACE pointed out in the presentation, that requires the creator of the map or set being added to to adapt their implementation, so it wouldn't be compatible with existing libraries. If we want triple equals to work, that's a slightly different problem: there it's the creator of the value that needs to do something to make triple equals work, not the consumer. And again it goes back to the goal here. What do we want? Do we want seamless ===? Do we want seamless usage in maps and sets for these values? My concern is that if the thing we need to solve here is only one of those problems, which does require immutability, maybe there is a smaller proposal.

ACE: Yeah. I know you're on the queue, RRD, but I would really like to come to SFC; Shane has been on the queue for a really long time.

RRD: Yeah, I know. I just wanted to add something here: === also matters for libraries, as ACE showed in the presentation, and that analysis is important to us.

SFC: Structured map keys are a really big use case that comes up a lot in internationalization, especially when dealing with things like language identifiers and many other structured identifiers that can serve as cache keys or for data lookup; that comes up very, very frequently. In terms of triple equals: it's better than having to call an equals function, but there are less-ergonomic workarounds. I think structured map keys are the big thing that this proposal uniquely solves. The other one is immutability: you have data that you need to return out, and you don't want clients to be able to mutate it, because you want to share it among many clients of a particular class. I think that's another aspect this proposal uniquely solves. I definitely support the direction this proposal has gone since it's been at stage 2, and I support the shape of the proposal at stage 2, but if we must go back to the fundamentals, these are the biggest things, at least from my perspective, that make me most excited about it.
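
A hedged sketch of the structured-key situation SFC describes; the identifier shape here is illustrative, not a real Intl API:

```js
// Today's workaround: serialize the structured identifier into a string key.
const cache = new Map();
function keyFor(locale) {
  // Order- and format-sensitive; every caller must serialize identically.
  return JSON.stringify([locale.language, locale.region]);
}
cache.set(keyFor({ language: "en", region: "US" }), "data for en-US");
cache.get(keyFor({ language: "en", region: "US" })); // "data for en-US"

// Under the proposal, a record would key the map directly (proposal syntax):
//   cache.set(#{ language: "en", region: "US" }, "data for en-US");
```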

BT: All right, thank you, Shane. In the interest of breadth, would you mind if we went to Romulo first?

RCA: I just want to offer a little bit different point of view, not related to the technical difficulties or the design required to make this proposal a reality, but coming out of the TC39 educators call, where the majority of attendants are people who spread the word about JavaScript. Sometimes in one meeting we have a reach of people with more than 300,000 followers who share about JS, and everybody's pretty excited about records and tuples. After one of the last calls, where NRO presented about records and tuples, we had several Twitter posts and videos with a reach of thousands of views and excellent comments, and people are really expecting this to become a reality in the near future. So I'm just voicing the feedback from the community and from the people who spread the word about the language: this is really important for them, and I think that in userland they are really expecting to have R&T as soon as possible.

YSV: Just a quick note in response to RCA there. I've also been having conversations with developers, and when I bring up the possibility that records and tuples might not be primitives, but explain the alternative stopgaps that might be put into place to give good ergonomics and equivalent behavior, oftentimes they say: "oh yeah, actually my use case isn't specifically tied to these being value types". I think that's an avenue I would like to explore before we decide to introduce this. In particular, I want to make sure that we fully review the constraints that have come up from the shape of this proposal. I'm still not entirely convinced that some of those constraints are not, in fact, a consequence of the shape of the proposal, rather than coming from an attempt to solve a fundamental problem with the language. I can't remember what I originally planned to say with this item, so maybe I'll just get into what would move the needle for us here. Before we commit to the specific shape we currently have, I think we want to make sure that the value types solution really is fundamental: that the problem we are solving is, in fact, the feature that we have, rather than a series of wants from various corners, in response to certain deficiencies that exist more broadly in the language, that have resulted in the desire to carve out a new space where we don't have to deal with certain histories of the language. This is most evident from the definition of tuples which, in addition to being immutable and having the equality behavior, also solves an underlying issue with arrays: that reading an index beyond the elements an array actually owns will go up the prototype chain. I'm not saying this isn't a valid goal; it is. But arguably this is a problem with arrays, not a problem related to immutability; tuples are a broader solution. What I'd like to see is for us to explore an ergonomic way of constructing anonymous records and tuples, perhaps based off the shared structs proposal. We could, for example, reuse the existing struct syntax: structs have very similar restrictions to records and tuples, in that you cannot change the shape. Of course, shared structs potentially have certain problems to explore in that area; for example, the constructor can leak through the this keyword. I'm not sure how much work I'm asking from the champions here, and of course I'd be willing to contribute what little time I have; I'll mention this to the champions a little later because I don't have much time left. So that's one thing I'd like us to explore, in order to determine whether we can solve the ergonomics issue around being able to deeply freeze objects. And I'd actually like to call on Moddable again once I'm finished, to talk about their concern that immutability should be a feature of objects rather than part of the nature of an object. The second thing that would move the needle for us is if the committee agrees that the partial fix presented by introducing these value types, for example fixing the array prototype-chain behavior only in tuples, is preferable to a generic fix for the language. This also applies to the question of deep immutability.
There was also something else that I can't remember off the top of my head right now. And then, finally: if we go forward with the value types proposal, are we at peace with the fact that we will potentially be bifurcating the language? A number of the features that would solve these issues more broadly in the language will probably come up in the future. For example, deep equal is already being discussed. We may come up with a new mutable type called list that fixes arrays; what do we do in that case? Then we'll have lists and also tuples. We already have shared structs, which also overlap with tuples and fix a subset of these problems, but not all of them. So that is one of the three questions I'm posing: is this something that we are at peace with, and is this a good thing for the community, also from a learnability perspective? And that all comes back to what our fundamental goals are.
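
The array behavior YSV refers to can be demonstrated in any current engine; a minimal example:

```js
// Reading an index the array does not own walks the prototype chain,
// so prototype pollution leaks into ordinary element reads.
Array.prototype[1] = "polluted";
const arr = [0];
arr[1];   // "polluted", not undefined
1 in arr; // true, found on Array.prototype
// A fixed-shape immutable tuple cannot exhibit this, but, per the argument
// above, the fix could also target arrays/lists rather than value types.
```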

ACE: We're about to hit the time box. We would really welcome all this feedback, and the more engagement the better, because the more feedback this proposal gets, the better. If people would like, we can shift the monthly call to different times and days of the week; we're very flexible. If there's anything we can do to make it easier for other people to chat with us and keep exploring the space, because as is clear there's still a lot to discuss, just let us know. We're very flexible.

BT: I think we have exhausted the time box for this item, and it is also time for lunch for the folks who are local. So, what next?

DE: Can we discuss SYG's point so we have a conclusion for this topic?

SYG: Hopefully limited to one minute, if YSV and MLS are still in the room as well. The action item I would like to suggest for the implementers here is something that crystallized for me during our internal discussion: this question of value types versus immutability, where the complexity comes from, and what our internal calculus is around the trade-offs in that space. Most of our skepticism and pause comes from value types. So the action item I would like to present to the implementers is to have the difficult discussion internally on how hard your stance would be: if this pivots to value types as the true value-add here, would you be okay with that? If not, I agree with what WH was saying earlier that there should be some urgency here. If we as implementers are ultimately not comfortable implementing value types, and it's trending that way right now, we should directly ask that question of our teams and then figure out a position, because right now it's kind of muddled. Right now it's hard to ask the question internally: is it immutability? Is it the maps and sets thing? But if there were a crisper question, you could ask it internally and get a clearer answer. So if it's becoming clear that the thing people really want is value types, then given that signal, are we okay with doing value types?

MLS: Well, I will quickly respond for JSC: value types are the main driver of the complexity that we see; immutability is not as much of an issue.

SYG: That is also V8's position. What I'm asking is: if the committee, implementers excluded, says that value types are where the value is, would that be a hard block from you all?

MLS: Well, if it's in the standard, we should implement it. We know we can do it. It's just a lot of work and we're not sure it's worth it. It is a question of ROI.

SYG: None of us as implementers are sure it's going to pay off; we're kind of skeptical that it will. And it's up to us, I think, to really give a go or no-go on stage 3 here, if value types are the direction. So I think we should ask that question internally and get a more definitive answer.

DE: Conversely, would you be open to maps and sets with a JavaScript-defined comparison operation? That is something to look into asynchronously; it's something that YSV proposed.

SYG: I'm open to looking into that async. We haven't discussed it enough, and I haven't thought about it enough to give a response right now.

DE: Okay, thanks SYG, that's a very constructive next step. I appreciate it.

YSV: I do think that one requirement also would be to really get a concrete sense of what we are trying to fix with the value types that excludes other solutions here. And I'd like to hear from Moddable as well; we haven't heard very much from them.

PST: I agree with everything that's been said by SYG and MLS and YSV: we should investigate more what's involved with value types and all those things. I just know by experience that it's a significant undertaking. On immutability: what we use is something that exists in the language, which is the integrity level. We use that to put a lot of things in ROM, including structured data and so on, and the great feature of that approach is that all these things remain what they are, objects, arrays, and so on, so they just work; you simply cannot change them. One way we would like immutability to progress is to have more intrinsic, fundamental objects like arrays enter that game. What we don't see in this proposal is a path to that, and that was the main reason why we jumped up and said: oh, that's not going the right way. Before even talking about deep equality and so on, I think we should look at deep immutability in the way it has always been implemented in the language itself, which is freeze, isFrozen, and all those things, and we believe that can go further. That's the point of saying that immutability is a feature, not a nature. I have time later, after the meeting, so if we want to talk more, I'm ready for that.

BT: All right. thanks everyone for your thoughts. That was super productive.

Remaining queue items:

1 (DE) Is the committee open to a built-in type of Map/Set which runs JS during comparison?

2 (DE) What would change with the shared structs alternative besides equality being excluded?

3 (SFC) (reply to DE) A Map that runs JS during lookup raises questions about the underlying map impl (HashMap vs BTreeMap) that value types can avoid

Conclusion/Resolution

  • Implementers and champions to further discuss tradeoffs

Module and ModuleSource Constructors

Presenter: Caridy Patiño (CP)

CP: Today, I want to present the Module and ModuleSource constructors. We presented this about two plenaries ago; this is part of module harmony, and we colloquially call it "layer 0". I wasn't there yesterday when KKL presented, but I believe he covered the use cases for the various module-related proposals, so I won't cover the use cases today. Essentially, layer 0 provides the first-class Module and ModuleSource constructors. These are going to be two new global values. It also extends the import statement, specifically dynamic import, to operate on module instances rather than just string values that represent module specifiers. During Kris's presentation, you probably noticed that layer 0 is almost everywhere. This is a foundational piece that provides the mechanics for having a new type of structure that allows you to import a module instance rather than just a string value. In terms of the changes with respect to the last presentation: when it comes to the module source, nothing has really changed in the ModuleSource constructor, which is also going to be a global value. It expects a source argument to be provided; that's going to be the module's source, and this source is going to be parsed. The high-order bit about this constructor is that it gives you no power: it simply parses the source code and stores all the information about the parsing process, but it does not capture any information about the place where the constructor was used, and therefore it doesn't contain sufficient information to resolve any of the specifiers that might be used as dependencies of the source. It would require additional information from the developer in order to actually operate on that module. So this source is just access to the parsing mechanism. To add more detail: creating a ModuleSource instance gives you no access to any associated information; it is simply parsing.

CP: Once you have one of these instances, you will be able to create module instances, and each module instance is associated with one ModuleSource object. From one source you can have multiple instances. Additionally, these ModuleSource instances are immutable. They only contain an internal record with all the details of the parsing process, and this data can be shared across different realms, which makes these instances portable to some degree. You can serialize them; you can pass them from one worker to another, and so on. That's one of the areas we have to explore a little more, but at the very least, ModuleSource instances are things you can use to transfer source from one realm to another, from one process to another, from one page to another, and so on.
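
A speculative sketch of that portability: whether ModuleSource instances can be structured-cloned or transferred is precisely the area the champions say still needs exploration, so the postMessage support shown here is an assumption, not specified behavior:

```js
// main.js: parse once, then (hypothetically) hand the parsed source off.
const src = new ModuleSource(`export const x = 1;`);
const worker = new Worker("./worker.js", { type: "module" });
worker.postMessage(src); // assumes ModuleSource is serializable

// worker.js: instantiate and import on the receiving side.
self.onmessage = async ({ data: source }) => {
  const ns = await import(new Module(source));
  console.log(ns.x); // 1
};
```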

CP: When it comes to the Module constructor, again the TypeScript definition is shown here: there will be another global value, called Module. This has changed a little since last time. Specifically, two main changes were made. The first is the signature: still the same kind of shape, but slightly different. Additionally, we made the source property, which is the only property accessible on module instances, optional, meaning some module instances might not have a source available; I will go into more detail on that. The semantics of the Module constructor are basically that it has an internal Module Record, with a one-to-one mapping between a module instance and the corresponding Module Record, and that importing one of these instances will always produce the same namespace object. You might have multiple module instances; each of them will be associated with a single namespace object. To provide more detail: every time you create one of these instances, it is going to be what we call a fresh instance. No linkage, no initialization, no evaluation; it is simply waiting for you to import it so you can kick off the process of linkage and initialization.

CP: The process of importing one of these instances is also transitive. Once you start the import process, it's equivalent to importing a string specifier: it kicks off the process of resolving the dependencies, linking them, initializing them, and getting them to the end state, and that end state is represented by a namespace object.

CP: When you try to import one of these instances, it also sets in motion the process of resolving dependencies, and that resolution happens via the import hook; we'll get into more detail on the import hook on the next slide. But it is important to highlight that the import hook is going to be called for any unsatisfied dependency. When we encounter a new module specifier, whether a static or dynamic dependency, meaning a static import statement or a dynamic import, it will be checked against the memoized modules stored on the Module Record, and the import hook will eventually be called if needed. So the import hook will be called only once per module specifier. Finally, the Module constructor is pretty much equivalent to the Function constructor in terms of mechanics: the Module value that you're using has implications for what realm the module instance and the namespace object are going to be associated with. This is only important for iframes, and probably the VM module in Node.js, where cross-realm references are allowed. But it's important to highlight that it is bound to the realm, and therefore you can do the same things you do with the Function constructor.
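
A small sketch of the identity semantics just described, using the proposed globals (these are not shipping APIs):

```js
const src = new ModuleSource(`export const x = 1;`);
const m1 = new Module(src); // fresh instance: unlinked, unevaluated
const m2 = new Module(src); // another fresh instance of the same source

const ns1 = await import(m1);
const ns2 = await import(m1);
ns1 === ns2; // true: one namespace object per module instance

const ns3 = await import(m2);
ns1 === ns3; // false: distinct instances get distinct namespaces
```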

CP: In terms of the module hooks, this is the TypeScript definition again. There are two hooks, which we talked about last time. The importHook is pretty much the same. The only change is that we now only receive the specifier; we decided to remove the referrer argument, because you can implement that in userland, and we can go into the details if anyone has a question about it. But we don't necessarily have to have the referrer as part of the spec or part of the options you can provide; you can implement it in userland.
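
A hedged sketch of the userland referrer technique alluded to here: each module's importHook closes over that module's own URL, so the engine never needs to pass a referrer. The `makeModule` helper and the fetch-based loading are illustrative, not part of the proposal:

```js
function makeModule(sourceText, referrerURL) {
  return new Module(new ModuleSource(sourceText), {
    async importHook(specifier) {
      const url = new URL(specifier, referrerURL); // resolve vs. referrer
      const text = await (await fetch(url)).text();
      return makeModule(text, url); // children close over their own URL
    },
  });
}
```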

CP: The importMetaHook wasn't a hook before; I believe in the previous presentation it was an object. Now we can access that object and add details to it when you create a new module instance; we'll see some examples. Just one more slide: the importMetaHook is simply a function that will be invoked whenever the import.meta syntax is used in the source that you're trying to import, and it will be invoked using the handler as the context. This is the interesting part of the presentation, at least the biggest change we have made. The module handler contains these two hooks, and they are optional; the module handler itself is optional. So if you don't provide a module handler, you get the default behavior of the host.

CP: So why did we choose to have a handler? You should not confuse it with the handler we use for proxies. The proxy handler is live: we cache the handler, and every time we need a trap we access the trap off the handler, live. In this particular case we chose not to go that way, and instead to use something more similar to what we see in the DOM APIs for web components, where the hooks are stored during construction. When you create a module instance, we inspect the handler, if provided, take the two hooks, and cache them; we also cache the handler itself. Then, when calling those hooks, we provide the handler as the context. As a result you lose no generality: you are still able to create a handler that behaves live if you want to, and this gives the engine the possibility of doing more optimizations when triggering these hooks. So those are considerable differences from the proxy implementation; and given that it's called "handler", we could change that name, but hopefully people will not get confused by it. Here is an end-to-end example. First, you create a ModuleSource instance and specify its source, in this case a source that imports foo and calls it. You can then create a module instance out of it, providing the handler with the hooks. In this particular case we provide an importHook that, given a specifier, resolves the specifier against the current context (the import.meta.url of the current module), fetches the source, gets the text, creates a ModuleSource instance out of that text, and returns a module instance for the new source with no special behavior: it uses the default behavior of the engine. This is why you don't necessarily have to provide a handler: if you don't provide one, you get the exact same behavior as when you do a regular dynamic import on the host. Then, in the importMetaHook, what we do in this example is just augment the meta object provided by the hook with the import.meta that we already have available in the module shown on the screen; by augmenting it you are simply reusing the meta information you already have. And finally, the last line shows how you import one of these module instances: you simply call import() on it. That kicks off the process of fulfilling the dependencies of the module you are importing. It depends on foo.js; that goes through the importHook; you fetch the source for foo.js, carry on the resolution process, create a new source, create a new module, and return it. That way you connect the dots: the module dependencies get linked and evaluated, and the module namespace is finally returned on the last line. So that's an end-to-end example. I'll give a few more examples, but the first thing we want to highlight is what we call the kicker, and this is common to most of the module harmony proposals: they all use the same mechanics, where dynamic import is what kicks off linkage, instantiation, evaluation, and so on.
This is very important; we spent a considerable amount of time on the kicker. We tried different options for a kicker different from import(), in terms of new syntax and so on, and we ended up backtracking on that; we're back to import(). We believe this is the best option we have. Obviously, today folks use import() with string values; now they will be able to use it with string values and with module instances. That's the big difference. We have encountered no issues in terms of web compatibility, so we think it will be fine there. In terms of the identity of modules: we can create one source, create one instance, and import that instance multiple times; we're going to get the same namespace object regardless of how many times we import it. Obviously, by the second import the instance will have been evaluated, so the namespace will be returned, as a promise, right away; there is no need to redo the evaluation. It's effectively similar to importing a string value: import it twice and you get the same namespace object. In terms of reusing module sources, which we talked about before: you can create as many modules as you want from one source. In this particular case we have two instances; these two instances are different, and when you import each individual instance you get a different namespace for each, because the relationship between module instances and namespace objects is one-to-one, while you still have one single source. As I mentioned before, this proposal is part of module harmony; there are many proposals under this umbrella, so we need to figure out as a team how they play together. In KKL's presentation you probably noticed that layer 0 is everywhere; it is a foundational piece, as I mentioned before, and hopefully we can advance it. I will go over the intersection semantics with some of the module harmony proposals, but if you have any questions we can go into more detail. The first one is the intersection semantics with module expressions: with a module {} expression you'll be able to create an instance, and what you get is going to be an instance of Module.
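
A reconstruction of that end-to-end example as code, assuming the proposal's Module and ModuleSource globals; the hook bodies are illustrative rather than taken verbatim from the slides:

```js
const source = new ModuleSource(`import { foo } from "./foo.js"; foo();`);

const mod = new Module(source, {
  async importHook(specifier) {
    // Resolve relative to the current module and fetch the dependency.
    const url = new URL(specifier, import.meta.url);
    const text = await (await fetch(url)).text();
    // No handler on the returned module: that subtree falls back to the
    // host's default behavior.
    return new Module(new ModuleSource(text));
  },
  importMetaHook(meta) {
    // Reuse the meta information this module already has.
    meta.url = import.meta.url;
  },
});

// The "kicker": dynamic import links, initializes, and evaluates the
// instance (and, transitively, foo.js via the importHook).
const ns = await import(mod);
```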

USA: Just a reminder you have around 10 minutes left for the slides.

CP: Yeah, almost done; two more slides. The source of that instance is also available, and is also going to be an instance of ModuleSource. And finally, you will be able to import that instance just as before. In terms of the desugaring: creating a module expression is pretty much equivalent to creating a ModuleSource and then creating a Module with that ModuleSource and the default behavior, so no handler. These two are equivalent; obviously the second one will trigger CSP default(?) rules, but effectively they're the same. In terms of module declarations, the same applies: you have new syntax for module declarations; this creates an instance, that instance is also a Module instance, and it has a source as well. And finally, in terms of compartments: we split that up into multiple proposals, and layer 0 is one of them, but the intersection semantics are that a Compartment could provide a default host implementation, and a Compartment can also do certain things with the Module and ModuleSource constructors; we can go through all of that. So that's pretty much it. We're requesting stage 2. We can go into the questions. Let me see if I can see the queue.
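
A sketch of those intersection semantics, using the module-expression proposal's syntax alongside layer 0 (proposal syntax; not runnable today):

```js
// A module expression evaluates to a Module instance with a source,
// roughly sugar for new Module(new ModuleSource(text)) with no handler.
const mod = module { export const x = 1; };
mod instanceof Module;              // true
mod.source instanceof ModuleSource; // true
const ns = await import(mod);       // the same kicker as everywhere else
```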

USA: perfect. First up, we have KG.

KG: So I understand the point of the Module constructor in terms of the module expressions proposal and the import reflection proposal, where you need to have a reified representation of modules; the Module constructor makes sense for that API. I don't understand the point of ModuleSource. Why are we adding a new kind of eval? I missed that.

CP: Yeah. So, first of all, there is a desire for a mechanism to transport these sources from one realm to another; think about workers, for example. You might get the ModuleSource in one particular realm, then transfer it to another one; you need some sort of portable mechanism to do so. Also, if you don't have a source, then you will not be able to create multiple instances of that source: you have only modules, and then you pretty much create a copy of a module. We can explore trying to eliminate this. Internally, in the spec, we don't necessarily have this one-to-one mapping between ModuleSource and the Module Record; if you look at the spec, we actually split it up. The Module Source Record is really the things that are inferred from parsing the source, and the things that are needed for the mechanics of the module system are separate; those are two separate pieces. So the ModuleSource has a Module Source Record that contains all the information about parsing, and the Module Record simply uses that Module Source Record for the parsing pieces. We can explore eliminating this, but there are use cases that require you to use the source to create new instances. A good example, I think, is testing, where you want to test a particular module and create copies of it, and so on; you get that from the source. Another very interesting aspect is bundling: you might create or use module instances, module expressions, and module declarations to transport the source without kicking in the evaluator, and you can decide when to create an instance of it if you have access to the source. Otherwise, you just get an instance, and the only thing you can do is import it.

KG: I'm still interested in talking about the concrete use cases. What I heard was that you want to re-evaluate things in different realms. I don't fully understand that: for that to make sense, you would have to have a use case for evaluation of a string in the first place, and that's the thing I'm trying to get at. You mentioned testing; I don't understand the testing use case. You mentioned bundling, and I had understood that module expressions and module declarations were the intended things to help bundling, because, of course, if you have the source text available at bundle time, there's no reason you need to do this dynamic eval. So can you say more about what the use cases are for the dynamic eval? Because I'm still not getting it.

CP: Yeah, so at some point you will have to have a source that you have to evaluate. I think the main concern, from what I'm reading, is that we have this new evaluator now, whether you use a ModuleSource, or you eliminate that piece and go straight to the module instance and provide your own source when creating it. I'm trying to read exactly what the concern is, KG. If the concern is that you have a new evaluator, then we can talk about the use cases of having a module source as a string value that you have to parse.

KG: Yes, that's my concern. Having a new kind of eval.

CP: So, module expressions and module declarations obviously minimize the need for a new module evaluator. But there are cases in which you want to stitch together the source, not load it through the module system, and have it ready for you to evaluate, especially for legacy systems. If you think about it, there are people doing modules today by just stitching sources together, with comments, into one gigantic JavaScript file; that's an example we have seen of stitching modules together, where you then reconstruct your module graph on the client side using AMD, for example, or something like that. At some point, you have a module source available to you that is not a module expression and is not a module declaration. What do you do to make that a module that can participate in the module graph?

USA: CP, sorry to interrupt, but we're almost out of time and the queue is very long.

KG: Okay, I will move on, but I am still not convinced, so I do not want to advance ModuleSource to stage 2. But we have lots of other things to talk about.

KKL: This is Kris; I want to address one of the use cases for having a ModuleSource, for Kevin. ModuleSource is not itself an evaluator, of course, but as you point out, it is certainly a way to pass to evaluation through dynamic import. One reason for it to exist, apart from that, is that ModuleSource gives us an anchor for doing analysis of the shallow imports of the source. If you were implementing a bundler in, say, Node, for example, you would need ModuleSource in order to do the analysis, as opposed to just using something like Babel; the advantage of having it in the language is that it guarantees you get the same analysis as you would natively on the same engine. It is certainly also possible for the Module constructor to take a string, but I like having the separation, because the ModuleSource constructor is separately deniable from the Module constructor. And then, of course, the other thing is that the ModuleSource constructor is not necessarily the only constructor of valid sources for the language, as NRO is likely to point out. It's also potentially possible, in the fullness of module harmony, that the module source slot could be filled by, for example, a WebAssembly module, or other types that natively implement module sources.

USA: Sorry again for the interruption, but we are already over time.

CP: It seems that we have a lot of questions; can we get, like, five minutes?

USA: Unfortunately, we don't have any time.

GB: Sorry, if I can add: we can potentially give up some time from import reflection if we want to continue this discussion now. LCA, I don't know what you think about that. How much time are we donating?

LCA: What do you suggest? No more than 10 to 15, certainly.

RPR: All right, we'll finish this at quarter to the hour. So, CP, please manage the discussion and figure out if you want to ask for stage 2, and do so with at least two minutes remaining. I can't guarantee more than that guidance: it's up to you to manage the remaining time box. You can tell us which items in the queue you want to address, but if you're going to ask for stage 2, you will need to do it by 13:42 at the latest. Is that clear?

CP: Yeah, okay. So we talked about source strings; there are some replies there. There are only three more new topics: the second-argument discussion, which we can talk about; one from GB; and one from YSV about whether the module source text is exposed. And security.

DE: Why don't you go ahead and call on somebody?

CP: Sure, let's start with GB.

GB: Yeah, I just wanted to bring up the point around the second argument to the Module constructor. It's not a pressing topic, so we can potentially skip over it very quickly, but I do think there could be a benefit in having a single load-hook function which, within what was previously a compartment, could be shared between the different modules, so that you only have one of those functions, and then you can have that sort of singular handler, potentially, if you want. I was just wondering if there's any thought around that. But yeah, we don't need to spend too much time.

CP: Yeah, that was one of the exercises we went through, and we concluded the handler is the better choice. We can talk about that offline in the SES call and go through the details. It's a good question.

YSV: Is there any plan to expose the string that constructs the module? Because this would also impact other proposals, I assume, and make introspection of the full source text of modules available. Is there a plan to do that?

CP: Yeah. The reason why, as I think I mentioned at the beginning, the source property of Module is now optional and might not be present, is that some modules might be created whose source is not going to be available for people to use to create another instance of that source. There are a few examples of that in Node: for example, you might not want to have more than one FS module in a realm, and in that case you don't want a source to be available, because you don't want to keep that around. So there are cases in which we simply don't want to expose the source of the module instance, and for those cases the source is undefined, basically, or null; we can talk about that. Beyond that, if we decide to remove ModuleSource altogether, then there will be no way for you to get, from a module instance, the source text that was used to create it.

YSV: So I take it the answer is no; we won't be exposing it like Function.prototype.toString or something like that.

CP: Correct. SYG?

SYG: My topic now? Okay. So I want to better understand the motivation. I suspect the motivation for the hooks is virtualization. This is a multi-part question, but the first part is: do the hooks solve use cases and problems other than virtualization?

CP: Yeah. In fact, the main one is bundling: think about when you want to load a humongous amount of code in one single file. You'll probably be using module declarations or module expressions there, and you have to connect the dots between them at the end. The developer experience is the same as if they were separate modules that are linked, with dependencies between them. There is no rewrite of the original source of the actual module: you get the actual source, delivered in one single file, and then you have to have a way to connect these different instances during the linkage process. And the only way to do that is by having a hook that allows you, for each of the instances that you might create, to resolve the source that each instance will provide.

SYG: So I guess that helps with the second part of my question. My concern is ultimately about performance. I have performance concerns here because, one, it's exposing something to user code, letting user code hook into a place where it could not hook in before. Without looking really deeply at how this would be implemented, not just in V8 but in the web engine as well, because this has so much host involvement, I feel like calling arbitrary JS here is probably going to have a different performance characteristic than the default path, where there is no hook. What concerns me is that if the use cases here are designed around DX, that puts us in a hard place of recommending something for people to use that could be drastically slower, and then that makes us not want to recommend it. I don't know if you have any thoughts on how we can reconcile that. I don't want a situation where, funnily enough, one of the advantages of bundling now is that it sidesteps the ESM loading process sometimes, and that makes it much faster; if you reintroduce the ESM loading process here and let user code hook into it, people might not use it at all, because it would just be much slower. But I'm also concerned about things like, I'm going to say, "cosmetic frameworks", and I don't know exactly what I mean by that, but it's something where the behavior they're trying to get with import hooks, like auto-appending suffixes or something like that, is not worth the performance cost. I feel like the hooks could incentivize things the wrong way, producing frameworks that do these cosmetic effects that might be nice for the DX but drastically alter the performance.

CP: Yeah, two quick comments on that. The first is that the cost of the hook calls is bounded by the spec refactor from NRO, which introduced the memoization process per specifier, and so on. The second comment is more interesting, I believe, which is that you don't have a way, and I think we talked about this before, to go back to hook-based resolution once you enter the default behavior. If you enter one of the subtrees of the module graph via the default behavior of browsers, and engines in general, you have no way to have a hook in that subtree. So that eliminates the possibility that a module graph that works today will in the future gain an extra step that goes into userland, executing code in a hook. It's only when you choose to use the hooks that you get a hook evaluated, and its performance is described by the refactor from NRO: it's going to be called only once per specifier. But if you choose to have one dependency with the default behavior, then, at that point, that subtree is automatically out of the calls to the hook.

SYG: Okay, that's nice. But I would like some more thoughts about the general ecosystem problem: do the champions have thoughts on what guidance you would give developers on when to use these hooks? What is the trade-off — when is it worth it to hook into the specified behavior? For bundlers, the trade-off might be self-evident, but what is the broader picture there?

KKL: If I may, the answer is really that developer experience and production experience are always going to be very different, especially when it comes to modules. One of the reasons for folks' skepticism about introducing modules at all ten years ago was that you wouldn't be able to use them in production directly: there would have to be a build step, or nowadays import maps. And I think the guidance might be: don't write things—

SYG: Sorry to interrupt — and people still don't really ship ESM in production.

KKL: Yes, and I don't expect that to change, really. So much for production; there, I think the answer is quite simply: let your benchmark be your guide. That's always the best answer for a developer, and I think your benchmark is going to generally point in the direction of import maps or something like them.

SYG: But that isn't the full story, right?

KKL: The developer experience is a very different scenario, where there's zero or nearly zero latency, and the tooling allows you to do other things like hot module replacement, which would benefit from these primitives. It's also the case that these hooks are not necessarily just for bundling itself: there are also motivating use cases for linking other languages and making WebAssembly participate more fully, giving WebAssembly some of the ability to do things that JavaScript can't yet do.

USA: I'm sorry, folks, we're out of time again. So we'll have to move this discussion offline. I suppose that's all. Would you like to conclude really quickly?

CP: Yeah, this was just to test the waters. I think there's some pushback from KG and SYG; we will follow up.


String.dedent for Stage 3

Presenter: Justin Ridgewell (JRL)

JRL: All right. So this is String.dedent for stage 3. To recap: essentially, String.dedent allows you to write pretty source code and receive pretty output code. In this case here we have a content block from lines four through seven; it is indented in line with our source code, so it looks and feels like an actual part of the source code. But when we output it through console.log, we don't have any of that leading indentation — it has been removed, so that the output looks like it was written specifically as output text while being pretty as source code. The only notable change we made during stage 2 was the decision about escape sequences that cook into whitespace: we no longer treat those as indentation characters for dedenting. So now, instead of removing this \x20, which is an escaped space character, we leave that character in the output. Our cooked output in this case would contain two spaces: the escaped space character, which cooked into a real space character, followed by a real space character. This also affects newlines: if you have an escaped newline, as we do in this case, it will not be treated as a literal line that we need to dedent; it'll just be treated as a continuation of the line it's currently on. You can see the cooked changes here — the result now mirrors what the source text actually is. The raw output remains the same; it's only the cooked strings that have changed. Essentially, instead of cooking and then dedenting, we dedent and then cook.
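(For reference, a minimal sketch of the escape rule just described, assuming the proposed String.dedent API — not shipped in any engine:)

```js
// Hypothetical example of the stage 2 change: escaped whitespace is kept.
const out = String.dedent`
  \x20 stays
`;
// The two real leading spaces count as indentation and are stripped, but the
// escaped \x20 is not treated as indentation: it cooks into a space that
// survives. So `out` starts with two spaces — the cooked \x20 plus the real
// space that follows it.
console.log(JSON.stringify(out)); // "  stays" (two leading spaces)
```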

JRL: So, to recap what the common indentation rules are without going through all the individual terminology: we find the most common leading indentation that matches on every single line that contains text. In this case, lines four through seven contain actual text. Line 3 is an empty line; it's ignored. Line 8 is a whitespace-only line; it is actually turned into an empty line in the output. The first non-whitespace character, which happens on line 4, stops the leading indentation. The template expression, which occurs on line 6, stops the leading indentation at the dollar sign. The escape sequence on line 7, which we just discussed, also stops leading indentation. So the leading indentation in this example is just four spaces, and any of those three lines would have stopped the common indentation; any indentation in excess of that — line 5 here — continues to have its extra whitespace. Template expressions are not dedented: even though `third` here contains whitespace, that whitespace will never get removed, because it's part of the expression and not the literal static text of the template. And as I explained, the cooking of an escape sequence never affects dedenting. So we have this output.
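(A sketch of those common-indentation rules, again assuming the proposed API; the `third` binding is illustrative:)

```js
const third = "  third";
const out = String.dedent`
    first
      second
    ${third}
`;
// The common indentation is four spaces: "first" and the `${third}` line both
// stop the count there. "second" keeps its extra two spaces, and the
// whitespace inside the `third` expression is untouched — expressions are
// never dedented.
console.log(out);
// first
//   second
//   third
```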

JRL: We do need to discuss what happens with caching. If you have a tagged template literal and you invoke that tag with the template, the template strings array that you receive is constant — as we discussed yesterday in SYG's proposal, it's supposed to be constant. No matter what happens, as long as that code location exists, you will receive the exact same instance of the array. We mirror this behavior for String.dedent: if you invoke us with a tag template array, we cache the result, so that the next time this same input template strings array is received, we just use the cached value. This allows us to wrap other template tag functions and preserve the same caching semantics that they already expect; we can't allow them to receive a bunch of new instances every single time the tag function is invoked. So if we run through the code here, we have to decide what our caching behavior is going to be. I'm just using String.cooked, which you can think of as the normal template concatenation algorithm, and then I'm defining a user-defined strings array. Don't worry too much about what the actual text is — I have to do a little bit of formatting to match what would happen if I had written this as a real template literal, but it's just foo and baz; that's the part I want you to take note of. This is a normal tag function, and then I'm invoking it with the expression bar, and the output of this on line 7 is going to be foobarbaz. Caching has happened because we are using a wrapped template function; the raw array is used as the caching key, and it's the identity of the raw array that matters, not its contents. So, because it's a mutable array under user control, I can modify the raw array to contain ABC and XYZ instead of foo and baz. Now, if I re-invoke this function with the exact same user strings array — the same identity for the raw array but different contents — what is the expected output? I've changed foo and baz into ABC and XYZ, and I'm invoking with a new expression lmnop. What actually happens is that we use the cached dedent: instead of getting ABC and XYZ in the output, we're currently receiving foo and baz, and then we're interpolating the lmnop expression into foo and baz. So our output here is foolmnopbaz, not ABClmnopXYZ. Do we want to make a change to this semantic, and how? There are certainly a couple of ways we could approach this, and I think what Kevin is going to suggest is that we should require a frozen array, and if it's not a frozen array, either do some kind of value-equality check or throw an error. I think the current behavior is okay: it's not expected that you're going to invoke this with a mutable array, or that you'll ever mutate that array. The normal case of a tagged template expression is to receive a frozen template strings array. So if you write this as real source text, where you're doing a tagged template invocation and passing a real template strings array, the object we receive is frozen and its identity can be safely used — nothing you do in that case can mess it up; you can't modify it, and its contents don't really matter. It's only the identity of the frozen raw array that matters for caching. If we were to do any kind of value equality or deeper equality semantics on the array, I think we would just be optimizing for an edge case that's not likely to happen.
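(A sketch of the caching scenario just walked through; String.dedent and String.cooked are both proposals, and the hand-built strings array below is illustrative:)

```js
// Wrap a tag so dedenting happens before the inner tag runs; results are
// cached, keyed on the identity of the raw array.
const tag = String.dedent(String.cooked);

// A hand-built, mutable template-strings array (real ones are frozen).
const strings = ["\n  foo", "baz\n"];
strings.raw = ["\n  foo", "baz\n"];

tag(strings, "bar");   // "foobarbaz" — the dedent result is now cached

// Mutate the raw array in place; its identity (the cache key) is unchanged.
strings.raw[0] = "\n  ABC";
strings.raw[1] = "XYZ\n";

tag(strings, "lmnop"); // "foolmnopbaz" — the stale cached dedent is reused,
                       // not "ABClmnopXYZ"
```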
So I should just turn this over to the TCQ, and we can go through any issues.

USA: The queue is empty. I expected Kevin to put something in.

KG: Sorry, not quick enough. Yes, thank you for laying out this issue. I agree with you that caching based on the contents of the array would be overkill; I would not advocate that even if we don't go with the approach I prefer. The approach I prefer is only doing caching when the raw array is frozen, and just not doing caching when the array is not frozen. That way you still get the correct behavior for use as syntax, but you don't run into this edge where you might get stale results when invoking it manually. It's not a super strongly held preference, and I agree we shouldn't bother doing the value check. But that's what I would prefer.

JRL: So the issue here, as described in the issue thread: Babel when doing ES5 transforms, SWC when doing transforms, and possibly others like TypeScript or esbuild — when not trying to preserve the spec semantics exactly — transform tagged templates into an unfrozen array. So if you were to use one of these transformers and then write an actual tagged template expression, what we would receive in String.dedent is an unfrozen array. Which means that if you wrapped a tag function with String.dedent, you would be returning brand new instances of the dedented strings array every single time it's invoked, which would break the expected caching of using a tagged template expression. So I don't think we should skip caching. If we don't want to use array identity as the caching semantics, I think we should throw. That's going to mean that this API cannot be used with a loose transformer; you have to use a frozen array every single time.

KG: I don't think we should worry that much about Babel’s loose mode transform for ES6 features. That's going to see less and less usage over time and this API will exist forever. And, I don't know, not getting the thing in the cache because you were using a loose mode seems like the sort of thing that you should expect when you are using loose mode. It's like - you have opted into getting the wrong semantics. This is like an edge case that you should not be surprised to run into if you are still in the position of needing to use Babel for template strings. I'm just not that worried about it.

USA: Next up is SYG

SYG: That's exactly it — I don't understand why we care about something that breaks semantics by design.

JRL: Because it's widely used. As we just stated in the last proposal, ESM is not used extensively in production, so people are still transforming with Babel constantly, and unfortunately loose mode is extremely popular. So if we propose a breaking change here where the expected caching semantics are not preserved, people will unexpectedly get the wrong result constantly. If they're using something like lit-html, they're going to be constantly wiping away the DOM tree and then reinitializing it, because it gets brand-new template strings every time. I don't have good examples of other template tag functions at the moment, but essentially it becomes considerably more expensive for the user. I don't think we should fail to preserve caching semantics; if we want to change the behavior here, I'd much rather we throw in this case.

SYG: A stupid question: why can't the loose transform not be loose in this case?


JRL: It would require the dev to understand what is actually happening. If it's transparently working for them, but working badly, they may not notice the issue — they'll just get incorrect caching semantics, and they'll be doing a lot more expensive work without understanding why.

SYG: No, I mean, why can't the Babel loose transform use frozen arrays.

JRL: It would require them to update.

SYG: I see.

JRL: I mean Babel has recommended not using loose mode for quite a while now, and people still continue to use it. It's just kind of a legacy case that is still extremely popular.

NRO: So, just for your information, in the next version of Babel we plan to stop transpiling templates by default, and also we are exploring ways of making people know better what loose means. For example, we'll probably replace the loose option with one option that says something like generateNonFrozenTemplateArray or something like that. Because we are trying to make it clearer what problems loose mode may cause. So, hopefully in the future, people will be more aware of this and we will have this problem less.

USA: Next up we have JHX, who says that he agrees with KG — and that's it, that's the entire queue.

JRL: The issue here is that we should either not be doing array-identity caching, or we should be throwing in this case. A few people see the current semantics — where we do array-identity caching even for unfrozen arrays — as incorrect. So we should change that somehow, but I don't think we have agreement yet, partly because I don't agree that we should drop caching semantics entirely. Does anyone else have an opinion on this?

KG: Okay. for what it's worth, I would lightly prefer caching even for mutable arrays over throwing for mutable arrays.

JRL: I like the idea of preventing the developer from doing something incorrect. I just don't want it to be transparently working but extremely expensive and subtly incorrect.

SYG: Why would caching trigger the extremely expensive case? Wouldn't it just be caching the incorrect results?

JRL: Caching is the current behavior that Kevin just mentioned, but it's the incorrect behavior because it gives you this incorrect output — a surprising result, but at least the surprising result will show you that you have a bug.

SYG: Just clarifying: the incorrect result means an incorrect string is displayed, not that it's extremely slow?

JRL: Yes, that is the current behavior. KG is suggesting that we change this so that we don't cache for unfrozen arrays, which means you will get the correct result, but it'll be extremely expensive for you, because whatever tag you're wrapping will perform its initialization logic for a new template strings array each time — it can't find a cached result that has already done that work.

SYG: I am so confused. Okay, let me check my understanding of what KG said. Ideally, he would prefer that we only cache frozen arrays. So his preference order is: first, only cache frozen arrays; second, cache everything, which is the current behavior; and last, throw. Is that correct?

KG: That is my preference ordering.

SYG: So it sounds like the compromise is to cache everything, which is the current behavior.

JRL: I'm okay with that.

USA: Next up, we have two more items in the queue but note that you don't have much time left.

??: Okay.

USA: next up, there's WH.


WH: I agree that we don't want an outcome where caching silently disappears and the user is not aware of it.

JRL: Okay. So I think that rules out option one here, where we don't cache for an unfrozen array — which I agree with; I think that would be the incorrect choice to make here.

USA: Next up, it's JHX.

JHX: Yeah, I just want to confirm: what's the worst case if we cache only frozen arrays? As I understand it, it seems like you would only lose some performance if you're using the Babel loose transform. Is there any example?

JRL: Any example I give here will be kind of contrived. The one that I'm most familiar with is DOM rendering with lit-html; in that case it'll just be performance losses. But the caching semantics is an agreed-upon behavior for tag template expressions — you could have built any kind of behavior that expects caching to be preserved correctly. Maybe you're doing a network request; if you have to redo it, that's extremely expensive. Maybe you're doing something destructive; if you have to create a new object again, that's extremely expensive. I don't think transparently working but being expensive is a good choice.

USA: Okay. JRL, you are at time.

JRL: So I think we need to reach some kind of consensus on what we should do here. I think we've ruled out option 1, not caching for unfrozen arrays. Do we have a preference between the current behavior, which is caching on array identity, and throwing if we receive a mutable array?

KG: Yeah, I marginally prefer not throwing, but I don't really have that strong of a preference.

JRL: Okay. I'm happy leaving it as is, where we have the caching behavior in all cases.

BSH: I would not be happy with that; that's why I'm on the queue. I think it's really bad.

USA: You were on the queue, but we don't have time left, unfortunately.

JRL: Okay, let's take this to the issue tracker and I'll bring it back at the next meeting. This is issue #75 on the proposal repo.

Set Methods

Presenter: Kevin Gibbons (KG)

KG: So we've discussed this a few times. This time we're going to be talking about the actual proposal. I have complete spec text with all of the bells and whistles written out, if you are interested in that. So let's talk about the proposal. This is a proposal which would add seven new methods on Set.prototype, which are sort of the missing basic utilities for working with Sets that we left out of ES6 and then never got back to. So this is union, intersection, difference and symmetricDifference — if you're not familiar with symmetric difference, it is basically XOR, the things that are in exactly one of the two sets — and then also the isSubsetOf, isSupersetOf, and isDisjointFrom methods. These are not strict containment, so a set is a subset of itself and a superset of itself; isDisjointFrom, of course, means that they have no elements in common. And since we are going for stage 3, we need to go through all of the details, which I will be doing in hopefully reasonably efficient order.
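(A quick sketch of the seven methods as proposed:)

```js
const a = new Set([1, 2, 3]);
const b = new Set([2, 3, 4]);

a.union(b);                        // Set {1, 2, 3, 4}
a.intersection(b);                 // Set {2, 3}
a.difference(b);                   // Set {1}
a.symmetricDifference(b);          // Set {1, 4} — in exactly one of the two
a.isSubsetOf(a);                   // true — containment is not strict
a.isSupersetOf(b);                 // false
a.isDisjointFrom(new Set([5, 6])); // true — no elements in common
```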

KG: Easiest one: handling the receiver. We discussed this to death, and what we ultimately settled on in previous meetings was that we would access this.[[SetData]] directly. So that's the internal slot that maintains the contents of the set. And if you are a subclass, then it is your responsibility to ensure that the contents of that slot are accurately reflected by the rest of your functions, or to re-implement all of these methods yourself. The receiver is the simple case.

KG: The argument: what we ultimately decided was that the argument should be treated as "set-like", meaning we should be checking size, has and keys. Even for methods like union where you only need to iterate, we are still going to treat the argument exactly the same way for every one of these — in particular, you get size and ensure that size is a non-NaN number before doing anything else, and then has and keys must be callable methods. And then once you have got them, you will invoke has and/or keys repeatedly as you compare against the contents of the receiver. Now for some of the methods, which of the two you are invoking might vary depending on the relative sizes of the sets. So if you are doing intersection, you will invoke has if the argument is larger than the receiver and keys if the argument is smaller, that sort of thing. An implication of this is that you can't pass an array to any of these. You also can't pass a WeakSet to any of these, because WeakSets lack the keys method. On the other hand, you can pass a Map: a Map qualifies, and it will be treated as a set of its keys, because that is what the keys, has, and size methods on Map do — they behave as if the map is a set of its keys. And of course, if your userland class does not have consistent behavior for has versus keys — for example, an indexed set where keys gives indices but has tests membership — you might have a situation where instances of that class work as an argument to intersection only when the argument is smaller than the receiver. That's pretty weird. But we discussed this at great length in the previous meeting, and that was the behavior that we settled on. So that's what we're doing.
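(Illustrating the "set-like" argument protocol just described:)

```js
const s = new Set([1, 2, 3]);

// A Map qualifies: its size, has, and keys methods treat it as a set of its
// keys.
s.union(new Map([[2, "x"], [4, "y"]])); // Set {1, 2, 3, 4}

// Each of these throws a TypeError under the proposed semantics:
s.union([2, 4]);            // arrays have no size/has/keys methods
s.union(new WeakSet([{}])); // WeakSet lacks size and keys
```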

KG: And then finally there's the question of how do you actually produce the resulting thing? We didn't discuss this in as much length, but the behavior that I would like to go with is just to construct a new set directly - even more directly than invoking the Set constructor does. As you may or may not be aware, when you invoke the set constructor it looks up Set.prototype.add and then invokes it repeatedly. We would not be doing that. We would just be creating a Set instance and manipulating the contents of the [[setData]] slot directly, bypassing add on the prototype. We also would unconditionally be creating a base Set instance. There's no support for Symbol.species. If you are a subclass and you would like to create a different instance, so for example an instance of the subclass for subclass.union, you will need to override union and all of the other methods. This is sort of towards the vision of not doing Symbol.species, because as neat as it is, it has been more pain than it's worth. And so, basically, the story is that if you want to subclass, you have to override everything if you want non-default behavior. There's no hooks for a subclass to customize the behavior of the base methods. If you want to change their behavior, then re-implement them.
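(A sketch of that subclassing consequence — `SortedSet` is a hypothetical example, not part of the proposal:)

```js
class SortedSet extends Set {
  union(other) {
    // The base method always returns a plain Set; rewrap (and re-sort) here.
    return new SortedSet([...super.union(other)].sort());
  }
  // ...and likewise for intersection, difference, and the rest.
}
```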

KG: And then, some details about the performance of these. I have written the algorithms, to the best of my knowledge, to have optimal Big-O performance. So this means that you will generally be iterating the smaller of the two sets, that sort of thing. And then within that constraint we are attempting to minimize calls to user code. Difference is a good example here: there's no Big-O improvement possible — you always have to iterate the base set and check membership in the smaller set — but you can minimize calls to user code depending on whether you check membership in the smaller set by iterating it or by invoking its has method. So the algorithm is written to switch based on the relative sizes of the sets, with the goal of minimizing calls to user code. It doesn't have an effect on Big-O performance, but it still probably has an effect in practice. There are also early exits for isSubsetOf and isSupersetOf based on the size, and no other early exits, because no other early exits are possible as far as I am aware. And in particular, when the argument is a base set and the intrinsics are intact — so Set.prototype.has and size and the keys iterator are intact and not overwritten on the argument — this allows a fast path where no user code can possibly be invoked in the middle, to just construct the intersection directly. Obviously you don't get the fast path with subclasses, but you know, that's the trade-off of subclassing.

KG: And then there is another detail here for the result. Sets, as you are probably aware, are ordered. They are ordered in insertion order, since that's the only order that's really possible. And so all of the methods that construct new sets, the order in the new set is things that are in the receiver ordered as in the receiver and then things in the argument ordered as in the argument. So if you do union you get everything from the first set ordered as in that set, and then everything that was in the second set but not in the first set ordered as it was in the second set. Some details about that - it’s not quite 100% true in the case that your has method or your keys method mutate one of the sets. There is an order that will result but it's not necessarily this order. But like, whatever, if you are mutating the Sets while you are trying to perform intersection, you don't get to expect anything in particular.

KG: So intersection I think is the only case where this order is interesting. As I mentioned, intersection tries to be big-O optimal, so you are always going to be iterating the smaller of the two sets. So for example, if the receiver is very large and the argument is very small, we're going to be iterating the argument. But according to the order I just gave, everything that ends up in the result is of course in the receiver by definition of intersection and you're supposed to order things as in the receiver. So in this example, the set you iterate has these ordered as 1,7,3, but the set that you produce would order the elements as 7,3,1. Because that is the order that the things were in the receiver. And I believe, based on my understanding of how Sets are implemented in engines, that it should be possible to do this in time proportional to the size of the result within a log factor. It is my belief that the way that sets are implemented in engines allows you to look up two elements within a set and determine their relative order, which is what you need for sorting, without having to traverse the entire set. So I hope, and my belief is, that it is possible to produce this required order in time that is not proportional to the size of the receiver at all when the receiver is very large. I need implementers to confirm that. But yeah, I think it's possible. I think that's the only case where it's not totally obvious that you can produce the order that is specified. And in fact, the order that I am specifying here just falls out of the algorithms in every case except for intersection, and for intersection there is a “then sort these things according to their order in the receiver, please do it efficiently” step in the spec text.
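(Illustrating the ordering requirement for intersection:)

```js
const receiver = new Set([7, 3, 1]);
for (let i = 100; i < 200; i++) receiver.add(i); // make the receiver larger
const argument = new Set([1, 7, 3]);

// The algorithm iterates the smaller argument, but the result is still
// ordered as in the receiver:
receiver.intersection(argument); // Set {7, 3, 1}, not Set {1, 7, 3}
```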

KG: And that's the full proposal, all of the details. TAB reviewed it. I didn't have other reviewers but I looked at it as an editor. I don't know if we want to defer on stage three pending further reviews, but I was hoping that we could talk about it and maybe get stage 3 as-is.

USA: Okay. in the queue, we have MM.

MM: I really like this. I hope that it is able to go to stage 3. I do have some questions; since I'm consecutive in the queue, I'll ask both. At one point in your talk you said "assert the result has a next method". The spec only uses the term "assert" for language invariants that users cannot cause to be violated.

KG: Yes, that is the wrong thing. This should say, throw if the result is not a method.

MM: Good, good. That's what I figured but we needed to verify it.

KG: But in particular, I want to call out that next is not checked unless you happen to go down the path that actually needs to iterate the argument. If you are in the path that doesn't need to iterate the argument, you're not going to invoke the keys method at all and therefore don't have anything to check in that case.

MM: Good, good. I think that's the correct resolution. The other one is that you said, if the collections are modified while you're going, "don't expect anything in particular". In general, we've tried very, very hard throughout the spec to have as deterministic a spec as possible. So I want to verify the intent: there's a loose, informal sense in which you're not getting any useful invariant with regard to these issues if you modify the thing along the way, but there is still a detailed, specified operational semantics, so that no matter what happens, all implementations agree on the consequences.

KG: Yes, that's exactly correct. The algorithms are specified in full detail. It's just that the consequences of those algorithms may be surprising if you happen to be modifying the sets during one of the calls.

MM: Excellent, In that case, I fully support this going to stage 3.

USA: Perfect. Next up, we have SYG.

SYG: Yeah, this looks pretty good to me; I like all the design decisions. I cannot confirm the intersection order performance thing right now, though what you said about it being doable, given the requirement to keep insertion order, seems pretty sensible. I'd want to understand how you're considering it: if some engine cannot do that, is that a showstopper?

KG: Yeah. I guess if some engine cannot do this, I will pick a different order that is possible to do and probably it will have the effect that the order of intersection will vary depending on which of the two sets was larger, which I don't love but what can you do.

SYG: I see. Given that, I still fully support stage 3, because that's explicitly implementer feedback. But given that it might have normative consequences depending on the implementability of the ordering, during stage 3 I would appreciate explicit guidance, and the champions keeping tabs on who has implemented and who's close to shipping, and reaching out to see if there are any issues.

KG: Sure.

USA: Next up, we have WH.

WH: For intersection, you said that you construct the result first and then you sort it according to insertion order in the receiver. What happens if somebody has deleted some result entries from the receiver by the time you do the sort?

KG: Excellent question. In this case you keep the relative order but move them to the end. So the assumption is that you're doing a stable sort that treats keys missing in the receiver as basically mapping to infinity.

WH: Okay, thank you.

USA: YSV is next in the queue.

YSV: Yeah, I was just checking our notes. We did notice the question for implementers but didn't have a chance to get to it. So, I need to double check that with my team, but beyond that, for the shape of the proposal, as it is now, we support this.

KG: Great. Okay. Well, I'd like to formally ask for stage 3 with the understanding that there is this open question about whether the specified semantics are actually implementable, that you will only find out when implementations go to implement.


USA: That sounds like stage 3.

Conclusion/Resolution

  • Stage 3

  • During stage 3, need to ascertain if resulting order of intersection as currently specced is possible by implementers, and for champions to keep work in sync with each other on this matter

String.isWellFormed

Presenter: Michael Ficarra (MF)

MF: Okay, so this is the well-formed Unicode strings proposal. I say "update" in the slide title, but this is looking for stage 3. This is the whole proposal. As a reminder, the goal is to determine whether a string is well formed. This has a lot of use cases: everywhere you need to interact with anything that has an alternative string encoding or requires well-formed strings — file system interfaces, network interfaces, etc. The proposal as presented last time was just the first method there, isWellFormed. During that presentation I also presented an open PR to add toWellFormed, which everyone seemed to like, so we incorporated that into the proposal as well. And to remind people what a well-formed string is: a well-formed string does not have lone surrogates, including out-of-order surrogates — all surrogate pairs are in the correct order. So, what toWellFormed does: it takes a string and, if it is not well-formed, replaces any of those lone or out-of-order surrogates with the replacement character U+FFFD. This is a very common operation — it's the same operation used within the HTML spec and in very many other places, and U+FFFD is the character defined and recommended for this purpose. So that's the whole proposal. It has had stage 2 reviewers; I think it was JHD and JRL. I have to look into the –

JRL: Yep, I approved.

MF: Okay. so that's the whole proposal and I would like stage 3.
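(For reference, a sketch of the two methods; \uD800 is a lone leading surrogate:)

```js
"ab".isWellFormed();             // true
"ab\uD800\uDC00".isWellFormed(); // true — a correctly paired surrogate
"ab\uD800".isWellFormed();       // false — lone surrogate

"ab\uD800".toWellFormed();       // "ab\uFFFD" — replaced with U+FFFD
```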

USA: Is JHD on the queue?

JHD: I want to say I strongly support it. I've already implemented polyfills, and I wrote the PR for test262 tests, so I'm extremely confident in the spec text.

USA: Next up we have DLM.

DLM: Yes, the SpiderMonkey team also strongly supports this; it makes a lot of sense to be included in the language.

USA: Next, we have KG, who says he also supports the proposal.

MF: All right. Well, thank you everyone for the explicit support and it sounds like we have no objections.

USA: Yeah. Congratulations on stage 3.

MF: All right. Thank you.

Conclusion/Resolution

  • Stage 3

Import Reflection

Presenter: Luca Casonato (LCA)

LCA: Okay. So I'm LCA, and I'm going to be giving an update on the import reflection proposal, which is currently at stage 2; GB may join us here. So what is import reflection? Import reflection is a new syntax that we propose to add to JavaScript that allows you to import a reified representation of the compiled source of a module, if a host provides such a representation. For WebAssembly, for example, you could import the underlying compiled but unlinked and uninstantiated WebAssembly module, to be instantiated later; for JavaScript, you could import a module source object like the one discussed in the previous presentation by CP. This is also supported in dynamic import, through an options bag with a currently proposed `reflect` key that takes the string "module" — but we may be open to changing that if there's feedback. The primary motivation here is to allow importing WebAssembly modules into ECMAScript without actually instantiating them as part of the module graph. The current best approach is to not use the ESM module system at all and to instead fetch the WebAssembly — for example with fetch() on the web, whereas in Node you'd have to read it from disk, and in Deno you'd do the same kind of thing. So this is not very portable: it requires special handling on a bunch of different platforms, since not all platforms can fetch all protocols, and it's not statically analyzable. As you can see, this is an expression, and a relatively complicated one at that. It can be broken out into many different pieces — the fetch could be assigned to a binding, the URL could be assigned to a binding — so it's difficult for bundlers to statically analyze this in order to move the wasm around, especially with the special casing that some platforms require because they don't support fetch(). It essentially means that a lot of tooling has to hardcode the output of a bunch of wasm tooling in their parsers to be able to statically analyze this. And for end users this is very easy to get wrong: if you forget the new URL() with the import.meta.url base, for example, your portability is gone. This example uses new URL(), but browsers also support import.meta.resolve() now. All in all, not very portable.
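(Roughly the status-quo pattern being described, sketched for the web; other hosts need different code, which is the portability problem:)

```js
// Fetch and compile a wasm module outside the ESM graph. Tooling cannot
// statically see this as a module load, and fetch() is not available on
// every host.
const mod = await WebAssembly.compileStreaming(
  fetch(new URL("./lib.wasm", import.meta.url))
);
const instance = await WebAssembly.instantiate(mod, /* imports */ {});
```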

LCA: One solution that people came up with here is the wasm/ESM integration, which essentially allows you to directly import WebAssembly module instances — instantiated and linked WebAssembly modules. But this doesn't solve all use cases. First of all, a lot of WebAssembly imports are bare specifiers, so if you want to import such a module portably from within a library, your end users — who import the WebAssembly library that uses bare specifiers — have to specify a global import map to remap those bare specifiers. Also, multiple instantiation, which is relatively common for WebAssembly because a lot of WebAssembly out there is essentially single-pass, CLI-style tooling, is very difficult to do here. You can't send WebAssembly modules between workers if you can't get access to the WebAssembly.Module object, which you can't do with the wasm/ESM integration. And if you want to pass memory to the WebAssembly module, you cannot do that using the wasm/ESM integration either: you need to do manual instantiation, for which you need the WebAssembly.Module object. With our proposal this would be solved by allowing you to import the WebAssembly.Module instance directly using static syntax; the `module` keyword here differentiates this import from a regular static import. This is now very easily statically analyzable: with a single pass over the JavaScript AST, tooling can find all references to WebAssembly imports and can move them around, which is very ergonomic for users and makes the tooling much nicer. It has security benefits as well — namely that we don't need to do a dynamic fetch anymore, which makes importing WebAssembly under a strict CSP policy much simpler. I'm not going to go into too much detail here; if you have questions about that, please add yourself to the queue and I can elaborate.
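(The proposed static and dynamic forms as presented — stage 2 syntax, subject to change:)

```js
// Static form: binds an uninstantiated WebAssembly.Module.
import module libModule from "./lib.wasm";

// Dynamic form, via the proposed options bag.
const libModule2 = await import("./lib.wasm", { reflect: "module" });

// Instantiation stays explicit, so imports and memory can be supplied
// manually, and the module can be instantiated multiple times or sent to a
// worker. The `env.memory` import here is illustrative.
const instance = await WebAssembly.instantiate(libModule, {
  env: { memory: new WebAssembly.Memory({ initial: 1 }) },
});
```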

LCA: Another motivation is allowing JS module reflection. The previous proposal from CP introduced the Module and ModuleSource globals, to essentially allow you to do multiple instantiation of JavaScript modules and also custom linking. This can be combined with import reflection to get module source objects for arbitrary JavaScript code, with those sources resolved using the host resolver — exactly as the host would resolve them. This provides a primitive for dynamic module instantiation and multiple-instantiation workflows. It has the same static analysis benefits as the wasm motivation, and the same security and CSP-related benefits: it allows you to import JavaScript and dynamically instantiate and evaluate it without requiring eval permissions in your CSP policy. There's an open question about whether this should import the module source or the module instance — I'll go into a little more detail about that in a second, but I just wanted to bring it up here.

LCA: To clarify the scope of the proposal: it currently adds the `module` keyword to the import declaration syntax and the `reflect` option to the dynamic import options bag, plus the spec mechanisms to return the right reflected representation of the imported module when either of these is specified. It does not add the Module or ModuleSource constructors, nor does it add any WebAssembly-specific integration to ECMA-262; the wasm integration is host-defined behavior (the Module and ModuleSource constructors aren't, but those can be done in a separate proposal). So this proposal by itself does essentially nothing: it adds a keyword and an option in the dynamic import options bag, and the actual wasm integration needs to happen in the wasm/ESM integration specification. The spec text for this is now ready. It's built on top of NRO's excellent ECMA-262 module loading refactor, which changed loading to only use a single host import hook — I don't know the PR number for that offhand; it's not quite merged yet, but I think it's getting pretty close. There are a couple of things I wanted to bring up. We're not adding any new host hooks. Instead, the module source object — the reflected representation of a module — lives in a new internal slot on the module record. This can be populated either by ECMA-262 itself (for JavaScript, or for built-in modules it could be set to the module source) or by hosts, for things like the WebAssembly integration. There are also module records that do not have a source representation — for example JSON modules, or modules that are host-internal, such as Node's fs module. If you try to reflectively import these, a TypeError is thrown, because there is no object in the module source internal slot. This is useful for modules that will never have a source representation, and it also works for modules that do not have a representation yet, without breaking existing code if we add one in the future — you could imagine JSON modules gaining a module source representation someday. So the default is to throw unless there's a specific host implementation, because the module source slot is empty by default. Reflective imports are not recursive: importing a module source does not actually attempt to load any of the module's dependencies, and as such, the order of loading — and also the order of evaluation — can differ slightly between a regular import statement and a reflective one. The diagram illustrates this. When we reflectively import module A, A itself is loaded, but A's import of module B is not loaded, because reflective imports are not recursive — and the same applies to B. When C is imported normally, C imports G, so G is recursively loaded by C. Then we import A normally: A itself was already loaded by the reflective import, but now we're actually recursively importing it, so we also need to load A's dependency E — E is now loaded with reference to A, but A is not loaded again, because it was already loaded. Then D is loaded, and A is loaded recursively by D. But you can see that F, for example, is never loaded, because we never recursively load B; if there were an `await import(b)` at the bottom here, then F would be loaded.

LCA: The idempotency of imports is unchanged. Let me see how to phrase this correctly: if you import a module from two different specifiers that resolve to the same module instance internally, that guarantee is also preserved for reflective imports of the module source object. If you import two modules whose specifiers do not resolve to the same module instance, they may also not return the same module source — though it is possible for two specifiers that resolve to two separate module instances to return the same source. One example of this is the `#1` here, which is something you can do in the browser: you can add a hash to a specifier to create a new specifier, and thus a new instance of the module, but it won't actually load the source again. So there the module source may be the same across two different specifiers, even though the instance is not. But in all cases where we currently guarantee that the instance is the same, the source is also the same.

LCA: There's a bunch of layering happening with other proposals. One is with compartments layer 0 — which has been renamed to module harmony layer 0 — for returning ModuleSource instances when you reflectively import JavaScript, and being able to instantiate those using the Module constructor. There's layering with module expressions, because module expressions also return module instances from module harmony layer 0: a reflective dynamic import of a module expression is equivalent to getting the source of the module expression, which makes sense. There's some layering with lazy loading, the main difference being that lazy loading is deep whereas module reflection is shallow — there's no recursive load going on, so this proposal is not really meant for lazy loading; GB is going to go into that a little bit more in the next presentation. There's some layering with the export-default-from proposal, which has been inactive for a little while: that proposal adds syntax for exporting the default from a module, mirroring the import statement, and if it were to land, there would be an argument that there should be a reflective default export form as well. That's not currently the case — this proposal only touches imports. There's the obvious layering with the wasm/ESM integration, which I covered earlier. And then there's layering with WebAssembly components — GB, are you here now? Yes, you are; GB is going to talk about that for a bit.

GB: I can mention it briefly. In WebAssembly, we require all WebAssembly instantiations to be done explicitly through the imperative WebAssembly instantiation API — this is the standard technique, where you provide the import bindings for the WebAssembly module directly. In this way, the way WebAssembly modules are used is often a little more like passing function parameters than exactly aligning with the host import resolution model — maybe something a little more like module expressions in bundles as the model of WebAssembly linkage. The WebAssembly component model builds on some of these ideas. I hope, in a future meeting, to give a more in-depth introduction to where some of that work is going, but to briefly discuss it in the scope of the reflection proposal: WebAssembly components want to be able to get access to uninstantiated WebAssembly.Module objects so that they can perform their own linking, just like you would for a module expression in a bundle or a module declaration in a bundle. So by having this integrated into the module system, we would be able to achieve that goal, and eventually integrate directly into host resolution, which is where we want to get — but that's a long road. That's a very brief discussion for now; feel free to bring up anything if I haven't explained it clearly.

LCA: So there are two open questions I would like to resolve before going to stage 3. One of them is what the dynamic import options should look like. There has been some discussion about the `reflect` key: with the value "module", it would make sense if we want to add other reflections in the future — for example "lazy" or "asset" — because those could be other values of the key. But JHD brought up that the `reflect` key never shows up in the static syntax anywhere, so it may be confusing, and there has been some discussion about using `module: true` instead, which would mirror the static syntax more closely. The second open question, which is much larger, is whether this should be instance reflection or source reflection. In the slides so far I've shown you source reflection, where the reflected object is the module source. Some people would rather that the object you get from a reflective import be a module instance, as specified by module harmony layer 0; you could then get the module source out of it by doing `.source`. So the first example would give you a linked but unevaluated module, where the second would give you just the source without any module instance. After some discussions, I am tentatively in favor of instance reflection, but I would like to hear what other people think about this — especially DE, who I know has been a proponent of instance reflection.
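(For reference, a sketch of the two candidate semantics — both hypothetical until this question is resolved:)

```js
import module x from "./x.js";

// Source reflection (as shown in the slides so far):
//   x would be a ModuleSource — just the compiled source, no instance.

// Instance reflection (tentatively favored above):
//   x would be a Module instance, from which the source can be recovered:
//   const src = x.source;
```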

USA: Let's go to the queue — you have 10 minutes. First up, there's EAO.

EAO: Just very briefly, probably answering your first open question: I have trouble seeing how this is really reflection, because there isn't a thing being reflected — the module is not instantiated in the first place. So I think I would be more comfortable with a different keyword than `reflect` in the dynamic import.

LCA: Okay, yeah, that makes sense.

GB: Originally, I think the idea with "reflection" was firstly that we would have multiple reflections, but also that, in theory, you would have had the ability to actually reflect information about the module — on these module objects, to be able to read off things like what exports it has or what imports it has. So I think that's also some of the justification for where the name came from originally. But because those are parts of other proposals, it's more difficult to encapsulate that in the current proposal. But yeah, we are entirely open to naming changes.

USA: We have NRO.

NRO: Just to clarify what LCA said earlier: I'm one of the people pushing for resolving to module objects rather than to the module source, and the resulting module object would not be prelinked. All the dependencies would not be loaded, so it's not possible for it to already be linked; it's a single module instance.

LCA: Yeah, sorry, I should have clarified: it would not be linked, but the module resolution hook used by that module is the host's. So the modules that would get loaded by that module later cannot be adjusted through a custom resolution hook.

USA: The next topic is from YSV.

YSV: This was covered a bit in chat, and it's feedback I brought up before, but I'm still unconvinced about `import module` as a syntax — because, you know, what have developers been doing with the import keyword this whole time, other than importing modules? So I think it'll be confusing, and I think we want to choose a different phrasing.

LCA: Yeah, that is a valid point, I think. This is probably part of a larger discussion around whether module instances should be called modules or module instances, because I agree that it is confusing.

USA: Next up we have RBN.

RBN: Okay, yeah. We had a conversation on my team about this, and one of the concerns brought up was the complexity this might add to bundlers that currently can make certain assumptions about inlining decisions — esbuild and rollup, for example, can do certain types of hoisting that introducing this syntax might prevent. So I'm curious whether this syntax would potentially get in the way of bundlers, versus something that might be a global function or a function on import.meta, similar to the import.meta.resolve that Node provides. Something like that might be more convenient, as opposed to new syntax.

LCA: I don't quite understand the concern. maybe an example of what you think. Maybe an example of what would break or would be a problem?

RBN: The conversation we had was — and I'm quoting here — "Thus far, bundlers largely elide module system internal operations from the resulting bundle, so module records are never exposed. A module-record-exposing function presupposes the existence of those module record internals." So this means that bundlers would now have to add additional logic to understand what these module records are, producing some module record equivalent if requested within the bundle. But again, this is in some ways a bit of a workaround, I guess, compared to actually having native support for bundling — which is a separate concern, around performance, that we can discuss in the module declarations proposal. The other thing they said was that a bundler supporting module references like these would likely need to be incredibly complicated, potentially to the point that all of the hoisting that bundlers like rollup and esbuild do would no longer be possible. So it could result in much lower performance in any bundle that actually uses this capability.

GB: Firstly, it's worth noting that one of the primary targets of this proposal is bundlers, because it solves the bundling problem where we can't actually bundle WebAssembly properly right now: you don't know when you're loading a WebAssembly module, because there's no static syntax for it. So this gives us static syntax for importing WebAssembly and being able to treat it as an equal in the module graph, at the level of granularity that is needed for most WebAssembly use cases, which is uninstantiated static declarations. And at the same time we're getting symmetry with JavaScript, because module expressions are bringing the same level of uninstantiated capability to JavaScript, in a way that is compatible with bundlers. Yes, it's a different semantics that bundlers will need to integrate, but we're actually specifying a new capability for bundlers.

RBN: Thank you.

DE: I take it RBN might have been referring to the inverse problem: if you have a bundler that takes in something that's using this and wants to down-level it to CJS or something like that. And I thought that this proposal would be amenable to such bundler down-leveling. Maybe as an action item — especially since several bundler authors, including GB, were heavily involved in this proposal — someone should try prototyping this and see if they run into the issues that RBN's team is worried about.

RBN: The specific concern we had was around some of the performance-related things that bundlers like esbuild do, such as hoisting exports or hoisting imports that are used internally, which avoids some of the lookups that would otherwise have to be performed. If one of those modules then becomes an actual module block or module expression, that hoisting becomes unusable.

DE: My impression is that this proposal should be very readily statically analyzable. So I have a hard time understanding what the concern is.

RBN: It's just a measure of the complexity this would add to bundlers. If a significant percentage of bundlers have signed up for this complexity, are aware of it, and are fine with it, then I don't have any specific concerns. It's just to make sure that this is being raised with that community as well.

LCA: So, for time's sake, I think let's take this to an issue on the repo.

USA: Yeah. LCA and GB, you have less than one minute. What would you like to do?

LCA: I'd love to hear from DE about the instance-versus-source thing.

USA: DE could you be really quick?

DE: Yeah, so I feel strongly that module expressions and reflective modules are fundamentally getting at the same kind of runtime construct. This could be a module instance, or it could be a module source with a base path attached to it for relative module specifier resolution. Ultimately these are equivalently expressive, because if we have a Module constructor and a source getter, then each can be expressed in terms of the other. So I'm skeptical of the idea of using different runtime representations for the two of them. If we want to hedge our bets against the wasm integration never really happening, I think we could say that, for the wasm integration, the module just isn't importable, and the only thing you could do is get its source. So yeah, I'd like to discuss this more with GB in committee, but that's my feeling there.

USA: Yeah, unfortunately we're out of time. But would you like us to capture the remaining queue items?

LCA: Yeah, that'd be great.

NRO: I just have an answer to DE. ModuleSource and Module are not equivalent in expressiveness, because you cannot easily build a Module from a ModuleSource, unless you pass a hook in a way that matches the behavior of the host. So you can easily get the ModuleSource from the Module, but the other way is harder.

USA: All right. Yeah, thanks.

DE: Oh, that's a further argument for going with Module for both of these.

USA: Right, I suppose you could continue this offline. Next up, we have GB and YSV for deferred import evaluation.

LCA: If I may just ask for one more item on the agenda: while we're still on import reflection, if anyone has any specific concerns about reflection, considering where the current specification is for stage 3 review, it would help if they could bring up those points now, so that we don't hit them in the next meeting. So, if anyone wants to speak up.



JRL: Would you like me to do that in matrix chat, or on the issue tracker?

GB: You can take a minute now if you like, Justin.

JRL: I've brought this up essentially every time this has been presented. The current semantic we're using, a keyword after the import, any keyword, is not extensible for bundlers. The problem space here is so much larger than just whether or not you get a module or an unlinked module for WASM. Reflection is a large problem space for bundlers. For example, maybe you want to get the plain text of the source code. Maybe you want to get an array buffer. Maybe you want to import an image and control how that is sized or encoded. We have cases where we control the transformations that we have to run on the module that we're linking. We are using it for importing fonts with metadata. All of these are import reflections, and none of them are solved by a module keyword. Anything here that is not an extensible syntax that we can attach more to in our bundler is not an acceptable solution.
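
To illustrate the breadth JRL describes, here are some hypothetical reflection forms a bundler might want to express; none of these are proposed syntax, they only show why a single fixed keyword may not extend:

```js
import module wasm from "./lib.wasm"; // the current proposal: uninstantiated module
// Hypothetical further reflections in the same problem space:
// import text source from "./lib.js";       // the raw source text
// import buffer bytes from "./photo.png";   // the bytes of an asset
// import url fontUrl from "./font.woff2";   // a resolved URL plus metadata
```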

DE: The history here is that I was working with other people on a proposal that was extensible like this, and it was narrowed down to import assertions because of concerns from GB and Jordan that bundlers would misuse this or something like that. Maybe one of those two people could explain. But I think that avoiding that particular risk of a block is a good reason for these champions to not go in that direction.

JRL: It's going to be blocked either way. We're in a deadlock here.

DE: So I hope those people who objected previously could engage in this discussion.

GB: Thanks, Justin. It helps a lot to hear that. Hopefully we can continue that discussion, and it feels like we've got everyone on the same page in these discussions at this point, at least in terms of understanding where the question marks are that we need to work through.

CP: It might be interesting to get more people into the module harmony calls, which are biweekly, organized by CHU and YSV. So if you have any interest or any opinions on this, please join us.

GB: Yeah, that certainly would be recommended, because there are a lot of cross-cutting concerns that we're able to bring up in those meetings. So that would be a great venue for the discussion.

JRL: I can start attending it. Thank you.

Conclusion/Resolution

  • List

  • of

  • things

Deferred Module Evaluation

Presenter: Guy Bedford (GB)



GB: Alright, deferred module evaluation. I'm picking this up from YSV, who originally presented this a little while ago; it's kind of a simplification of YSV's original proposal, to try and get something that we can find agreement on in committee. So, to just reiterate the use case that's being solved here: module performance is important, and it's something that we're tackling in a number of ways. With all of this module stuff going on, we mustn't forget that modules' performance is the most important thing at the end of the day for users, and so we must keep focusing on these kinds of use cases. The problem this proposal is looking at is large code bases, where we've got a lot of module code executing on initialization of a large bundle, even after you've applied all the loading optimizations. Once you've done all the preloading and waterfall optimizations, and bundling when necessary (bundling continues to remain an important optimization in module workflows), and you've got this optimized module graph, there still remains the synchronous, blocking, top-level execution cost of the initialization. That's the problem we're looking at, and we want to try and solve it without being forced to 'async-ify' the entire code base.

GB: So YSV was earlier able to bring some numbers from some Firefox use cases. I don't know the exact details of these tests, but in these benchmarks, looking at the performance characteristics, 45% of the time is spent on loading and parsing, and 54% of the time is spent just executing the top-level graph initialization, which has to be done synchronously, which has to block the event loop when it happens and can't be off-threaded. If you have a large graph and you hit this problem, there's not a lot you can do. You basically have one option, which is to start breaking things out into dynamic imports. So you find functions that call things that aren't needed during the initial execution, that are only used during the running lifetime of the application, maybe a few seconds after the initial page load, and you 'asyncify' those functions so that they lazily import the dependency they were trying to execute on that conditional branch. And when you do that, you then need to 'asyncify' everything up through the parent function stack. So the question is: is that really what we want to be encouraging people to do as their only option? And even then, dynamic import doesn't actually solve the entire problem, because dynamic import still needs network optimization. If you just have a dynamic import, you've now actually created a performance problem, because you're going to have a waterfall problem, which you then need to separately preload; and there's a static analysis difficulty for bundlers if you've got highly dynamic dynamic imports. So it's not necessarily the best solution, but it's the only solution. The idea is: can we have a primitive that can improve that startup performance without sacrificing the API, so you can still write nice modular-looking code? What was specifically brought up last time, around YSV's proposal, was some of the magic around how bindings and evaluation were being handled, such that it was actually becoming a new kind of binding primitive. With the original API that YSV proposed, you could have the full grammar of normal ESM imports, add this lazy initialization, and then accessing those bindings would execute the module. In this proposal we switch between the `with` syntax and the sort of reflective syntax that we've been using as well, equivalently, because we haven't settled on a direction for this proposal, so please don't judge that too harshly. What we're proposing is a simplification where you only get these lazy namespace objects, or deferred module namespace objects. In the example, we've got a method that's only used rarely, on a dynamic path after the initial initialization of the app: you can turn that import into a lazy import, and then you access the exports as you would on any namespace. But that access becomes a getter, and the getter becomes the evaluation step. So the import is loaded all the way: the dependencies are loaded, all the network work is done, and it's ready to execute synchronously, but that synchronous execution only happens when you trigger the getter. Any getter access on the deferred module namespace will execute the entire module, and it will only execute once; you're then able to call the function. So you're getting it deferred as needed, and the initial initialization of the application can cut out that execution work entirely.

So on your module graph, what you're creating is almost a new kind of top-level separation, where the lazy graph is a new sort of top-level graph, as if you were top-level dynamic importing it. If graphs overlap, you can race execution, just like you can with normal modules, and it can work recursively as well, just like recursive dynamic import. Then there are some small issues, for example with top-level await: we can't get everything synchronously ready, because the execution might actually be asynchronous itself. The way we deal with that is that we would either need some kind of special handling where it's not allowed entirely, so when you do this lazy or deferred import of a graph containing top-level await, it would throw right away; or alternatively, we could eagerly handle the asynchronous evaluation down to the direct synchronous subgraph, which is a well-defined concept, and that remaining synchronous subgraph could then be evaluated as the deferred evaluation of the deferred module namespace. And then there are the I/O benefits, as shown on the slide. The other thing that was brought up was the potential for some kind of stack injection, error injection. Because this execution is being done as a getter on the deferred module, it's kind of a new way of running that top-level evaluation, as opposed to dynamic import or static imports being the only ways to do that today. So if, for example, there's an error, that error is potentially going to be cached in the module graph, if we stick with error caching, and that means that the stack of that error would come from the place where it was evaluated. So there's some discussion around the fact that that could expose the place where the deferred evaluation is happening, because it belongs to the execution stack and the getter access becomes the calling position, whereas dynamic import does this asynchronously at its own evaluation point. So top-level await and this kind of error injection are the main hairy things to work through, but apart from that, it seems to be relatively well defined. What we were looking to find out today is what people think about this reframing of the proposal. We could still extend the proposal back to some of the more advanced things with bindings in the future, but for now we want to try and get agreement on this kind of primitive, and agreement on solving this use case, and discuss it while we're discussing all the other modules things, so that we can make sure that we're handling all the use cases that we need to. So, are there any questions?

USA: First up, we have Ramadan.

We don't need to talk about this; I was just suggesting a different set of slides.

USA: All right. Next we have YSV.

YSV: Yeah, I just want to emphasize that the important change here is that we are now using a namespace, rather than having this new concept of lazy bindings that would replace themselves on the first access of those bindings. And I hope, and this is what I would like feedback on, for example from Jordan, who raised concerns at the previous meeting, that this would be sufficient, because this would now effectively be a getter on a namespace. So that's the first piece I have, and I can go to my next item.
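
For clarity, a minimal sketch of the namespace-plus-getter shape being discussed; the keyword spelling, the attribute name, and the module and export names are all illustrative:

```js
// One candidate spelling; an attribute-style form such as
// `import * as heavy from "./heavy.js" with { lazyInit: true }` was
// shown as an equivalent alternative.
import defer * as heavy from "./heavy.js";

// "./heavy.js" and its dependencies are fetched and parsed up front,
// but not evaluated.
export function onRareCodePath(input) {
  // The first property access acts as a getter: it synchronously
  // evaluates the module (once), then returns the export.
  return heavy.rarelyUsed(input);
}
```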

USA: Before that, RPR has a reply to that.

RPR: I think this changes the proposal. I understand why it's been made and am generally in favor; anything that pushes this proposal forward is amazing. The main thing is that it's a small loss of ergonomics, because one of the principles of this proposal is that it's something where you can make a surgical change to the performance characteristics of loading. Ideally it's an optimization that you sprinkle in. And the whole reason is that this is much easier than going through and making all your code's call stack async, which is a very non-ergonomic and rippling workaround. Whereas with this change (the loss of named imports), developers will have more work to do compared to just adding a keyword, because you'll then have to go through all of the usage sites and prefix them with a namespace (ns.identifier). So it slightly moves the proposal away from that goal. But it's definitely not a showstopper.
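
A sketch of the refactoring cost RPR describes, under the same assumed syntax; module and function names are illustrative:

```js
// Before: a named import, used directly at many call sites.
//   import { parse } from "./parser.js";
//   const ast = parse(text);

// After: only namespace imports can be deferred, so deferring means
// rewriting every call site to go through the namespace.
import defer * as parser from "./parser.js";

export function parseLater(text) {
  return parser.parse(text); // was: parse(text)
}
```
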
YSV: Yeah, I just want to respond to that, Rob. As some of you know, I've actually built this loader for Firefox, to load our client code. Previously we had a handwritten custom loader for JavaScript that behaved completely differently from the ESM system. You can think of it as similar to the CommonJS system before they realized that, in order for it to be specified, they could not do synchronous loading. So our old version continued doing synchronous loading, and as we moved to using ESM, we actually ended up, for other reasons, being forced to introduce a lazy namespace. So this design now actually reflects the reality of the Firefox codebase. I agree that ideally we wouldn't have to do this, but I also respect that we currently have an invariant in the language where module bindings are always what you import and are never replaced in the way that I was suggesting. So I do accept that. And I will say our developers did complain when we made this change. It wasn't ideal; they didn't like the fact that they needed to use a namespace. But with linting, we were able to get this across pretty easily.

USA: Next up it’s YSV again.

YSV: I just want to bring up an important use case that was missed in my previous presentation, and that I also forgot to put here, which is one really interesting thing that is possible, and one reason why we would benefit from this: it would bridge the gap between synchronous module loading systems and asynchronous ones like the ones found on the web. Of course we cannot have a fully synchronous loading system, because that breaks various concerns such as run-to-completion. However, what we can have is doing the asynchronous work up front. That includes the top-level await work, and it includes whatever kind of network fetching we need to do; and then all of the things that can be deferred, specifically the execution of non-asynchronous code, we can do a bit later, and we can do more aggressive optimizations on server-side code. In the end, we will be able to share code between, for example, Node.js and SpiderMonkey environments and client code, and I think that's a huge benefit for being able to reuse code.

USA: Just a reminder, we have six minutes, but not that big of a queue. Next up, Ashley.

ACE: Yeah. I just wanted to speak mostly on behalf of my colleagues back at Bloomberg. Similar to what you were saying, our experience implementing some of the build tools and runtime tools that would be needed for this to work has all been pretty good. We feel like we've got a good handle on the edge cases, and they work; those edge cases kind of make sense, and there can be very clear ways of understanding them. We also have a team that focuses on measuring performance, and can see how big an improvement this can add to startup time. So, lots of value; so far it's looking very promising. So yeah, very supportive and really excited about this.

USA: And next up, it's you again.

ACE: On the loss of ergonomics, the fact that we now can't just add the keyword: imagining, in the same way that in an editor I can say "rename this variable", I could say "make this import lazy", I imagine all the editors will find all the places that were previously using the binding and change them to use the namespace. So again, it's a slight shame, but I imagine overall we get like 99% of the value. So, very happy.

USA: That was the last item on the queue. Thank you. Oh no, actually, there's CP.

CP: In the slides about top-level await: the last time that we talked about these proposals with YSV, if you had a top-level await, you would not be able to use it this way. Is that still the case, Guy?

YSV: I can also answer this. So in the case of top-level await, we can make the same decision as what I mentioned earlier, which is we can do either. We can say that it throws; that's one option. The other option is that we can eagerly load asynchronous subgraphs. So if we come across such a part of the subgraph, because we have to do the parsing and linking ahead of time, then we can treat that asynchronous subgraph as eager. Guy, were you going to say something?

GB: Yeah. If you have a module that imports a dependency that doesn't use top-level await, but that dependency itself has another dependency that does use top-level await, you could execute the top-level-await bottom-level dependency eagerly, but then stop on the next dependency up, which doesn't have top-level await, and still make that lazy. So there would still be some benefit in doing that. Thanks.
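
A sketch of the graph shape GB describes, with illustrative module names; eagerly evaluating the asynchronous subgraph is one of the two options discussed, not a settled semantic:

```js
// entry.js
// Graph: entry.js --defer--> feature.js --> helpers.js (uses top-level await)
import defer * as feature from "./feature.js";

// During loading, the whole graph is fetched and parsed. Because
// helpers.js contains top-level await, that asynchronous subgraph could
// be evaluated eagerly up front; feature.js itself has no top-level
// await, so its evaluation can still be deferred to first access.
export function later() {
  return feature.run(); // synchronously evaluates only feature.js here
}
```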

USA: We have Ashley.

ACE: To CP: with the tooling we're doing at Bloomberg, a lot of that tooling is around the top-level await kind of case. So right now we have a solution for it. Out of the two choices, we're kind of leaning towards allowing top-level await, and we'll have concrete experience of developers using that mode.

USA: That was the queue. Thank you again for being on time.

YSV: Just to make it explicit: is the committee comfortable with this direction? And is this a direction with which we could possibly achieve stage 2?

USA: I hear a consensus.

JHD: I think "consensus" may be too strong. I think it's too early to know if we can achieve stage 2 with this. It's hard to understand all the details in a short presentation.

GB: We will certainly bring it back with a longer presentation in one of the committee meetings. But thanks, everyone.

DE: I wanted to briefly clarify, with the discussion about top-level await, that I guess that means preflighting it: fetching all the dependencies and running the top-level awaits eagerly. In the Bloomberg context, I think the way that some of our applications would incorporate this work is that they would use some static analysis to make it cheaper to fetch some of these things, or to understand at runtime which subgraphs have such top-level awaits. It's not only about evaluating less; fetching and parsing less is also important. On the web we don't really have a way to get past fetching less, but in tools that optimize things, in build systems that use these semantics, our evaluation has been that you can actually parse less at runtime if you calculate a little bit of metadata about how things fit together. So anyway, that's all just to say: I think this is a great proposal, the changes that you've made now are really good, I support it, and I'm optimistic about this eventually moving to stage two.

USA: Great. We are on time. I hope you got the conclusion that you needed.

Conclusion/Resolution

  • List

  • of

  • things

An introduction to the LibJS JavaScript engine

Presenter: Linus Groh (LGH)

  • LibJS

LGH: Hello everyone. I'm LGH and I work on the LibJS JavaScript engine, which is now part of the larger Ladybird browser project. Hopefully today I can give you an overview of what that entails. So, some basics. We've been in development since around March 2020, "we" being the larger SerenityOS community; I'll give some background in a bit. It's written in C++, but modern C++20, and we quite often push the compiler to its limits; we're usually on the bleeding edge, and you can't compile with anything other than that. We use CMake as the build system, and it's all open source under a 2-clause BSD license. There's a special part: it's all completely from scratch, which means no third-party dependencies at all beyond the C++ compiler. That includes everything from the regular expression engine to our underlying BigInt implementation and the libraries to support Intl. It has both an AST and a bytecode interpreter, but no just-in-time compiler. The AST interpreter is the original one; the bytecode interpreter was added later and is roughly 10 percent behind in terms of test262, but the plan is to completely switch to bytecode. We have a very strong focus on spec compliance and completeness, so we're not looking to skip any parts because they are inconvenient, or to drop support for legacy features, because the overarching goal is to make an engine that is capable of handling the modern web, which also includes the legacy web, including Annex B.

LGH: As I mentioned already, we have SerenityOS, which is strongly related to this. It's a hobbyist operating system, basically, that tries to mimic the look and feel of 90s Windows, but under the hood it's implementing a lot of modern concepts. There's a whole talk to be given about this as well, and it's out of scope to look at the entire history of it, but the basics are: a guy called Andreas Kling started working on it in late 2018, first on his own and later with a community of developers. Then in June 2019, the first bit of what is now our browser engine got added as LibHTML, initially to just support rich text rendering. So imagine a Google Doc which has text formatting and embedded images and stuff like that; and because everyone is familiar with HTML, that seemed like a good fit. So it started as a basic HTML renderer. That was the initial reason why it got added, and we still don't have HTML-based rich text editing outside of the browser, although someone could go and make a content editable document. Over time that slowly turned into a proper web engine with more than just HTML: CSS got added. That turned it into a basic browser with a user interface and everything, but that only gets you so far, and at some point we decided, yeah, we need a JS engine if we want to support websites properly, as modern websites are basically unusable without JavaScript. So that's what we did in March 2020: we started working on a JavaScript engine called LibJS. If you look at early iterations of it, you might see similarities to JavaScriptCore, because Andreas had worked on WebKit for many years, but nowadays that's no longer the case; once people joined, they kind of pushed it in a different direction. And in fact this was recorded in a YouTube video, because Andreas regularly makes raw development recordings: he just sits down, implements something, and uploads that to YouTube, mostly unedited. So if you ever want to see a guy start implementing a JavaScript engine from scratch, it's in this video; I think it's about an hour. The AST was handwritten, as that was before we had a parser and lexer.

LGH: All right, let's quickly address the name, because it's a bit ambiguous: LibJS, a JavaScript library? But it's very simple to explain. Because everything in the SerenityOS operating system is made from scratch, we don't need to invent names that stand out or are catchy, the way products try to find a good name. So everything has a very descriptive name. We have applications like Browser and Calculator and File Manager and PDF Viewer, and so on, and they are called that both in the code internally and in the user interface. The same is true for libraries: we have LibGL, LibGfx, LibDNS, LibHTML, [some more listed]. You can guess from all these names what they do, and they all sort of encapsulate one thing. We have around 60 of them, and in there is everything you would need for an operating system. And so very naturally "LibJS" was chosen; there was no real discussion. That only happened later on, when we decided to pull out the web engine and everything around it into a cross-platform browser project, because we no longer wanted to limit it to that niche hobby operating system. That's now called the Ladybird browser, and it entails everything from the web engine: CSS parsing, the WebAssembly engine, the JavaScript engine, and so on. So both "SerenityOS JS engine" and "Ladybird JS engine" are fine; it depends on the context a little bit.

LGH: Let's look at some characteristics. As I mentioned, it's completely developed from scratch, and that's not really because we think all external code is unsuitable; it's just sort of a principle, because we get really good integration. Once you develop everything from the kernel to the last user space application, they all share the same standard library: you've got the same string type everywhere, the same vector type, and we really don't want to introduce anything from the outside into that, because then it falls apart. It's implemented in a way where we started quite late, like two decades behind some engines, which results in a weird order of implementing stuff. So at times we had very modern bleeding-edge JavaScript features implemented way before some obscure legacy thing that every other browser has. One example is the 'with' statement: yes, no one wants to use it, but you kind of still need it, and that came later on. The historical timeline is a bit distorted and it's all mixed. One thing we find very important is staying very close to the spec. I'll get into that in a bit, but the source code just looks very similar to what you would see in the ECMAScript specification and the proposals, and that helps a lot with staying correct and also with being able to find stuff from the spec in our engine and then see how that integrates into other things. Additionally, optimizations and everything not in the spec is marked as such. I mean, we still have parts that are not in the spec at all, garbage collection, for example. Also, we have no roadmap, because it's all volunteers; it's just an open source project. Contributors don't really make any promises about what they will or will not work on. So the easiest way to make something happen is to make it yourself, we usually say, but that doesn't mean that we won't implement something at all. We usually try to stay up to date with all the latest proposals and such, but there is no official "yes, we will do this at point x in time". And as I also mentioned earlier, we regularly get recordings of engine development published on YouTube, and also once a month we make a summary video that contains all the changes, both on the operating system side and on the web engine side; some people use that to stay up to date with the project. Here's one example of how that coding style looks. For example, the GetMethod operation: it's just four simple steps, but it turns into way more code than just four lines, because we mark everything with those comments, which also makes it very easy to review and very easy to check for correctness. You can also see we don't always use the exact same name, so "func" from the spec gets turned into "function", for example, and we have additional helpers as well. It's not a literal translation, but it tries to stay very close.
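
LibJS itself is C++, but the comment convention is easy to illustrate in JavaScript: each spec step is copied verbatim as a comment above the code implementing it. A sketch for GetMethod's four steps, with simplified stand-ins for the abstract operations:

```js
// GetMethod ( V, P ) - illustrative JavaScript, not LibJS's actual C++.
function getMethod(value, propertyKey) {
  // 1. Let func be ? GetV(V, P).
  const func = value[propertyKey]; // simplified stand-in for GetV

  // 2. If func is either undefined or null, return undefined.
  if (func === undefined || func === null) return undefined;

  // 3. If IsCallable(func) is false, throw a TypeError exception.
  if (typeof func !== "function")
    throw new TypeError(`${String(propertyKey)} is not a function`);

  // 4. Return func.
  return func;
}
```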

LGH: What's in scope? As I mentioned earlier, it's basically everything. We target the latest specification draft, so we don't even stick to the 2022 or 2023 yearly release; we just use whatever is currently on the main branch, both for 262 and 402. New proposals are usually considered from stage 3 onwards. We don't always have the capacity to make stage 2 prototypes and then do all the changes from there, but past stage 3 we are usually very happy to implement them. Annex B as well; it's just part of the core engine. We provide all the host hooks that are in the spec, and host-defined slots, for any host that wants to embed the engine and customize behavior as they expect. That is obviously the case in the browser, with stuff like how you load modules, or [[HostDefined]] slots to hold custom data next to the VM. But it's also important to mention that it's a pure ECMAScript engine. Concepts that are defined elsewhere, we don't really want to put into the engine. So while some engines choose to ship WebAssembly as part of the JavaScript engine, in our case it's a separate library, and the web engine provides the JavaScript bindings for WebAssembly; it's not part of LibJS itself. Here's a list of implemented proposals: a bunch of stuff like array grouping, change array by copy, import assertions, JSON modules, ShadowRealm, Temporal, Symbols as WeakMap keys. On the Intl side as well, though that's not stuff I work on a lot: DurationFormat, the enumeration API, and Locale Info, for example.

LGH: Next up, performance. At this point it's still largely unoptimized. Now, that doesn't mean it's unusably slow, but it does mean that on some websites you just suffer a little bit. The main reason is basically that we haven't taken the time yet to look into a lot of it; most focus has been on correctness and completeness. But long-term, we still want to stay away from just-in-time compilation. We don't think it's worth the added complexity, especially in a smaller team. Obviously you get a good speedup from that, but we want to try to just make the bytecode as fast as possible. Now, if that turns out to still be too slow, okay, we will go with JIT, but given that "Super duper secure mode" exists in Microsoft Edge and "Lockdown mode" exists in Apple's Safari, we think there's a possibility that on a modern computer it can be fast enough without just-in-time compilation. We do have a handful of optimizations sprinkled in here and there, lots of stuff that you might know from other engines: object shape transitions, rope strings, NaN boxing for encoding value types, caching of environment lookups, and something that I'm not sure anyone else is doing: storing strings as UTF-8 internally, because that's what most of our internal string functionality expects. Anything that needs UTF-16 from the JavaScript outside gets re-encoded on the fly.

LGH: Testing is obviously very important to move quickly and not break things. So we have a custom test runner with a test suite for JS, which has a jest-like test framework: basic test functions and assertions. We have roughly 50,000 lines of code for tests across 1k files. That's not nearly as much as test262, but especially in the early days that's what got us to a decent point with testing, because we had a test262 runner early on, but for a very long time you just couldn't run all the harness files, and so the results from that were not very reliable. Nowadays, test262 fully runs, and we get about 88% of tests passing for the AST interpreter and 73% for the bytecode one, so still a bit behind. But everyone on the team really enjoys getting the numbers up, getting the graph up; I definitely would recommend that for anyone trying to make a new JS engine. It's obviously also integrated in CI to ensure we don't break anything, although test262 still takes a little too long, so we currently run it once stuff gets merged, and if that broke anything unexpectedly, we have to fix it up separately.
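
A sketch of what a test in that jest-like framework might look like; the file path and matcher names are assumptions based on the "jest-like" description:

```js
// hypothetical: Tests/LibJS/Array.prototype.includes.js
test("basic functionality of Array.prototype.includes", () => {
  expect([1, 2, 3].includes(2)).toBeTrue();
  expect([1, 2, 3].includes(4)).toBeFalse();
});
```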

LGH: It doesn't really make sense to make a JS engine if no one is using it. The list is still very small, but growing within the operating system, which you can see in the screenshot; that's what it looks like. We have a standalone JS interpreter: a simple wrapper program around the engine, basically, that provides both a REPL and a file-based interpreter. You give it a file and it runs it. That's nothing special, but the REPL has syntax highlighting and pretty-printing for different things, which I'm a big fan of. And then obviously the browser, which was the initial reason why we made it, is supported by the LibJS engine; as you can see, we use the same very generic name there. We have a spreadsheet that uses JavaScript as its expression language, so no Excel-like language; it's just JavaScript, and you get the whole standard library and a bunch of custom functions tailored towards spreadsheet applications as well. And then we have Assistant, which is similar to the macOS Spotlight feature, which also has a built-in calculator, and someone just decided "okay, let's take the JavaScript engine instead of implementing these expressions from scratch. Seems like a good fit". Outside the OS, we have the Ladybird browser, which is currently using Qt as a cross-platform GUI. And we have a random project called Cosmo, which is not by us; someone else just took the engine and made a JS runtime for a game, which was very surprising to us, because we don't make any promises that the API is stable or anything like that. And here is a screenshot of that REPL; we try to provide lots of introspection into internal state, for example pretty-printing array buffers, showing both byte length and contents, just trying to be helpful. And here are some screenshots of the Ladybird browser running on a Linux machine. You can see the layout is a bit wonky on all of them, like the search bar on Google Maps, or Facebook. They are not perfect, but they obviously all run a ton of JavaScript, and that works.

LGH: It's not been without problems. Over the years we've had probably three big issues, which I have listed here. The biggest one is initially not using the spec as the main source of truth. In hindsight that sounds very silly, but you know, you get a bunch of people just being excited about the new thing and they jump onto it; they're probably not as careful as they should be. So people would go and implement stuff from memory, or implement stuff based on MDN descriptions, or implement stuff based on other browsers' observed behavior, and you just get a bunch of edge cases that will not work. We had so many inconsistencies. For example, the evaluation order of individual steps was not right, or mismatched compared to all the other engines. Also things like duplicate ToString or ToNumber conversions. Yeah, it was a bit of a mess. Our solution was just imposing a very strict coding style, which I showed earlier, where we literally take the entire spec and copy it into the source code, one line of spec, one or more lines of source code, and then you can very easily compare them. It also makes sure that the person who implemented it actually looked at the spec instead of just going ahead and doing it from scratch. That's extreme, but it solved the problem. Then we also did not get some fundamentals right from the beginning, one example being objects. You are probably familiar with the internal object operations, which proxies hook into, for example. Unless you specifically provide those entry points, it's very hard to make a fully compliant object implementation. We had one that worked in 90% of the cases, and getting the last 10% right was incredibly difficult, because you would just get so many edge cases and so much special handling; it was turning into a mess of spaghetti code. To fix that, we did it again from scratch, which is unfortunate. And the same with Realms: early on, we just didn't have the concept of Realms. Intrinsics lived on the global object, which was fine, it gets you ahead for a bit, but after a while, especially once you start integrating it into the browser and stuff like iframes, cross-realm correctness just falls apart. And then we had an almost 10,000 line change in a single PR; we ripped out all the old stuff and replaced it. Also, not testing at a large scale: test262 is not complete coverage, but you can get very far, and had we run it consistently earlier, we would have noticed all these issues from the beginning. Like, "oh, I just implemented this function and I expected it to work. Why are half of the tests still failing?".
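
The "entry points" here are the spec's internal methods ([[Get]], [[Set]], [[GetOwnProperty]], and so on); Proxy hooks directly into them, which is why an object implementation without explicit per-internal-method entry points is hard to make fully compliant. A small illustration:

```js
// Every property read on p must route through a single [[Get]] entry
// point for this trap to fire consistently on all code paths.
const p = new Proxy({}, {
  get(target, key, receiver) {
    console.log(`[[Get]] of ${String(key)}`);
    return Reflect.get(target, key, receiver);
  },
});
p.foo;             // fires the trap
JSON.stringify(p); // so do internal algorithms (e.g. the "toJSON" lookup)
```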

LGH: We currently have 87 individual contributors, counted by commits that focus on the engine itself. If you take everything around it, like BigInts or regex, it's much more. But it's still only a very small core team: we have eight people who made over 90% of contributions. But to this day we still encourage newcomers. It's not as easy as it was in the early days, where you could go and implement if statements because no one had done that yet; all the low-hanging fruit is gone. But we still encourage people to join and try something, and maybe they'll like it and stick around. One example of that is editorial changes. We try to keep up with those even though it's not necessarily required, but that could be as simple as changing a few words, and you can get a good feeling of how the whole process works. Inspired by the graph showing Igalia as the number two Chromium contributor that got shared recently, I made one for ourselves; the penguin avatar is myself, but we've got a few more people there, many of whom focus on some specific things. You also might have seen them online on GitHub, filing issues in proposal repositories or for test262. It's a great team. I'm very proud of all the work they have done.

LGH: Now you might wonder, how can you try it? Obviously, it's still part of the Serenity operating system, which runs in QEMU. We've made it very easy to run; it's basically just one command after installing a few toolchain dependencies. It also runs natively on Linux and macOS. We're not providing binaries right now, because we don't think it's quite ready for that yet; we want people who provide feedback to at least be able to build it. We have integration in esvu and eshost, thanks to Idan who did that a little while ago. There's also Ladybird, as I showed in these pictures earlier. And as of one or two weeks ago, we have a WebAssembly build of the entire C++ engine, so you can actually try it in the browser right now, which is thanks to Ali, who also made our WebAssembly engine. That's it. If anyone has any questions, I'm happy to answer them now, but I'm not sure how much time is left. We are already over time.

USA: I had one comment. I'm not sure how many in the committee were familiar with LibJS before this presentation, but I wanted to explicitly thank you for all the feedback you've provided over the months for Temporal, as well as DurationFormat, which are the two proposals I've been working on. As with any implementer feedback, it's been really helpful for fixing all the bugs. And I think it's really important, with all the meta discussions we've had this meeting around what constitutes stage 3 and how you get implementer feedback: if implementing a proposal as big as Temporal, say, takes years in large browser projects, does that mean that this process of implementing and getting implementer feedback would take all that time? So I think there's a lot of value for us spec writers in an implementation which we can use to implement things in a more agile fashion. And perhaps in the future I would myself like to prototype some proposals in the engine.

LGH: Thanks, I appreciate that. And lastly I've collected some random links in the slides that people can check out. I’ll add a link to the slides to the agenda.

USA: Perfect, thank you.

USA: And with that, thank you to everyone for all the awesome discussion today. It's been quite intense, so I hope you all get a good rest. A special thanks to all our remote participants as well as the note takers. See you tomorrow!