diff --git a/meetings/2023-03/mar-21.md b/meetings/2023-03/mar-21.md
new file mode 100644
index 00000000..94286860
--- /dev/null
+++ b/meetings/2023-03/mar-21.md
@@ -0,0 +1,1074 @@

# 21 March, 2023 Meeting Notes

---

**Remote and in person attendees:**

| Name                 | Abbreviation | Organization      |
| -------------------- | ------------ | ----------------- |
| Michael Saboff       | MLS          | Apple             |
| Kevin Gibbons        | KG           | F5                |
| Waldemar Horwat      | WH           | Google            |
| Daniel Ehrenberg     | DE           | Bloomberg         |
| Chris de Almeida     | CDA          | IBM               |
| Ashley Claymore      | ACE          | Bloomberg         |
| Guy Bedford          | GB           | OpenJS Foundation |
| Jordan Harband       | JHD          | Invited Expert    |
| Ben Allen            | BAN          | Igalia            |
| Nicolò Ribaudo       | NRO          | Igalia            |
| Philip Chimento      | PFC          | Igalia            |
| Jesse Alama          | JMN          | Igalia            |
| Eemeli Aro           | EAO          | Mozilla           |
| Luca Casonato        | LCA          | Deno              |
| Daniel Minor         | DLM          | Mozilla           |
| Asumu Takikawa       | ATA          | Igalia            |
| Ujjwal Sharma        | USA          | Igalia            |
| Sergey Rubanov       | SRV          | Invited Expert    |
| Peter Klecha         | PKA          | Bloomberg         |
| Richard Gibson       | RGN          | Agoric            |
| Justin Ridgewell     | JRL          | Vercel            |
| Frank Yung-Fong Tang | FYT          | Google            |
| Shane Carr           | SFC          | Google            |
| Chip Morningstar     | CM           | Agoric            |
| Daniel Rosenwasser   | DRR          | Microsoft         |
| Istvan Sebestyen     | IS           | Ecma              |
| Willian Martins      | WMS          | Netflix           |
| Ben Newman           | BN           | Apollo            |
| Linus Groh           | LGH          | SerenityOS        |
| Ron Buckton          | RBN          | Microsoft         |
| Luis Fernando Pardo  | LFP          | Microsoft         |

## Introduction

RPR: Chris de Almeida is proposed as co-chair, and Justin Ridgewell as facilitator. The election will be held tomorrow.

RPR: Please fill out the sign-in form. The meeting will run 10 AM to 4:30 PM, with a hard stop at 5 PM.

RPR: We don't have individual microphones because the entire room is wired with mics, so please leave the room if you want to have a side conversation. Remote attendees: please speak up if you have trouble hearing; we have support from logistics folks.

RPR: Logistics will run through TCQ as usual.

DE: For today there is no transcriptionist. Hopefully we will have one tomorrow. My deepest apologies.

KG: I would like to ask for permission to take a recording, so I can tune an automated transcription system. I will keep the recording private; it will not be shared with any other human.

RPR: Any objections? Hearing none. I will be asking people to take responsibility for taking notes, in particular today, and then hopefully tomorrow only fixups.

RPR: We'd like to provide better meeting summaries, in addition to the very verbose logs. If you have a look at the template and the notes now, you will see a place where the presenter can summarize the key points. So as part of the publishing process, we will be asking presenters to put in a little more effort to provide a summary after presenting.

DE: I wanted to make a quick procedure proposal. At the end of each topic, we should synchronously pause to write the conclusion. Historically we have been doing it asynchronously, but the shared conclusion is kind of the most important part. How do people feel about this procedure?

ACE: I like it.

RPR: This would happen after the time box.

DE: Right, it would not count against the time box, so nobody could say "we're out of time, so we can't write the conclusion". Writing the conclusion sits outside the timed portion.

RPR: I think in this meeting we have enough flex to do that. Thanks. All right, so that's the plan for notes.
RPR: The next upcoming meeting is in a couple of months' time. It will in fact be fully remote, on the Chicago time zone.

## Committee Housekeeping

RPR: We've got the regular housekeeping items to go through. First of all, we need to see if the previous meeting's minutes can be approved. Any objections to approving them?

(silence)

RPR: So I think we can consider them approved.

RPR: Next is adoption of this week's agenda. Any objections to that? [silence] That leaves us with just over an hour of spare time based on the current schedule, which is good. So there's a little bit of time for any overflow needed. All right.

## Secretary's Report

Presenter: Istvan Sebestyen (IS)

- [Slides](https://github.com/tc39/agendas/blob/main/2023/tc39-2023-011-Rev1.pdf)

IS: Okay. I will try to share my screen for a second. The bad news is that the deck is still very long, 22 slides. The good news is that I'm not going to present all of them, because the content is mainly an update of the information that I usually submit, so I can be very quick. I will concentrate only on the new and interesting things, and for the rest I ask you to read it through if you are interested.

IS: So, this is what has happened since the January meeting. This is the typical type of presentation I usually bring: the latest TC39 and Ecma GA documents. Then I will say something about the request for the short summaries again, based on the experience I have made so far, which was already positive. Then the status of TC39 meeting participation — ten seconds or so — and standard downloads, also very short, because we only have two months of data. What is important, and where something has to come out of this meeting, is the status of the ES2023 approval: we have to think about when exactly to freeze the ES2023 specification, the royalty-free opt-out procedure, etc. So that is rather important. Then the chair re-election — that is coming tomorrow, so I'm not going to say anything on it. Then a reminder about the five-year periodic review of the two fast-tracked TC39 standards at ISO; I have brought this up twice already, but it is unfortunately still relevant. This slide is just a repetition, without any changes, of what we had so far: the list of documents added to the TC39 file server. We publish there basically duplicates of documents you have also seen on GitHub, so it is not terribly interesting for participants. The new relevant GA documents — there are only two, also not terribly interesting. So I continue.

IS: Here, again, the usual explanation of why these lists are of interest: they are more interesting to Ecma members than to TC39 members, because TC39 participants already have this information on GitHub. Now, about the short summary request. The good news is that from the January agenda I got the title, and for each discussed agenda item the conclusion and the summary, and I have put everything together into the main part of the minutes. This is a sort of duplication, of course. But for those who are not reading the technical notes and only the main part, it is a good summary. And the good news is that even without a one-paragraph summary, the titles and the resolutions and conclusions already provide good information about what happened at the meeting. So I think we are already on a very good path; once we get the one-paragraph summaries in addition, say for the next technical notes, we will be in good shape.

IS: The next one, this is important: the stages of the ES2023 approval. At some point in the very near future we hope to freeze the specification for ES2023, because the royalty-free patent policy requires that we put it out internally for two months. The approval of ES2023 at the General Assembly is on the 27th of June, so the latest date — theoretically, not practically — to put out the frozen version of the specification for Ecma members would be the 27th of April. I would suggest, of course, not to wait until the 27th of April, but to finish it as soon as we can. The idea would be to do it at this meeting, or if we cannot, then shortly after this meeting — let us try to do it before the first of April. The frozen version means that from the substantive point of view it has to contain everything we want to have; from the editorial point of view, not yet — we can still make editorial changes after that, so that is not so critical. Once the freeze is out, at the May TC39 meeting we can then formally accept the ES2023 specification for approval by the Ecma GA. That would be a TC39 decision, probably a yes-or-no type of decision. So we have to be very careful not to run past the deadline, in order to have it approved at the General Assembly. In case we do run late — which in theory could happen, though in practice it never has — it is also possible to ask the General Assembly for a letter ballot on ES2023 after the 125th GA meeting. So this is a central question that we have to discuss here, and we have to come to an agreement, with a plan for how we handle the approval process for ES2023.

IS: The next point is the chair group election. This is just a copy of what I found on GitHub. We have an excellent team, excellent facilitators, etc. Everything is clear; if anything is not clear, then tomorrow she will join the meeting and will be in charge of conducting the election process, and I think it will be a good one. So this is up first thing tomorrow. As I said, it is just copied.

IS: Regarding TC39 meeting participation, this is just a continuation of the old table, which I find very useful. You can see from the latest entry, at the bottom of the second page, the January 2023 remote meeting: still the same very nice, high amount of participation. It was a remote meeting, and 27 companies participated, which is typical. So I think in terms of participation we are still in very good shape.
IS: Now, regarding the standard download statistics: those who are interested can look at the figures later. The figures are already showing the usual pattern; at the moment we have only been able to collect two months of data, so it is not much. The same is also true for the access statistics — one is for ECMA-262 HTML access, the other for ECMA-402. These are the usual things. Then the TC39 plenary schedule, again copied: where are we now? It is unchanged, so I go immediately to the next slide, the format of the meetings. We have set this out for new readers who might join TC39 now; for the old hands it is not interesting, so I leave it just for reading. The same is true for the five-year periodic review of the two fast-tracked TC39 standards at JTC 1 SC 22. I have already presented this twice, but it is unfortunately still relevant. It is very important that JTC 1 provides a positive response to this periodic review. One is the JSON standard, the other is the "architecture" standard, ECMA-414. This is quite important from the Ecma point of view, because the entire fast-track philosophy is based on it: ECMA-414 itself does not change, but it references our yearly specifications, which we otherwise could not fast-track to ISO.

IS: The GA venue and dates are unchanged, so again just a repetition for those who are involved in these matters. The same for the Execom meetings, unchanged; the next one is the 19th-20th of April in Geneva, etc.

IS: And this is the end of the presentation — I'm sure I was within 15 minutes. Anyone interested in reading the documents can download them from the TC39 website. I have completed this presentation.

SFC: I was wondering if you have referrer information for the spec links. I think I've asked before, because for the ECMA-402 specification in particular, many of the older editions get more traffic than the newer editions. If we had referrer information, we could maybe go figure out what the problem is and fix it.

IS: No, it is the same as before. For those first years I never count the numbers; I leave them out, because I have the feeling it is some kind of bots — I don't know. So I only take into account the latest four or five years, and that's it. We have no further information, and we were not able to follow what on earth was going on, so I have completely given up on that detail. I think we can still survive with that.

IS: Okay, then I will stop sharing. And back to you.

RPR: Sorry, there are more questions on the queue. Next up is DE.

DE: You mentioned this SC22 review, I guess for the JSON standard and the architecture standard. Given that we don't have any updates to those, and the ECMAScript suite is already a standard — that being the thing that refers to all of our other documents published by Ecma — what is the purpose of this review? And what is the risk if it goes badly?

IS: The risk would be — well, JSON is up for review now. I am less worried about JSON, because de facto it is an extremely strong and extremely popular standard. So whatever ISO does with the fate of that standard, in my opinion they would be doing more damage to themselves than to us.

DE: What is ISO even talking about, given that we're not making any changes? What are they going to -

IS: That's right, that's right. But every five years they have to say: yes, this is still a good standard that we want to keep. And if they say no, we don't want to keep it, and kick it out — they would be very, very stupid to kick out the JSON standard. It is not about changing it or whatever; it is about keeping it as an ISO standard or not keeping it. It would be really very, very stupid to kick it out.

DE: So, do we have this periodic review for the ECMAScript suite standard as well?

IS: Yeah — it is coming up at the end of the year. It is a five-year review, so every five years this comes up, and for that one it is coming up somewhere in the last quarter of 2023. So I am warning you and everybody in TC39: if you have some kind of connection to your SC 22 national bodies, then try to influence them so that they simply say yes to both of those in ISO, so that they don't kill them. As I said, JSON is up already; the other one, ECMA-414, will be up for its five-year review at the end of the year.

DE: Are there any particular requests or concerns that you've heard from SC22 that we should be thinking about?

IS: Well, not really; it is only my own concern, from looking at how SC 22 is working. I have figured out that there is currently no working group in SC 22 associated with the Ecma standards, so they only come up in the plenary. This might be good or bad — it is just a potential concern, not necessarily a real one; just a warning. And we have one contact whom we know very well and who is very helpful in these things: Rex is the one who is following SC 22. So we might contact him and ask what the situation is at the moment in SC 22, and whether there is anything we really have to worry about, etc.

DE: Yeah, I think that would be good. I generally trust and defer to Ecma management here. I'm a little concerned about Rex, because when he was co-chair he was kind of privately reporting that we were somehow out of compliance with rules [which I disagree with] and kind of threatening that this would look bad at the ISO level, as you and I have discussed. So yeah, we should follow up on this, but mostly I trust and defer to Ecma.

DE: Okay, next question. You talked about complaints about the notes. I understand that people are concerned with the overall length of the notes, but I'm wondering who the audience for this is, and how detailed a summary people are looking for. Are they looking for things that go beyond the conclusion? And if so, what?

IS: Regarding the summary, the original idea is one we discussed with Patrick Luethi — actually Patrick came up with the idea, and I think it is a good one; we have been using it in the ITU as well. It is just to have a one-paragraph summary for every contribution, and that's all.
DE: We've been producing a summary of the conclusions for years, and I'm wondering if there's anything beyond this summary of the resolution that you would like to see, or if the complaint is mostly that it's just too long.

IS: I found the conclusions okay, and I have also taken the conclusions into the main part of the minutes, and I didn't have any problems with those or with the titles. But it would be just nice if, in addition, we could have this one-paragraph summary. As I said, my first reaction was that at this point in time, even without the one-paragraph summary, it got significantly better than what we had before. So I think we are on a very good way towards creating minutes where people don't have to read the 250 pages of the technical notes to get all the details.

DE: Yes, I'm glad you decided to start collating those conclusions. So I'm taking it that there aren't concrete requirements from anybody who wants to know more about what's going on — they just want something shorter than the notes. And if there are such concrete requirements, you will come and report those to us, right?

IS: Mm-hmm.

DE: Last, IS mentioned the GA meeting. All Ecma members, not just Ordinary members, are invited and welcome to attend the GA meeting. It's just a Zoom call. We will post the link on the Reflector as a redundant strategy alongside the email that is already sent to all members. I strongly encourage anybody who can afford to attend the GA, even kind of in the background, calling in, to do so, because Ecma is really interested in serving all of its members. There are conversations going on about how to engage non-ordinary members more and include them more in the decision-making process. Also, most decisions in Ecma are not made by a vote; they're just made by rough consensus. So you can definitely come and participate in the discussion if your member organization sends you as their kind of delegate to the GA.

CDA: This was my question on the queue: is it only Ecma representatives that can call into the GA?

DE: It's up to the member organizations who to send to the GA — it doesn't need to be the primary contact that's listed in the Ecma memento document, but, you know, if you want to attend, you should coordinate with your primary person.

MS: You need to be part of an Ecma member organization. That's it.

DE: Yeah. But then concretely within IBM, where CDA is coming from, I think he'll coordinate with other IBM representatives like Jochen Friedrich to attend.

RPR: All right. Yes, I think we're at the end of IS's report.

## TC39 Editor's Update

Presenter: Kevin Gibbons (KG)

- [slides](https://docs.google.com/presentation/d/1vc8V1y1ktuQkEXosFdXUriaC-Hn4nj9w_Vwyx6iphTQ/)

KG: All right, editor updates. There have not been any significant editorial changes since the previous meeting in January; there have been the usual minor tweaks and fixes, but nothing worth calling to everyone's attention. In terms of normative changes, we have landed two stage 4 proposals: symbols as WeakMap keys and change Array by copy. Those had been approved for a while — I believe both got stage 4 at the previous meeting — and after some bikeshedding about definitions and how to phrase things, especially in the symbols-as-WeakMap-keys proposal, we have finally gotten everything to a state that we're happy with and landed those.
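[Editorial note: for reference, a quick illustrative sketch of the two newly landed features — not code from the presentation:]

```js
// Symbols as WeakMap keys: non-registered symbols can now be used as keys
// (previously, any symbol key threw a TypeError).
const wm = new WeakMap();
const key = Symbol('metadata');
wm.set(key, { created: 2023 });
console.log(wm.get(key)); // { created: 2023 }

// Change Array by copy: non-mutating counterparts to the mutating methods.
const xs = [3, 1, 2];
console.log(xs.toSorted());   // [1, 2, 3]
console.log(xs.toReversed()); // [2, 1, 3]
console.log(xs.with(0, 9));   // [9, 1, 2]
console.log(xs);              // [3, 1, 2] — the original is unchanged
```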
KG: And then this last one was technically normative, but certainly a bug fix: when we added named backreferences, we added them in such a way that they were present in the Annex B grammar and in `u` mode in the regular specification. As you'll recall, there is the actual regex grammar, which lives in Annex B, and then there's this completely fictional grammar that regexes have in the main specification, which is not used for anything anywhere as far as I am aware, but which we maintain separately for whatever reason. And we messed up the integration such that if you didn't have the `u` flag and you were looking at the non-Annex-B grammar, then named backreferences were not allowed. We went ahead and fixed that without coming back to the committee, because allowing them had always been the intention.
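[Editorial note: a sketch of the kind of pattern affected; this was already valid per Annex B and in engines, and the fix makes the main-spec grammar agree:]

```js
// A named capture group plus a named backreference, without the `u` flag.
// Before the fix, the main specification's (non-Annex-B) grammar
// accidentally disallowed this; Annex B and engines always permitted it.
const re = /(?<word>\w+)-\k<word>/;
console.log(re.test('abc-abc')); // true  (the backreference matches "abc" again)
console.log(re.test('abc-def')); // false
```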
KG: In terms of upcoming work, as we discussed in the previous meeting, we are going to reduce the amount of monkey patching that Annex B does, by essentially inlining a bunch of stuff from Annex B. Instead of saying "this algorithm is actually different, go look in this other place" — so that you have to manually reconcile the two algorithms in your head — we are going to put the Annex B grammar in the main algorithms with, you know, "if Annex B is enabled", or some other phrasing like "if the host is a web browser or supports this feature", then do the steps that Annex B says, otherwise do the non-Annex-B steps. It will just be a single algorithm that you can read top to bottom, without having to figure out how to reconcile the two. The remainder of the work is fairly similar: we are making progress on consistency in general, and we also made some progress on clarifying execution contexts since the last meeting. Otherwise, a very similar list of work.

KG: Last and most important: we are cutting ES2023. We are freezing it — or rather, we have frozen it, I should say. We are not expecting any more significant editorial changes; there will be at least a couple of very small editorial tweaks that will land, but nothing large. Which means that the patent opt-out period is starting now. The next meeting is in very slightly fewer than 60 days. Normally this is a 60-day period, but we would like to get the spec approved at the meeting that is in two months. So please ensure that if you feel the need to do any review of this, you do it in advance of the next meeting rather than taking the full 60-day period, so that we are able to get formal approval at the next meeting. And then, after the opt-out period has elapsed, we will ask for official approval at the following stage.

RPR: Any other questions for Kevin? Okay, all right. Thank you for that.

#### Summary

A number of fixes and cleanups have been applied to the specification text. No further significant changes will be made before ES2023 is cut. We will be starting the IPR opt-out period now, and ask for approval next meeting.

## ECMA 402 Update

Presenter: Shane Carr (SFC)

- [spec](https://github.com/tc39/ecma402)
- [slides](https://docs.google.com/presentation/d/1jHVL3op6uOTNZvqlzmbAQTkT9Pw7CHdN-4kp-LPcecg/)

SFC: I think most people in this room have seen at least part of this slide before, but just in case anyone hasn't: ECMA-402 is the JavaScript internationalization library, the Intl object. As you can see here, we can do things like localize dates and date formats into your favorite locale and your favorite region. So how is Intl developed? It's a separate specification, but all proposals move through the TC39 stage process, and we have a monthly phone call to discuss details; you can find more information at these links. Here is TG2's personnel: USA and RGN have been the editors for the last year, I'm the convener, and these are the delegates — I copied the attendance lists from the last two meetings and merged them to get this list. We've been getting fairly solid attendance lately, which is great. So thank you everyone, and thanks for all the contributions from the delegates.

SFC: So, ES2023. We just got an update on the ECMA-262 side of this; I was wondering if Ujjwal or Richard had an update to share on the ECMA-402 side.

USA: We have a few remaining work items for ES2023 — most importantly, the stage 4 PRs — but we should be wrapping it up soon. Hopefully it is ready before April.

SFC: Thank you for that. Is there anything else that you need from this body to prepare the ES2023 draft?

USA: There is a PR; I suppose you'll get to that later. Consensus on that would be great.

SFC: Okay, so let's look at pull requests.

SFC: There's one normative pull request. It fixes issue #402 in the ECMA-402 repository. I'll go ahead and switch over to these slides that USA put together. It changes a little bit about the default hourCycle computation, so that it no longer resolves to non-preferred formats — as you can see in the example here, there's some funny stuff going on with this logic. One of the issues here is that the CLDR 43 update changes a bit about how some locales use the default hour cycle. Now, one issue is that this is too new for TG2 consensus: this PR came in after our last TG2 meeting, so we have not yet had a chance to discuss it in TG2. But as per our formal process, we are still asking this body for feedback on this PR and for tentative consensus. It's a one-line change — I can open up the actual pull request (PR 758) — a one-line change in the specification, right here, that changes the hour cycle resolution logic. If this seems okay to people, then we'll probably achieve TG2 consensus at the next TG2 call in a few weeks, and I'll ask for consensus at the end of the presentation.
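[Editorial note: a minimal sketch of the resolution behavior under discussion, as the PR is described here; which locales are affected depends on CLDR data:]

```js
// With `hour12: false`, the spec as previously written resolved locales
// whose preferred cycle is h11/h12 to "h24" (midnight formats as "24:00"),
// even though no locale's preferred data actually uses h24. The PR makes
// this resolve to "h23" instead (midnight formats as "00:00").
const { hourCycle } = new Intl.DateTimeFormat('en-US', {
  hour: 'numeric',
  hour12: false,
}).resolvedOptions();

console.log(hourCycle); // "h24" per the old spec text; "h23" with the PR
```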
SFC: But in the meantime, proposal status. We keep track of all of the proposals on this wiki page. We've got two stage 4 proposals that are shipping in the ES2023 edition: the Intl Enumeration API and Intl NumberFormat v3 — as Ujjwal mentioned, he is just working on merging these into the specification. We've got two stage 3 proposals: Intl LocaleInfo and DurationFormat. For LocaleInfo, we took the recommendation from this group to change things to be functions instead of getters, and FYT is working on integrating that change. For DurationFormat, there have been a number of mostly editorial pull requests lately to resolve various issues; there was a big update on that at the last TG1 meeting in January. These are both moving along, and I'm hoping they both land at stage 4 later this year. We also have a couple of stage 2 proposals: era display, which I've been presenting on, as well as the eras and month codes proposal — Frank gave an update on that in January, and it's also related to the Temporal work. We also have a bunch of stage 1 proposals. These may at some point advance, but stage 1 is largely a place where we keep track of experimental work that people have been exploring; only the stage 2 and above proposals have a concrete path forward.

SFC: So, let me go back to the slides. One thing I really wanted to highlight today is [User Locale Preferences](https://github.com/WICG/proposals/issues/78). It's not strictly a TG1 or TG2 proposal by itself, but it does touch other parts of the web standards space, and there's a lot of overlap in terms of personnel and interests, so this link is a good place to go. The core question we're exploring is: how do we improve the internationalization experience on the web platform in a way that respects user privacy? Anyone who is keen on privacy concerns knows there are a lot of concerns about things like Accept-Language, which is very fundamental to how internationalization works on the web platform. And most internationalization folks know that your locale — your language and region — is important, but it is only a subset of the information that apps are normally able to use to give you a higher-quality internationalization experience. There are also many other things, such as your hourCycle and numbering system preferences, your calendar preference, measurement units, and so on, which are all part of the bag of options that we collectively call Unicode extensions, or user preferences. So we've been exploring how to make this work on the web platform, and we had a big discussion about it at the last TG2 meeting earlier this month. I highly encourage anyone — if my comments sparked any interest at all from you, on either the privacy or the internationalization side — to go take a look at this proposal, weigh in, and help us find the best path forward that satisfies all the requirements. So thank you for that.

SFC: If you'd like to get more involved with TG2, you can do that via our [GitHub page](https://github.com/tc39/ecma402). One thing that is always helpful is help writing [MDN documentation](https://github.com/tc39/ecma402-mdn), and especially writing polyfills in JavaScript. Format.JS is a really great polyfill, and community contributors from time to time contribute polyfills for our proposals; this is a really great way to get involved with internationalization, as well as standards work in general. You read the spec, and we've gotten really good feedback from polyfill authors on some proposals before. It helps the specification, because it's an extra implementation — extra eyes on the specification, actually implementing it and making sure that it works. So that's my plug for writing polyfills. And if you'd like to join our call, mail this address or talk to me or anyone else on ECMA-402, and we can get you hooked up with that. So that's my update.

SFC: And I'll go back now to the pull request (758) to ask if there are any concerns, and whether we're okay giving conditional consensus on this, assuming that TG2 achieves consensus on it at our next call in a few weeks.
DE: I would like decisions made by the plenary to be informed by discussion in TG2. I would be happy to fully delegate the capacity to make consensus decisions on these things from TG1 to TG2, if we want to. But I don't see the benefit in having conditional consensus from the committee here based on a future discussion that hasn't taken place in TG2 yet; I'm just not comfortable saying anything at all without TG2 talking first. I would be okay if we say that, in general, TG2 can just take these decisions. So I'm uncomfortable with conditional consensus here — just like I expressed discomfort with conditional consensus for the Temporal issue we discussed last meeting.

SFC: That's fair. To be clear, I wouldn't normally ask for conditional consensus, except that USA did mention that he would like this to land in ES2023, and there's not going to be another TG1 meeting before we have to submit that. If we don't achieve consensus at this meeting, we'll save this for ES2024, which is probably fine.

DE: I think we shouldn't worry too much about the annual version cuts, and we should mostly worry about the current draft spec being in good shape. And I would encourage TG2, or anybody, to propose this process change, allowing TG2 to take decisions if in the future something smaller comes up. I just don't see how we could meaningfully give feedback on this yet, if that's what we're being asked to do.

SFC: Okay.

USA: I just wanted to say, DE, that I do understand what you mean here. So far we have followed what is mostly a two-pronged model, where we discuss things in both venues, and they could happen out of order; but it's understandable if we decide to always discuss first within TG2 before coming to plenary. That said, as SFC mentioned, we felt this one was a pretty straightforward approval: what we have right now is an incompatibility between different engines, and that is because of the bug in the spec, which means that we resolve to h24 when we know that there is actually no locale that follows that hour cycle.

DE: Three things. First, I was unaware that you had previously asked for consensus in this direction before, and I apologize for the turn of objecting to it now, not having objected before. Second, I think you should care more about the current draft spec, which will be the path towards really fixing the browser compatibility issue, rather than the annual version cuts. And third, I would be okay with saying that the committee will just defer to TG2 on this matter. I'm more okay with that than with saying we agreed to it, because I haven't done this technical review work, and I think consensus on this particular issue would kind of imply that we had done that review work as a committee.

FYT: I'm the author of the PR, and I think DE's point is the better one. I was actually surprised this was brought up here; I agree that we really shouldn't do it in that reversed order.

SYG: +1. The procedural thing Dan said, I completely agree with. I think if you're going to ask for conditional approval based on some discussion that hasn't happened, that is no different from saying "we would like stage advancement power within TG2" — and that is fine. I think that would smooth things over: most of these things you bring back are kind of pro forma anyway, and those of us who have the expertise should already be in TG2. So it seems fine to ask for the general power. But this ad hoc, one-by-one conditional thing doesn't make any sense.

SFC: For this particular PR and issue, I would rather discuss it in TG2. I have it on the slide because the normal procedure we follow is to take all the normative PRs and put them into these slides. It's unusual to have a PR that's opened between a TG2 meeting and a TG1 meeting, so this situation is a bit unusual, but otherwise this is the process we normally follow in preparing these slides. I'm totally fine with discussing this in TG2 and then coming back here. In terms of a procedure change, I'm not ready to propose such a change at this point. So I think there's not much more to discuss here: we'll discuss this in TG2 and then come back in two months and ask for consensus.

DLM: Yes, from Mozilla's point of view, we prefer to have the actual discussion of proposal advancement in front of the larger committee, so that we have the opportunity to do internal review. We don't have that same review process in place for TG2, so for us to be able to speak on behalf of SpiderMonkey or Mozilla, it really needs to happen in this committee.

SFC: Wearing my hat as a Google delegate, I will +1 that as someone in that position, in the sense that the way I run the TG2 meetings is not as formal as the TG1 meetings. We don't have an agenda advancement deadline, and I would rather not implement one, because that's a lot of extra process. So it is much easier, from an organizational point of view, to say there's only one body that has actual advancement authority and TG2 provides recommendations; that is a much easier operation to run. If we were to say TG2 actually has stage advancement power, that would also require changes to the processes in TG2, which is not necessarily something that I'm willing to sign up for.

RPR: All right, I think we've drifted a little bit — we're talking about a potential process change here, so we're moving away from the original item.

SFC: Everyone, please get involved with ECMA-402. Thank you.

### Summary

- The ES2023 cut is on track.
- Please see the [User Locale Preferences](https://github.com/WICG/proposals/issues/78) proposal.

### Conclusion

- No consensus for PR #758, due to it not having been discussed in TG2 first.
- In the future, everything that TG2 brings to plenary for consensus should be discussed in TG2 first, given hesitation around approving things "conditionally pending TG2 discussion". Annual version cuts of standards are not usually considered a reason that a change is urgent.
- Process changes for TG2 were discussed, but this wasn't on the agenda, and leadership would like to keep things as they are for now.

## ECMA-404 Status Update

Presenter: Chip Morningstar (CM)

- [spec](https://www.ecma-international.org/publications/standards/Ecma-404.htm)

CM: I have it on good authority this morning from IS that JSON is "an extremely strong and extremely popular standard".
CM: So there's that. As usual, nothing newsworthy. ECMA-404 remains an island of tranquility in a world gone mad.

### Conclusion

- No newsworthy changes (as usual)

## Test262 funding status

Presenter: Philip Chimento (PFC)

- [repo](https://github.com/tc39/test262)
- [slides](https://ptomato.name/talks/tc39-2023-03/)

PFC: I wanted to share an update about the funding status of test262 and the composition of the maintainers group. I apologize for having these slides in late — I think I shared them with the other maintainers on Friday, and things have been really crazy with traveling.

PFC: I'm in the maintainers group of test262, as are several other people in this room and on this call: JHD is here, RGN is on the call, I think, and there are others as well.

PFC: As an overview: test262 is the conformance test suite for ECMA-262 and ECMA-402. It is an effort that helps all of us do the work that we do in this committee, and is for the good of the ecosystem. Having this test suite helps ensure interoperability between implementations, prevents bugs in implementations that would result in discrepancies between them, and helps us find bugs in proposals before implementation starts. As for the people spending the effort to make all this possible: there's a certain amount of maintenance necessary, and there's a maintainers group that consists of some contracted maintainers — previously people from Bocoup, currently from Igalia, of which I'm one. The contract is in partnership with Google and covers 0.4 FTE of work. There are also other maintainers who have their time paid for by their employer to spend on test262, and there are volunteer maintainers who do all of their maintenance work in their free time. Other than the maintenance work, there is also test writing. Some of the test writing is done by the maintainers, some by people working on the implementations, some by the authors of proposals, and some by community contributors. All these sources of effort come together to make the test suite.

PFC: Here's a slide with numbers for the previous calendar year: about 450 commits, about 300 pull requests merged, about 3,500 new tests in the suite, and on average it takes a little over a week from the time a pull request is created until it is merged. That's an average; obviously there are much shorter ones and ones that take much longer. In 2022 we also created the maintainers group with governance policies. Before that, things were a bit more ad hoc, and this is good because there's a process now for people to get involved and receive the permissions and the trust that they need to become maintainers. There's an RFC process for changes that affect consumers of the test suite, such as implementations. We have a new policy for the staging directory, which allows proposal authors to contribute tests that are already correct but maybe still need some work to get into the right format, or that are correct but not complete, and have these available for implementations to run, so that we get alerted to problems with interoperability earlier rather than later.

PFC: I mentioned the contracted maintainers. The contract is funded by Google at the moment, but unfortunately, as of April 1st, Google can no longer contribute this. So, what does this mean for TC39? What's on this slide is my opinion, or projection, about what happens without the contract. Some TC39 proposals will have their tests written by proposal authors, with the time paid by the proposal authors' employers, so it's likely that those proposals will still have test coverage — examples are the ongoing test coverage for Temporal, or the test coverage for Change Array by Copy, which recently landed. Where I expect things to change is the coverage for proposals that are not funded in some way by proposal authors' employers. Some of the proposals that I or my colleagues at Igalia have been writing tests for are Array.fromAsync and duplicate named capture groups. If no funding is available, those tests need to be written by someone else in order for those proposals to advance to stage 4, and it seems likely to me that this is going to be absorbed by the proposal champions. So that means all of you. Then there's the other work that the maintainers group does, such as pull request reviews and other maintenance. It's not that the maintainers group is going away completely — people are still participating — but we can generally expect less availability, and the work becomes more reliant on maintainers with limited and/or unpaid time. So it seems likely that that kind of thing will move a lot slower.

PFC: The objective right now is not to get consensus on one of these paths, but just to put up ideas so that we can have a short discussion about them. I'll go through them, and then SYG would like to discuss one of them. Ideas that have come up in discussions are: just continue as best we can with reduced involvement from maintainers — it means more work for the proposal authors, and proposals might go slower. We might consider process changes, although we're not proposing any at this time; that would require a lot of preparation. Another idea is to look at WPT (web platform tests): the policies they have for accepting tests lean more towards "best effort", which is not what we do in test262, but we could consider being more like that. Another option is that we get more paid contributors — so if you like that option, please take this message back to your company and advocate for it. Other ideas are possible. SYG, over to you about this slide.

SYG: Sure, thanks PFC. Due to "different economic realities" — what's the phrase du jour here? — we can no longer fund the test262 contract. The staging directory put into place last year is helping test262 keep the velocity that it has had with funding, with implementers' help, by having implementers able to directly commit less-structured tests into the staging directory. That is all in service of making test262 more like WPT. Now, you might think I am wholeheartedly pushing for that direction, but I think that test262 quality as a whole, historically, is higher than WPT's. There are way more web APIs, and interop has by and large been a much bigger problem for web APIs than for ECMA-262. And test262 is a huge part of why that is: the quality of test262 has historically been very high and very thorough, and it tests spec corners that implementers, one, don't necessarily want to write tests for, and two, probably aren't in the right mindset to think of — they aren't spec authors, they aren't thinking of spec coverage the way some test262 contributors have been, coming up with tests that cover all the corners. Implementers test different corners, about the implementation, but not every corner. So I think what we have today, with no extra funding going forward, would naturally lean towards making test262 behave more like WPT. But I want to make a pitch here that I think it would be a good idea to keep test262 quality as high as it has been historically — and I don't think that level of quality is easily reached without extra funding. That is all I would like to say.

PFC: Thanks. That was the end of my slides. I don't know if people would like to have a discussion right now in the meeting, or discuss things more informally during lunch or something — I will be available for that — but is there anything on the queue right now?

DE: I want to give a +1 to SYG's comment. I'm happy about the addition of the staging directory — I think we'll make more use of it in the future — but I'm also very happy with the work that Igalia has done over the past year in terms of improving test262 maintenance, from reviews to tests, and ensuring that coverage is increased. And it's great that you have this fast turnaround time; there were previously issues with that. So I hope we can find some way to collectively fund this. Bloomberg already funds the writing of many test262 tests — the examples given of funded proposals that have tests are tests that we funded. That's not to say that all of them are, but I think this logically makes sense as a shared burden. One thing that some standards bodies do — like Khronos, the standards body behind WebGL — is that the standards body itself contracts with a provider to write the conformance test suite; that's one possibility. I think if we went to Ecma and proposed that, the first response would be: why don't the committee members pool resources separately? So that's another thing to consider. I don't know how congruent that is with the current economic environment. Probably something more to discuss at lunch.

SFC: Just to +1 that comment, I also wanted to raise ECMA-402 funding. It's another area where, over the last several years, I've been able to successfully pitch to my leadership that it is very important that we continue funding Igalia's work on this subject, but it's also the kind of thing where that's not a very good long-term solution, because it really should be a collective effort. I believe that we would all say that test262 is very important, and that the ECMA-402 editorial work is very important, and they should be collective things, not carried as a burden by any particular organization. So, just +1 to that.

### Speaker's Summary of Key Points

- The contracted maintenance for test262 is ending (previously sponsored by Google). We discussed various possible ways to continue, but we don't have a way forward yet.
- There was broad recognition by the committee of the value of professionally maintained test262.
## Test262 Updates

Presenter: Jordan Harband (JHD)

- [repo](https://github.com/tc39/test262)
- No slides

JHD: We've updated a bunch of tests, so if you are championing a proposal, please keep up to date about the test status of your proposal. That also means that if there's something in the proposals table that isn't accurate in describing the test status of your proposal, please send a PR to update it. We documented our test262 RFC process and our maintenance practice rationales; feel free to read those — I'm sure you will find them thrilling. We merged the async helpers implementation, so there are now test helpers for some asynchronous behaviors; a number of proposals with asynchronous behavior will have easier test authoring as a result.

JHD: It would be nice if — and this is a mild proposal — when we approve normative changes to proposals, we put something in the notes indicating who is taking responsibility for filing a test262 issue, or PR, to track those changes. It would be nice to have somebody, for each proposal and each set of normative changes, drive that forward and make sure it's tracked. Not as a strict requirement, but just as something we try to do. That's the end of my list.

DE: I thought we already had a strict requirement that normative PRs need test262 tests.

JHD: Normative PRs to the spec, yes — but these are normative changes to a proposal, like when Temporal does its normative updates, things like that. That's a case that's sort of a gray area in our process: it makes sense that there should already be test262 tests, but we've never tried to enforce that.

DE: I guess in our current process the line is that you need tests at stage 4. So when something reaches stage 4, we would have tests via our current process, right? We could consider, as other people have alluded to, having tests earlier, but that's a different topic.

JHD: Yeah. So, anyone who wants to approach the topic of requiring tests sooner, please go for it separately. My request right now is not about getting the tests done; it's just making sure there's a tracking issue somewhere in test262, so somebody can take a look at it.

DE: Okay. So this is a reasonable request for the subset of stage 3 proposals that have tests that purport to be complete. It's not a requirement for landing tests with normative changes, but we do need to track it when those tests need to be brought back to completeness.

JHD: That sounds like a great way to phrase a conclusion.

### Summary

Test262 has updated tests, and landed async test helpers.
Please maintain your test status in the proposals repo table.

### Conclusion

When a Stage 3 proposal is trying to maintain complete tests [not a requirement until Stage 4], if a normative PR gets consensus in committee, then please file a tracking issue/assign a person to restore test coverage.

## Reminder to enable GitHub 2FA

Presenter: Jordan Harband (JHD)

- No slides

JHD: My motivation here is that I want to require two-factor authentication in the TC39 org, but if I check that box, it immediately evicts anyone from the org who doesn't have 2FA turned on, which would ruin all of the organizing I've done of the member lists and everything. So please, if you have not yet enabled two-factor on your GitHub account, enable it — it's actually super convenient at this point.
You can hook it up to Touch ID, to Face ID, to a physical key; you can hook it up to Google Authenticator or 1Password, or something similar with the seed for a random code; I think you can even set it up to just shoot you an email that you click every time. And you can have any or all of these methods enabled. So please go make sure you have two-factor on.

### Speaker's Summary of Key Points

- Enable two-factor authentication on GitHub

## Iterator helpers

Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/proposal-iterator-helpers)
- Note: Topic is split into three sections

### Validate arguments

- [issue](https://github.com/tc39/proposal-iterator-helpers/issues/264)
- [PR](https://github.com/tc39/proposal-iterator-helpers/pull/265)

KG: Iterator helpers, as you may recall, has been stage 3 as of a couple of meetings ago. That means that implementations are starting to go through it, and in some cases have noticed things that are a little bit weird. We will be talking about three different things over the course of this meeting, and I've split them out into separate items, mostly because I didn't realize when I put the first one on the agenda that I was going to have so many different things. Anyway, this first one is basically an observation that, the way the spec is currently written, it consumes the receiver — in the sense of looking up the `next` method on the receiver, which it will call later — before it validates arguments. This is odd for two reasons. One is that it's different from the code you would naturally write: naturally, you would do all of the argument validation and then do the iteration. And second, it is generally inconsistent with the pattern we have more or less followed of doing all argument validation before we start actually consuming anything.
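[Editorial note: a hypothetical sketch of the ordering, assuming an implementation of the proposal's `Iterator.prototype` methods; this is not code from the meeting:]

```js
// An iterator whose `next` lookup has an observable side effect.
const iter = {
  __proto__: Iterator.prototype,
  get next() {
    console.log('next looked up');
    return () => ({ done: true, value: undefined });
  },
};

try {
  iter.map(null); // null is not callable, so this must throw a TypeError
} catch (e) {
  console.log(e instanceof TypeError); // true
}
// As currently specified, "next looked up" is logged before the TypeError
// is thrown; with the PR, the callable check on the argument happens first
// and the getter never runs.
```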
KG: So I have a pull request. All it does is validate arguments — for example, validate that the argument is callable in the case of `.map` — before looking up the `next` method on the receiver. I think this is a small and generally positive change, but it is a normative change to a stage 3 proposal, so I need committee consensus for it. If there's nothing on the queue… which there is not, I would like to formally ask for consensus. And I guess it's part of our new process that we're supposed to have at least one explicit second.

RPR: You have messages of explicit support from CDA, LCA, and JHD.

KG: Thank you very much. Also, sidebar: I do hope that we can spend some time in the future more explicitly documenting these sorts of conventions, because there are a number of them that are not written down anywhere. I know we've mentioned that before; this is just another example of why it's important that we actually get around to it at some point.

#### Summary

An oddity in the iterator helper specification meant that the `next` method on the receiver was looked up before the arguments were validated, which is different from how you'd normally write similar code and is also inconsistent with most of the specification, which does argument validation before consuming anything. A PR is proposed to correct this.

#### Conclusion

Consensus on the PR.

### Closing iterators which have not been iterated

- [issue](https://github.com/tc39/proposal-iterator-helpers/issues/266)
- [PR](https://github.com/tc39/proposal-iterator-helpers/pull/267)

KG: Alright, moving on to the next item, which is iterator helpers: closing iterators which have not been iterated. This is another one of those things that was noticed during implementation. The way iterator helpers are currently specified, they are basically generators — sort of spec-internal generators. This makes it clearer to readers what they're supposed to do, because it is closer to what a natural implementation in userland would be; it's also just much easier to write these down as generators than as iterators that track all of their state explicitly instead of just closing over things. Unfortunately, one of the ways that iterator helpers are supposed to be different from generators is that if you construct a helper — for example, you call `.map` to get a helper — and you don't iterate it at all, and just immediately close the helper by calling its `return` method, it should close the underlying iterator. As currently specified, it does not close the underlying iterator. This is a consequence of being specified as generators, because for generators, if you close an unstarted generator there's nothing to do: it just moves the generator into its closed state. It couldn't possibly be within a `try`/`finally`, or — soon — a block with a `using` statement, where there would be resources that would need to get cleaned up on close. So if you close the generator, no code runs. But if you close an iterator helper, that is supposed to close the underlying iterator as well. We didn't notice this because we were really thinking of them as generators, but this is a place where they are supposed to be different. So I have a PR that fixes it, by adding some special logic to `return`: if you close an iterator helper that has not yet been started, it will explicitly close the underlying iterator. There's a bit more bookkeeping, in that you need access to the underlying iterator at this point, which previously was only closed over; but the only important normative change is that if you close an iterator helper by calling `return` on it before you have actually started it, it will explicitly close the underlying iterator. Again, I think this is something that we should always have done, and just failed to do because of how the spec was written.
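[Editorial note: a sketch of the behavior the PR specifies — hypothetical code, again assuming the proposal's API:]

```js
// An underlying iterator whose closing is observable via its `return` method.
const underlying = {
  __proto__: Iterator.prototype,
  next() {
    return { done: false, value: 1 };
  },
  return() {
    console.log('underlying iterator closed');
    return { done: true, value: undefined };
  },
};

const helper = underlying.map((x) => x * 2);

// Close the helper without ever calling next() on it. As currently
// specified, nothing happens to `underlying` (generator-like semantics);
// with the PR, the helper's return() closes the not-yet-started underlying
// iterator, so "underlying iterator closed" is logged.
helper.return();
```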
We can revisit if there's a surprise, once JHD writes that. + +KG: That sounds good. Okay, so I'd like to ask for consensus for this change, assuming that it does not prove to be unexpectedly complicated. Looks like there's a reply from SYG. + +SYG: I was going to say what Kevin said. I can also guarantee that it is possible to do in user code; it's just that it's not going to be as simple as typing a for-of thing in a generator and expecting that to work. + +MM: Okay. Okay, so I'm pretty happy. So yeah, I approve modulo possible surprises once JHD writes his implementation. + +DE: JHD, is this part of the complete es-shim implementation of iterator helpers? + +JHD: Yes + +DE: Great, so there's a complete es-shim implementation of iterator helpers that you can bring up to date. All right. Would you like to repeat the request? + +KG: Yes, I would like to ask for consensus, and in particular to get at least one statement of explicit support for this change. Would anyone like to explicitly support? + +MM: Explicit support. + +KG: Thank you, MM. Okay. I hear explicit support and no objection, so I will take that as consensus for this change. + +#### Summary + +A bug in the iterator helpers spec would lead to an underlying iterator accidentally not being closed in the case that the helper was closed before iterating it at all. A PR is proposed and accepted which will fix this. + +#### Conclusion + +Consensus on the PR
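+
+For reference, a rough sketch of the accepted behavior, using a hand-written iterator so that the close is observable (with a generator in the suspended-start state, no user code would run on close):
+
+```js
+const underlying = {
+  next() {
+    return { done: false, value: 1 };
+  },
+  return() {
+    console.log("underlying iterator closed");
+    return { done: true, value: undefined };
+  },
+};
+
+const helper = Iterator.from(underlying).map((x) => x);
+helper.return(); // close the helper without ever iterating it
+// old spec text: `underlying.return` was never called
+// with the PR:   logs "underlying iterator closed"
+```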
+ +### Iterator helpers: renaming `.take` / `.drop` + +Presenter: Michael Ficarra (MF) + +- [proposal](https://github.com/tc39/proposal-iterator-helpers) +- [slides](https://docs.google.com/presentation/d/1BjtOjv447KcXSsz2GdV-HBnhhUTToRMHuMQO6Zlosnw/) +- [issue](https://github.com/tc39/proposal-iterator-helpers/issues/270) + +MF: Okay, so we have had a request from the community to re-evaluate the naming. If you want to follow along, the issue is 270. As background, we have two methods called take and drop. Take takes an iterator and a number of elements and produces a new iterator that is exhausted after that number of nexts. Drop takes an iterator and a number of elements and nexts the underlying iterator that many times and then yields all of the remaining elements from the underlying iterator. + +MF: So these are usually called `take` and `drop`. They're sometimes called by some other names, but we'll get into that detail later on. Also necessary background: in all the iterator helpers methods that consume the underlying iterator, when the consumption is done, they close the underlying iterator, they don't just stop iterating. + +MF: That is true of `take` here as well: if `take` completes, the underlying iterator is closed, so it can't be used as a way to just advance the underlying iterator a certain number of times and then reuse the underlying iterator. There were some community members that were trying to use `take` in that way. They were looking to reuse the underlying iterator after exhausting the produced helper iterator. They claim that if the name was something else, like limit, they might not have made this mistake. So this comes from an actual user request and there are a number of supporters of that rename on the thread. + +MF: So, here's some data that I collected on use of the names for these two operations in other languages and in JavaScript libraries. You can see that by far the most common names for both of these are take and drop. And in particular, the contentious one of "take" is almost universally used. The only other alternative that is used more than once is "limit", which was the one that was being suggested. So on that point, I think if we renamed to "limit", we should also rename "drop". I think everywhere that "drop" was used, the take operation was named "take". So there's kind of an implication there. Even if we do make this rename, this doesn't allow us to use take and drop for other operations. We should consider those names dead. We should no longer use "take" or "drop" for anything. + +MF: Something else worth considering while we're considering these renames is that we have plans for future proposals for takeWhile and dropWhile, which are typically called that: takeWhile and dropWhile. If we renamed to limit and skip as possible alternatives, you would have limitWhile and skipWhile, which are names that basically don't exist anywhere else. That might be a reason not to make the rename. And that's it for the presentation. So what we are considering is renaming take to limit and drop to skip, or possibly some other choice. I can seed the conversation by saying that my opinion is that, given the data that I collected, it looks like take and drop really are far too common to ignore that naming precedent. And yes, some people may draw incorrect inferences from them, but renaming would probably have more downsides from having a less familiar name. So, do we have feedback? + +PDL: Yes, so my first question is: do other languages that use take and drop also close the underlying iterator? And what's the motivation for doing that? It seems surprising, and I wonder why. + +MF: The idea of the underlying iterator being closed is kind of a unique thing to JavaScript iterators. The other question was, if somebody did want an operation that advanced an iterator by a number of elements, could we add something like that? I'd be fine with doing that; it's just not included in the initial iterator helpers MVP. + +PDL: I would support the rename, and possibly think of using `take` and `drop`, in a separate proposal at some point, for something that does not close the underlying iterator. + +MF: I would not be okay with using take or drop to do anything but what they're currently specified to do. If we do the rename, we should not use take to mean this new thing, because it's very, very common in other languages for take to mean the thing that it currently means. + +PDL: Well, except that other languages don't close the underlying iterator, so that would be the one thing in line with everything else. Okay, you're saying it would still be the same other than not closing the underlying iterator. That's possibly fine. + +KG: Yeah, you asked about why this is coming up. So I don't think MF mentioned in the presentation, the reason we're bringing this up is because Node.js has a streams implementation and they have just copied iterator helpers onto their streams, and they have copied that as close to the spec as possible, which is great, and have shipped that as experimental code. Not as like "this is done"; they're open to making changes to it - it's firmly marked as experimental, but shipped so they could start getting feedback.
And one of the pieces of feedback that they got is that someone expected this to work in the "keeps the underlying thing open" way. They wanted to take some items from the iterator and then take some more items from the iterator, and that is a thing you can do in other languages sometimes, depending on… like in Rust, there's this ownership model that prevents you from taking before exhausting the thing that you've taken and so on. Anyway. So this came up in real life; we got actual feedback from an actual implementation. That's why this is coming up. + +LCA: Yeah, KG did mention this, but Rust does close the iterator after a take, because the take method takes ownership of the iterator. So once the take is complete the iterator is closed, or dropped rather, and you cannot reuse it for further take operations. + +KG: I thought if you dropped the `take`'d iterator, you could like restore ownership. + +LCA: Oh no. It kind of depends on whether the iterator was borrowed; it's kind of complicated. You can keep the original iterator and reference-copy it, and then you can take things on that, but if you then `take` on your original iterator, it would resume with its own counter starting at 0 again, rather than starting from where it last left off. + +KG: But in any case Rust prevents you from being confused. + +_break for lunch_ + +LCA: Rust has `take` and `skip`, and they have `take_while` and `skip_while`. + +DE: I'm happy to follow the champions' opinion that we should stick with take/drop. Given the relative (but not absolute) uniqueness of closing the iterator to JavaScript, I still don't think that means that we should disregard the rest of the cross-language consensus on this. I understand that closing iterators is something that people have to learn, but it seems like the kind of thing that you just have to learn once, so maybe it's not so bad. This is a subjective opinion about a subjective circumstance. But overall, I'm happy to defer to the champions on making a call, since arguments were given on both sides. + +JRL: I don't understand why renaming to `limit` changes the expectation that the underlying iterator is going to be closed or not. If we rename it to limit, then the expectation could still be that it doesn't close the underlying iterator and you could limit multiple times. I don't understand why renaming eliminates the confusion. I think the confusion still exists; it's just that we're going to call it a different method now. + +MF: On the thread where this was proposed, multiple people gave that opinion. It's subjective and I take them at their word for it. + +SYG: Okay. So three things. One, my opinion is I still like `take` and `drop`. Two, this is stage 3, so I would like our bar for renaming to be high here. I would like us to default to not do things like renames during stage 3. And three, I have similar concerns to what JRL basically said about the confusion. I would like to take the folks who offered the opinion on the issue at their word, but I can see a path to confusion where they had a certain behavior in mind, they found out that this iterator helper does not have that behavior, and any initial name would have been similarly confusing. It's not that clear to me how much of the root-cause confusion is attributable to the name here. So given that, I don't think I'm compelled to change from `take`/`drop`.
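+
+For reference, a sketch of the behavior that prompted the feedback, assuming the current close-on-completion semantics of the proposal:
+
+```js
+function* naturals() {
+  let n = 0;
+  while (true) yield n++;
+}
+const it = naturals();
+
+console.log([...it.take(3)]); // [0, 1, 2]; exhausting the helper closes `it`
+console.log([...it.take(3)]); // []; some users expected [3, 4, 5] here
+```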
+ +DE: +1 to the preference to not rename during stage 3. + +DLM: I feel the same way. + +WH: I'm not quite sure what to do here. The important thing in commonality with the other languages is the usage patterns. And if it looks like the same usage pattern should work in JavaScript, but it subtly does something different, that'd be a problem, and I would be reluctant to ship something which repeatedly causes such a problem. What are the options here? Same name, rename, not close the iterators, anything else? + +KG: It was suggested, and I don't particularly want to pursue this option but it is at least a thing that we could do, to have an additional argument to the method which specifies whether or not to close the underlying thing when the `take` is exhausted. But I don't really like that option. + +WH: This would be a landmine that we're putting into the language for anybody familiar with these things coming from other languages. + +KG: Yeah, so I did want to speak to that a little bit. I agree that there is the potential for that, but it is at least not particularly subtle, because you exhaust the first iterator and then the second one is just empty. That will be confusing, but that will not be subtle. You will just not have things. So you will get a bug if you have the wrong expectation, but it is not a bug where your program is a little bit wrong. Your program is a lot wrong. So I am hopeful that it will not hurt too much. You could contrive a situation in which your program is just a little bit wrong and you don't notice, but I expect even people who have the wrong expectation with the current semantics will generally notice. + +WH: Yeah, it depends on how hidden this is inside the program and how familiar folks are with the details of how this stuff works. I would expect casual users to be confused by this. + +RBN: One of the things that WH just said was that we could be doing something wrong, something different from everyone else. But my impression, from having written a library that does these types of operations, from having worked with C# and LINQ for many years, and from surveying the ecosystem, is that the expectation in most of these existing runtimes is that whenever you do `take` the iterator is essentially exhausted or closed. Now, most of these operate on the iterable rather than the iterator, which is a distinction between JavaScript and the packages in the ecosystem and many other languages that are also prior art. However, the consistency that we have with the `take` name is that when it's used, it fairly consistently means that you're essentially exhausting the thing, that you're done with it, you're not continuing to use it after the fact. You're going to be further operating on the results of take, rather than taking something and then trying to take something else. That's not usually the common case with these types of iteration methods as they're used in the ecosystem, and I think that there's value in using a name that is consistent with the ecosystem, has a consistent approach with the ecosystem, and that also is one that's commonly used in other languages, because not everybody is only a JavaScript developer - people will come with that background and knowledge. They come from other languages or will take this to other languages they use. And the other thing I wanted to point out is that closing the iterator is probably the best failure state.
I pointed this out in the reflector. If you are using, let's say, an async iterator that also has an async version of `take`, and that's backed by a database, you want to be sure that the default is that the database connection is dropped. Otherwise you'll starve yourself of resources. And the same is true for file I/O and everything else. So the reality is that the only real valuable failure state that we could actually leverage is closing the iterator. And there are simple ways of not forwarding return, if you want to reuse this, that we could even consider modeling into the API, be that through an options bag (which KG has mentioned is something he'd rather not consider) or maybe even another method that allows you to create a non-closing iterator or something of that sort. + +USA: We are past time. So, MF, what do you want to do? Do you want to bring this back, or… + +MF: I'm comfortable finishing this without further discussion and seeing if we have consensus on no rename. + +KG: We don't need consensus on not renaming. + +MF: It sounds like nobody opposed keeping these names. + +KG: I mean, I'm happy to ask if anyone opposes us moving forward in that way; it's just not something we need consensus for. I personally am fine with keeping the existing names and just saying that people will be confused no matter what we do, and this, as Ron says, is the least bad confusion possible. + +DE: +1 to that, and we can always add either an options bag or another method, so continuing with the current proposal seems like a good way forward. + +SYG: +1 + +MM: Yeah, I support not renaming. + +DE: Anybody want to express concerns? + +_silence_ + +MF: Great. + +#### Speaker's Summary of Key Points + +- Some people expressed support for the current names, and some would prefer a switch to some other name. +- Several delegates shared the view that there should be a high bar/strong motivation for changing names during stage 3 +- JS's usage of iterators rather than iterables, and the existence of iterator return in the first place, is rather unique, complicating the comparison with other languages (which tend to use `take` and `drop`). + +#### Conclusion + +- We will be sticking with the existing names and semantics for `take` and `drop`. Not making a change does not require consensus; that said, there was explicit support from several delegates to stick with the current specification. + +## Temporal update and normative changes + +Presenter: Philip Chimento (PFC) + +- [proposal](https://github.com/tc39/proposal-temporal) +- [slides](https://docs.google.com/presentation/d/1b74GI-zHrG0wDzmwFs_yPWRli24KyVUNx3GeZt8JouA/) + +PFC: (**Slide 1**) Hi everyone. It's me again. You heard me this morning about test262. My name is Philip Chimento. I'm going to be presenting about the Temporal proposal. Unlike the presentation this morning, this work here is a partnership with Bloomberg. In particular, I'd like to thank JWS from Bloomberg for helping prepare the slides. He did a lot of work on that. + +PFC: (**Slide 2**) The purpose of today is to give a progress update. Last time, in January, we talked about making a final push to resolve the issues raised during stage 3, and we are now closer to that. I listed a few things last time that we still needed to address, and at the end of the presentation I'll have a batch of changes addressing those things that I'd like to ask for consensus on. And then I'll talk about one new issue that was raised last time during the plenary.
So there will be a short discussion of what remains on the proposal. The other thing is that implementation has continued in several engines, and as always this has produced great feedback. We've also received feedback from people trying out the proposal using polyfills, and in particular there is a bugfix that a community member noticed and brought to our attention. + +PFC: (**Slide 3**) Another thing I should mention is the progress of standardizing the string format in the IETF. This is something that I try to give an update on every time I present. The status for the last several meetings has been that the document is under review by the IETF's Internet Engineering Steering Group. As luck would have it, this morning we received editorial comments from the IETF area director for this area about the draft. So this is as fresh news as you're going to get on it, and the timeline that I've heard mentioned is 7 to 12 weeks from the area director's evaluation until the last call. I don't have updated information on where that estimate came from, but that's what I heard as of this morning. As a reminder, we've agreed not to ship Temporal without a flag in any implementations until the standardization process has been completed for the string format. + +PFC: (**Slide 4**) What else is left? There is the issue with the proposal allowing nanosecond precision. I have a bit more to say about that in the following slides. Another thing that is new since last time is that we were requested by implementations to add a host hook for HTML. To the best of my knowledge, this is a layering fix that doesn't require consensus from TC39, but I put links here to the PR for the proposal, and the PR for HTML, in case you're interested. Aside from these things we don't expect anything else other than editorial changes, unless implementors bring up any showstoppers. + +PFC: (**Slide 5**) About nanoseconds, this is very closely related to the topic of not doing unbounded integer calculations in the spec. As we discussed last time, the problem motivating the request to go to microsecond precision is that implementations shouldn't have to do expensive arithmetic operations, essentially on bigints. After investigating this and getting feedback from other implementers as well, it seems like nanosecond precision is not the only place where we'd have to do unbounded integer arithmetic. It also occurs when you balance different units in a Temporal.Duration with each other. In the spec this balancing arithmetic takes place in the mathematical value domain, ℝ. I originally thought that this didn't matter, but ABL, who's been working on the proposal for Firefox, helpfully provided a bunch of test cases for test262 showing places where it does matter: places where, if you implement the calculation using floats, you'll get a different result than with integer arithmetic. None of the implementers that we've talked to liked this. And it doesn't seem like a good situation to have to do BigInt arithmetic where it wouldn't be necessary. So, as I mentioned earlier, if we eliminate nanosecond precision and have the whole proposal be in microsecond precision, that won't eliminate all the places where we would have to perform bigint arithmetic in implementations, or even arithmetic in 64 plus 32 bits, which is another thing we talked about. So we did a bunch of investigation and discussion about this. We have a framework for a solution.
I am not presenting it for consensus during this meeting because the details are not worked out enough to propose a spec change at this point. But it involves putting an upper bound on some of the units of durations. And the goal that we're trying to achieve is that all calculations have to be able to be performed with at most 64-plus-32-bit integers. + +PFC: (**Slide 6**) So this slide is a bit of an illustration of what the current situation is and what changes. We perform the calculations in the mathematical value domain, like I said, and then we store them in the internal slots of Temporal.Duration as a "float64-representable integer", which we get by taking ℝ(𝔽(..)) of the value. That means that implementers can implement the storage as 64-bit floats. But unfortunately, it doesn't prevent the unbounded integer arithmetic from happening in the interim, between when a value is retrieved from and stored to a Duration object. In the framework that we're proposing for the solution, we don't want to change the storage. Duration units will still be stored as float64-representable integers. The date units are going to continue to be stored separately. They need to be calculated separately, because calculating with date units requires calendar operations. The result of having to calculate date units using calendars is that you can't freely convert date units between each other and into time units. So for example, if you have one month you may not convert that to 30 days, because not all months are 30 days. You need a reference point and a calendar calculation and such. Time units, which are hours, minutes, seconds, and whatever units we choose for subseconds, are always freely convertible with each other. There's no calendar that says a second is actually two seconds long. There are leap seconds, but POSIX ignores those, JavaScript Date ignores those, we're not taking those into account. There's leap second smearing, there's a movement to abolish leap seconds; those are all things that we are leaving outside. So what we're going to do with time units is convert them to a normalized form of an integer number of seconds and an integer number of subseconds. If we were to keep the proposal at nanosecond precision, the absolute value of the subseconds would be between zero and a billion minus 1. If we were to go to microsecond precision, the absolute value of the subseconds would be between zero and a million minus 1. And then we're going to place an upper bound on the absolute value of the number of seconds at `Number.MAX_SAFE_INTEGER`. So it's no longer possible to have a Temporal.Duration with any time units that, totalled together, would be equal to or longer than (`Number.MAX_SAFE_INTEGER + 1`) seconds. So, when we do calculations with durations, we're going to convert to this normalized form, do the calculation, which won't have to deal with integer overflow or precision loss, and then convert back to the float64-representable integers for storage. + +WH: I'm curious — if seconds and subseconds are both integers, and I assume that they must have the same sign, what is the reason for having separate seconds and subseconds rather than just having an integral number of subseconds in the spec? + +PFC: Just convenience for implementers, because if we had one number then the maximum value would be larger than 64 bits. + +WH: It would fit into a 96-bit integer, or actually, you don't even need 96, you need… something like 83 bits. + +PFC: That's right, something like that.
It would fit generously into a 96-bit integer. We kind of expect implementers would choose to implement it this way anyway: the seconds would be a 64-bit integer and the subseconds would fit in 32 bits. + +WH: It would just simplify the spec; implementers can use any implementation which is mathematically equivalent. + +DE: It sounds like the clarifying question has been answered, and this is an editorial decision made by the authors, and I think that makes it non-blocking. + +PFC: I'd also be happy to take a look at how dramatic the difference in the specs would actually be, if we have time for that. + +PFC: (**Slide 7**) The plan is to produce a detailed spec change soon. We'll check back in with the implementers and see whether it addresses the concerns. And at that point, we will talk about whether to keep the precision at nanoseconds or reduce it to microseconds. We'll have a clearer idea of what advantages microseconds might bring, and we aim to have a decision on this by the time of the May plenary. And so we'll have a spec change ready that we will present for consensus at that time. If you have questions or feedback about the idea that I just presented, I'm here, and several of the other champions are here as well. If you have opinions on this, I would like to hear them. And obviously, if you're not here and you have questions, feel free to reach out online as well. + +PFC: (**Slide 8**) All this is to say, we're nearly there. The to-do list is finite and decreasing. + +SYG: First of all, thank you PFC and the Temporal champions for taking feedback very seriously and working on a way forward here. I want to understand one thing, for the benefit of the rest of the committee. V8's position here is that ideally we would still want microseconds and bounded integer precision, hopefully just using int64s, but that seems like it's actually not on the table. Which I think I'm okay with, but I want to confirm with the champions that it is not on the table because we can't have 64 bits everywhere, right? + +PFC: Unless you want to reduce the range of allowable values for Temporal.Duration below a limit that we think is not realistic for what we'll see in usage, then duration calculations have to be done in 96 bits and not 64. + +SYG: Thanks. And then, the second part is the 64-plus-32 thing. The background there is that we were looking around at how other duration libraries represent nanosecond-precision time. In particular, abseil, the C++ library, uses an int64 to represent seconds and an additional 32 bits to represent subseconds. I think they represent quarter nanoseconds or something like that, which is what fits in those 32 bits. So I understand the detailed spec is not worked out here; I'm wondering what the implementation strategy is with the current plan from the champions. Does it basically lock all implementations into an int64-plus-32 representation? If you don't want an optimized implementation off the bat, is the choice that you just do everything with bigints, because obviously that's big enough, but if you do want an optimized implementation, there's a very clearly understood way to do it, and that's 64 plus 32? + +PFC: Right. I don't think it obligates you to use 64 plus 32. In particular, I believe, although I'll need to confirm this when we actually go to work out the details, I believe it should be possible to create a polyfill for this with two JS numbers instead of 64-bit and 32-bit integers. But I'm not 100% sure about that.
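+
+A rough sketch of arithmetic in the normalized form being described, assuming nanosecond precision and non-negative values for simplicity (the names here are illustrative, not from the spec):
+
+```js
+const NS_PER_SECOND = 1e9;
+
+// seconds: integer with seconds <= Number.MAX_SAFE_INTEGER
+// subseconds: integer nanoseconds with 0 <= subseconds < 1e9
+function addNormalized(a, b) {
+  let subseconds = a.subseconds + b.subseconds;
+  const seconds = a.seconds + b.seconds + Math.floor(subseconds / NS_PER_SECOND);
+  subseconds %= NS_PER_SECOND;
+  if (!Number.isSafeInteger(seconds)) {
+    throw new RangeError("duration time units out of range");
+  }
+  return { seconds, subseconds };
+}
+
+addNormalized(
+  { seconds: 1, subseconds: 999_999_999 },
+  { seconds: 2, subseconds: 2 },
+); // { seconds: 4, subseconds: 1 }
+```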
+ +SYG: Alright. I think that clarifies my questions. Thank you. + +DE: So I think I'm very happy with this framework. I want to say at Bloomberg we have a moderate preference for the expressiveness of nanoseconds. There are a number of publicly available financial data feeds that are expressed in nanoseconds, and it would be great if we didn't have to worry about whether this would be representable in Temporal units. That said, microseconds are already pretty small; most of the things that you want to display to users are microseconds or coarser, so it might not be the end of the world. Anyway, I think the biggest problem was the bigint overflow, and I get the feeling that the 64-plus-32 thing will be suitable, so I don't see a reason to coarsen precision to microseconds. So I'm very happy with what was proposed here, once the details are worked out. But it also wouldn't be the end of the world, like Temporal being completely unusable, if we went to microseconds. + +WH: For this I would like us to have the simplest spec and just let implementations pick whatever implementation suits them, be it a single integer or separate integers for seconds and subseconds. If we specify it as a single integer, it's fairly obvious how an implementation can split it into seconds and subseconds — there are time libraries that do just that. If we specify it as separate seconds and subseconds, it's not obvious how an implementation could do this using a single integer, and it's very easy to get the spec wrong, particularly in the areas where you need to transfer carries and signs between arithmetic on one of the numbers and the other. The overflow boundary conditions can get quite tricky, where you might get cases where something overflows when it shouldn't. So, I definitely want implementations to be able to use separate integers for seconds and subseconds, but I would like to specify it as the simpler variant of just having an integral number of whatever your subsecond units are. It just makes the spec much simpler. + +PFC: I think that's a fair concern. I will say it might not be as obvious as you think how to split it up, because we have actually had comments from implementers on something similar, where they didn't realize that the spec allowed them to split a large integer into smaller integers, but I think maybe we can solve that with a note, or something like that. + +WH: Yeah, just add a note. It's challenging to mathematically separate integers and get all the boundary cases right. + +DE: The rest of the Temporal spec is also challenging mathematically. I have a reply here. You're making an interesting editorial suggestion which I'm not necessarily opposed to, but also, do you have an opinion on the semantics, or the framework for the semantics proposed about the limits, maybe sticking with nanoseconds? + +WH: I'm not quite sure what you're asking me. I would not be in favor of having this work with times of up to a googol seconds. So we definitely need to have some limits. And 53 plus 30 bits, or anything like that, is reasonable. + +DE: Great. + +WH: The things I care more about are identities, such as addition being commutative and associative. + +SYG: I get where you're coming from WH, but speaking from experience of trying to review a lot of implementations here, and just given the sheer size of Temporal, I would lean the other way for editorial direction.
I would be more comfortable, if everyone has agreed on the bounds and the number of bits needed to represent these values and on what a good representation ought to be, for the spec to get that right once, rather than leaning on implementations to work out what the optimized representation could be. Just the sheer size of this proposal I think means in practice that implementations are going to be implementing the spec literally step by step. It is just impossible to review otherwise. If somebody came to V8 with "here is an optimized representation of Duration and all the math operations", I would just reject it if it was not mappable to the spec steps in a way that I can actually review. As a software engineer, I don't know what we're going to do if it's spec'd in such a way that it's not obvious to review. + +WH: I strongly disagree with that position in general. What we need is a limit on how big the number of seconds and subseconds can be. I don't want to have to deal with edge cases in which you have a number of subseconds greater than a second, dealing with opposite signs and stuff like this. It just creates unnecessary spec complexity. + +SYG: There might be a way to thread the needle, but there's nothing which says that you must implement seconds and subseconds using two separate integers. You could just use 96-bit arithmetic if that's faster and simpler. + +DE: So I want to propose, as next steps on this interesting topic: the editors have this regular open call, open to all delegates; could you continue the discussion there? I think we can trust the editors to ultimately make a good call on this editorial decision based on inputs like this. + +WH: It's not just an editorial call. It's also a correctness issue. + +DE: This spec has to be correct. + +WH: Yeah, the spec has to be correct. That is my main concern. + +DE: I think the point has been registered and we can move on. + +JGT: So first, just a note; this is not necessarily related to the size constraints. One thing that wasn't immediately obvious while designing the duration type, but is fairly obvious now, is that a very large percentage of durations are going to have one unit in them, right? And so there may be some significant storage optimization opportunities for durations, especially if lots of them are being created. So as implementers are giving feedback on the bounds, they might also want to think about what use cases are likely for durations, and perhaps about an optimized path for single-unit durations. + +CDA: Okay. We have about 12-13 minutes left. So PFC, do you want to continue? + +PFC: (**Slide 9**) OK, I'll run through the normative changes quickly. There are only five this time. Like I said, the to-do list is finite and decreasing. + +PFC: (**Slide 10**) This is one [fix time zone formatting in ZonedDateTime] that I presented last time and that didn't achieve consensus because more time was needed for review. During that review, we had a discussion with TG2, and FYT raised some concerns about the proposed solution, and ultimately we were not able to get to a position where the PR proposed last time would be able to achieve consensus in plenary.
So we came up with an alternate design, which we believe addresses the concerns we heard, while still allowing the toLocaleString method of Temporal to be used, although passing a ZonedDateTime to the format method or the other methods of a DateTimeFormat, like in this code sample, will throw. This is a temporary solution to allow toLocaleString to be used, and we hope that in a follow-up proposal we will be able to find a solution that everybody's happy with, which will allow a ZonedDateTime to be used with the other methods of DateTimeFormat. You'll notice at the bottom of the slide it says "late PR", as we had part of this discussion after the agenda deadline. So you should note that, unlike the other PRs I'm presenting, this one was added quite recently. If you need more time to review it, I'm available to talk it through with anybody who has questions. And in case people are not ready to lend it consensus today, perhaps we could have a short item on that at the end of the meeting. + +PFC: (**Slide 11**) Right. We have a pull request auditing the callers of MakeDay and MakeDate and TimeFromYear for possibly out-of-range values. This actually stems from an existing bug report in the tc39/ecma262 repository about how the operation MakeDay is not precise about when it returns NaN. This affected some of the operations in Temporal, which asserted that NaN was not returned from that operation, but that may not be correct. So, rather than complicate the spec by having a bunch of code paths handle NaN separately, we would like to add mathematical-value versions of at least MakeDay and other similar operations, which hopefully we can recombine with the ecma262 operations in the future, once we get more clarity on those. But this at least makes the semantics clear and removes ambiguity without complicating the spec too much. + +JGT: Hey, PFC, could you back up two slides? (**Slide 9**) Just to clarify on this, the current behavior in the spec that we're trying to fix is what's shown in this code sample, right? So the illustration of this code sample is that it's really bad to return different results from toLocaleString than from DateTimeFormat.format. That's the problem to be solved. The solution that we're planning is essentially to have that second line of code throw, right? So we're not proposing what's here, but the PR behavior is just to throw in the second case and then come back at some point in the future with a better solution for DateTimeFormat. + +PFC: Looking at this slide, I see it wasn't entirely clear; you're right about that. + +PFC: (**Slide 12**) All right, back to RGN's PR (#2500). This is a follow-up to an issue that we discussed in plenary last time about when exactly property bags are validated, and how, when you pass a property bag to a calendar operation. This switches the order of things around a little bit to make sure that all the validation of calendar-specific properties, like `era` and `eraYear`, is handled in calendar code, so that if you don't support calendars other than ISO 8601, then `era` and `eraYear` do not appear in the spec that you have to implement. So this has a couple of very subtle changes to the order in which certain properties are accessed, but the main visible consequence is what you see in the code sample below. The first two calls to `PlainMonthDay.from` are unchanged. If you put a month, day, and year that don't form an existing date, that will throw; if you put a monthCode and a date that exists in any year, that works.
So that doesn't change. If you have monthCode, day, and year, and that forms a date that doesn't exist, previously we would consider the third call here as equivalent to the second call. What has changed is that it's now considered more equivalent to the first call, and it's not accepted. + +PFC: (**Slide 13**) All right, next we have an audit of user-visible operations (PR #2519). Feedback that we've often gotten is that calendar and time zone operations, which are potentially calls into user-visible code in the case of custom calendars and time zones, were called redundantly. We've had a number of PRs from either the champions or implementers trying to fix specific cases of this. At this point we decided, since we keep seeing these, we need to audit the whole proposal and just fix any calls that might be redundant, instead of fixing them case by case. This audit is done now. It doesn't affect any of the functionality, but it is all observable: as you can see in this very tiny section of the diff that I wrote for the test262 tests, a lot of Get and Call operations are eliminated. We've tried to write it according to the principle that you should get the methods once and call them multiple times only when necessary. So, a lot of lookups have been eliminated and also some calls. + +PFC: (**Slide 14**) Last one (PR #2517). This is a bug discovered by somebody who was interested in using Temporal and using one of the polyfills. In certain situations, in the calculation of rounding a duration, the largest unit wasn't respected correctly. So if you want to balance 2400 hours into a duration where the largest unit is a month, then what you want, at least relative to this particular date, is three months and 11 days, but with the current spec text you would get 100 days. This is the right length of time, but not the right unit, and we are fixing this. + +CDA: We have one question on the queue from KG. + +KG: Not a question. I just wanted to say the audit of user-visible code I'm strongly in favor of. That's excellent work; it's something that it had been clear would need to be done holistically for a long time, and I'm glad it actually got done holistically. It's good. + +PFC: That's good to hear, thanks. I'd like to ask if we have consensus on these five normative changes. + +CDA: DE supports on the queue. We do have a clarifying question though, from JGT. + +JGT: One more note on the first normative PR: in addition to throwing in the `format()` method, it also affects all the formatting methods of Intl.DateTimeFormat. So `formatToParts()`, `formatRange()`, etc., all of them would throw if presented with a ZonedDateTime. + +PFC: That's right. It's a bit unfortunate that you cannot directly format a range of ZonedDateTimes, but as I said, that's something that we hope to make possible with some future adjustments to the proposal; it's just not in scope right now. + +JGT: Also, to clarify, you can transform it into an Instant, right? So you can do it, there's a workaround, it's just not as ergonomic: you can transform the ZonedDateTime into a different Temporal type and then you can format that range. So there are certainly possibilities there in the meantime. + +CDA: You have two explicit supports in the queue from DE and DLM. + +DE: Yeah. I definitely support each of these.
For this one we're talking about with ZonedDateTime, it seems like an okay starting point, and we've been successfully incrementally adding to DateTimeFormat. So I think that can continue. + +DLM: I would just say really quickly that we appreciate the amount of hard work. + +DE: So we're out of time for now, but if we have more time to discuss later in the meeting, I think it would be great to flesh out the microsecond versus nanosecond question a little more, as well as how long we want to wait for the IETF before considering this to no longer require coordination. But yeah, overflow. + +### Summary + +Barring any showstoppers raised by implementors, we expect to present one more normative change to avoid unbounded integer arithmetic at the next plenary. +We discussed the framework for the solution to avoiding unbounded integer arithmetic; there was general support for the idea that the number of seconds in a duration be bounded by MAX_SAFE_INTEGER, which is anticipated to remove the need for BigInt calculations. +Topic to be continued on day 3 in an overflow session, discussing nanosecond vs microsecond precision, and the way forward with the IETF review of our string format proposal. + +### Conclusion + +All 5 PRs got consensus and will be merged: + +- https://github.com/tc39/proposal-temporal/pull/2522 - Change allowing Temporal.ZonedDateTime.prototype.toLocaleString to work while disallowing Temporal.ZonedDateTime objects passed to Intl.DateTimeFormat methods +- https://github.com/tc39/proposal-temporal/pull/2518 - Change to eliminate ambiguous situations where abstract operations such as MakeDay might return NaN +- https://github.com/tc39/proposal-temporal/pull/2500 - Change in the validation of property bags passed to calendar methods +- https://github.com/tc39/proposal-temporal/pull/2519 - Audit of user-observable lookups and calls of calendar methods, and elimination of redundant ones +- https://github.com/tc39/proposal-temporal/pull/2517 - Bug fix for Duration rounding calculation with largestUnit
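+
+As an illustration of the last fix, a sketch using an example reference date chosen here so the arithmetic works out (the slides used their own date):
+
+```js
+const d = Temporal.Duration.from({ hours: 2400 }); // 100 days' worth of hours
+
+d.round({ largestUnit: "month", relativeTo: "2021-02-01" }).toString();
+// with the fix:     "P3M11D" (balanced up into months, as requested)
+// broken spec text: "P100D"  (the right length, but largestUnit was ignored)
+```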
+ +## Set methods: What to do about `intersection` order? + +Presenter: Kevin Gibbons (KG) + +- [proposal](https://github.com/tc39/proposal-set-methods) +- [issue](https://github.com/tc39/proposal-set-methods/issues/91) +- [slides](https://docs.google.com/presentation/d/1ip9wR0J0DML9zxqZVI3svZcE_s5rDtaFWhk_wizB11A/) + +KG: Okay, so `Set.prototype.intersection`. As a reminder, the Set methods proposal is in stage three. It's beginning to be implemented and we are running into the issues of what is possible to implement. So as a reminder, for context, `Set`s are ordered, which means that for every method, including intersection, there is a particular order that we chose for the result, although the particular order doesn't matter very much. Mostly the order just falls out of how the algorithms are specified. In fact, in all but one case, it falls out of how the algorithms are specified. However, in the particular case of `intersection`, where you are intersecting a large thing with a smaller thing, the order as currently specified has to be accomplished explicitly by sorting. And the idea is that you don't want the sort to take time proportional to the larger thing, because intersection shouldn't require time proportional to the larger of the two sets; intersection should in principle only require time proportional to the smaller of the two sets. So it was hoped (by me) that you could do this sort that you can see on the screen (slide shows spec note) efficiently, in terms of being proportional only to the size of the result, not the size of the receiver. And this is true if you are using the Firefox implementation, which is documented nicely online, or the V8 implementation, which is essentially the same data structure. But it's not true for every possible implementation, and not even true for every current implementation. So JavaScriptCore uses an entirely distinct data structure, which has very little in common with the one that V8 and SpiderMonkey use, and JavaScriptCore's implementation uses linked lists to maintain order rather than a contiguous array, which does not allow for efficient sorting. With V8 and SM you could do an efficient sort by looking up positions in an array and comparing them, but in JavaScriptCore you would have to iterate the actual list to get an order. + +KG: So, JavaScriptCore does not allow efficient sorting. So we've got to do something. Well, in fact, we don't have to do something; one of the options is "it doesn't matter", though at the very least I need to remove this note, since it is not true for the JavaScriptCore implementation. So, options for things that we could do. We could say that the order of the result will change suddenly as soon as the argument gets larger than the receiver. Where previously it was always going to be sorted so that things would be ordered the way they were in the receiver regardless of the sizes, with this it is possible the order of the result would suddenly switch based on which of the two was larger. So you could add some unrelated elements to the argument that aren't even in the resulting intersection and have the effect that the order of the result is suddenly different. Alternatively, there's this sort of "zip" order where you pull something from the first one, and then something from the second one, and then something from the first one, and then something from the second one, etc. This would have twice as many user calls. It also means that it's basically impossible to implement by cloning in an actual implementation. You end up having to follow this thing where you iterate both sets, even when they're both built-ins and there's no user code involved, so that the result has the right order. It's not something that follows naturally from cloning either set. + +KG: Another option is to just say we don't worry about it - JavaScriptCore would incur some overhead in this very specific case, but it probably doesn't come up that much, and it's probably not that much overhead anyway - just maybe it doesn't matter: I just remove the note and say "you've got to get here, but it's up to you whether you do that efficiently". Or maybe there's another option I am not thinking of. But these are the only ones I've got. They all have pretty significant downsides. I would like to hear from the committee what people's opinions on this matter are. + +SYG: We've chatted a little bit about this offline already anyway, but I'll recap some of that discussion here for the committee. So I did discuss this with the V8 folks already. The V8 engineers' first opinion is that this intersection algorithm is kind of wack anyway, specifically this switching on the relative sizes.
So, in the case of a user-defined set-like (so not a built-in Set and not a built-in Map), you're calling other methods on it, either `has` or iteration via `keys`, and which code gets called on it is already size-dependent. So I think our opinion is that the size dependence itself is the weird thing, and given that it's needed for time optimality, it is what it is; why work so hard to get rid of the sharp edge of the order also depending on size? I don't think we should leave this completely implementation-defined, because it's important to have something exactly define the order. It's a weirdness, and I don't have the intuition that this order really matters, but I'm happy to be corrected here. Given that intuition, and given that which user-visible code gets called is already size-dependent, having the order be size-dependent too seems fine from a semantics point of view. And it makes the fast path simpler to implement, in that the fast path is exactly what you talked about with cloning the smaller thing, and if you clone the smaller thing that naturally preserves the order of the smaller thing. So yes, my preference is the first option. + +KG: Yeah. So I have the code for `intersection` on the screen just to demonstrate what SYG was talking about. There's this switch based on the relative sizes, and in one case you end up calling the `has` method of the argument and in the other case you end up calling the `keys` method of the argument. So that's the switch that SYG was talking about. The user code is displayed here. Now, personally I think that the order being size-dependent is a sharper edge than which piece of user code gets called, because in the case that you have a correct set-like, where the keys method is consistent with the has method and there are no side effects in either of them, you don't care at all which of the two things is getting called. But even in that case, you could plausibly care about the order. So my feeling is that the order being size-dependent is, in fact, a sharper edge than which user code gets called being size-dependent. But that is just my opinion. And if the rest of the committee is okay with that sharp edge, then we can do that. + +USA: Yeah, there's no response. Next up, we have DLM. + +DLM: I discussed this with André Bargull, who's been handling our implementation, and as was mentioned, we don't have a problem right now, but we do have a small concern about specifying performance in this case, if that prevents us from making other optimizations in the future by changing to a different data structure that might not give us that performance. + +KG: I think we can't specify the performance unless we force JavaScriptCore to change their entire implementation. So they are today in the situation that you are worried about for the future. + +WH: This situation is somewhat similar to sort, in which it is intentionally not specified which sort algorithm we use in the spec, because there could be a variety of ones with different performance characteristics. Now it sounds like in this case there might be an algorithm which just iterates through the smaller object and looks things up in the larger one. Is there any concern that there might be an algorithm which is even better than this? + +KG: I do not think it is possible to do better than that, modulo perhaps a constant overhead. So the algorithm… let me pull it up. Yeah.
The algorithm is fully specified in terms of which things you call; that we did not leave up to implementations. No one is really enthusiastic about the situation with array sort where things are implementation-defined, but we are largely okay with it because people are unlikely to be relying on the order in which calls occur, but we do know that people end up relying on the ultimate order of the data structure. In the case of sort, we know that people relied on it being a stable sort, even though they didn't rely on the order in which the calls happened. So I think we have to specify the order of the resulting thing, and I think we want to specify the order in which the calls occur, because we don't want to make more things implementation-dependent. And I don't think we can do better than this in terms of big-O performance. + +WH: For the third answer, if we specify it this way, will it cause a problem for any implementation? + +KG: When you say we spec it this way, are you talking about everything apart from the sort, or are you including the sort? + +WH: Yeah, we just iterate through the smaller set looking things up in the larger set, collect the elements, and not sort them. This is the same thing that SYG was advocating for. Would doing that cause any issues for any implementation? + +KG: I don't believe so. And in fact, I think that's the best case for implementations, because it allows them to be as efficient as possible. + +WH: Okay, yes, then I'd be in favor of the first bullet point [order depends on relative size of argument vs receiver]. Let's just do that and not try to sort these things afterwards. + +USA: Okay. Now we have MM. + +MM: So I agree with what was just said. And because the reasons were stated fairly exhaustively I don't need to go into them. I think we should not sort. I think we should just do the deterministic thing that's friendly to all implementations, and I am not very concerned about that. I recognize the sharp edge KG is concerned about, I recognize that it might be something to be concerned about, but altogether I'm not worried about it. + +DE: Sorry, for the notes, to clarify: MM and WH, were you expressing support for the first bullet point? + +WH: I think we are. I think MM, I, and SYG all agree on the first bullet point. Yes. + +DE: Thank you. + +KG: Okay, I see that everything in the queue is about the sort order. So having heard several people speak in favor of the first one and no one except me really oppose it, I will just do the first one. The order will be weird. Okay, with that done, we have more general topics. + +SYG: Sorry, were there more of your items, Kevin - were there more questions you wanted to ask? Mine is more like a question I just want to bring to committee, so I want to leave it as deprioritized as possible. + +KG: Well, SFC has a topic on the thing, but the sort order is the only thing I was bringing and I consider that settled. + +SYG: I'll say my piece, which I think might dovetail into what SFC is going to talk about anyway. So part of the feedback when I brought this back to the team is basically that the feeling of the V8 implementers is that we shouldn't care very much about the performance of set-likes.
We should basically only care about the performance of built-in Sets and built-in Maps. Intersection should of course support set-likes, but the feeling was basically that if you're given a set-like, intersection could just do a Set.from or something as one of its very first steps, to convert it to an actual Set, and then the algorithm and the rest of it would only deal with built-in Sets and built-in Maps. This is not blocking, and I'm not requesting a normative change - this has been litigated and relitigated in many past meetings. But our feeling is basically that it's not clear how much we actually care about the performance of set-likes from user programs. Why not just force them to iterate the entire set-like by converting it to an actual built-in Set first? I would like some discussion around the topic if we have time, because when we designed this thing, especially the choice of being non-generic on the receiver but generic on the argument, it was with the explicit understanding of it being precedent-setting, because this is a thing that we have grappled with for many years and we want a good pattern to reach for in the future when we design new built-in methods. So, yeah: now that there's some implementation experience under our belts, do people care about the performance of non-built-in sets and maps? + +KG: I care. I don't care about performance in the small, but I care about performance in the large, and in particular I think it probably does matter a lot that if you take an empty set and you intersect it with a very large set, that should be fast. I would be sad if intersecting the empty set or a singleton set or any other extremely small set was slow, even when the argument is a set-like. That just should not take a bunch of time. It does not require you to iterate the entire argument. I think it is a reasonable expectation that if you intersect the empty set with something, that finishes very quickly. + +SFC: I'm next in the queue, and the queue is not advancing, but I think SYG asked my question and it has been answered already, so I don't have anything more to say. + +USA: That's the rest of the queue. + +### Summary + +Although an order that depends on the relative sizes of the argument and receiver is a weird sharp edge, it is simple, has the best performance, and the alternatives are too complicated. +We discussed various options for how to deal with this ordering question and decided that the least bad option was to just remove the sort step. So the order of the resulting set will depend on which of the argument or receiver is larger, and we will live with that being kind of weird; at least it's deterministic. + +### Conclusion + +Use the order which depends on the relative size of argument vs receiver, without a sorting step. +Explicit support from SYG, MM, WH, DE
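+
+A sketch of the agreed behavior (assuming the proposal's methods; the values are illustrative):
+
+```js
+const receiver = new Set([1, 2, 3]);
+
+[...receiver.intersection(new Set([3, 2]))];
+// [3, 2]: the argument is smaller, so the result follows its order
+
+[...receiver.intersection(new Set([3, 2, 99, 100]))];
+// [2, 3]: the receiver is now the smaller side, so its order is used,
+// even though the added 99 and 100 appear nowhere in the result
+```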
+RBN: In January, I presented on the resource management proposal and requested Stage 3. At the time we had conditional advancement to Stage 3, pending an investigation into whether we should be using the `await` keyword or the `async` keyword as a modifier to the `using` declaration. The consensus was that this condition was to be resolved no later than the March 2023 plenary; the investigation would be conducted via some informal polling, and if we had no clear winner, we would advance to Stage 3 with the current syntax, which is `using await`. So, there was some informal polling that occurred. SYG performed an internal informal poll at Google. I also performed one at Microsoft. There was also a broadly distributed poll via Twitter and Mastodon that was provided by RPR. And just on Monday, I was notified by HAX that there was also a poll done within the JS China Interest Group.
+
+RBN: So what did the poll we provided look like? Essentially, it looks like this. We asked the question: “Which declaration form most clearly expresses the following semantics: that the value of the binding `x` below would be disposed when control flow exits the containing block scope, and that this disposal would happen asynchronously and would be awaited before continuing?” In the Twitter poll and the Microsoft poll we provided a series of options: the current form, which is `using await x = y`; the C# syntax, which is `await using x = y`; and the alternative syntax we've been discussing, which is `async using x = y`. I'll go into each of these here, as to what the differences are and why we're looking at them.
+
+RBN: So `using await x = y` is the current proposal syntax. It uses the `await` keyword to indicate that an implicit await occurs at the end of the block. It uses the `await` modifier following the statement head, which is very similar to how `for await` uses the modifier following the `for` keyword. This has low likelihood of collision with `await` as an identifier, because `await` is a reserved word in strict mode code. It's also reserved inside of async functions, even in non-strict mode. And we recently added a lookahead restriction for the synchronous `using` to support this case. We had some concern with this syntax about the `await` being a deferred operation rather than an immediate operation, and the syntax as written may seem to indicate that we are awaiting `x` somehow.
+
+RBN: And we've also discussed the `await using` syntax, which is the C# syntax. This uses `await` again to indicate that an implicit await occurs at the end of the block. There's the possibility that this might better indicate that we are awaiting the using operation, because it precedes `using` rather than appearing to await `x`. The reason this wasn't originally considered was that it requires a cover grammar for disambiguation, in that `await using` is already legal JavaScript, so we have to disambiguate the declaration form from an expression via a cover grammar.
+
+RBN: Finally, we considered the `async using` keyword instead. This would use `async` to indicate that the operation occurs at the end of the block. `async` is a contextual keyword prefix in JavaScript, much like in an async function, so there's less potential for misinterpretation of whether the await is immediate or deferred. But a concern raised by myself is that in every existing and proposed use case of `async` in the JavaScript language, from async functions to async arrows, async methods, even `async do`, `async` does not imply await, it only permits it. Declaring something async does not actually carry any explicit or implicit indication that an `await` actually occurs. So it feels to the champion, at least, that this is possibly the wrong term to be using. It also requires a cover grammar, like the `await using` syntax, in that `async` before `using` is also currently valid as the beginning of an async arrow.
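+
+For reference, a sketch of the three polled forms from the slides (proposed syntax, not shipping anywhere today; `getResourceSomehow` is a hypothetical helper returning an object with a `Symbol.asyncDispose` method):
+
+```js
+// In every case, `x` is disposed asynchronously, and the disposal awaited,
+// when control flow exits the containing block. Only keyword placement differs.
+{
+  using await x = getResourceSomehow(); // option 1: current proposal syntax
+}
+{
+  await using x = getResourceSomehow(); // option 2: C#-style keyword order
+}
+{
+  async using x = getResourceSomehow(); // option 3: `async` as the modifier
+}
+```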
+RBN: There's a fourth option that we didn't consider, which was `using async`. I had a number of concerns about this, which is why it wasn't included. There's a much higher likelihood of collision with `async` as an identifier when refactoring than there is with something like `await`: `async` is not reserved in strict code, it's not reserved in module bodies or class blocks, and it's not reserved even inside of async functions. I also had concerns that it doesn't really align with the keyword order in ECMAScript or in any other language with similar prior art. So there was potential for confusion and potential for refactoring issues that `using await` really doesn't have, because `await` has been reserved for quite a number of years and the folks that use it generally aren't using it as an identifier, whereas I have seen `async` as an identifier in many places.
+
+RBN: So, the results of the polls. I've gathered this information from SYG and others. Internally, SYG posted this poll; his did not include `await using`. There was a little bit of a miscommunication there: I had indicated, when I first posted the poll internally at Microsoft, that I was concerned that having both `using await` and `await using` as separate items might be a bit confusing. And unfortunately the options we used did not include anything in the form of ranked-choice voting. The last snapshot I got from SYG was, I think, Friday right before end of business day, and it shows that at Google `async using` had higher interest than `using await`. So at least on that side, `async using` seemed to be more interesting to the folks at Google who work either on Chrome or V8, or heavily use JavaScript or TypeScript.
+
+RBN: Within Microsoft, we had the exact opposite perspective. `using await` had about 41% of the vote from the respondents, while `await using` had a much higher incidence. There's again a potential bias at Microsoft towards the C# syntax, which many people there are familiar with.
+
+RBN: We had a Twitter poll, as I mentioned, that RPR had posted publicly. This had 434 respondents; some of the respondents also indicated familiarity with C# as the reason for their choice. In this case `using await` scored much lower than `async using`, but `await using` was ahead quite handily. RPR also performed a poll on Mastodon. Unfortunately the polling was a bit awkward, as the Elk Mastodon client at the time did not support polls, so this was done using likes. These are harder to see via the link advertised in the slides, but the Elk web application does show likes, which makes it feasible to reference the information for the Mastodon poll. There were about 16 respondents; they generally favored `await using`, but it was fairly neck and neck: I think seven respondents for `async using`, eight for `await using`, and one for `using await`.
+RBN: And the poll results that JHX gathered from the JS China Interest Group had eight respondents. A number of these respondents only expressed a preference that the keyword, whether it's `await` or `async`, come before the `using` declaration. Beyond that, about two people were primarily interested in `await using`, versus one apiece for `using await` and `async using`, with the remaining respondents, again, only having a preference for the keyword coming first.
+
+RBN: So, in summary. For the Google poll, we're looking at about two to one in favor of `async` versus `await`, and again this did not include `await using` as an option. The Microsoft internal poll results were 11 to 1 in favor of a syntax that included `await`, and within the syntaxes that include `await`, about six to five: six favoring the `await using` syntax and five favoring `using await`. The Twitter poll showed about two to one in favor of `await` in some form versus `async`, and within the `await` options it was again two to one for `await using` versus `using await`. The Mastodon poll results, again with a smaller number of respondents, were about 9 to 7 in favor of `await` over `async`, but 8 to 1 in favor of `await` coming before the `using` declaration in the `await` case.
+
+RBN: As for the champion's preference: I'm starting to lean towards `await using`, for a number of reasons. One, it still uses the `await` keyword, which clearly indicates an await and preserves sentiments that have been expressed by MM and others, specifically that `await` and `yield` should really indicate interleaving points within JavaScript. I'm again wary of introducing a meaning for the `async` keyword that is inconsistent with everything else in the language, and the `await using` keyword order does feel to me to better indicate that what we are awaiting is something related to the `using` declaration, rather than the `x` or `y` identifiers. It also seemed to be more strongly favored in public polls, and it matches the prior art in C#, which is also one of the inspirations behind async functions in JavaScript as it stands today. But given all that, I still want to get some feedback from the committee on any specific preference, as this is something where we haven't really had a strong preference expressed by others in the committee. So I'll put this up on the screen. I don't know if we want to take some time to have folks put answers on the queue, or if we want to use a temperature check as a way to do this.
+
+CDA: We (IBM) support the `await using` form. We agree that `async` was a bit awkward for the reasons that you mentioned, and, unscientifically, we just sort of expect the `await` to come first. So we have a preference on that one.
+
+MM: Just confirming what RBN said: all the concerns that led me to favor `using await` are completely satisfied by `await using`. I'm very happy with that result.
+
+KG: I still strongly prefer `async using`, but I don't think there's any objective way to resolve this, given that there's not overwhelming consensus among the community, which is the only thing a poll could possibly have shown us that would have actually determined the outcome. I don't think we have much of a way to resolve this other than just picking one, and as much as I would like us to pick the thing that I like, we have to pick something. I'm okay with deferring to the champion's preference here.
+JFI: RBN, you mentioned that `async using` would be an inconsistent usage of `async`, but doesn't the same concern apply to `await`? My major concern with `await using` is that I expect `await` to denote a yield point right there, and this syntax is actually saying that it's going to potentially await later.
+
+RBN: Well, I have two responses to that. One is that `await using`, I think, is still more consistent than `async using` would be, because again, `async` as a keyword has no similar meaning in the language: its purpose, both in things currently in the language and in things that are proposed, has been to permit a specific syntax, `await`, within the body of a function, or in the case of `async do` expressions, within the body of that block. And the second thing I would say is that I would actually argue it isn't inconsistent. If we were going to make that consistency argument, I think it would have applied to `for await` as well. `for await` is interesting because it seems both immediate and deferred. When I say it seems immediate: a `for await` doesn't await the `x` declaration. If we say `for await (const x of y)`, it does not await the `x`; it awaits the results of calling the iteration methods on `y`, much like a `using` declaration will, although there it is a deferred await of the result of calling the dispose method on exit at some point. So `await` in `for await` seems to be very immediate. But it is also deferred: if I write `for await` over `y`, then a bunch of lines of code, and at some point I have an `if` that says `break`, there is an implicit await that occurs behind the scenes, far divorced from where the declaration and the `await` keyword actually appear, because we still have to await the iterator's `return`. So `for await` provides both an immediate await and an implicit deferred await. A `using` declaration in this case is just an extension of the implicit deferred await. So I think it is still consistent with what we have in the language today.
+
+JFI: Okay, thanks for the response. I have a different topic I'll throw in the queue.
+
+SYG: I, like KG, also still prefer `async using`. But for `await using`: in your slide you said one of the reasons you didn't go with it was the requirement of a cover grammar. So is that going to be hard to parse?
+
+RBN: Let me go back to that slide. I don't believe it will be. I've had conversations with others on the committee, specifically KG, who said that we really shouldn't use the complexity of cover grammars as an excuse not to pick the right fit. I mostly chose the `using await` ordering to begin with in order to avoid the cover grammar. The `await using` grammar is a problem in an LR(1) grammar because, when we are parsing the `await`, we're trying to differentiate between `await using` and `await` of some other expression, and in many cases when performing that type of lookahead we would be using `using` as the lookahead token. So in LR(1) it becomes more complex, because we now need a cover grammar to differentiate: when we are deep within parsing an expression like an `await` expression, it's a bit awkward to back out and parse it again as a statement, but not impossible. Now, the interesting differentiating factor here is that there is a no-line-terminator restriction between `await` and `using`, and `using` does not allow binding patterns; therefore `await using [` can only mean you're awaiting an element of something named `using`. Therefore, in a conventional parser, or in the TypeScript parser for example, which is a little more forgiving when it comes to lookahead since we don't enforce LR(1), we can use single-token lookahead, because the next token after `await using` must be an identifier for it to be an `await using` declaration. So in a parser that isn't required to comply with LR(1), it's not that complex to look ahead one more token; in a parser that requires LR(1), or within the grammar as we specify it today, we would need to implement a cover grammar to wrap this. We would also need to do the same thing for `async using`, because `async using` can be conflated with an async arrow: at the top level, `async using => {}` is valid JavaScript. I have copied and pasted `async using => {}` into Node and it runs fine. So we would need to disambiguate that as well: once we hit `async` and `using`, the next token has to be an identifier, not `=>`. So I think that complexity exists with both `async using` and `await using`. I'd also say I don't think it's insurmountable.
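+
+A sketch of the ambiguities RBN describes (the surrounding functions are illustrative):
+
+```js
+// All of these parse as valid JavaScript today, which is why the new
+// declaration forms need extra lookahead or a cover grammar.
+
+async function f(using, i) {
+  // Element access on a variable named `using`, then awaited; not a
+  // declaration. `using` declarations have no binding patterns, so a `[`
+  // after `await using` always means an expression.
+  await using[i];
+}
+
+// An async arrow function whose sole parameter is named `using`.
+const g = async using => using;
+
+// Under the proposal, the declaration form is recognized only when an
+// identifier follows `using` with no line terminator in between:
+//   await using handle = open();  // declaration (proposed)
+//   await using [i];              // expression: awaits using[i]
+```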
+USA: All right, you have nine minutes to go. Next up we have WH.
+
+WH: There is no spec for this that I've seen, so I can't tell if the solution works or not. I'm concerned about cases like `await using of` or `await using as` and so on, for which, even if you do have an identifier, it's not always clear what to do. There are also possible issues with `using` followed by a slash.
+
+RBN: Yeah, so we don't have a specific syntax proposal for this at the moment. I plan to look at that, and whatever the result is, my intention would be that if we have the editors' review and can show that the syntax change is normatively equivalent to the `using await` syntax we currently support, that might be acceptable enough to still reach Stage 3 pending that change. We have looked at things like `using` declarations in `for`-`of` already; we had to ban `of` as an identifier there so we don't get `for (using of of)`, et cetera. So we have looked at that kind of issue for `using` as well.
+
+WH: I think this is stretching the process too far. The process requires a complete spec 10 days before trying to approve something for stage 3. And asking for conditional approval without even having seen the spec would be putting me in a very awkward situation.
+
+DE: I think it's reasonable to ask that, if we decide here that we want to go with the approach of `await using`, which I think we should resolve on as a committee, we bring this back to committee to confirm the grammar.
+
+WH: Yeah, that's my preferred approach. What I heard just now was that we might have conditional approval and I'm reluctant to do that without even having seen the spec.
+DE: I'm saying I think that's reasonable. But also, I don't see reason for concern about this technically; I think we should be able to work out all the cases. Still, it's a reasonable request.
+
+DE: I'm next on the queue. I'd like to ask: RBN previously said maybe a temperature check, and I think it'd be great if we could resolve today whether we want to go in the direction RBN is proposing, using the `await using` syntax. That could work as a temperature check, with the prompt being: are you happy with this syntax?
+
+RBN: Do we need a temperature check, or would it be better to just request consensus for `await using`? My biggest concern is that, for the most part, most members of the committee that have discussed this in the past really have not expressed a strong preference, and I think we did a temperature check in the last meeting and that really wasn't very fruitful. So I think it might be better to just ask for consensus on `await using` as the syntax to use going forward, and that we would make the changes needed to make it work.
+
+DE: Okay. If we're not doing a temperature check, I just want to register that I would be strongly positive if we had one. I think it's great that you did so much detailed investigation into this, and I'm happy with your proposal.
+
+RBN: At this point, then, I would like to request consensus on `await using`.
+
+MM: I support.
+
+CDA: IBM supports.
+
+DE: KG, are you okay with this procedurally?
+
+KG: We're just saying that we like the syntax, but presumably the specification for the syntax will have to come back and get approval.
+
+DE: Yeah. I'm just confirming that we're all on the same page about that, because previously there were concerns raised about it. Does that address your concerns, WH?
+
+WH: The currently active question is: which syntax would people like? Let's stick to that question. I have no objection to `await using` aesthetics, but I haven't seen a spec.
+
+DE: Okay, I'm asking because we need the conclusion to explain clearly what we decided. So, good.
+
+SFC: I think RBN did good research. I was also in the `async using` camp, but now that I've seen this research I've warmed up to the idea of `await using`, and I think we should try to do things like this more often when trying to solve these bikeshedding-type problems.
+
+SYG: I want to talk about the parsing difficulty. We will wait for the full detailed spec, which will give us an understanding of the grammar difficulty, but that's possibly a separate question from the parsing difficulty. What are your thoughts on the ranked choice if there is parsing difficulty? Is your opinion that we should stick with `await using` even if there is parsing difficulty, or are you on the fence enough that we could go with `using await` if it's simpler, even if it's less ideal from your current view?
+RBN: If I were to rank our choices, to be honest, I would probably stick with `using await`, since that's already specified; `await using` doesn't reduce complexity in the grammar. I've already been working on TypeScript's implementation of synchronous `using` declarations and also its implementation of async `using` declarations, although since this won't have Stage 3 advancement it's not likely something we will be shipping when we next release TypeScript. And I have looked at the complexity of updating the TypeScript parser to support parsing all three of these cases: `using await`, which would currently be supported; `await using` as a prefix modifier; and even `async using`. Again, what I found was that `using await` is the only one that's really simple, because it doesn't require the cover grammar. `await using` and `async using` both require more than one token of lookahead. In the case of `await using`, it's easy to differentiate at the statement level in TypeScript, because we can look ahead to `using`, and if the next token is an identifier and there's no line terminator, then this can only be a `using` declaration. So the parsing for us isn't terribly complicated. I imagine the grammar needed to support LR(1) would be more complicated for both the `await using` and the `async using` case, because there is existing syntax that matches, which we would have to disambiguate.
+
+SYG: Okay. So, I understand we're out of time, but I do think it's important to resolve this. When you asked for consensus on `await using` versus `using await`, I don't know whether people took that to mean "do you also like `await using`" or "can you live with `await using`". Because if, all things being equal, it comes back to "we can live with both number one and number two", and simplicity might favor number one, do we also still have consensus for number one? I'm not sure what the outcome is if we can live with number two. Because if we can all live with number one as well, I would prefer that we just do the simpler thing.
+
+RBN: I think that's a fair question to ask. Can we potentially extend the time box by five minutes to talk about this? [yes] Okay. So, we know that we have consensus on `await using` as an option. Instead, I would like to ask the committee if there is anyone that would object to leaving the syntax as currently proposed, `using await`, as it avoids the parser complexity that there is a concern about.
+
+DE: Sorry, do you mean this as a fallback option if we discover significant parsing complexity, or do you mean that we've already identified the parsing complexity and want to make this opposite resolution?
+
+RBN: From what I understand of SYG's question, it is to determine whether, since not all people are happy with `await using`, we would also be okay with keeping `using await`, since that is the simpler parsing option. If so, then that syntax is already well defined and we could potentially advance to Stage 3 without further changes.
+
+USA: I think it would be much simpler if you put forward your preferred outcome and then asked for consensus.
+
+RBN: I'll be honest, my preferred outcome is achieving Stage 3.
+
+SYG: I ask for sticking with number one (`using await`), the rationale being that there is a spec, and the simplicity of parsing.
+SFC: I think RBN made a compelling case for number two (`await using`) in this presentation; he presented us with evidence in support of option two, while the arguments in favor of option one seem a bit theoretical at this point. We don't know for sure the impact on parsing complexity. I would also point out that there's not a single poll in which option one beat option two, and option three was fairly consistently the second choice in most of the polls we were shown. And in terms of the order of constituencies, this is already a potentially confusing thing for developers, so we should probably lean on what's best for developers here, and I think there's a fairly strong signal that number two is good for developers. I would not be comfortable with us saying option one is a fallback, because that's not what the evidence I've seen coming into this call indicates. But I speak for myself, not for Google.
+
+RBN: I'd like to respond, if I can. I agree that that's probably the preferred outcome; `await using` is, I think, much clearer, and I am not opposed to spending the intervening time between this and the next plenary session investigating the syntax and grammar necessary to make it work. As I've said, I've already investigated the parsing complexity, at least in TypeScript. I'd be happy to hear feedback from other implementers if they think there would be concern for this within their engines.
+
+DE: Briefly, I agree, and I would be very uncomfortable if we made a decision based only on a potential delay of just one meeting. So I'd prefer that we stick to the conclusion as articulated.
+
+USA: We are on time, so we unfortunately cannot go any further. I hope it's okay to defer this; I think we need a conclusion.
+
+DE: I previously noted the conclusion that we would try for `await using` and bring the grammar back next meeting. Should that be the conclusion, or should we make this an overflow item to come back to, if we want to resolve in the direction SYG proposed?
+
+SYG: I think there is no consensus on the thing I called for. I can certainly live with number two, so I think your summary stands: we come back next meeting, and everyone thinks that, pending the full grammar and possible parsing complexity, number two is the preferred outcome.
+
+DE: I would also note that number one is a roughly agreeable fallback, in case we do discover technical issues with number two. Is that accurate to write as part of the conclusion?
+
+SFC: I don't think that's been discussed.
+
+DE: Okay, so I won't record that. Thank you.
+
+RBN: I would note, though, and again this will require investigation, that if we find that `await using`'s parsing complexity is too involved for it to be the direction we go, it's very likely that `async using` will suffer the same fate, as it has the same complexity: both involve distinguishing something at the statement level from something at the expression level, and both require the same amount of lookahead. I'll do the investigation for both, but it's very likely that if `await using` is not viable, then `async using` will also not be viable.
+
+DE: Sorry, did I say it wrong? I meant to say that `using await` would be the fallback.
+RBN: I agree, that's what you said, but can we conclude that? I think the statement being made was that it's not that, if `await using` isn't viable, we just fall back to `using await`; rather, we would need to come to this decision again. And I wanted to make the point that if `await using` isn't viable, it's likely that number three also isn't viable, which means we might still be falling back to number one as the only alternative.
+
+MM: I think, since we're going to investigate something that we don't currently know, we should discuss it again. If that investigation says that we can't do `await using`, then we should discuss it again, rather than trying to predict the nature of the surprise so that we can make a decision before actually having the surprise.
+
+### Summary
+
+Various grammars for async resource disposal were considered, including results from polls. The champion's preference became `await using`, and several delegates were swayed to prefer this option based on the data and arguments presented.
+There are concerns about the parse-ability of `await using`, both based on practical implementations and the fit into the ES spec's cover grammars; it's unclear whether certain edge cases will be easy to manage.
+If `await using` isn't viable, it's likely that `async using` isn't viable either, and the committee may come back to the conclusion of `using await`; but this will need to come back to plenary for future discussion.
+
+### Conclusion
+
+The committee resolves to attempt the syntax `await using`.
+The grammar will need to be worked out in a PR, which will need to be presented in a future plenary for review and consensus.
+
+## Decorator: Normative update
+
+Presenter: Chris Hewell Garrett (CHG)
+
+- [proposal](https://github.com/tc39/proposal-decorators)
+- [issue](https://github.com/tc39/proposal-decorators/issues/499)
+
+CHG: So the first item here is a few normative updates to the spec for decorators. These are all relatively small changes, so I bundled them together. There are six in total, and we can go through each one individually and talk about it.
+
+CHG: ([PR](https://github.com/pzuraq/ecma262/pull/5), [Issue](https://github.com/tc39/proposal-decorators/issues/497)) So the first one is removing the dynamic assignment of the home object for decorated methods. You can see what this actually looks like here. Currently we perform MakeMethod on the decorated function; this is the second time MakeMethod would be called on this function, and it sets the home object dynamically for that method. That's a very new thing that's never been done before in the spec, and I believe the reason I did it originally was just that I didn't understand the full implications of MakeMethod; I kind of cargo-culted the thing along. We've had a few people point out that this can result in very confusing behavior: rebinding `super` in general seems really weird, seems like a bad idea, and doesn't have any really good use cases. So we want to remove this and no longer rebind `super`. Do we have consensus for that change?
+
+RPR: There's no one on the queue.
+
+DE: Yes, I support that change, although the committee has in the past thought about making an API for MakeMethod; this has all the cost of that but not the expressiveness. That's it for this change.
+
+CHG: Cool, awesome. Okay, so I'm going to assume that one is okay and we will move forward.
+
+SYG: +1, yes please, we don't know how to implement it otherwise.
+
+JHD: +1
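+
+A sketch of the `super`-rebinding behavior being removed, assuming the Stage 3 decorator syntax (names here are illustrative):
+
+```js
+// `helpers.greet` has its [[HomeObject]] set to `helpers`, so its `super`
+// normally starts property lookup at Object.prototype.
+const helpers = {
+  greet() { return super.toString(); },
+};
+
+// A decorator that swaps in a method defined elsewhere.
+function replaceWithHelper(method, context) {
+  return helpers.greet;
+}
+
+class Base {
+  toString() { return "Base"; }
+}
+
+class C extends Base {
+  @replaceWithHelper
+  greet() {}
+}
+
+// Old spec text: MakeMethod re-ran on the returned function, rebinding its
+// [[HomeObject]] to C.prototype, so super.toString() would now find
+// Base.prototype.toString ("Base") instead of Object.prototype.toString.
+// With the fix, the replacement keeps its original home object.
+new C().greet();
+```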
+CHG: ([PR](https://github.com/pzuraq/ecma262/pull/6), [Issue](https://github.com/tc39/proposal-decorators/issues/487)) The next one is calling decorators with their natural `this` value, whatever that would be: the receiver, instead of `undefined`. Currently in the spec we just always call the decorator function with `undefined`. I think that's an oversight, because I never really intended for that to be the case. So, for instance, if we apply `foo.bar` as a decorator, this is an invocation of a function, so the receiver `this` value should be `foo` here; and if you bound `this` for `bar` at some point, it should have the `this` value that it was bound with. Decorators basically should work just like a normal function call; that is the whole mental model of decorators. So, yeah, not sure why this happened in the first place, but that would be the change.
+
+DE: +1, good bug fix.
+
+DLM: +1, I agree with this.
+
+CHG: Any other comments on that one?
+
+RPR: Nothing in the room, nothing on the queue. So I think you have consensus on that item.
+
+CHG: Perfect. Okay. ([PR](https://github.com/pzuraq/ecma262/pull/7)) So for three, this is just a new validation that we would do. `addInitializer`, the function on the context object for adding an initializer function, can currently receive any value; it doesn't assert that it's a function. So we just want to add a step that causes it to throw an error if the value is not callable. That's basically all this one is. The behavior is really undefined otherwise, because the spec basically assumes it's a function after that point, so it should definitely throw there, or I don't know what will happen. Okay, any comments on that one? DLM, did you want to speak?
+
+DLM: Yeah, looks good to me.
+
+RPR: Right. Any objections or other comments on number three? _silence_ You have consensus on number three. Perfect.
+
+CHG: ([PR](https://github.com/pzuraq/ecma262/pull/8)) So for number four: setting the name of the `addInitializer` function itself. Currently it just did not have a name; it was never set, so it would be `undefined` or an empty string. Again, I think this was an oversight. The spec is very large, and it was the first time I was writing spec text, so I made a bunch of little mistakes. It seems like the thing we should do, because that's generally what you do with functions in JavaScript. Does everyone agree?
+
+DE: +1
+
+NRO: +1
+
+CHG: OK, I’ll take that as consensus.
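+
+A small sketch of the observable effects of changes 2 and 3 above (the decorator and names are hypothetical):
+
+```js
+// Change 2: a decorator invoked as a member expression is called like a
+// normal function call, so it receives its natural `this` (the receiver).
+const registry = {
+  entries: [],
+  register(value, context) {
+    // With the fix, `this` is `registry` here, not undefined.
+    this.entries.push(context.name);
+    return value;
+  },
+};
+
+class C {
+  @registry.register
+  method() {}
+}
+
+// Change 3: addInitializer now validates its argument.
+function checked(value, context) {
+  context.addInitializer(function () { /* runs at initialization */ });
+  // context.addInitializer("not callable"); // would now throw a TypeError
+  return value;
+}
+```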
+CHG: So now we get to the fifth item. I actually originally had a PR up to make this change in the opposite direction, which was to allow function names to be dynamically reassigned for decorated methods. The logic was that if you have a method on a class and you decorate it, it's always going to get the new decorated function's name, so you could decorate three things and they would all be named, say, `x`, if the function returned from the decorator was `x`. Originally my thinking was: as a user, I'm going to want to be able to see what the original function names were. But then a bunch of folks pointed out that this would actually really mess up stack traces, because you would have the decorated function calling the original function and they would both have the same name, which would be kind of confusing; we would have to rethink how SetFunctionName works for this as well. And what somebody mentioned was that this could actually be something that stack traces insert, which is non-normative (or I guess I don't know if error text is non-normative), in order to make it easier for people to read which method is actually being called. Any clarifications needed for that one?
+
+ACE: Sorry, I’m not on the queue, but I didn't follow what's being proposed exactly.
+
+CHG: Okay, let me see if I can make an example real quick. Actually, RBN has an example here. Currently, when `a`, `b`, and `c` get decorated, we return this anonymous function right here, so the names of `a`, `b`, and `c` would all be the name of that anonymous function. And what that leads to is really odd stack traces, because you have two `a`s or two `b`s, since this function is going to call the original function; it's just kind of confusing behavior. So the solution would be: if you want to redefine the name to match the original name, you can do that yourself manually, and otherwise it'll just be the new decorated method. So every function here would have the name `X` by default.
+
+ACE: So the proposal is that we don't do anything special with the name: the name of the function is just the name of the function, as it would be anywhere else, right?
+
+CHG: Yes. Any other questions?
+
+ACE: I'll just add: that is what I've been doing with decorators already, setting the name myself manually. So it seems good.
+
+CHG: Does anyone object?
+
+MAH: I was thinking: wouldn't it make sense to conditionally set the name to the original method name if the returned function is anonymous? So if it has an empty name, if you return an anonymous function, the decorator machinery would automatically set the name to the name of the method that was decorated.
+
+LCA: What I'm saying is: what if you create an anonymous function outside of the decorator function itself and return that same function from multiple decorator applications? You'd be renaming that function to whatever the last decoration was, right?
+
+MAH: Yeah, that would only happen the first time, I guess, but yeah, that would be problematic.
+
+CHG: It sounds like we agree that would be a bad idea.
+
+RBN: Yeah, I was just going to say the same thing, and it was illustrated, although not with an unnamed function, in the example that CHG had up a moment ago with `noop`. If that were a function that did not have an assigned name, and I were using it the same way, then the difference is that instead of the function names resulting in logging `a c c`, it would log `a b b`, and you'd still end up in that same situation. So I still don't think that's viable.
+
+SYG: (from queue): Prefer no conditional setting. "Things that look declarative ought to be declarative"
+
+CHG: All right, so it sounds like consensus to remove SetFunctionName and not dynamically set the name. Great.
+
+CHG: Okay. (PR pending, [issue](https://github.com/tc39/proposal-decorators/issues/468)) So the last issue here is a bit more involved. Basically, we've added this new `accessor` keyword, right? Which, if everybody remembers, is basically a way to define a getter and a setter on the class in a single statement, where they access a private storage slot backing that getter and setter. And the idea is that it works like a kind of field, but via decoration, and potentially in the future via other property syntax, you would be able to intercept that get and that set and replace them. So on class instances this works just fine; it basically desugars to something like this (imagine it without the `static`): just a private field plus a getter and a setter that access that private field. But once we add `static`, we run into a problem. Instance class fields, including private instance fields, get defined on every instance, but static private fields get defined just on the class itself, and static public properties are inherited on subclasses via shadowing or prototypical inheritance, whichever it is. What this results in is: if somebody tries to use the accessor on a child class here, it will throw an error, because the private field can't be accessed on the child class; it only exists on the superclass. This makes static accessors not work at all with inheritance. So the proposed change is that instead of having it desugar, essentially, to `this.#x`, we would have it desugar to a direct reference to the class itself, which is basically how you would solve this problem if you were using static private fields today with a getter and setter: you would just replace `this` with the class name, typically, and things would work. So yeah, that's basically the proposal. Any questions about that?
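+
+A sketch of the problem and the fix, in desugared form (class names are illustrative):
+
+```js
+// `static accessor x = 0;` roughly desugars to a static private field
+// plus a getter/setter pair. With a `this`-based desugaring:
+class A {
+  static #x = 0;
+  static get x() { return this.#x; }   // `this` is the receiver
+  static set x(v) { this.#x = v; }
+}
+class B extends A {}
+B.x; // TypeError: `this` is B, and only A has #x
+
+// The proposed fix desugars to a direct reference to the (decorated)
+// class, the same workaround you'd write by hand today:
+class A2 {
+  static #x = 0;
+  static get x() { return A2.#x; }
+  static set x(v) { A2.#x = v; }
+}
+class B2 extends A2 {}
+B2.x; // 0 — the inherited getter reads A2's storage
+```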
+DE: Okay, so I'm giving a +1; this is a good fix. We spent two years discussing this case, about what the static private semantics should be. I don't know if we need improved documentation or something. Anyway, the fix seems correct.
+
+RBN: I just want to make sure, and I regret that I did not see that there is no PR yet that addresses fixing this, but I want to make sure that when we're talking about this, the desugaring isn't actually to `A.#x` where `A` is the value of the binding before any decorators are run. If `A` itself is decorated with something that replaces the constructor, the reference should be the value of `A` at the end of decoration, not the value before decoration, so that we're not making that confusion, because that would be just as bad.
+
+CHG: I believe it will be the same `A` that private field accesses use in general. I believe all instances of `A` get rebound to the return value of the decorators that are applied to the class itself. You would need to make sure it is whatever value the private field is installed on.
+
+RBN: Yeah, so my comment is correct: it should be the decorated class. We had this discussion before, about the fact that all private static fields end up installed on the decorated version, because otherwise static methods don't work. So yes, that's correct. It should be whatever the final version of that class binding is.
+
+CHG: Yes. And that is the other thing we tried to preserve with decorators: you wouldn't ever end up in a split world where you'd have one reference meaning one thing and another reference meaning the undecorated thing. So I'm pretty sure all references mean the decorated thing; I'd have to look again to be 100% sure, but I'm pretty sure. Okay, and I remember the reason why people didn't want that to happen. The reason was they wanted decorators to run after fields were assigned, which would have forced fields to run in the un-decorated world. But we solved that with class initializers that can run after fields have been assigned. Okay, any other topics on the queue?
+NRO: Yes. Just to confirm: I assume the PR for this will make it so the private field lives only on the decorated class, since I believe that's the only option that works. I would be happy to review the spec text for this PR.
+
+CHG: Yes, you can look at the spec text in general for that; the binding is rebound properly in the spec currently, so we would just continue through with that.
+
+CHG: Do we have consensus for this change, pending the actual spec update?
+
+RPR: Any voices of support for this?
+
+DE: Yes, I support this
+
+JHD: +1
+
+RPR: All right, any objections? Doesn't sound like it. So yeah, congratulations, you have consensus on number 6 as well.
+
+CHG: Awesome.
+
+### Conclusion
+
+Consensus on all 6 changes:
+
+ 1. Remove the dynamic assignment of `[[HomeObject]]` from decorator method application/evaluation ([PR](https://github.com/pzuraq/ecma262/pull/5), [Issue](https://github.com/tc39/proposal-decorators/issues/497))
+ 2. Call decorators with their natural `this` value instead of `undefined` ([PR](https://github.com/pzuraq/ecma262/pull/6), [Issue](https://github.com/tc39/proposal-decorators/issues/487))
+ 3. Throw an error if the value passed to `addInitializer` is not callable ([PR](https://github.com/pzuraq/ecma262/pull/7))
+ 4. Set the name of the `addInitializer` function ([PR](https://github.com/pzuraq/ecma262/pull/8))
+ 5. Remove `SetFunctionName` from decoration (PR pending)
+ 6. "Bind" static accessors directly to the class itself. (PR pending, [issue](https://github.com/tc39/proposal-decorators/issues/468)). Pending updated spec text.
+
+## Decorator Metadata Update
+
+Presenter: Chris Garrett (CHG)
+
+- [proposal](https://github.com/tc39/proposal-decorator-metadata)
+- [slides](https://slides.com/pzuraq/decorator-metadata-update-march-2023)
+
+CHG: So, where we left off last time with decorator metadata: we broke it out from the decorators proposal, which was at Stage 2, so decorator metadata started at Stage 2. We ended up having an incubator call where we discussed it, and we all came to the conclusion on that call that metadata is definitely needed and there are valid use cases for it, and we came up with a basic strategy for how we wanted to pursue metadata. However, there has been some debate over exactly how that will be implemented. So today I'm going to talk about a few options: the current proposal, which is my preferred option; a more minimal version, which would be a little bit more restrictive in some ways; and then what I see as a compromise solution.
+
+CHG: Quick refresher: why is metadata useful? Well, it's used for a lot of things, like dependency injection, ORMs, runtime type information for various type-checking frameworks and frameworks that use that type information, serialization, unit testing, routing, debugging, and membranes. And to top it off, `Reflect.metadata` is the single most used decorator library, which definitely suggests that this is a useful pattern overall.
+CHG: How metadata used to work: in legacy TypeScript or Babel decorators, the decorator would receive the class itself directly, and because it received the class itself, you could do things like place a value directly on the class (you could define `__types` or a types symbol and put your types on the class directly), or you could use a WeakMap to associate metadata with the class in a way that was private. So this gave people a lot of flexibility when defining metadata. However, this is no longer possible, because in order to make sure that decoration was as static as possible, and didn't allow people to change the shape of a class dynamically, we no longer give decorators a reference to the class itself. In fact, we don't give them a reference to any value that they can tie back to that class. So there really is no way, with the Stage 3 decorators proposal, to side-channel any metadata specific to that class in an ergonomic way, in a way that would be one-to-one with what was previously possible. There have been some proposals where you would maybe create an entangled decorator, having that decorator itself hold the metadata, but that has been called out as having way too much boilerplate, basically every time it's been brought up. So that's why we're adding metadata as a first-class supported thing.
+
+CHG: So, let's talk about the current proposal. Basically, what we would do is pass in a plain JavaScript object called `metadata` on the decorator context object. Every decorator applied to the class would receive the same object, and they would be able to define whatever they wanted on that object. Then, at the end of class definition, that object would be assigned to `Symbol.metadata` on the class itself. In addition, this metadata object would inherit from any parent class's metadata object: prior to decoration, we would look up `Symbol.metadata` on the parent class, and if it existed we would `Object.create` the new metadata object from the parent class's `Symbol.metadata` object, so the child class would be able to look up parent class metadata using standard prototypical inheritance.
+
+CHG: So the pros of this approach: it's very simple and straightforward. We can do public metadata: if a decorator author wants to share, for instance, type information or validation information, they can share it in a public way. They can create a well-known key, either a symbol or a string key, and then other decorators can read that information and use it; this is something we see a lot of in the ecosystem. And at the same time, you can create truly private metadata by using a WeakMap: you just use the metadata object itself as the key in the WeakMap, and then it works similarly to how it worked previously. And then we have inheritance that works just like prototypical inheritance: by default things shadow, and it's pretty easy to override metadata on a child class, but just like with prototypical inheritance, you can manually crawl the prototype chain to find out what the parent class's metadata was. The major con with this approach, which has been pointed out, is that it creates a shared namespace, effectively a global namespace, where anybody could and will add string names to this object, and it could become a space where people are fighting for particular names, or decorators are stepping on each other's toes, and it could cause weird undefined behavior and things breaking in other places.
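+
+A sketch of the current proposal's mechanics as described above (decorator names are illustrative):
+
+```js
+// Option 1: every decorator on a class sees the same mutable
+// context.metadata object, which ends up at C[Symbol.metadata].
+function tag(value, context) {
+  context.metadata.tagged = true;          // public, string-keyed metadata
+  return value;
+}
+
+const PRIVATE = new WeakMap();
+function secret(value, context) {
+  PRIVATE.set(context.metadata, "hidden"); // private: keyed on the object itself
+  return value;
+}
+
+@tag
+@secret
+class C {}
+
+C[Symbol.metadata].tagged;       // true
+PRIVATE.get(C[Symbol.metadata]); // "hidden"
+
+// Inheritance: a subclass's metadata object is Object.create'd from the
+// parent's, so lookups fall through prototypically.
+@tag
+class D extends C {}
+```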
+CHG: So, with that in mind, we come to option 2. The idea is similar: we pass in that metadata object on context; however, it is a frozen object, and as a frozen object it is only meant to be used as a key in a WeakMap. It would also have a `parent` property, so you could look up parent metadata as well. This would basically force people into using private metadata all the time, and there would be no way to have a shared namespace. The pros: it is private by default and generally encourages what is probably the best practice, and there is no shared global namespace for people to accidentally collide in. The cons: there is really no way to share public metadata with this setup. We've discussed this; supporters of this particular solution have pointed out that if you wanted to make public metadata, you could export a WeakMap, or an API to look up the metadata for a decorator. My personal worry there is that this just exposes the intricacies and details of the build systems that are exposing those modules. We already live in a world where duplication is very common, and it is possible to get multiple copies of the same library in a single build of an app. If they haven't all been fully deduplicated, you might end up with a split world, where metadata that is logically part of the same set can only be accessed partially from one part of the app or the other. And if that's the case, you might say: well, you could just put the metadata on the window, and make sure that every instance of the library exposing it shares all of its state. But that brings us right back to where we were before: we have a shared global namespace. What's worse, that doesn't work in secure ECMAScript contexts, because you're not allowed to put things on window there. So I'm not sure it would be better to push people in that direction. In addition, inheritance is a little bit trickier to use, but that's not a major driver for me personally; I'm more worried about the complexity this solution would introduce for sharing public metadata.
+
+CHG: Okay, option three. The idea behind option three is basically to have it be the same as option one in terms of what actually gets exposed (it's an object, it has inheritance, and it's a plain JavaScript object, not frozen), but we would guard access to that object with getter and setter functions on the context that the decorator receives. The getter and setter functions would force users to only use symbols as keys. This would help us prevent collisions, because by default, when users make a new symbol, unless they're using `Symbol.for`, they will be making a unique symbol. And unless they expose that symbol, by exporting it, the only way for anybody else to get it would be to use `Object.getOwnPropertySymbols`, and that would have to be used on a class after it has been decorated. It's multiple steps, and it makes it just a bit more inconvenient to use a well-known name and accidentally have collisions. Other than that, it's basically exactly the same as option one. The idea is that it addresses some of the concerns: it encourages people to avoid the issues that come up with option two, but still allows people to intentionally share public metadata when they want to. That's pretty much all three options.
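+
+Sketches of the other two options as described (the accessor names follow CHG's later description of the context API; the frozen object being exposed at `Symbol.metadata` is an assumption here):
+
+```js
+// Option 2: context.metadata is frozen and only useful as a WeakMap key.
+const TYPES = new WeakMap();
+function typed(value, context) {
+  TYPES.set(context.metadata, { kind: context.kind });
+  return value;
+}
+// Consumers go through an exported lookup; `parent` would allow walking
+// up to superclass metadata.
+function getTypes(Class) {
+  return TYPES.get(Class[Symbol.metadata]);
+}
+
+// Option 3: the backing object is mutable but reachable only via accessor
+// functions that require symbol keys.
+const myKey = Symbol("my-metadata");
+function tagged(value, context) {
+  context.setMetadata(myKey, true);       // ok: symbol key
+  // context.setMetadata("tagged", true); // rejected: string key
+  return value;
+}
+```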
+MM: Okay, so I have a question that's mostly about option number one. In normal use, for embedded, where you're concerned about putting things in ROM, and for security, where you're concerned about hardening things and making many objects frozen: for options one and three, where you've got an object that's mutable during initialization, if I understand correctly, this proposal does not, and given the nature of the proposal should not, specify that after the class is initialized the metadata object gets transitively frozen. My question is: in an environment in which something else would transitively freeze it after the class is initialized, do you expect that the patterns of usage you've seen, and that you expect people to use the metadata object for, would or would not get broken by transitively freezing the metadata object? Basically, transitively freezing the metadata object at the same time that one is transitively freezing the class itself, the class prototype, and the methods of the class. Transitively freezing the class means, of course, freezing the things that are reachable from the class, and that would include the metadata object, right?
+
+CHG: I would not expect that to cause any breakage. Metadata is typically used in a very static way; the values that are provided on there are not mutated after the fact. They really just define declaratively what's expected of the class (or whatever) by a particular decorator. The only usage I've seen that is mildly problematic is the case where two decorators choose the same key on metadata and collide, but I've never seen the metadata object used directly as a mutable store of information.
+
+MM: And then with regard to option three: you said that the context object would enforce, with accessor properties, that you could only set symbol-named properties on the metadata object. I just want to confirm this, because I think I initially misread the proposal as implying that for options one and three the metadata object would need to be exotic in order to allow the creation of symbol-named properties but not string-named properties, and you're not saying that. So how would the context object enforce this, without the context object itself being something exotic, or a proxy?
+
+CHG: So, the context object would just have two functions, `setMetadata` and `getMetadata`. Decorators would not have direct access to the object that gets defined on `Symbol.metadata`; they would use `context.setMetadata` to set a key on the object backing it. And, for instance, if they wanted to use a WeakMap, they could set the key's value to a symbol, or to an object, and then use that value as the WeakMap key. So it would be one more step for people who want to use WeakMaps for privacy, but it's not that much overhead.
+
+MM: Okay, thank you. I'll refrain from stating an opinion at this time, but you've answered everything that would have been an objection to any of these three proposals.
+KG: So CHG knows this, but for everyone else: I am very strongly in favor of option 2 over option 1, and in particular the shared-namespace aspect of option 1 seems to me like it ought to be fatal. I understand not everyone shares that intuition, but for me the con is: we are introducing a new global shared namespace. I would prefer to kill most proposals over having a new global shared namespace. For option 2, there are two listed cons. One is that it makes inheritance trickier, which I kind of agree with. The other is that it doesn't allow you to share public metadata; but of course it does allow you to share public metadata, as CHG pointed out. You share the way that you always share data between libraries, which is that you export a function that allows people to look up the data. The downside of this, as again CHG pointed out, is that you might have multiple versions of a library built into your application, and they would have different versions of the metadata. But to my mind, this is an actively good thing. This is a property that we want. The whole reason that the module system, in npm for example, works the way it does is that the old requirement of a globally unique version of every library, where things break as soon as you include multiple versions of a library, was a very bad situation. The situation that we like is where two different things can import different versions of the same library, both copies can be in the same application, and that works. I don't have to worry that if I upgrade version 1.0 to version 2.0, which changes how the `type` property on the metadata works, that breaks everything, because I have one library that expects version 1 of the type metadata and another that expects version 2. If those are working in a shared namespace, they fundamentally cannot interoperate; that is a bad situation, and we should avoid it the way we always avoid it, which is that you use imports and exports and the build system wires things up for you. So, I really don't like having a shared global namespace; I think that is a very bad thing, and importing and exporting works great for everyone except TypeScript, which has a genuinely unique constraint. But you do get shared metadata with option two: you just export a thing that lets you share, the same way you always share.
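+
+The sharing pattern KG describes might look like this (a sketch; module and function names are hypothetical):
+
+```js
+// metadata-lib.mjs — a library using option 2's frozen metadata object.
+const STORE = new WeakMap();
+
+// Decorator: records metadata keyed on the per-class metadata object.
+export function remember(value, context) {
+  STORE.set(context.metadata, { name: context.name });
+  return value;
+}
+
+// Public API: consumers import this rather than reading a shared
+// namespace. (Assumes the frozen object is still exposed via
+// Class[Symbol.metadata].) Two bundled copies of this module would each
+// have their own STORE — the deduplication behavior debated below.
+export function lookup(Class) {
+  return STORE.get(Class[Symbol.metadata]);
+}
+```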
So you basically are pushing this problem to modules, and you're saying: okay, it's up to the build system to figure out. And now everybody has to learn all the details of the build system to figure out how to deduplicate everything, just so they can have their dependency injection.
+
+KG: So you have exactly the opposite problem with option one, which is that now, if I have a version bump that should not have been breaking, because I only needed the new version in some separate part of the application where I have a different version of the library, and the format of the metadata changes, now I can't have two versions of the library at all. Isn't that exactly the same problem? I upgraded, and it shouldn't have broken anything, but the format in the public namespace has changed, and so the thing that previously worked doesn't.
+
+CHG: This is one of the things that, when you're working on a dependency injection framework or something like that, you're very careful about, because you're aware of that.
+
+KG: Okay. So if the solution to this problem is that people should be very careful, we shouldn't produce a new thing they have to be very careful about. They should just live in the world that they're already in, where deduplication and non-deduplication of libraries is a thing that comes up. That's a thing that is familiar to people already.
+
+CHG: And it's a thing that doesn't necessarily even have a solution. Some build tools don't give you the option to deduplicate things properly. Great, but then the names will just collide.
+
+KG: That is not a solution to the problem. "You only get deduplication" is no more of a solution to this problem than "deduplication works for metadata the way that it already works in the rest of your application".
+
+DE: There's a lot of people in the queue. You made a good point.
+
+RBN: So, my concern has been articulated by KG as being a TypeScript-specific concern, but it really isn't. My concern is not about library deduplication, although I do think that is a valid concern. My concern is that Script is a valid parsing goal. I don't have numbers, but my intuition is that the large majority of existing JavaScript code run on the web is still scripts, not modules. A system that depends on the concept of public metadata being available only with modules is not actually public. The TypeScript-specific example is that TypeScript would like to be able, at compile time, as a compiler option, to inject a decorator that adds metadata about the types of the things that you are decorating. This is the TypeScript `emitDecoratorMetadata` flag that we've had since 2015, and we'd like to be able to bring some of that capability forward; that is one of the main motivating reasons that we even introduced metadata as part of this proposal. For TypeScript to be able to work correctly and inject a decorator that can attach metadata in a script environment, I cannot depend on a module. Depending on a module requires an `await` of an `import()`, because I can't necessarily import statically, and I can't break the user's expected semantics of code that does not `await` currently by randomly injecting an `await`. The class declaration could also be in the middle of a function where an `await` isn't even possible. So that means that we cannot depend on modules.
So if the only solution is to say that public metadata is only valid if you're using modules, that completely cuts off a very large percentage of people who are currently using JavaScript with scripts today, in bundlers, etc. Saying that this can only be solved very far down the line, when everybody has moved to modules, is not a valid position, or at least not a very strong one. And while I completely understand the concern about having a namespace that allows for possible collisions, having a mutable object does not prevent you from using WeakMaps to resolve those potential collisions. Option two, though, does prevent well-crafted, well-written code that can use a namespace like this successfully from getting a very important feature in a script environment, where it doesn't have access to modules. The other concern that we have, and we've kind of rehashed this over and over again on GitHub, is that every solution that's been presented requires significant amounts of rewriting to support it, and we really don't think that's a viable option. So our strong position is that option one does not prevent you from having the type of isolation you'd like out of option two, but option two completely prevents the types of behaviors we'd like to be able to use in a script environment. So we believe option two is completely a non-starter.
+
+RPR: Okay. On the time box: we only have four minutes left, but given that we got through the first item, I'm happy to let this run till 4:30, so that's 13 minutes. There's still a lot on the queue, so please keep that in mind. Justin's point was about a practical example of library duplication.
+
+JFI: Yeah, I want to point out that the library duplication problem, and solving it, has dangers on both sides. So yes, if your library is choosing to use an immutable metadata object and choosing to use an unversioned, generic key, you can run into problems of collisions. On the other hand, if a library naively uses the metadata object as a key into a WeakMap, things can break, and in my experience break more frequently than the danger of collisions there. And this is because it's just so easy to get into situations where you have duplication. Maybe you're importing your decorator from a library that for some reason got its own copy of the base class of the thing you're decorating installed, so that chain in the dependency tree is using one module that has the metadata WeakMap in it, while the base class the user is decorating imports a different version and so is using a different WeakMap. And now the class, when it boots up, cannot see the metadata that the decorator applied, because they're in two different WeakMaps. So I think that libraries are going to have to be very careful no matter which way this goes. One way or another, they're going to have to have some kind of versioning scheme for their metadata keys, or they're going to have to hang these WeakMaps off of the global object, with a versioned property name, to ensure that they share when they need to. They have to do one of these two things: libraries need to be aware of versioning and of when they can reuse the same WeakMap or the same key. So I seriously think these are very, very equivalent, and I hope the library authors are going to be careful here, because if you're writing to metadata you can't use a generic key.
You also can't naively use a WeakMap stored in the module; both of those are going to break far too often.
+
+DE: I think we seem to have a shared understanding that library duplication happens, and the question is what behavior we want to occur when it does. KG and many other people have articulated different views about what behavior should occur. I think we should be approaching this kind of problem pragmatically, rather than from first principles, and I think we have a bunch of practical cases here where we do want the duplicates to be referring to the same thing. I don't think it's useful to have a strong first-principles argument that you shouldn't have any shared namespaces, because we have a ton of these all over the place. Like, how do you get to anything? So, I'm in favor of the first option.
+
+MAH: Yeah. I'm not sure that module identity discontinuity is a problem specific to decorator metadata. This can happen in a lot of other cases; it's a more general ecosystem problem, and I'm not sure it should weigh that heavily on the decision here.
+
+CHG: I think the reason I bring that up is not because it is a problem specific to decorators. The way it was being framed was that global namespaces have all these problems and modules fix those problems, and thus have no problems. And my point was just that modules have problems too. We have a hard problem here, and I think no matter what solution we choose, there's going to be a lot of trickiness in dealing with all of its problems, because the goal is to share public metadata in a global way. That is the actual goal.
+
+MAH: I have a little nit on that. It's not a global namespace; it's a namespace scoped to the class being decorated.
+
+CHG: Yeah, I think we're using that as a shorthand, but yes. Let's move forward.
+
+DE: So, just overall, I think it's important that we keep this system simple, and I'm happy that option 1 is a very simple proposal. It's great that the decorator proposal just keeps getting simpler and simpler.
+
+SYG: Yes, I like the simplicity, but I'll stop it there. Judging by the Matrix chat, I don't think I'm actually qualified to give much of an ecosystem opinion here. But from the implementation side, I like simplicity.
+
+JFI: Yeah, I wanted to comment: I think number three is interesting when you look at it, because it supposes that it is getting rid of the shared namespace here. But then you see the example using `Symbol.for` all throughout it, and you realize it's just using a different global namespace. And it's also very cumbersome to use, so I don't know that the supposed benefits actually materialize or are worth the cost of the extra syntax for option three. But I think that generalizes to the other options here. I think that what libraries like the ones I maintain are going to have to do is specifically seek out a global namespace, and there are a lot of equivalences here, right? `Symbol.for`, and `window._whatever` you want to name the property for your WeakMap, are basically the same as a key in the metadata object. So I think we run the risk of making APIs that have very equivalent hazards no matter which one we choose here.
+
+CDA: I don't have anything to add to what's already been pointed out, just expressing support for option 1. The downsides of two and three are, I think, a little harder to swallow than those of option 1; I find them less ergonomic and more limiting. Of course we don't want people stepping on each other in option one, but I still support it.
+
+JHD: Yeah. So I'm basically asking: what's wrong with option 2, if you take away the frozen object and just provide a symbol?
+
+CHG: So the reason we're using a frozen object is so that you can access parent metadata during decoration. Otherwise we could use a symbol, but that has all of the downsides of option 2 and basically all the same problems.
+
+JHD: Well, I guess I'm confused. In the frozen object case, the object is meant to be a key in a weak collection, right? Ah, the prototype always has a parent, so you climb up via the frozen object. Right, thank you.
+
+JFI: Yeah, just real quick. I want to reiterate, because it gets asked a lot when we're going to use the new decorators implementation in TypeScript: we simply cannot use it until we have metadata. So with that type of constraint here, my number one concern is that metadata moves forward at all. I don't know how many other libraries are in this position, where even though decorators are starting to roll out, you can't use them until metadata lands, which is a bit of a curse. But I want to reiterate that this is an important topic. That's all.
+
+MAH: Yeah. I want to say option 3 for me is a non-starter, because it actually enables a malicious decorator to interfere with another decorator, by having a disconnect between the context objects, whereas in options 1 and 2 the identity is preserved. A decorator can set metadata that would conflict. The fact that it's a symbol doesn't mean there is no possibility of overwriting another decorator's metadata; it just means that non-malicious, by-chance overriding would not happen. A malicious decorator can go and set the metadata of another decorator and override it. And option 3 forces you, if you want private data, to use a single entry associated with a WeakMap key; so now all of a sudden you could get that overridden, and there is no way to protect against that.
+
+CHG: That's a good point. Yeah, I do think that that would be something we'd have to design around. Maybe we could make it so that when you initially define metadata, you could specify that the property is non-writable or something, but again, that seems like extra complexity that personally I would not prefer.
+
+MAH: Right.
+
+CHG: So, I don't know where we're at now with this. Basically, my plan was to try to get consensus here on one of these options, so that we could propose it for stage 3 at the next plenary. It sounds like we do not have consensus for any of the options at the moment, though.
+
+KG: I'm not going to say that this proposal can't advance unless it has my preferred semantics. I have said my piece. There are people who still prefer option one. We are not going to reconcile those. So if the proposal is to advance, we need to pick one. I acknowledge I am in the minority and am OK saying we can live with option one.
+
+MM: So, I'm going to register that option 3, given the answer to MAH's question, is for me clearly disqualified, so I'm okay with either one or two.
+
+RPR: I don't think anyone is pitching option 3 at this point.
So, as DE says: CHG, would you like to ask for consensus on your preferred option?
+
+CHG: Yeah. Do we have consensus for option one?
+
+RPR: Is there any support for option one? I think there was: DE, PDL, CDA. So you have three supporters for option one. Are there any objections to option one? Or any non-blocking concerns? I think JHD has a non-blocking concern.
+
+JHD: I prefer option 2 but can live with option 1. I agree with everything Kevin has said, so all the same reasons.
+
+MF: I also agree with KG.
+
+RPR: Okay. So we have MF, KG, and JHD who prefer option 2, and we had another voice supporting option 1 from Justin. So at the moment it looks like we have consensus for option one.
+
+CHG: Okay. The nice thing is that spec text has already been written for this. So if y'all want to review it, I will be presenting for stage 3 at the next plenary. So yeah, cool. Thanks everyone.
+
+### Summary of Key Points
+
+- Metadata is necessary for several core use cases of class decorators. It was omitted from the Stage 3 decorators proposal for excessive complexity; it is now at Stage 2, with hopes to progress to Stage 3 next meeting.
+- There are three alternative semantics proposed for how decorator metadata could be supported, all based on an object shared among all the decorators on the class.
+- There were blocking concerns from some delegates about malicious interference with Option 3 (which allowed Symbol keys only), as it didn't do a good job solving the problems that the mutable and immutable alternatives were attempting to solve.
+- Some committee members expressed a preference for this shared metadata object to be immutable, to avoid the risk that different decorators from different libraries (or versions of the same library) could be using the same property name for different things. Avoiding that would require all decorator authors in every library to coordinate with each other in advance to ensure names are not re-used.
+- On the other hand, in many cases libraries are falsely duplicated, and it is a requirement that the multiple duplicates are looking at the same piece of metadata (and are willing to evolve this metadata in a backwards-compatible way to account for that). Core issues with using module/library state to store the metadata are compilers emitting code for applications using scripts (rather than modules), and the risk of duplicate library instances.
+
+### Conclusion
+
+- Consensus for option 1: metadata being a mutable object.
+  - Non-blocking concerns around modularity/compositionality expressed by JHD, KG, MF.
+- Spec text for this is already written; it will be proposed for stage 3 at the next meeting.
diff --git a/meetings/2023-03/mar-22.md b/meetings/2023-03/mar-22.md
new file mode 100644
index 00000000..b81683e9
--- /dev/null
+++ b/meetings/2023-03/mar-22.md
@@ -0,0 +1,1322 @@
+# 22 March, 2023 Meeting Notes
+
+---
+
+**Remote and in person attendees:**
+
+| Name                 | Abbreviation | Organization   |
+| -------------------- | ------------ | -------------- |
+| Chris de Almeida     | CDA          | IBM            |
+| Ujjwal Sharma        | USA          | Igalia         |
+| Istvan Sebestyen     | IS           | Ecma           |
+| Waldemar Horwat      | WH           | Google         |
+| Luca Casonato        | LCA          | Deno           |
+| Ashley Claymore      | ACE          | Bloomberg      |
+| Richard Gibson       | RGN          | Agoric         |
+| Daniel Ehrenberg     | DE           | Bloomberg      |
+| Guy Bedford          | GBD          | Fastly         |
+| Daniel Minor         | DLM          | Mozilla        |
+| Yulia Startsev       | YSV          | Mozilla        |
+| Frank Yung-Fong Tang | FYT          | Google         |
+| Willian Martins      | WMS          | Netflix        |
+| Ben Newman           | BN           | Apollo         |
+| Ben Allen            | BAN          | Igalia         |
+| Michael Saboff       | MLS          | Apple          |
+| Peter Klecha         | PKA          | Bloomberg      |
+| Jordan Harband       | JHD          | Invited Expert |
+| Justin Ridgewell     | JRL          | Vercel         |
+| Linus Groh           | LGH          | SerenityOS     |
+| Mark Cohen           | MPC          | Netflix        |
+| Sergey Rubanov       | SRV          | Invited Expert |
+| Ron Buckton          | RBN          | Microsoft      |
+| Nicolò Ribaudo       | NRO          | Igalia         |
+| Luis Fernando Pardo  | LFP          | Microsoft      |
+| Philip Chimento      | PFC          | Igalia         |
+
+## Election of the 2023 Chair Group
+
+(no notes for this topic)
+
+### Conclusion
+
+- Chair group and facilitators group accepted by acclamation.
+- New chairs are
+  - Rob Palmer
+  - Ujjwal Sharma
+  - Chris de Almeida
+- New facilitators are
+  - Yulia Startsev (leave of absence from March)
+  - Brian Terlson (leave of absence from March)
+  - Justin Ridgewell
+
+## Import reflection update
+
+Presenter: Luca Casonato (LCA) & Guy Bedford (GBD)
+
+- [proposal](https://github.com/tc39/proposal-import-reflection)
+- [slides](https://docs.google.com/presentation/d/1F62Jia5erIm6m6nqkm_2pFIlNLOVF0E4ewrVRytSJEs/)
+
+LCA: Hi everyone. This is the import reflection update for this meeting. We're currently at Stage 2, and Guy will be presenting as well. I'm going to split the presentation into two parts. The first is the general concept of import reflection and import phases, which we have sort of figured out during the module meetings; in the second half, Guy will explain the scope of our proposal, how we are scoping it, and what the plans are for the future. So, a quick recap on module loading. Module loading can be thought of as happening in approximately five stages: resolve, fetch/compile, attach evaluation context, link, and evaluate. Some may consider there to be more stages in between, or may combine some of these stages, but I think this is a good overall representation of module loading, and it's what we will be going with for the rest of these slides. Let's go through each of the stages. The first stage is resolve. In this stage, we take the import specifier and combine it with the referrer module to produce the new resolved specifier, or whatever the resolved asset for that module is. For example, this would be doing import map resolution and URL resolution in the browser or in Node. The next is fetch and compile. This is where the resolved asset is fetched from the network or disk, or loaded from some internal bundle in the application, and is compiled; for example, JavaScript is parsed, and things like that.
On the web this is the network fetch, and content security policy (CSP) would be enforced at this stage. At this point your module is still completely stateless: it can be instantiated multiple times; it is just the source code, with no state attached to it. Next is attaching an evaluation context. The loaded source code is turned into a module instance that has identity. At this point the module namespace is attached, and various things like the position in the module graph are determined; however, nothing is linked yet. The next stage is linking. This is where all of the module instance's static imports are linked, by performing the module loading steps for all dependencies and linking them together. And the final stage is evaluation. This is where the module source is executed and the linked bindings are accessed and used. If there is top-level await, at this point we wait, and once that completes, the module is considered to be completely evaluated. The current state of imports is that they unconditionally execute all five of these stages. So if you write `import foo from "foo.js"`, the host resolves it, fetches and compiles it, attaches an evaluation context, creates the namespace, links the dependencies of the module, and evaluates the module; at that point it can be used from the importing module. There are, however, many scenarios where it makes sense to target a different loading phase when importing. For example, you may not want to evaluate immediately, but rather defer evaluation until the actual use of the module: this is deferred evaluation. Or you may want to only fetch the module but do the instantiation yourself, to instantiate a given source multiple times, or to instantiate it without linking and manually link it against other modules, for mocking or timing or testing. These are different loading phases. If you want to target the resolve phase, that is asset references; this has been presented to committee in the past, and it is statically analyzable, works with URLs and relative URLs, supports import maps, and is portable. Another phase that can be targeted is the fetch/compile phase, by importing the module source. This resolves and fetches the source: the full host resolver and the full host loader are used, and CSP is enforced as for all other imports. The source can then be used for multiple instantiation; we'll talk more about this later. The next step is targeting instance imports, where you target a specific instance, an instance that represents a module record object in the spec. This can then be linked, and it is evaluated the first time the namespace is accessed. The final phase you can target is the eval phase: these are the regular imports we're already familiar with. So, to be able to target all of these different phases, we considered multiple ways to express this in the language, and we ultimately came to the conclusion that it should be represented using syntax in the import statement. The reason is that the loading phase you're targeting can influence the valid syntax of the rest of the import statement. For asset references and source imports, only a default binding can be used, and no module namespace can be accessed; there is only one possible valid syntax in this case, which is the default binding syntax. For deferred eval imports, only a star (namespace) import makes sense, and here again the phase influences the valid syntax.
Because of this, using something like import assertions or attributes would be confusing, because the valid syntax at the start of the import statement would be influenced by a completely customizable option at the end of the import statement. Import attributes and import assertions modify module loading (we will talk more about this in the next presentation), while an import loading phase exits from the module loader at a predetermined loading phase. I'll explain more about this in a little bit, and we will cover it further in a later presentation. The syntax that we have come up with is this one: `import`, a phase keyword, and then a binding. The binding could be a default binding, a destructuring, or a star-as-namespace binding; the binding could also not exist at all. If there's no phase specified, it is a regular import. Then `from` and the specifier you're importing, and `with` or `assert` attributes; that is the import assertions/import attributes proposal that will be the next presentation. The phase syntactically informs the bindings, and it sits at the beginning of the import statement. This is designed to make it easier for syntax highlighters, and to be able to unambiguously parse the binding depending on the phase. The syntax was designed in coordination with the other proposals covering this space, with import assertions and with deferred evaluation, on the modules call that we have every two weeks. The dynamic import form mirrors the static form; the difference is that the phase is an option in an options bag, where it is passed as a string. Everything else is the same, again with maybe `assert` or `with` depending on what the import assertions proposal decides. And this phase syntax is a convention, not a specification: we don't want to say this is exactly how all imports must look for future proposals, but right now it is a convention we have come up with that covers the cases presented so far very well. So, was there one more thing I wanted to mention here?
+
+GBD: So we've really tried to simplify the scope of the proposal by considering it as existing within this space of different phase modifiers that can also exist. For this proposal itself, we are now only specifying a source phase for module imports: module source imports, and specifically the initial use case that we're looking at is WebAssembly source imports. So, to explain, as we have mentioned many times before, what these WebAssembly use cases are. Next slide. The motivation for source imports is that currently, when you load WebAssembly on the web, you need to be using the fetch function; and on a runtime that doesn't have fetch or the same features, you will not have a portable way of loading WebAssembly modules. It's not statically analyzable, and you end up having to use all of these complex loading mechanisms, `new URL` and `import.meta.url`, and try to understand all of that. And even with the WebAssembly ESM integration, you still need a way to get access to the `WebAssembly.Module` source objects, which are able to be instantiated multiple times with different memories, and instantiated with different imports and resolution rules that don't necessarily exist within the host resolution; there are some quite specific use cases around that. So what does that look like for WebAssembly? Consider importing esbuild: we're using the source phase and loading the WebAssembly module in the source phase.
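+
+(A sketch of what the slide example being described likely looks like, using the proposed `source` phase syntax; the module URL and the imports object are illustrative.)
+
+```js
+// Source-phase import: resolves, fetches, and compiles the Wasm module,
+// but does not instantiate or evaluate it.
+import source mod from "./esbuild.wasm";
+
+// `mod` is a compiled WebAssembly.Module, so it can be instantiated
+// manually, with memories and imports chosen by the caller.
+const instance = await WebAssembly.instantiate(mod, {
+  env: { /* host-provided imports */ },
+});
+```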
What we get back is what is already specified in the WebAssembly JS API: the `WebAssembly.Module` object, which represents a fully compiled but uninstantiated module. You can pass imports to the instantiate method, instantiate it with whatever you specify, and you finally get back the instantiated, executable module. The benefits of having this as syntax are that build tools can understand WebAssembly modules, optimize them, and make sure they survive the bundling process. It's ergonomic to develop with: it's clear how you load WebAssembly, compared to the multi-line process used today. It's really a tooling benefit, in that it integrates into tooling in a way that all types of different tools can understand and work with, as a standardized way of loading WebAssembly in its uninstantiated form. And we have the security benefit that the source is able to be tied to its original location on the web and integrated with CSP, whereas today, when you do dynamic compilation, you need a policy that a lot of people don't want to have. That's the motivating use case we have: the source import. In the future we may have source phases for other module types, but we don't yet. The case we're currently pushing forward is the source reflection, and a JS module source reflection would have to be basically what is currently called compartments layer 0: a module source JS object able to represent the corresponding concept for JavaScript modules, where the module is fetched and compiled but not yet evaluated or associated with an evaluation context, and can still be linked against any module. That concept is what we would need to build against when we want to expose the source import for JavaScript modules. The specification is ready for review. We have built it on top of the module refactoring work, and we are able to expose this as a custom object on the module record that represents the module source, which allows any type of module to have its own source representation. So there is a module source object slot on the module record, and this is able to be populated during HostLoadImportedModule, to integrate the WebAssembly module and all the other sources as necessary. So what do we do for JS right now? At the moment, if you import a JS module at the source phase, the plan is basically just to throw, and to add the functionality as soon as we have a source object available for JavaScript. We do expect to have a module source available for JavaScript; that involves some degree of alignment with the other specification work that is happening. But once you get that for JavaScript, you will be able to have some very convenient workflows for re-instantiation, and it will allow module virtualization for JavaScript in a very easy way, with those same static analyzability benefits, so you know what modules you are virtualizing. There is one question, from the spec point of view, about ordering: when we have these source phase imports next to other imports, how is that going to affect load ordering? Strictly speaking we don't specify load ordering, although we do trigger loads through the host, and we want to maintain a clear ordering in which those load-module host calls are made. Next slide. So here is an example: modules A and B are the first two modules, and you import them as source, in their source phases, but then you have other modules that are being imported in the graph at the same time. The source phase won't load dependencies.
It isn’t assuming the linkage of the dependencies. It will load the single load and then stop. And if you import the module it will import the dependencies. The order in which the load hook to the underlying environment is being called will still match the order of the import statement. We had to do a bit of work to ensure that that will be maintained so that at least there’s a relatively understandable way in which these loads get fired up so we’re not doing something as a separate process. And then also to mention again as Luca mentioned with regards to the phasing and now it represents an earlier phase of the same module. So this item property of imports is fully maintained where if you import a module and whenever you have the same module, you have the same source. The source is part of the subset of the idempotence of the module load itself. There’s no reinterpretation going on and no multiple source objects that exist for the same module. There is one source object and then all instances of the module will be based off of that source and you can share the compilation and you can share all of that loading process for the module and not going to be dealing with repeated work unnecessarily and this is also what distinguishes these phases from other things like import assertions (inaudible) where it can change what you get. This is the distinguish between the left-hand side and the right-hand side in the specifier. So we are looking for Stage 3 reviews. We already have a bunch and ready to get that review going and really appreciate some help with review. Hopefully to be able to work towards stage progression in the subsequent meeting. Thank you everyone. Any questions? + +JHD: So the slide that you showed where you had import source and produced a `WebAssembly.Module` - it seems really strange to me that syntax would produce an object that doesn’t come from user code or from 262. Like, I understand there’s going to be differences in the properties and attributes of the source based on the module type and whether it’s WASM or JavaScript or something else. It seems strange that a complete object and inheritance structure could just be kind of magically snuck into the syntax this way. I wonder if there’s those that care about deniability and whether an object is not deniable because it can be produced to syntax this seems like it really opens up the floodgates of allowing any object that a host sticks in there becomes undeniable.What sort of design consideration has been put into – is there a way to have that produced, the kind of object specified in 262 that still enables the capabilities specific to the module type? + +GBD: So if we had a source object for JavaScript that we could also reflect for web assembly, that could be an option. Right now we do not. So that’s not an option for us right now. If there was alignment on that and it could be done in a way that were to be compatible with web assembly, without sacrificing the use case and without sacrificing the ease of use, that could be an option. But short of the fact there’s no current proposal for that or past proposal for that that is clear right now or that is currently being proposed at this same stage, it’s difficult for us to rely on that kind of requirement for our specification process. + +JHD: I guess I’m sort of thinking it could be an opaque object that you put into what you want, and it spits that out. + +LCA: The module, I don’t think we covered this, you want to reflect the bindings on the source in code. 
Here you want to know what imports this module is requesting and what exports it provides. In this case that's `WebAssembly.Module`; an opaque module object would have to specify that, and represent all possible module types.
+
+JHD: To clarify, what I was suggesting is that the opaque object is used with another API to get the object that has all that stuff. 262 wouldn't have to specify any of that stuff; it would hand you the equivalent of a symbol. It doesn't have to be a symbol, but another API can use it as a key.
+
+GBD: That would actually be a little bit more similar to how we imagined the asset references proposal behaving, and that would be a different phase reflection, a different proposal, and a different approach. The benefit here is that because you are getting the module at the source phase, you know what you're getting back is a compiled module: it has been fetched and is ready to be executed and linked in. That's an important distinction of the source phasing: you know what you're getting back is the module that's in the actual module system of the engine; the engine knows it's a WebAssembly module and knows what stage it is at.
+
+JHD: I get the phasing guarantees you're looking for. You could provide the same guarantees with the sort of API I described, handing in a token to get the thing you want. This is not the only way to solve it; it's just the only way to address the concern I'm bringing up.
+
+DE: I'm a little thrown off by the deniability thing. If you import a WebAssembly module, you may get `WebAssembly.Memory` objects and such, if we supported the Wasm ESM integration. I think if we're interested in deniability, the right approach is the compartments layer 0 thing, where we have the import hook, and that can be used to deny access to this phase. Aside from deniability of the `WebAssembly.Module` constructor, do we have other reasons why this is strange?
+
+JHD: Yeah, I mean, I was just surprised that it hadn't come up from the folks for whom that is important. I do think that it is strange. That's just not the way that our syntax works. We don't have syntax in 262 that produces DOM objects, for example, or anything like that. The objects produced out of 262 are all things that are defined in 262, which may be extended by hosts.
+
+DE: So, for example, with CSS module imports, which were kind of an explicit goal of the import assertions/import attributes proposal: those produce CSS object model things from the import statement. That is HTML providing this thing; the import statement's default export is created by HTML. This would be similar: it wouldn't be the JavaScript specification providing the source, it would just be Wasm providing it. It's completely analogous.
+
+JHD: This is different from host code producing things; it's reflection.
+
+DE: I'm having trouble understanding why import source should be different. For source there is only one export instead of multiple; there's only the default source export.
+
+USA: We have two minutes left for this and a huge queue. I don't think we can get through it.
+
+DE: Thanks.
+
+LCA: Do we have overflow time?
+
+RPR: We do. We should move to the next one and do it Thursday.
+
+KG: Are we capturing the queue and then moving on to the next agenda item, or overflowing?
+
+USA: If that's okay with Luca and Guy.
+
+LCA: If we have two minutes we could finish up.
+
+USA: Sure.
+
+KG: JHD, I agree with you in general that most syntax should not produce things that are not defined in 262.
But I'm fine with it for import specifically, because imports are necessarily wiring up the rest of the world. I can sort of see a distinction between importing the source of esbuild and importing specific names, but I don't think that difference is important in practice; I'm having trouble imagining a reason to care about that distinction. When you are wiring up imports, you have to care about the rest of the world, and it's fine for reflection, import reflection in particular, to be host-defined stuff.
+
+NRO: DE already said what I wanted: this is already how CSS modules work.
+
+USA: There are two more things on this topic, if you can go through them real quick. Next up we have SYG.
+
+SYG: I think I agree with KG here. The whole module machinery specifically is special. It is the thing that we decided we would use to coordinate different assets and different kinds of code in an app. And given that that is its nature, it seems hard to work around the fact that you want the embedder and the host to give you different things via module syntax around imports.
+
+YSV: Would the overflow time be in the mornings or the afternoons?
+
+RPR: Based on people's constraints and requests. I have a comment here, and I won't be able to attend in the afternoon; so, depending on whether my point is at all important for the two people who are before me, it might make sense to check on that.
+
+LCA: We also have 30 minutes after YSV's topic currently allocated.
+
+YSV: Perfect.
+
+LCA: That can be overflow there.
+
+USA: Okay for the chairs?
+
+RPR: At the moment, yeah. It doesn't violate any constraints.
+
+DE: Could we go to YSV's comment now, to be safe, and then continue in that slot?
+
+### Conclusion
+
+Topic will be resumed in an overflow slot
+
+## Import assertions/attributes for Stage 3
+
+Presenter: Nicolo Ribaudo (NRO)
+
+- [proposal](https://github.com/tc39/proposal-import-assertions/)
+- [slides](https://docs.google.com/presentation/d/1Abdr54Iflz_4sah2_yX2qS3K09qDJGV84qIZ6pHAqIk/)
+- [PR](https://github.com/tc39/proposal-import-assertions/pull/131)
+
+NRO: Import assertions were demoted to Stage 2 at the last meeting, in January. The reason is that while integrating the proposal with HTML, we found that import assertions did not work well with what HTML needed: HTML needs to use them to affect how a module is loaded, the interpretation of a module. There is a potential solution to that, which is just to relax some constraints that the import assertions proposal had; the relevant one, the last bullet point, is that import assertions must not influence the interpretation of the module, and we need to remove it.
+
+NRO: It follows that import assertions should be able to be part of the cache key, because they affect how the module is loaded. Unfortunately the solution is not easy: when you remove these restrictions, some parts of the proposal start falling apart. There are two main consequences. The first is: should the assertion list still be extensible? This is a question that came up multiple times when discussing removing the restrictions. The second is: is ignoring unsupported assertions still a sensible choice? Right now the import assertions proposal keeps only the assertions that the host supports; there is a list of assertions known to the host, and unsupported assertions are just ignored. And can we still call them assertions, or should we call them something different? To the first point: there are two different axes along which we can divide how these host-interpreted import modifiers could work.
One axis is whether or not they can affect module loading. The other is whether we want a full object that can potentially be extended in the future by hosts adding new assertions, or whether we want to restrict this to a single string or a finite set of assertions. I'm bringing these up because, if you remove the restriction on affecting module loading, the proposal becomes potentially more powerful, so we might want to restrict it now. However, this question has already been discussed many times during the life of the proposal, so the group does not plan to change our answer: the proposal will keep having the options object, because there are different potential use cases that could come up in the future and we don't want to preclude them. `type` is not the only possibility: for example, TypeScript was considering a resolution-mode assertion for type-only imports, and there have been some requests for this. Hosts have some ideas for how they could use assertions in the future; nothing necessarily specific, but some ideas coming from them that could make sense, for example in HTML, importing some specific part of a module. We saw many use cases raised on the import reflection proposal, back when it was not clear what fits in import reflection and what fits in import assertions. So we decided to keep the proposal at the same level of extensibility it already has. There are some concerns about this, because this would be a full object whose keys and values are up to the host, and this might cause code not to be portable: an import that works in one host might not work in another. However, there are already examples of objects that are completely host-defined; there are many, such as the options of host-provided import methods. There can be problems with this, but there are venues to ensure cross-host compatibility, for example WinterCG and the browsers. And compared to the current situation, where hosts use, for example, custom import specifiers to modify how imports behave, this would certainly not be a step back. We can also impose a minimum level of portability, such as restricting what should happen when a given attribute is defined: the proposal does this by saying that if there is a `type` attribute whose value is `json`, the host must treat the module as JSON. Hosts can define their own attributes, and there can be conventions to make sure users are aware of when they're using something standard and when they're using something that is a custom extension, such as a recognizable marker in the attribute name. For this question, our answer is to not change the current proposal. Second question: what should happen with unsupported import assertions, where unsupported means unsupported by the host? Right now they are ignored, because it was generally okay to ignore an unsupported assertion: the general expectation was that the assertion would pass, and if it's unsupported you maybe get less security, but the code will still work. That motivation doesn't hold with the new changes, though, because now assertions could affect what the module is; an assertion could give you a completely different module. For example, with an import where `type` is just an assertion, HTML could send the server an Accept HTTP header telling the server which module type is expected. If the `type` attribute was not supported, no such header would be sent, and the server could decide to serve, say, a JSON module anyway. The proposed solution is to throw on unsupported assertions. This might make the proposal stronger.
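+
+(An illustrative sketch of the proposed behaviour, written with the `assert` keyword as shipped at the time; the keyword question is taken up just below. The `unknownKey` attribute is hypothetical.)
+
+```js
+// Supported by the host: the module is loaded and checked as JSON.
+import config from "./config.json" assert { type: "json" };
+
+// Previously ignored silently; under the proposed semantics the host
+// throws instead, since an unknown key may change what the module is.
+import other from "./config.json" assert { unknownKey: "x" };
+```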
We believe it's better to throw when there is a problem than to let users hit some strange errors later. And finally: are they still assertions? Well, no. I know that in English "assertion" has a more general meaning than it does in most other languages, but coming from `assert` elsewhere, an assertion is something that should be true, and if it's not true, the execution of the current program or function should just terminate. These attributes are not just checks anymore: they actually affect how modules are interpreted. That's why we're now referring to the proposal as import attributes rather than import assertions, and we want to propose changing the keyword from `assert` to `with`, to reflect that these are no longer assertions. And while that would ideally be the solution, there is a potential compatibility risk: import assertions are already shipping. You can see it's been more than one year; it's been two years for Chrome. So we still think that supporting the `with` keyword is the best choice we have, and that the compatibility risk, while it is there, might be smaller than it looks. Yes, Chrome supports import assertions, but no other browser does, and at least on the web, features supported in only one browser tend to have less usage than features supported in every browser. Node.js has support for import assertions and is waiting for a signal from us: given the changes to the proposal, they are deciding whether to flag them again, or to start warning when they're used, to see if it's actually possible to move away from the `assert` keyword; they are mostly waiting for this meeting to decide whether to flag import assertions or not. And third, Deno: Deno seems much harder to migrate at the moment, because Deno is a server platform where code targets specific versions, and they cannot easily unship import assertions. So how do we deal with all of that? We propose a very slow migration path: start shipping the `with` keyword, while existing implementations also keep the old syntax. We would probably mark the `assert` keyword as normative-optional and legacy in the spec text, so it's clear that it's only there for web compatibility and is not what the proposal recommends. It's not something we can remove right now; maybe we will be able to remove it in the future, if existing usage goes down enough. If that is possible at all, it will probably take years, maybe after the proposal has reached Stage 4. And with this path, we must also be okay with the possibility that we never get rid of the `assert` keyword. We think this is the right choice, even if it means having two different ways to express the same thing, because this proposal is not about assertions anymore. So, those were the changes we made to the proposal since the last plenary, and with them we would like to ask for consensus to go back to Stage 3. I have a few bonus slides; I prepared them before knowing about Luca's presentation on import phases, but I will go through them, because they explain the difference between import phases and this proposal, and how and why those phase modifiers should be something else and not part of import assertions. So, import attributes versus import phases: as you already saw, we have all of these phases that go from resolve up to evaluate, and they're covered by different proposals; maybe we'll not have all of them, but these are the phases we are thinking about. How are import phases different from attributes? First of all, they don't stack.
You can usually make sense of multiple import attributes on the same import; it doesn't make sense to combine different phases. You can't import the deferred source of something. As Luca mentioned before, there is the effect on the syntax: it would be quite surprising if the syntax at the start of the statement were affected by something that looks like an options bag. And again, as mentioned before, these import phase keyword modifiers don't modify what the module is; they modify which phase of it you're exposing. Also, a phase can always be advanced to the final phase, in different ways depending on which phase in the process you're at. Import attributes, by contrast, completely change what the module is. As mentioned before, the import could give you a completely different object, an object completely unknown to ECMA-262, and there is no way to reconcile imports with different import attributes: they're just completely different, potentially completely different modules. Also, if we think about the cache key in ECMA-262 and in HTML: when we have both import phases and attributes, the cache key would only contain the specifier and the attributes; the import phase would not be part of it, because we want different phases of the same import to actually reflect the same underlying module. So, just to recap: we think we should not change how extensible the proposal is, which means hosts that have different needs can support new assertions in the future; however, we think it's better to throw when an assertion is used that is not known to the host. These are not assertions anymore, so we are proposing a slow migration to change the keyword. And lastly, we plan to keep all the other import phases separate from this proposal. That's all. I think we are ready for Stage 3. There is only one potential problem, which is that during the last meeting, when going back from Stage 3 to Stage 2, we forgot to ask for Stage 3 reviewers. I'm not sure how the process works here, given that there were already Stage 3 reviewers from the previous Stage 3 review and we are asking for Stage 3 again.
+
+YSV: Before we get to the Stage 3 discussion, would you like to address the queue? You have a couple of questions there.
+
+NRO: Yes. This was my last slide. I had one other slide: since the proposal is not about assertions anymore, we plan to rename it to import attributes, while trying to minimize the amount of changes we have to make. Let's go to the queue.
+
+YSV: First up we have MM.
+
+MM: So first of all, from what you stated about the current situation with regard to `assert` versus `with`, it doesn't seem like we're stuck where we need to pay all the costs of a slow transition. You said that Node is willing to reflag and is waiting for us. First of all I want to say: yes, please, the signal from us should be "please reflag". Chrome is certainly a substantial portion of the web, but a portion of the web that is compatible with only one browser out of N is not the web; it has never been the mandate of this committee to not break single-browser-specific portions of the web. It's the cross-browser web that we're mandated not to break. Now, you know, Google shipped that in good faith, and I would like a signal of their willingness to not make this a slow transition. That was the first thing I wanted to say. The next thing I want to say is that, although I'm very friendly to this proposal and want to see it proceed to Stage 3, I'm not willing to agree to let it go to Stage 3 today. I'm not sure if that was being asked for, to go to Stage 3 today.
But I'm not willing to do that, largely because of the same issue that was postponed from the previous topic: there's a whole lot of activity around modules and imports and phasings and different languages, this whole multi-dimensional set of considerations. It very much reminds me of the early days of trying to figure out what classes were in the committee, when there were all sorts of ideas and lots of non-orthogonalities; we talked about this in committee, and we talked about having a modules epic. I think one of the things we need to do, to the extent we can, and it looks like we can here, is: until we understand better what kind of process we want for coordinating the overall module space, we should hold things back from Stage 3. Having things go to Stage 2 while we're figuring out the process is fine, but I think all of these should be held back from Stage 3 until we have an overall sense. I'm enthusiastic to see this go forward; I just don't want to agree today to let it do so.
+
+NRO: So, on the HTML integration: yes, it's possible that we will be able to integrate, and it may make sense not to flag. And yes, Chrome is a single browser, but we would still like to be careful with that; the answer will mostly come, I believe, from implementers telling us whether we are able to integrate or not. Regarding the process point: when we started discussing how to coordinate all the module proposals, to make sure the proposals fit together, and to design around single proposals and groups of proposals, import assertions was never part of that discussion, because this is a proposal that defines itself quite well and can be easily detached from all the other proposals. Additionally, we have some urgency with this; we discussed that urgency during the last meeting. This is already shipping in one browser, and the other two browsers are ready to ship the proposal; they're just waiting for us, because we had these last-minute changes that we wanted to propose after hearing back from HTML. So while I agree with you that for the other proposals we should make sure to have proper coordination, I think this proposal should be able to move forward by itself. We have these regular meetings where we coordinate the changes, to make sure that all the proposals are properly considered, that they integrate together, and that they are not creating possible problems in the language.
+
+YSV: Thank you, NRO. I will keep us moving through the queue, because it's long, and people should look at the time: we have 20 minutes left. Next up we have DE; please go ahead.
+
+DE: I would really be interested in hearing your response, YSV, whenever you're ready to give it; I don't see it in the queue. I was hoping that the regular module calls would be serving as this process. People working on each of the module proposals are engaged in these calls, and we did discuss the plan for this proposal and the others there; there's no lack of coordination. If there's more to explain to the committee about this, then that's important to do, but that would need a more concrete question. Also, I disagree that we should only consider cross-browser compatibility issues. We talked about Chrome as well as Deno and Node, and just because cross-browser compatibility is clearly very important
doesn't mean that we can take other considerations off the table. So yeah, I agree with the point about urgency: if we do not progress here, then we risk things staying in an unstable state. If you can make your requirements for the process more clear, then this would be actionable; what do you think should be done beyond what's going on in the regular module calls? Thank you.
+
+MM: Just let me briefly respond to that. I know NRO in particular has a lot of expertise about the overall module design space, and interpreting his comments in the context of my knowledge of his expertise, I am willing to go ahead and agree to Stage 3 on that basis. But in general, I'm reluctant to see things in this space go forward until the overall integration, and some better sense of the process, is understood. Regarding the assert keyword and breaking things: certainly it's a trade-off. Like I said, before deciding to go on the slow path of retiring `assert`, I want to get a reply from Google (I see SYG is on the queue) about whether they're willing, with regard to Chrome users, to agree on not taking the slow path. But Deno, if I understood the slide correctly, is already not shipping `assert`, so there's no breakage there -
+
+LCA: That's incorrect. We are unable to unship at this time.
+
+MM: I'm sorry, I misunderstood; that does change things. So the weight towards the slow path, given that Deno is already shipping, is different than I thought. I'm willing to reconsider that as well.
+
+DE: I want to say, although I agree with your characterization, I don't think we're making the structural error of relying on just a single person. There are at least four or five people on the calls who have a similar cross-cutting understanding.
+
+NRO: I agree with you, MM, that this should get priority for us; maybe not for the whole committee, but at least for the people working on the proposals, to figure out what the process for them should look like.
+
+GBD: Just to mention, very specifically for Node.js: there is a need for an intent from TC39, Stage 3 or not. If we can capture in the notes that the existing syntax will be sticking around, or how it will be sticking around, we can categorize that in some way for projects to understand how they can move forward, because that needs to be clearly communicated at this point.
+
+NRO: Was my presentation clear enough that we just need to summarize it, or do you need me to clarify that?
+
+GBD: If we can just include that in the summary, I think that would help a lot.
+
+YSV: We have one response from SYG.
+
+SYG: So, to MM's question: the lack of cross-browser support is one signal on unshipping anything; it is not sufficient to unship stuff in Chrome just because it lacks cross-browser interoperability. This has shipped for basically almost two years, and given that amount of time, we don't have a choice: we would be doing a disservice to Chrome users if we do not do the slow-path transition. We need to start collecting use counters for how much use of import assertions there is in the wild. Those use counters need to hit stable release, and I think Chrome releases are every six weeks; it takes a week for them to reach the largest population, to give us a real picture of how widespread usage is and whether we have to live with the `assert` syntax for the near future. Do we even have a choice there? Because of how long it has shipped unflagged, there is no way we can just rip the Band-Aid off.
+
+MM: Thank you for that clarification.
I remove my objection to the slow path.

SYG: There is – going to the next topic as well, because it’s basically asking about the unshipping stuff. As I understood from the presentation, there was a semantics change for the throwing on unknown assertions. So what is your proposal for the slow path – when you use the `assert` syntax, does it retain the old semantics?

NRO: Right now the only import assertion in use is `type`, so changing to throw on unknown keys would very likely be 100% compatible, and we don’t believe there would be any compatibility problems with this. So the proposal as of right now uses the new semantics for the old syntax, because that’s not where we see the risks.

SYG: Also, don’t we currently throw on unsupported keys?

NRO: No, unsupported keys in the current implementations are just –

SYG: I’m being slow; explain again why you think the compat risk is low enough to change to the new behaviour.

NRO: The only assertion effectively used anywhere is `type`, and there is no code using other assertions, just because they don’t work in any host.

SYG: Wait, so the hypothesis is that `type` is the only thing that’s used, so there is no breaking change to the behaviour, and throwing would not break code in the wild because we can’t foresee a reason why somebody would have even shipped that code.

NRO: Exactly.

SYG: Okay. Okay. So all right. I think Chrome’s position is we can live with the `with` keyword. I would like to impress on the rest of the Committee that it may be possible we are just going to have `assert` for a while – even in perpetuity – because it has shipped for two years. My personal position still stands: I will be fine with not changing the keyword. The semantics change I understand. But, yeah, I am willing to squint and just think of `assert` as the broader English word “assert”. I might be in the minority here. But otherwise I can live with stage 3 with the slow migration path, though I think it’s an actual open question whether we can get rid of the `assert` keyword.

YSV: All right, thank you, SYG. Do we have responses or questions for SYG?

DE: Yeah, the next item – so I think this hypothesis that NRO mentioned, that people probably will not be using assertions in ways that would break here, is testable, and if we find evidence to the contrary, then this proposal could be changed to include a little bit more complexity in that case. I mean, I would be open to adding use counters for this to see whether that kind of thing is encountered.

SYG: If I may interject: just now, is this on the unshipping topic, or the migration topic, let’s say?

DE: Yeah, this was specifically about the error thing. I am not talking about the `assert` keyword.

SYG: Okay. Okay. Maybe I will add it to the queue. I’ll add it to the queue.

DE: Going back to GBD earlier, I definitely liked the idea of being explicit in the notes about what the idea for the slow migration path is. Are you happy with the way that the proposed explainer would handle this?

GBD: Seems like that would satisfy the questions from Node.js, as far as I’m aware. It would be worth investigating whether the new semantics will affect Node.js – we have already exposed assertions to Deno, and we don’t know what the usage of that is, so I don’t know if the new throwing behaviour for unknown keys is going to affect some of that existing usage and how that will play out.
But at the very least, having this intent that the `assert` syntax is going to stick around is what Node asked for.

JHD: Node has already gotten that signal from V8: the `assert` syntax will remain for a while no matter what we do in this room.

GBD: Right, but they want it from TC39.

DE: Makes sense.

YSV: Okay, SYG, has your topic been covered – the one that’s currently on the queue regarding existing semantics?

SYG: Yes, it has. For the notes: the answer from NRO was that the `assert` keyword will have the new throwing semantics, not the existing ones.

YSV: Okay. The next speaker then is DLM from Mozilla.

DLM: Thank you. We have discussed this as a team and we’re happy with the direction that this is taking. I think it represents sensible design compromises; we’re happy with the `with` keyword, and we definitely support this for stage 3 if that’s up for today.

YSV: Thank you. So, I have my own topic. I think this is better covered in the overflow for the next segment. I did post the slides that were intended for Committee to the chat, if anybody wants to take a look at them; they’re of course missing context. There is not much new in terms of the layering proposal work that KKL has done, if anyone is unfamiliar with that. Rather, it was trying to put what he was trying to do into better context and make it less overwhelming – trying to simplify it, name it, and build a couple of rules around it. I can of course take folks through the high level of what that looks like. I presented at a small conference about some of the thinking around layering that had been going on, because it was something that was discussed in Committee and at length in the module meetings. But I also have some of my own thoughts about how this should be organized and what the guiding principles should be, which I think are quite strong; whether or not the Committee adopts them, I will probably be writing a blog post taking the example of continuation integration in Scheme and how the design decisions made there can influence the future evolution of a language – comparing the various ways that, for example, tail calls, continuations, delimited continuations, and throwing exceptions have been implemented, with different maps for each one of those. If you look at the slides you’ll get an idea of what the maps look like and how they are organized. So the blog post would be partially for my own interest, but could also inform the work of the Committee; it doesn’t have to. This aspect was not something I presented at the talk. So I don’t want to take up any more time, and I’ll move on to the next topic. Go ahead, DE.

DE: So, okay, about the topic of Node flagging: I think the policy that Chrome and Deno have decided to apply sounds good. I take it that, although Node wanted to have compatibility, they also wanted to get a signal from us on whether to re-flag the `assert` keyword. They’re saying in the chat they already flagged it – okay – but I do not think it should ever have been on the table to even try to listen to us here. Even without the multiple-browser state, compatibility is an important thing that we shouldn’t be casting doubt on.

NRO: The flag came up from Node.js and not from us – they reached out to us saying, hey, we are discussing whether to flag, and we are waiting for the outcome of the next meeting to be able to make a decision.
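For reference, a rough sketch of the two syntax forms under discussion; the module specifiers and attribute values here are illustrative, not taken from the proposal text:

```js
// Legacy syntax, shipped unflagged in Chrome and Deno:
import configA from "./config.json" assert { type: "json" };

// Proposed replacement syntax:
import configB from "./config.json" with { type: "json" };

// Dynamic import carries the attributes in an options bag:
const configC = await import("./config.json", { with: { type: "json" } });

// Under the updated semantics, unknown keys are an error rather than
// being ignored, for both spellings:
// import bad from "./data.json" with { unknownKey: "value" }; // throws
```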
DE: So as long as we’re including in the conclusion what –

GBD: There could be another clarification, if we have more time, which is how long we expect this legacy deprecation process to exist – for Node.js compatibility, what the deprecation cost is. Will this just be something that sits in Annex B, potentially forever, or is this going to be a slow deprecation? Because the flagging decision depends on how long the `assert` syntax is going to remain available.

DE: So I would say that this is at least a year-long, likely multiple-year-long path. I think the presentation was a little unspecific about that.

DE: But that’s how I’ve been picturing it.

GBD: In that case, we would still need it to be flagged for Node 20, because that would be too little time.

DE: Years –

GBD: Yeah, multiple years, and ideally indefinitely for Node (inaudible).

DE: Yeah, I would kind of expect it to be multiple years for –

YSV: I would like to jump in here, because we only have a couple of minutes left and a few more things in the queue. Would you folks be able to continue this offline?

DE: It’s relevant to the conclusion, but we can stop this discussion now.

YSV: If it is relevant to the conclusion, I’m happy to allow it to continue; I just wonder if the other topics on the queue might need a little bit of air time for discussion.

DE: Let’s go on to the other topics, but we will kind of –

NRO: I think for the flagging we don’t have a conclusion, but it’s not necessarily relevant to the stage 3 request, so it might be okay to defer that.

DE: Let’s continue down the queue.

YSV: This was SYG’s topic on hearing from tooling people; I will possibly skip SYG’s reply and go to JRL on the tooling concerns with removing `assert`.

JRL: Hi. So, my personal opinion is that we should stick with the `assert` keyword, but I’m fine if we switch to the `with` keyword given the slow path. It’s not just browsers, and not just Node: the tooling ecosystem added `assert` parsing support. If we were to immediately change `assert` to `with` in tooling, it would cause all kinds of headaches for users of bundlers, linters, and transforms. The ecosystem adopted this syntax. If we take the slow path here, where the current `assert` syntax is still valid, all the tools that parse it can still use it and all the transformers can still access it – that’s fine. If we have some form of deprecation, a warning telling people to switch to the `with` keyword, that’s also fine. My big concern is just that we continue to support `assert` for some indeterminate amount of time into the future.

YSV: Okay. Thank you. And we have two – sorry. JHD, is your response to this topic?

JHD: JRL, yeah.

YSV: Please go ahead.

JHD: Yeah, I just – I mean, when you talk about the slow path or fast path, it’s not something I think TC39 has the ability to dictate. Implementations do whatever they decide to do, and I would imagine that if I maintained any implementation that supported the `assert` syntax, I would definitely have a period where I supported both `assert` and `with`, I would provide codemods and deprecation warnings, and at some point I’d drop the `assert` syntax when I felt like everyone had moved over. I hope that’s what most of the folks will choose to do, but that is up to them. I don’t think we need the `assert` syntax in the spec in order to make that happen or allow that to happen.
I think it is inevitable that any ecosystem, or smaller chunk of an ecosystem, that depends on the `assert` syntax will continue to support it in the tooling until they feel like everyone has migrated.

SYG: Wait, you’re saying –

JHD: I am saying it’s time that we message – suggest that folks try to migrate to the new syntax, but I don’t think the spec needs to specifically talk about the `assert` syntax to do that.

SYG: The spec doesn’t need to talk about duration, but it needs to talk about the `assert` syntax, because the spec should reflect reality. Like, if you’re saying that in this space we should all just be non-compliant, that is a take, and I have grave concerns with that take. But I thought that was not on the table.

DE: Yeah, I agree with SYG that we can and should coordinate expectations here. That is a valid function of the Committee; the role of the Committee is to have this shared standard language. I think it’s totally within scope to have in the conclusion these suggestions for the migration path – which obviously we can’t enforce – and to have the legacy syntax in the spec, so that we have a clear, unambiguous thing to parse, and it would be explicit that this is an optional legacy subset for those that want to support it.

JHD: I don’t think we would want new implementations to normatively add support for the legacy syntax.

DE: This is why the text has a special note in it that explains this is legacy and that we’re not encouraging it. It is useful to document these things. In the past when we tried leaving things out of the spec – this happened in ES2015 – it led to confusion. It is important –

JHD: The `assert` syntax isn’t in the spec now, and yet a number of implementations have implemented it. So surely we can point them to something outside the spec that tells them –

DE: Historically we know that doesn’t work as well. We have had discussions in TC39 in the hope that implementers would see those as an adjunct to the spec, and that did not work well as a communication tool.

GBD: I agree that making runtimes non-conforming with the spec would be a bad idea.

DE: Okay. So I wanted to discuss something different, which is that we have many other tools out there. In addition to JRL here – I’m glad to hear that JRL is okay with this path, even if it is not his preferred option – for example, NRO, in your Babel capacity, what do you think about this?

NRO: I would not want the fast path. We would start implementing the proposal at the same time and let our (inaudible). For the way Babel works, we don’t care about this proposal’s semantics: because we do not run the modules, we just provide parsing support for it. However, we also talked with Parcel, Vite, and others, and they are generally happy with the direction the proposal is moving, because it addresses something that, they said, we as a Committee were not providing to the ecosystem in the past.

DE: And we have other tools here, like TypeScript.

DRR: We’ve started a direction where we have tried to introduce deprecations, saying that within five versions you will have the opportunity to suppress these, and then they’re just gone after that. That seems like it is in sync with the slow migration path. We’re interested in trying to see how reasonable that is, so we can try to take that on, probably by TypeScript 5.5. That is definitely slow enough as a migration path for people, and eventually it would kind of fade out.
Whether or not it’s in the spec in some editorial fashion, I think the preference that we have is just to explain this, so that people know where this thing came from and it doesn’t only exist on a random MDN page or something like that. That seems like one of the major discussion points right now. I can’t get much more clear than that.

DE: But in general, are you up for the current direction overall of everything?

DRR: I think so. I think we would prefer to go with the better design over time – or, the conceptually better design had we had all the context from the first step. The `assert` keyword – sorry, the `with` keyword. Sorry.

DE: We’ll eventually get it right.

YSV: So, just to be clear, we’re currently over the time box, and I’ve spoken with the other facilitators and we’ve chosen to prioritize this topic for now. There’s an additional 15 minutes left this morning, and we can return to the topic that I interrupted earlier if this one is complete.

NRO: I would like to ask if there is support for stage 3. I’ve heard mostly support, and the objections have been resolved. Some minor points of discussion remain – well, one is about the `with` keyword, but that doesn’t really impact what is written in the spec or not. So I would still like to ask if we have consensus for stage 3.

YSV: We had objections, but I believe they were retracted. Does anyone have an objection they would like to raise at this point?

DE: Do we want to ask for support and non-blocking objections?

YSV: Yep, this would be the appropriate time to give explicit support. We have had explicit support from Mozilla and a couple of other groups, and additionally we’ve had some concerns raised, if anyone wants to voice those now for the conclusion.

MLS: So, I’ve heard some concerns from people who want to keep the existing keyword. I do not think we resolved how we put `assert` in documentation; I think that JHD’s idea is in some way good – the documentation can be quick to deprecate it even if implementations are slower. Certainly I would imagine, with all of them, fairly quickly. I have a concern that SYG has spoken to: he says it has the possibility of staying in perpetuity. I would like to hear what his comments are. I’m just concerned that – I don’t want to bifurcate implementations by having the old way and the new way, where the old way remains acceptable. That is my concern.

DE: MLS, what do you –

SYG: Do you want me to reply about the perpetuity thing?

MLS: Yes, please.

SYG: I just meant that – so, I think the conversation has gotten more complicated. I think it would be a disservice to the web and JavaScript if we don’t have this in the spec; that is my starting point. With that stipulated, I think we need to own up to the possibility that we cannot unship this. Personally I believe that’s not likely at this point, but in the unlikely event that there is widespread use today – okay, “perpetuity” is shorthand here for many, many, many years. I’m not saying actually in perpetuity, but more than, like, two years. And we should own up to that possibility. But I don’t think we need to talk about duration in the spec anyway. That’s all I meant.

MLS: So the concern I have is that if we have perpetuity – which to you is, okay, longer than two years – the spec is there so that we have compliant implementations, so that we allow broad compatibility between implementations.
And if we have `assert` in this normative-optional legacy category, or whatever it was put in on the slide, doesn’t that effectively mean that other implementations need to add `assert` as well as `with`?

SYG: Hopefully not.

MLS: Because if we do not, then we’re not compatible with this broad number of pages on the web – or, in the case of Node or other things, the broad code base that supposedly may use `assert` instead of `with`.

SYG: Right, two thoughts there. One is, if it is true that there’s a large corpus on the web that uses that keyword, that is a fact of the world, and it is a product decision for each other implementation whether to be compatible with it, regardless of how much they dislike the keyword. That is first-mover risk; that is the world we are in. That is a fact of the world. The second response is that hopefully the combination discourages new implementations from doing that. If we don’t think the combination of normative optional plus legacy serves that purpose, maybe we need stronger editorial nudging there. But that is the point of why it’s normative optional and legacy.

DE: MLS, what do you think of the normative-optional legacy marking in this document?

MLS: I would call it ‘deprecated’ myself.

DE: Sounds good to me.

MLS: The thing about normative optional is that it basically means, okay, all the other implementations should ship `with` – and also the normative-optional syntax, because it’s going to be supported by other implementations that we need to be compatible with, for an unknown but not small period of time.

SYG: But I mean – okay, I want to take a step back here. I feel like it’s easy to scapegoat V8 and Chrome here. The important thing is: why are we interoperable? We’re interoperable for a better set of web programs, right?

MLS: Yes.

SYG: It’s to support the world. And if the reason you want to be interoperable with Chrome, it must be because it becomes clear that it’s a fact of the world that there is widespread use of the existing syntax. We simply do not know if that’s a fact of the world today, and as I was saying earlier, because of the time it takes for use counters to roll into stable, it takes a while for us to find out. But our decision-making follows from whether we believe it is a fact of the world that there is widespread use, and if there is, then it behooves other implementations to be interoperable. That’s a better basis than just saying we don’t like the English word “assert”. I say this not as – like, there’s no personal opinion here; there’s just an empirical thing to find out: is this true? If it is, we do one thing; if it’s not, we do the other thing.

MLS: I think you’re implying, then, that it’s your preference that we keep `assert`, and that other implementations should probably also ship `assert`, irrespective of whether they ship `with`.

SYG: Probably – okay, the second part doesn’t follow. The first part is still true. The first part is still true, but that is not what Chrome is doing. What I propose Chrome do is find out whether this fact of the world is true or not, and because we are all agreed in this Committee that ideally we would like to only ship `with`, we are tentatively optimistic that it is not widely in use and that Chrome can unship it in the future as well. So I don’t think we would recommend other engines ship `assert` today, if they have not already shipped it.

MLS: How long would it take you to find out the use –

SYG: Confidently, the timeline I think is three releases.
Let me pull up the schedule here. So, the current schedule is that we are on 114. We add a use counter for the syntax; we wait two to three months for it to hit stable; then we get our first data point on what the incidence rate is in the wild, with the stable population. If it looks sufficiently low, then we will send an intent to unship and start deprecating, and if it looks not sufficiently low, then we need to have a new conversation about next steps.

MLS: Good information to have. I don’t think the Committee can ask you to do that, but I think it would be good information to have.

SYG: I am volunteering to do this because it is clear to me from this conversation that the Committee would like to have only `with`, despite my and folks like JRL’s personal preference. If that’s the ideal end state, I want to see how likely it is we can get there, but I want to go into it with eyes open that we might not be able to get there, because it’s already shipped.

YSV: So as next steps for this part, SYG, you’re volunteering to add a use counter to see what the current web usage is.

SYG: Yes.

YSV: I think that’s a good conclusion for that for now.

MLS: I appreciate that, Shu.

YSV: We have six minutes. I see that LCA has a plus one for stage 3 with the slow migration path. If you want to speak, please go ahead.

LCA: Yeah, explicit support.

YSV: JRL, would you like to speak about this topic of supporting both?

JRL: Yeah. I do have a response for MLS here. If we get to the end state where we can’t unship `assert`, and we are in support of changing to the `with` keyword, is it so bad that we have both in your parser? Like, the semantics should be the same; it’s just the keyword to parse, and you can have both of them. Is that a significant enough burden that you think we should not change to the `with` keyword, or are you okay with having both – adding support for `assert` so we can serve the larger ecosystem if there’s a significant number of people already using `assert`?

MLS: It’s not the parser that’s the issue. It’s that the standard has two ways to do the same thing, and how we got here – the history – dictates that they both exist, so on and so forth. And I think we want to be clear as to what the future is, and as JHD said, `assert` is not in the standard now. Effectively we’re going to add it to the standard, but also going to add `with` to the standard. Seems like an interesting way to run a standard.

YSV: I believe we also have a response to this topic from DE.

DE: Yeah. The hope is that we all transition. I think the text tries to achieve this, but I think we can also make edits to it, like changing ‘legacy’ to ‘deprecated’. I agree with MLS that it’s important that we give a clear signal, and I think this text expresses what we know now.

NRO: Like, I’m happy to change the wording to something different from “normative optional, legacy” to “deprecated”. We hope that “deprecated” shows this is syntax we don’t want, and that we want implementations not to add support for this syntax – and we can then relax that if we find out that, like, 50% of the web uses import assertions and browsers that don’t yet ship it want to implement the old syntax. For now, I’m happy to come up with stronger wording than just “legacy, normative optional”.

YSV: JHD has a +1 to DE and a message. DE, you have a new topic: comparison with web specs which had early shipping.
DE: Yes. I think although this situation is new for this standard, it’s not new for web standards. It’s very common that one browser ships something earlier, and then the browsers collaborate on figuring out what the standard is, which ends up differing from what shipped. In those cases the standard does usually just reflect the final good version, but then there’s often this sometimes very long and gradual transition, which sometimes requires other browsers to implement the legacy version and sometimes not. I think it’s great that we’ve generally been able to avoid this state, because we do a lot of thought ahead of time and a lot of coordination ahead of time, and we should continue that. When we find ourselves in this state, it’s not especially unusual, and it doesn’t imply that the transition is not possible. Custom elements v0 were initially shipped by Chrome, but then, with collaboration with Firefox and Safari, significant improvements were made. We should see this in the same kind of light. Globally speaking, for browsers, it is not unprecedented; transitions happen. It is different from the tendency in those cases to only have the standard include the final good version, but given the various different parties that have to be involved in this transition, I think it’s good for us, at least until this transition is completed, to document both versions. Yeah. So it is a slightly different strategy, but overall, globally, in terms of what is shipping, totally normal.

YSV: I will move us along, because I believe that the champion still wants to get a result here. JHD, do you want to speak or –

JHD: Yeah, I will be brief. I support stage 3. I have a non-blocking preference that we omit `assert` from the spec. I’m fine if we come up with a stronger category, and even better if it indicates that this section will be removed in a future version of the spec if possible – because then it’s clear that once it’s unshipped from everywhere, if it can be, we would hopefully be able to delete it from the spec entirely. If we made that clear in the document, that would be a nice thing to have.

YSV: Thank you. So NRO, I want to give the floor back to you with a quick summary. There have been a number of expressions of support for stage 3 from various parties. There has been a concern – MLS, correct me if I’m wrong, this is a non-blocking concern – with regards to shipping both `assert` and `with`, due to the fact that we don’t have `assert` currently in the ECMA-262 spec, and preferably we wouldn’t have it. Chrome offered to include a usage counter to see what the burden would be to do a transition there; however, they expressed doubt about whether it will be possible to unship `assert`. There have been comments that people would be okay with shipping both, although shipping `with` alone would be preferable. Is that a correct summary of what we’ve had so far?

MLS: Yeah. It would be my preference, as JHD said earlier, that we not include `assert` in the spec, but that other documentation by implementations would be used for that. I’m not going to block on that. I do appreciate NRO wanting to use something like ‘deprecated’.

YSV: Thank you. NRO, would you like to ask for stage 3 again, or ask for a conditional stage 3 based off the information you’ve gotten here?

NRO: I would like to ask for stage 3, with the agreement that we update the wording to be stronger than just “normative optional, legacy”.
YSV: This would be a conditional stage 3, on updating the wording to something stronger – to ‘deprecated’, for example?

NRO: Yes.

YSV: Are there any objections to this?

RPR: Conditional due to lack of reviewers.

YSV: Do we have available reviewers who would be able to take a look at this?

NRO: I could not find the reviewers from the previous advancement; I checked the notes and nothing was captured. So I would like to ask for stage 3 reviewers.

JHD: I’ll be happy to review, but I’ll want more reviewers to review the grammar parts.

NRO: Thank you.

YSV: We have one volunteer. Anyone else who can do it?

JRL: I can also review it.

YSV: That was JRL and JHD. I believe we usually have three, but two has been okay in the past. Do we have a third? Then I believe we are going with two reviewers for now – is that fine? Okay, I’m going to assume that’s fine since no one said it wasn’t. In this case, we’ve had no objections to conditional stage 3, and we have two reviewers. The advancement is conditional on the reviewers’ reviews, and additionally on the changes that were discussed earlier. And I believe – sorry.

DE: Just to be specific: on changing ‘legacy’ to ‘deprecated’, or something stronger implying future removal. We discussed many possibilities, but that’s the conclusion.

JHD: Yes, and the reviewers are JRL and myself, and also the editors will review it.

YSV: Yes. Then we are done with this topic. Thank you very much. I believe we’re moving to lunch.

RPR: Thank you for staying longer than originally anticipated, Yulia. That was very helpful. Please can someone share the notes? I think it’s important that we write down this conclusion – let’s just do that. Not everyone needs to stay for that. There are tacos in the next room. Please enjoy, and we will resume on the hour at 1 PM. Scroll up in the notes.

### Summary

Import attributes are the path forward for the standard, having re-achieved Stage 3:

- The keyword is `with`.
- As previously, there is an options bag following it.
- The options can form part of the interpretation of the module and its "cache key".
- Unknown attributes in the import statement cause an error.

Although a couple of delegates would prefer sticking with the keyword `assert`, the majority preferred switching to the long-term optimal solution of `with`, which is more semantically well-aligned. Significant debate focused on how to communicate the deprecation.

### Conclusion

- `assert` will remain in the specification, marked somehow as "deprecated", with the intention to remove it eventually, though with an anticipated timespan of at least multiple years before final removal.
- JS environments which currently ship `assert` are _not_ encouraged to remove it, but environments which do not yet ship `assert` are discouraged from shipping it.
- Chrome will gather data on usage of `assert` on the web, which can inform the deprecation path.
- Conditional consensus for Stage 3 on this proposal, with the conditions:
  - Reviews are still needed from the reviewers who volunteered – JRL and JHD – as well as the editors.
  - The wording for "normative optional + legacy" needs to be updated to something stronger, probably "deprecated", explaining the goal to remove it from the specification.
## Async Explicit Resource Management

Presenter: Ron Buckton (RBN)

- [Proposal](https://github.com/tc39/proposal-async-explicit-resource-management/)
- [PR](https://github.com/tc39/proposal-async-explicit-resource-management/pull/15)

RPR: There is one thing we should announce at the start from RBN: on async explicit resource management, he may potentially be able to bring that back for an overflow tomorrow, maybe for Stage 3, but in order to do that he’s looking for an editor review on the PRs that he posted earlier in the delegates channel. Ron, did you want to clarify this any more?

RBN: Yeah, I can, and I’ll also repost them. I put together a set of two pull requests against the spec in the async proposal. One is a cover grammar for `await using`, and one is a cover grammar for `async using`, so we can investigate any potential syntax complexity and see how that sorts out. So if anyone is able to spend some time looking at that, please provide feedback on the cover grammar. If we can get something we think is satisfactory, that meets the same normative semantics that we were expecting with `using await` and matches the syntactic change to `await using`, we’ll try to see if we can bring that back tomorrow.

### Conclusion

SYG, WH and the editors to review the new grammar for `await using` syntax, to be proposed for consensus tomorrow.

## Iterator.range for Stage 2

Presenter: Jack Works (JWK)

- [proposal](https://github.com/tc39/proposal-Number.range/)
- [slides](https://docs.google.com/presentation/d/1ecfsO-KyLs5UFxbFQ9RWXIDp8kycul6NZXQPZr71BCo/)

JWK: Let’s have a quick recap of this proposal. As you can see on the slide, it’s a simple proposal that returns an iterator, designed to be used with the iterator helpers proposal. One of the big problems in this proposal is: should it return an iterator or an iterable? This is an endless discussion – we have discussed this for about three years – and my final solution is to rename the API to `Iterator.range` to make it less likely to be misused. Previously: `Number.range` and `BigInt.range`; now: `Iterator.range`, where the `Iterator` global object is brought in by the iterator helpers proposal. I hope we can get to Stage 2 this meeting, since iterator helpers have been at Stage 3 since November. Also, we have another issue recently raised by JHX: should we allow floating point numbers? Mostly, the reason to ban floating point numbers is the precision problem: the developer might write code that works by accident and eventually hits a bug for some numbers. I am okay with both options; I wonder what the committee thinks?

RPR: The screen share has stopped.

WH: Your connection is breaking up. We did not hear the last few sentences you said.

JWK: Sorry. I said I am okay with both options of allowing or disallowing floating point numbers, and I wonder what the people in the committee think. And that’s the update on the `Number.range` proposal. Are there any questions?

CDA: Are you able to put the slides back up?

JWK: Yes.

CDA: Thank you.

KG: I see MM is on the queue for talking about floats.

MM: Yeah. So I just want to recount a bit of a back and forth on the issue list between myself and, if I remember correctly, TAB. Is TAB in the room, by the way?

KG: No.
MM: I think we should – if we allow floats, which I’m okay with as well – then I think we should, in the explanation of how to use this, recommend that the preferred way to express things like the example that was shown is still to use integer inputs, and then to use a `.map` in order to divide the integers by the right denominator to get the floating point outputs. Usually you can get the effect you want reliably by using a map and doing the divide to produce just the floats that you desire on the outputs. TAB raised some examples that don’t really fall into that, where the natural thing really is to have float inputs. I don’t dispute that. So my only request is that, if floats are allowed, additional explanation of how to use this robustly be included.

JWK: Yes, we’re using multiplication instead of addition in the spec to make things easier, but there are still some precision problems.

MM: Right. There’s the issue about what the iterator does, and using multiplication internally rather than addition is fine, but it doesn’t solve the precision problem. My statement was simply to recommend patterns of use that completely avoid the precision problem, by doing the floating point normalization on the outputs rather than the inputs.

JWK: That will be strange if the input is already a very, very small floating point number. And that might be surprising.

MM: No – the recommended pattern of use is one where the inputs would only be safe integers. And it’s just a recommendation: I’m not recommending that we prohibit floating point inputs, just that we explicitly recommend that people don’t use floating point inputs unless it’s really compelling for their case.

JWK: Yeah.

CDA: Next in the queue we have WH.

WH: MM, how would your approach solve the example given in the presentation?

MM: Simple. Instead of 0.3, it would be `input.map(x => x / 10)`.

WH: The example that was on the screen was range(0, 1, 0.3) producing 0, 0.3, 0.6, 0.8999999999999999.

MM: So multiply the inputs by 10, and divide them by 10 on the outside.

WH: The example in the presentation had 0.3.

MM: So it would be 0, 10, 3 and then map –

WH: Where did you come up with 0, 10, 3 here? That’s not the example.

MM: So humor me. I’m probably misunderstanding something basic. If you wrote `range(0, 10, 3).map(x => x/10)`, what would that yield? Okay. Go ahead.

WH: There’s also a 6 in there.

MM: So, for the – I think the problem with the first line is that the output that was intended was something more like the actual output of the second line. So if you rewrote it as the second line, you would get the outputs that you were probably intending. Am I not understanding something?

WH: Sure, you can write something differently, but you were talking about floating point accuracy and doing this approach, and I don’t see how it solves the first example.

MM: I’m not suggesting that the first example’s output be different. I’m suggesting that we have explanatory text recommending that if you thought you wanted to write the first line here, you should actually write the second line here, because it will probably give you what you are actually looking for.

WH: You’re just asking for an informative note in the spec?

MM: That’s right. That’s right.

CDA: Okay. Next up, I think we have JHX in the queue.

JHX: Okay. JWK, could you go back to the top.
This issue – from the discussion of this proposal, I realized that there’s no robust usage of a floating-point range. I list three problems here. The first problem is what we already saw: for the people who write a range from 0 to 0.9 with a step of 0.3, what do they expect to get? I think most developers expect 0.9, not 0.8999…. Okay, I understand that this is just how floating point works, but the point here is what developers expect. So this is the first problem. The second problem, which I think is much more serious, is that range(0, 0.9, 0.3) actually gives you four numbers, but people expect three numbers, because the range end is exclusive, so it shouldn’t include 0.9. But because the last value is 0.8999…, we get an extra number here. This is a much more serious problem. And the third problem is that it’s very easy to write a program that appears to work, but if you change the arguments, or if an argument is computed, the behaviour is hard to predict. So I think this is the problem that we face. Yes, we can support floats, but it’s hard to say there is any robust usage of floating point. So, once again: would we really like to provide such an API to developers if most usage of it will not meet the expectations of average programmers? That’s my point.

CDA: Okay. I think next we have SYG.

SYG: I think this – so, I was trying to understand why we don’t disallow floats. Asked the other way: I was wondering who is advocating for allowing floats, given the footgun we have discussed. I heard TAB. So I guess my two questions are: who is advocating for floats, and what are the use cases for floats?

JWK: Floating point numbers – that came with this proposal from the first day. I didn’t really think about this too much.

SYG: Okay. And to say a little bit about what’s been happening in the Matrix chat: KG found that, one, Python’s range does not support floats. It seems like it does a hard coercion, such that if you pass 0.1, the step is actually 0. And two, the main counter-argument in the issue came from TAB, which can basically be summarized as: including floating point steps is fine and reasonable, people want to do it, and if we don’t allow floats, people will be angry and will just write their own. My reading of the Stack Overflow question about the Python range – and TAB’s reading as well – is that people aren’t angry: they get told about this footgun when they find out that floats don’t work, which seems fine to me and makes me lean towards disallowing floats.

JWK: Okay. It looks like we don’t like floating point numbers in this proposal, so I will update the spec to disallow floating point –

SYG: Hold on. I’m not sure we have consensus that we should disallow floats. I was not calling for consensus; I was asking if there are advocates on the other side.

CDA: Next on the queue we have JHX.

JHX: About use cases – actually, I tried to find some use cases of floats in range, and I can’t find many. First, in many cases it seems people actually want not floating point but fixed-point numbers, or they might want decimal. And there might be some cases in – JWK, please go to the top. I wrote about a use case about when you have – yeah.
When you have charts, you may want to calculate the x and the y, and you don’t care whether it’s very precise. But in this case it seems some other API, like linspace, is much more suitable. So I myself can’t find a solid use case for floats in range.

CDA: Just a quick time check: we’re just under 10 minutes left. Try and keep that in mind. Next is WH.

MM: No, next we have me. Quick point: I’m okay either way, but if we decide to disallow floats, then we should also decide to disallow unsafe integers. If we –

??: Yes.

MM: Okay. Good.

WH: I’m not okay with disallowing floats. It seems gratuitous here. There are many examples for which floats will work just fine. Truncating them or banning them will lead to worse confusion.

JWK: What if we use `.map(x => x / 10)`? That will also produce an iterator of floating point numbers, but with less surprise.

WH: What do you mean, divide by 10?

JWK: Divide by 10 – this one, the latter one. This will be less surprising.

WH: Yes, you can write that, but if users write the first one, then that’s what they will get, and that’s fine. We should not try to fix that.

MM: Why not?

WH: Because fixing it would be incorrect. What looks like 0.3 is not exactly 0.3.

MM: Fixing it by disallowing float inputs – why doesn’t that fix it?

WH: Sometimes you want to use floating point numbers. That’s fine.

SYG: But can I get an example? I don’t even need a very comprehensive set – just one example.

WH: A range from 0 to π in steps of 1. There’s nothing wrong with that.

SYG: Yeah, but what do you use it for if you want a non-integral floating point step?

SYG: Yes, I guess my main issue is with the step. Thanks for teasing that apart.

WH: I didn’t use a non-integral floating point step. It’s a perfectly sensible thing to do.

SYG: I’m really confused. I don’t think anybody is saying that shouldn’t be allowed.

WH: What I’m hearing is that people are saying that shouldn’t be allowed.

MM: That’s what I took both your and my preferred positions to be – that it shouldn’t be allowed. Now, I could imagine that we don’t allow it for the base or the step, but we do for the limit. That would be a little bit weird, but it wouldn’t hurt the reliability.

JWK: Or can we disallow it at first, and if we find developers really need it in the future, we can add it back?

WH: No, we should not do that.

PDL: Is there a reason why we have to do the mathematics in floats? As in, the input can be floats, with all three fields allowing floats, but the addition can be done in the mathematical realm and then cast back to a float on the result.

KG: That is what happens. When you write 0.1, that doesn’t mean the real number 0.1.

PDL: I realize that, but only in a float.

KG: You’ve written 0.1 as the argument, and the computer has not received 0.1; it received the closest floating point approximation of 0.1.

PDL: And if you cast that to a double – now you have cast, then you do your math, and you recast it back to the float.

WH: If you do what?

KG: Right, and then you get the output on the first line. That is what happens. The thing you’re describing is how it works. The computer doesn’t ever work on real numbers. But, like, the problem –

PDL: I’m saying we should be working on real numbers. Can we do the math in between on real numbers? The way it works is when you type 0.1 –

KG, JHD: No.
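To make the footgun concrete – a minimal userland sketch of the semantics under discussion (`Iterator.range` is not implemented anywhere yet, so this generator stands in for it, using the multiplication-based stepping JWK described):

```js
// Stand-in for the proposed Iterator.range (exclusive end).
// The proposal computes start + i * step rather than accumulating,
// which avoids compounding error but not representation error.
function* range(start, end, step) {
  for (let i = 0; start + i * step < end; i++) {
    yield start + i * step;
  }
}

// The surprising output WH quoted from the slides:
[...range(0, 1, 0.3)];
// → [0, 0.3, 0.6, 0.8999999999999999]

// JHX's second problem: four values where three are expected,
// because 0.3 * 3 is 0.8999…, which is still < 0.9:
[...range(0, 0.9, 0.3)].length; // → 4

// MM's recommended pattern: integer inputs, normalize on the output:
[...range(0, 10, 3)].map(x => x / 10);
// → [0, 0.3, 0.6, 0.9]
```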
CDA: Quick point of order: can we avoid speaking over each other, so that I and others can understand who is speaking and what’s being said? Please utilize the queue as well. Next we have EAO.

EAO: No need to speak, just what’s on the queue: not allowing floats encourages, for example, division in a map, which produces better final results for the user. End of message.

USA: I have to first admit – I mean, I understand that there might not be very strong use cases for the floating point case, but I agree with WH in that, you know, this is nothing specific to range; this behaviour exists everywhere you use numbers in JavaScript. So I’m not sure why we’re trying to protect developers from what is well-known behaviour. That’s how doubles work. I mean, if it is unexpected, it can be unexpected in all sorts of places.

CDA: All right. Next we have JHX: “Not specific to range? Range does have a specific footgun.” We only have a couple of minutes left for this, and I think there is an ask for consensus, so try to be quick. I do see “end of message” marked there.

WMS: I want to save us time. I have the same stance as USA and WH. We don’t have so many use cases, but I think that’s not a problem with the range proposal; that is a problem with floating point in JavaScript in general. And if we disallow floats, are we considering doing the same for decimals, or allowing decimals?

JWK: Yeah, if decimal is in the language, we will definitely support decimals.

CDA: Next we have JHD.

JHD: Yeah. Echoing the same thoughts: I think developers will expect the same thing as when they try to do the range manually, and that’s how JavaScript numbers work. It seems weird to me to add a protection against a footgun that is universal and omnipresent. It’s of course certainly a thing we can choose to do.

SYG: I might try to summarize what I’ve considered the compelling arguments from both sides. The compelling argument for disallowing non-integral number values is to protect users against the footgun, plus the lack of use cases, plus the data point of Python’s decision, as a language, to disallow non-integral values in its range despite generally having floating point numbers. The other side is to allow all number values, because that is a simpler mental model – it’s easily understood, there’s no additional complexity around anything: it does what you wrote it out to do, and you get the same behaviour, including the footgun. Is that a fair characterization, for folks who feel aligned to one of these, if you care at all?

JWK: I believe so.

MM: Sounds fair to me.

SYG: Then I think, given that – so, I started this saying that I was leaning towards prohibiting non-integral floating point values. I think one conclusion we can draw from the Python data point is that the use case is relatively rare. So I can certainly live with both, and I am somewhat convinced by the appeal to simplicity and consistency with the rest of the language. If most folks can live with both, I am happier with allowing non-integral floating point values at this time. That’s it for me.

JWK: I’m okay with both.

CDA: Okay. We are out of time. Are you seeking consensus for Stage 2?

JWK: I want to seek Stage 2, but maybe a conditional advancement, since we haven’t reached consensus on the floating point issue.
MM: I think we do have consensus, in that I’ve heard an objection to the restriction, and I’ve heard that everybody who wants the restriction is willing to live without it.

JWK: Sorry, can you rephrase that?

MM: Yes. What I believe I heard in the discussion is that WH, and perhaps others, do not want the restriction. Several of us would prefer the restriction, but what I believe I heard is that everyone who wants the restriction is willing to not have the restriction. So I think we can achieve consensus immediately on not having the restriction.

SYG: Yes. JWK, please ask for consensus without the restriction.

JWK: Okay. Can we go to Stage 2 with the status quo, which means we are allowing numbers to use –

JHX: Sorry, before you ask for consensus – I still have an item on the queue.

CDA: Can you be brief, because we’re past time already.

JHX: Yeah. Essentially, in many cases I’m okay with footguns, because in many cases you can use tools to avoid them, for example TypeScript or a linter. But in this case – the Iterator.range case – TypeScript or a linter cannot help you. This is why I think it’s a really bad footgun, and I really hope we do not allow floats. Thank you.

CDA: I do see DE in the queue, but again, we’re already a few minutes past time.

JWK: DE means the same thing as I want, which means conditional advancement.

DE: So, to be clear, I don’t think this is conditional advancement. I think this is advancement of the current proposal, knowing that it’s possible to iterate on this issue during Stage 2, until Stage 3. So I would encourage you to just ask for consensus for Stage 2, not conditional, and see if there are objections.

JWK: Okay. So can we have unconditional advancement to Stage 2?

WH: I support this for Stage 2 in its current form.

CDA: Okay. We also have explicit support from MM, JHD, DE, and CDA. And WMS. Thank you. I’m not seeing any objections. Going once, going twice. You have consensus for Stage 2.

JWK: Thank you.

KG: Ask for reviewers.

CDA: Yes. If somebody can kindly pull the notes up for the quick conclusion summary before we proceed to the next item.

WH: I will review.

JHD: I’ll review it as well.

ACE: Need to end the screen share.

CDA: That’s right. Thank you. Someone can kindly flash up the notes. Thank you, Ashley.

ACE: Anything in particular we want as part of the summary? It sounded like, as for the footgun, it’s more in keeping with the language to accept it? In the summary we note that people made arguments on both sides: on one side, people argued that this is a footgun; on the other side, people argued that allowing floats is more in keeping with the language.

### Summary

DE: There were arguments on both sides. On one side, there is a footgun. On the other side, people argued that allowing floats is more in keeping with the language. We decided to proceed to Stage 2 as it is now. Does anybody disagree with the conclusion, including the plan to iterate during Stage 2 on the floating point restriction? Maybe instead of “iterate”, “continue to consider”.
### Conclusion

- Consensus for Stage 2.
- Plan to iterate during Stage 2 on the floating point restriction.
- WH & JHD to review.

## Float16Array for Stage 2 & 3

Presenter: Kevin Gibbons (KG)

- [proposal](https://github.com/tc39/proposal-float16array)
- [slides](https://docs.google.com/presentation/d/1dwAZG2TFK4GiXIk5nir5m7JkB4_VVUWmd4QWxpRgrn4)
- [spec](https://tc39.es/proposal-float16array/)

KG: Yes. Float16Array, for Stage 2 and possibly 3. So, Float16Array: this would be a new type of TypedArray. There’s a link to the complete proposal here. This proposal was last presented in 2017, so I’m sure many of you were not aware that this proposal already exists and is at Stage 1. Thank you LEO for getting it to Stage 1; he’s kindly agreed to let me continue championing it, since he has not had time lately. Yeah, let’s just get into it.

KG: So the contents of the proposal are a new TypedArray, Float16Array, which is exactly like Float32Array in every respect except that it uses IEEE float16 rather than IEEE float32 – float16 is also sometimes called half precision, by contrast with double precision. There would also be two additional methods on DataView, getFloat16 and setFloat16, mirroring the existing getFloat32 and setFloat32. As the proposal is currently written, it does not include anything else – just Float16Array and the two new methods on DataView – but I’m open to additionally including a method, probably called `Math.hfround` or something like that, analogous to `Math.fround`. As a reminder, `Math.fround` gives you the nearest number value which is precisely representable as a float32 value, and `Math.hfround` would give you the nearest value that is precisely representable as a float16 value. I’m not sure of any use cases for it; if there are, I’m happy to include it. I just left it out because I wasn’t aware of any reason to want it. So, the place this proposal was left last time was that people wanted to hear more about motivations. I think the motivations in the intervening five or six years have been quite firmly established. The reason I’m bringing this back right now in particular is that the ColorWeb CG, the group which works on the HTML canvas API, is intending to add a new type of canvas that is backed by floats – or, in particular, that is backed by a higher-precision data type. Currently HTML canvases are backed by bytes – uint8s – and 8 bits per channel are not enough to represent colours. As I am sure you are aware, you need at least 10 to have reasonable coverage of the human colour scale; there are just sharp edges between colours if you are only able to use 8 bits to represent each channel. And this is entirely out of our purview: the canvas group is intending to add a canvas type that is backed by a higher-precision data type, and in the conversations about that – you have to expose the data from the canvas, and they had originally said, we would do float16, but there is no Float16Array, so we have to use Float32Array. That’s a waste. That’s silly. You don’t actually need 32 bits for colours; 16 is plenty. And they would want to use Float16Array to back it if it existed. It just doesn’t. So that’s the reason I’m bringing this back right now – the reason this has most recently come up. But of course it is not the only motivation. I also wanted to point out WebGPU – this is not WebGL; WebGPU is not shipping anywhere yet, but it is, I believe, in Chrome canary unflagged, and Firefox also has an implementation shipping flagged, although Firefox is behind.
The WebGPU specification includes, as its first extension, the ability to operate on 16-bit floats, and this is a screenshot of the specification. I also explicitly asked the WebGPU working group if they thought Float16Array would be useful, and got explicit confirmation that they thought it would: it is useful when writing shaders that operate on 16-bit floats, and you really want Float16Array to do that correctly. Also, there was some skepticism that float16 would be useful in the future the last time this came up, so I want to give an example drawing from current events. I’m sure most of you have heard about Stable Diffusion; in many cases those models are very, very large – often they are 7 billion 32-bit floats, and I don’t know about you, but I can’t fit 7 billion 32-bit floats into the VRAM on my graphics card. I can fit 7 billion 16-bit floats, though, and this is the case for many people: you will be able to use half the memory, at the cost of some precision, by representing your model with 16-bit floats. Float16 is not going away; it will continue to be useful for this sort of thing, basically because there is no upper bound on the size of these models. This is a screenshot of a thread on three.js, which is a very popular rendering library, where they’re discussing how having Float16Array would be awesome. And it is possible to do in userland: there is a polyfill of this proposal on npm – there has been for some years – and it gets hundreds of thousands of downloads every week. But it is inherently limited. It can’t integrate with the web platform: you can’t clone one of these things, you can’t post it to a web worker, you can’t transfer it, any of that stuff. It’s not integrated with the web platform, which is really a shame for something that is supposed to be low-level. And of course it has quite slow reads and writes. So I don’t think there’s any design space here – that’s why I’m asking for Stage 3 as well as Stage 2; I don’t think Stage 2 would actually involve any conversations. So I have complete spec text, and the spec text is entirely trivial. You say there’s a new global; you say that it holds two bytes per element; you define the conversions to and from JavaScript number values; and you add the two new methods on DataView. That’s it – you have just seen the entire spec text. These reference the IEEE binary16 format, so we don’t have to specify any of those details ourselves. Of course, the implementations are not trivial. (???) If you prefer something simpler, it should be easy, because compilers generally have support for the `__fp16` extension, which lets you have 16-bit floats and convert those to and from regular floats or doubles, which are JavaScript numbers – but optimizing things is always more work. Anyway, that is the proposal. That is the full semantics and motivation. I would like to ask for Stage 3.

CDA: We have a number of items in the queue. First we have WH.

WH: I have two items. One is: you’re introducing a new numeric data type, so you should have a conversion to that type. Conversion from that type is trivial – just the identity conversion – but you should have a conversion to that type, so yes, there should be some kind of a round method.
+KG: I’m happy to include that. It so happens I’ve already written the spec text for it, so I’m happy to add that to the proposal.
+
+WH: Great. That’s item 1. Item 2 is more controversial. There are two float16 standards. Both are widely used. They are incompatible with each other. There is IEEE float16, which has 10 bits of mantissa. There’s also bfloat16, which has 7 bits of mantissa. The latter one is used for machine learning quite a bit.
+
+KG: Yes, I am familiar with it. But if we go through the motivations: the ColorWeb CG is proposing to use IEEE float16. three.js – sorry, some machine learning, as you point out, uses Google’s bfloat16, but not everything. Stable Diffusion is using IEEE float16. User land is using IEEE float16. I agree that bfloat16 might be something to add in the future, and I don’t think this should preclude having bfloat16 at some point in the future, but IEEE float16 is well motivated and deserves the name float16, because that’s how everyone differentiates them: there’s float16 and there’s bfloat16. I agree the standards both exist, but I’m only proposing the IEEE one right now.
+
+CDA: Next we have SYG.
+
+SYG: I also have two, back to back. Yeah, okay. I think the first one is about the multiple competing standards, which you did address. I think because of that, and because some ML is motivated by this, I’m happy with Stage 2, but I don’t think we should rush it through to Stage 3 right now, because I think the design space is not completely settled, given the competing standards. Like, I understand that part of the motivation here is that you sensed some urgency from the canvas folks about that. My reading of this thread, which I’ve been roped into, is that I don’t think the sense of urgency is shared by all browser vendors there. I’m not saying float16 is ill motivated. I do think float16 is useful, but my read of this is that neither Chrome nor Firefox believe that float16 is the right answer; it’s only matcha (???) who believed IEEE is the right answer. I also have implementation concerns, but I imagine DLM will speak to that.
+
+KG: I do want to emphasize that canvas is only one of the many motivations for this. WebGPU also exists. Like, WebGPU is not going away and it has IEEE float16s. It is well motivated even in the absence of canvas.
+
+SYG: I think it is well motivated. I think the urgency for Stage 3 at this meeting is not well motivated.
+
+KG: Oh sure, I didn’t intend urgency; I just didn’t see any design space. So I can come back again for Stage 3, but I’m just going to say the same thing. I think I have said everything there is to say, as far as I’m aware.
+
+SYG: But, I mean, given that there are competing standards and that we probably want `hfround`, I think there is some thinking to do on float16 versus bfloat16, and there is some WASM alignment to do, in that if we have float16, what do we want to happen on the WASM side for this? I don’t think it’s just straight to Stage 3.
+
+KG: I mean, okay. I’m happy to only ask for Stage 2.
+
+CDA: We have several more items in the queue, and we’ve got just around 15 minutes left. Next up we have DE.
+
+DE: SYG, are there particular action items you’d recommend for the champion, or are these things the rest of the community might look into?
+
+SYG: For the champion, if KG wants to do more work on the WASM alignment, that might be interesting. I guess also a rough sketch of an argument for why bfloat16 and float16 can coexist, and for a future with float16 operations – like, currently all this does is specify the storage and interchange format, with an `hfround` maybe. So in the future, if we also want float16 arithmetic on Math or something, given that there are competing standards, is it completely orthogonal? I want to be convinced of that.
+
+KG: I’m happy to put together slides on that. But also – we have Float32Arrays, and we didn’t think about that question, because no one was ever going to propose arithmetic on float32, as far as I’m aware. I wasn’t intending to design this for a world in which we have arithmetic on these; they’re just a storage format.
+
+CDA: Next on the queue we have DLM.
+
+DLM: Thank you. In general, we agree with SYG’s assessment that this is fine for Stage 2, but we wouldn’t yet want to see it go to Stage 3. Stage 3 usually means we’re ready to implement; from our perspective, we’d like to study how it would be implemented before we’re ready for Stage 3. Doing a quick bit of research, I get the sense there’s no support in C++ right now – that might be coming in the following year – and it’s not clearly supported across all CPUs that we might want to target. So I guess the general concern would be that maybe we don’t actually end up much faster than the user-land library, because we would be simulating float16s in software the same way the user-land library does. In general, yes, we think it’s fine for Stage 2, but we’d like to have more time to research implementation before we support Stage 3. I don’t think there’s much more work for you to do in terms of spec text, but this needs longer on our side for the implementation.
+
+KG: Sounds good.
+
+CDA: Next we have SFC.
+
+SFC: I think WH and SYG already covered this, but I’d like to see a document explaining more explicitly how we see bfloat16 fitting into this picture. I think that would be helpful.
+
+KG: Sure. I will put a slide up on that next time. But the answer is that some ML systems do use bfloat16, but a number of them do not. IEEE floats are also widely used for ML. In particular WebGPU, which is a standard that exists, will have support for IEEE float16s for doing computation on the GPU – and computation on the GPU is largely, though not entirely, synonymous with ML – and it is only going to have IEEE floats. While there is use of bfloat16 for ML, there is also widespread use of IEEE float16.
+
+SFC: I still think float16 is motivated. I just think it would be better to be explicit about the full picture.
+
+KG: Sure.
+
+CDA: Next on the queue we have PHE.
+
+PHE: So, looking at this from the perspective of embedded devices and Ecma 419, I’m happy to see this proposal come back to life. I think there are potentially some interesting uses of float16 on the devices that we target, where RAM in particular is limited, so being able to store data more compactly is desirable. It’s great. Looking at it from the perspective of the XS engine, I share some of the concerns that both SYG and DLM raised about implementability: as we look across all the different types of CPUs we have to handle and the compilers that exist, I can’t say with confidence that we can do this.
+And I’m not prepared yet to say that what we learn in the process of studying that won’t have some impact on the design space. So, echoing SYG, I’m perfectly happy to see this move to Stage 2, but not yet ready to say Stage 3.
+
+KG: Okay.
+
+CDA: Next we have SFC.
+
+SFC: Yeah. I just wanted to say that float16 is useful as an interchange format. You often don’t need to carry 32 bits of floating point data in your data files. I know I’ve been dealing with this a lot the last couple of weeks, trying to reduce the size of internationalization data. So I just wanted to point that out as another possible motivating use case: data interchange. You don’t really need all 32 bits for a floating point number.
+
+CDA: Okay. Next up we have SFC again.
+
+SFC: Yeah, my next question was – maybe this is out of scope for the spec and is something for implementations, but there is various hardware that has special support for float16 operations, in particular, for example, F16-to-F32 hardware-level conversion, things like that. Is that something that we should say anything about in the spec, or is that something that you envision implementations would leverage, if possible? This stuff can also be done on the CPU; it’s not that hard. F16 to F32 is like, shift some bits around and you’re done. But you can probably save a couple of cycles if you use the hardware instructions. So I was wondering how you see that fitting in with the proposal.
+
+KG: Float32, which is already in the specification, is I think an instructive example: it also has widespread although not universal hardware support, and the specification has absolutely nothing to say on the topic of arithmetic on float32 types. Implementations are of course welcome to implement optimizations that use hardware support for arithmetic on float16 when they can see that values are within that range, and if computation on float16s starts happening a lot then they might well want to, but it’s certainly not something I would expect to require. It doesn’t happen that much that you have hot code operating on float16. In the ways I have seen people wanting this, it’s like: I’m defining a shader. It’s nice for that not to be very slow. It’s important that I be able to pass this shader to a web worker. But for the actual process of defining the shader, it’s okay if it uses float32 arithmetic and then does conversions; that’s not going to have a significant effect on the performance of my application. So yeah, I think hardware support, when available, is something that implementations are certainly welcome to use, but the spec is not going to have anything to say about it.
+
+WH: Regarding this last point, the IEEE floating point standards are cleverly designed so that they have interesting identities which always hold. For example, if you do a float32 operation – a single float32 operation — it’s equivalent to doing the same operation with float64 arithmetic followed immediately by rounding to a float32. That’s useful because it means that if you have code which uses double arithmetic and follows every operation by rounding to float32, then it’s valid for a compiler to compile it into float32 operations. The same thing is true for float16 and IEEE doubles, which means emitting float16 addition, multiplication, and such is just a quality-of-implementation issue. Now, that’s true only if you round after every operation. If you do a bunch of additions and then round after all of them, that doesn’t work. You need to round after every operation.
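+For reference, WH’s identity can be demonstrated today for float32 with `Math.fround`; the float16 form is a sketch assuming the rounding function discussed above:
+
+```js
+// One float64 operation, immediately rounded to float32, matches native float32 arithmetic:
+const a = Math.fround(0.1);
+const b = Math.fround(0.2);
+const sum32 = Math.fround(a + b); // behaves exactly like a single float32 addition
+
+// Hypothetical float16 analogue, assuming a proposed f16round:
+//   const sum16 = f16round(f16round(x) + f16round(y));
+// This holds only when rounding after every individual operation,
+// not after a chain of unrounded operations.
+```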
+WH: The other thing I’d like to mention is regarding the possible implementation concerns. I don’t think there is much of an implementation concern. The core of this is taking a double and turning it into a float16, which is just a bit of bit-banging. You could fit the entire code that’s needed to implement rounding from 64 to 16 bits on a single slide of a presentation.
+
+KG: Yes. I think implementations will have to try to optimize things, and the optimizations are the parts that I have heard concerns about.
+
+WH: It’s just a bunch of bit shifts.
+
+KG: I hope that will be the case. I’m not going to ask for Stage 3 now, but hopefully implementations will go and check and everything will be simple.
+
+CDA: We have a few minutes left. Next we have MM on the queue.
+
+MM: Just a quick bikeshed: the name `hfround`. I don’t know what the H is. I’m guessing “half”, but I’m guessing half only because I know that the 64-bit type, which JavaScript just calls Number, is also referred to as double. Since JavaScript just sees Number as normal, `hfround` is just obscure. I think if we called it `f16round`, anybody seeing it in code would immediately guess correctly what it means.
+
+KG: That sounds good to me. `hfround` is what it was called when I came to this proposal, but `f16round` sounds good to me.
+
+MM: Thank you.
+
+CDA: All right. JHD, you’re next on the queue. Did you want to speak?
+
+JHD: I support Stage 2, and – even though it’s not happening today – I would have supported Stage 3, with the rounding function, whatever it’s named.
+
+KG: I would like to formally ask for Stage 2, including a rounding method, probably to be named `f16round`, and I will come back for Stage 3 as soon as I have put together some more slides and given implementations time to check on implementability. Only asking for Stage 2 at this time. We just got an explicit bit of support from JHD, but I’d like to ask for consensus for Stage 2.
+
+WH: I support that. And I sign up as a reviewer.
+
+JHD: I’ll review as well.
+
+KG: Thank you.
+
+CDA: As mentioned, we have support from JHD and WH, and CDA (IBM) also supports this for Stage 2. We also have support from MLS and SYG.
+
+SYG: KG, are you willing to reach out to the WASM CG about whether, if we have Float16Array, there is any interest in their adding IEEE half precision?
+
+KG: Yes, I will reach out to them.
+
+CDA: Can you stop the screen share, and somebody can kindly pull up the notes to review the conclusion and summary.
+
+### Speaker's Summary of Key Points
+
+- Implementations were not comfortable with Stage 3 because they need time to determine implementability
+- Interest in bfloat16 to be explored
+- Interest in WASM interop to be explored
+- Should include a rounding method
+
+### Conclusion
+
+- Stage 2. Explicit support from JHD, CDA (on behalf of IBM), MLS and SYG
+- Rename `hfround` to `f16round`
+- Reviewers: WH, JHD
+
+## Decimal Stage 1 Update
+
+Presenter: Jesse Alama (JMN)
+
+- [proposal](https://github.com/tc39/proposal-decimal/)
+- [slides](https://docs.google.com/presentation/d/10a7dcAPPgIYaHOFjYlltmvQUOgJXI7pDo8dZ1DkKTkI/)
+
+JMN: My name is Jesse Alama, from Igalia, working in partnership with Bloomberg. Glad we had a couple of mathematics talks – I’m hoping most of you still have your mathematics hats on, because this is another one. This is a Stage 1 update; what I’m looking for is some input and feedback. I’ll sketch some of the use cases for decimal.
+I’ll give you a bit of a sketch of the design space, dig into one of its dimensions, and sketch a path forward. Again, no requests for permission to do anything – just to warm you up and get some feedback.
+
+JMN: So it looks like we’ve already seen a bit of motivation in earlier talks about why decimals might be interesting; add that to my ever-growing list of use cases. Just to nail down the terminology: decimal numbers here are meant to be exact, possibly non-integer, real numbers in base 10, meant for human creation and consumption. In this setting 0.1 plus 0.2 is 0.3, without any rounding. JHD made the point earlier that when you type 0.1 in JavaScript it’s not actually 0.1 – you’re very, very close, but not exactly at 0.1. There are all sorts of use cases: some focus on the front end, in the browser; some on the back end, in some kind of Node.js-powered app. As you might imagine, the main use cases have to do with finance and business, where exact representations of things like prices, tax rates, and currency conversions are not just nice to have but must-haves. Your application needs this. There may be a legal requirement to have exact representations of numbers: for instance, the EU mandates that currency conversions are done with six decimal places, always. So there’s a legal requirement there. Something on the front end might be charting or graphics: you might imagine having graphics in a browser where the points or the numbers labelled there have to be exactly correct.
+
+JMN: How many decimal places do we have to represent? Well, for most of us in this room, typically two, if we’re dealing with prices. But sometimes we need more: in the case I just mentioned, with currency conversion, you might have to do four or six decimal places. If you’re dealing with cryptocurrencies, maybe you need nine digits or more. These are contexts in which you might need a lot of support for these exact representations. Science is another case where human-generated numbers and numbers for human consumption are needed. Measurements are intended to be human readable. Think about unit conversions and think about scale – think about, for instance, a smart scale that does some kind of unit conversion for you. As an American living in Europe, this is an issue for me. I don’t know how many times I have had to pull out my calculator and try to do some kind of conversion. Sometimes I need it to be exactly correct.
+
+JMN: Another interesting use case is in a database. This could be either front end or back end. The point is, your JavaScript engine is consuming data coming from a source that already supports exact decimals. Many databases have supported exact decimals for many years. But if you just turn these kinds of numbers into JavaScript numbers naively, then you’re almost always going to be losing something, and possibly getting something wrong.
+
+JMN: That’s why many developers often work with such numbers as JavaScript strings and then try to jump through hoops and do some kind of tricks to get this to work appropriately.
+
+JMN: A while ago, at Igalia, there was a survey we sent around – DE was involved in this – where we were trying to solicit some feedback from the JS developer community about whether there were use cases for decimals. The previous slide was just me spinning my own wheels about what I think the use cases might be.
+But if we try to get out of my own head, go into the world, and get some data about use cases, there are all sorts of things you can imagine here. These are quotes from developers. So, things like representing and manipulating currency. Calculating fees on top of a base price. Performing operations on precise decimal constants – imagine some kind of scientific context where you need to work with high precision numbers. I like this one: “we are a bank and we want to make sure we show correct numbers to our customers, mistakes lead to confusion and decrease customer’s trust in us as an institution”. Next time you look at your bank statement, ask yourself whether there might be binary floating point errors in it.
+
+JMN: Here’s something on the front end: “Including custom steppers. We talked about stepping a couple of talks ago, the range. We use big.js” – that’s a package for handling exact decimals – “to do some kind of decimal conversion.” Here’s another one: “Right now users are supposed to use a big decimal library if they use a decimal SQL type, but the reality is that a lot of devs think their value being returned as a string is a bug. 0.3 is not exactly what you think. 0.9 is actually kind of 0.8999, et cetera.” For a lot of developers, understandably or not, this is their world, and mistakes can happen when you’re working with numbers out of the box in JavaScript.
+
+JMN: Charts! Just imagine you go to your bank or some kind of financial institution that produces fancy graphics, and maybe a requirement of your application is that the numbers displayed are exactly correct – not some kind of close-enough approximation, but exactly correct. This [on slide] is perhaps not the best screenshot in the world, but if you zoom in there, there’s a number displayed when I put my mouse over one of these fictitious charts; I got this courtesy of chart.js. We could go on for hours talking about all sorts of charts, and banks, and so on, and again one of the motivations might be: it’s not okay for these numbers to be merely close; they have to be exactly right.
+
+JMN: If you have a home or some kind of property, then perhaps you’ve seen charts like this, where you try to find out: how much can I pay for my property or my home, how much money do I need, how long do I want to pay this off, and so on. You get some kind of table like this. You might imagine getting a table of, say, expected monthly fees from some kind of bank or mortgage institution, and the institution might be on the line for the correctness of this data. You don’t want any errors. It might be a serious mistake to use regular floats for these kinds of numbers.
+
+JMN: Just to really nail down this point – because I think a lot of developers just aren’t up to date; their state of knowledge about numbers isn’t what we in this room might think it is – I love this example. Imagine you have a 70 cent phone call and you have to charge 5% tax on it. This isn’t rocket science. You find out that this is exactly 73.5 cents. That’s simple math. And in your business logic you might round that up to 74 cents, because you can’t charge a customer 73.5 cents. This [example on slide] is one way you can instead get 73. I think, again, a programmer is maybe a bit misguided to do it precisely this way, but it’s understandable what they’re trying to do. They might not know that the “decimal” they’re working with is a binary float that is not exactly up to the task.
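+The pitfall is easy to reproduce with binary floats (a minimal sketch of one way the 73 on the slide can arise):
+
+```js
+const total = 0.70 * 1.05; // mathematically 0.735
+// The nearest double is slightly below 0.735 (0.73499999999999998667…),
+// so rounding to two places goes the wrong way:
+total.toFixed(2); // "0.73": the customer is undercharged half a cent
+```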
+
+JMN: So, again, [looking at slide] if you’ve played the board game Monopoly, there’s an aspect of the game where you might get money, or you might have to pay fees, unexpectedly – just like in life. There’s a Community Chest card, “bank error in your favor”, where you just get money for doing nothing. This can happen if you’re using floats, but the thing is, it could be a bank error in your favor or in favor of the bank – maybe you’re paying too much. I’m trying to nail down the idea that binary floats feel right, they look right, but they give us different semantics.
+
+JMN: So that is the motivation for adding decimals to JavaScript. The design space for decimals is very interesting. There’s a whole series of questions we can ask and different answers we can give. Do we want to have new syntax for these – literals? There are a couple of data models, or more than two, that you could think of; I’ll talk about those in just a moment. Do we want arbitrary-precision decimals or some kind of restricted range? Do we want to overload existing arithmetic operators like plus and times? And are decimals going to be objects or primitives? There are all sorts of things we could talk about there. We could spend all day on these things; I don’t want to bore you.
+
+JMN: I just want to mention some of the key questions here about the semantics, because there are at least two – maybe I should update the slide – at least two valid approaches for representing decimals: arbitrary precision and limited precision. I think as a starting point it’s fine to think of decimals as arbitrary-precision things. We’re in the world of mathematics, and if we have decimals, then we can have as many digits as we want on the right-hand side. Arbitrary precision has no limits; limited precision puts some kind of a cap on things. There are good reasons why limits might be needed and why arbitrary precision might not be as great as it initially appears. But an interesting question is: if we do put some kind of limits on there, what are the right limits? We want to support the use cases we talked about before and use cases similar to those. We don’t want something that restricts us from expressing the kinds of things we want to express. Arbitrary precision: if I use the term “BigDecimal”, this is what I mean, some kind of arbitrary-precision semantics for decimals. The semantics is, well, basically just math. You can do whatever you want; there are no limits. These are essentially strings of digits, or two big integers, something like that – a left-hand side and a right-hand side.
+
+JMN: But when one thinks about this a little bit more, you realize that you get into a little bit of trouble, because dividing two finite decimal numbers rarely gives us another finite decimal number. Sometimes you get lucky and you get something that is exactly representable as a decimal number, but usually you don’t. It needs to be truncated, or maybe there needs to be some kind of rule for how many digits you get.
+
+JMN: Then there’s the limited-precision approach to decimals, and here is one in particular: IEEE 754 – we heard about that in the previous talk. There’s another part of that standard called Decimal128, which is a specification of floating-point numbers – decimal floating point numbers, that is. In this universe every decimal number uses 128 bits. We can get up to 34 significant digits, and the exponent can be plus or minus about 6,000. So that’s a lot of space. That gives us a lot of room for writing down numbers.
+I mean, you have to ask yourself: how many times have you needed more than 30 digits? Again, for a human-readable number. I’m not talking about arbitrary numbers; I’m not talking about use cases where I need 100 digits. If we’re talking about money, we haven’t yet reached the stage where we need 34 digits to represent someone’s salary or wealth or something like that. Not yet.
+
+JMN: Interestingly, the IEEE 754 standard also has things like Decimal64 and Decimal32. I think the difficulty there is that these formats offer us too few digits: one gives us 16 digits total and the other gives us only 7, which is probably not enough. You want to be able to represent numbers that are a bit larger than 7 or even 16 digits allow. So Decimal128 would be the reasonable thing to look at.
+
+JMN: There’s an interesting issue that has to do with normalization. The IEEE 754 spec uses semantics where they deliberately do not normalize. Normalization means you eliminate any trailing zeros: if you have 1.20, that’s just 1.2; you don’t work hard to represent 1.20 as somehow different from 1.2. In IEEE 754 those are different numbers, although they compare as equal. One of the issues that I believe was discussed and settled here [in TC39] is that if we were to go forward with decimals, then we would try not to expose this to the JavaScript developer. So you wouldn’t really see any difference between 1.20 and 1.2. By the way, this is not specific to Decimal128; even BigDecimal would require facing this issue as well.
+
+JMN: So here is a kind of pro-and-con comparison of the semantics of these two approaches. If you think about Decimal128, there are some pros. There’s support in C++ libraries: Bloomberg has a library, IBM has a library, and there are probably more. Some compilers support this out of the box, probably by just sneaking in some kind of library. If I think about the ergonomics of Decimal128, I think it’s a little bit better, because operations with Decimal128 arguments always produce Decimal128 values, without any extra context having to be given or additional arguments having to be passed. So I can multiply these two things together and I always get a value that’s in that same space. Also, especially if you start doing more calculations, I know that at the end of the day 128 bits are used to store my values. This is something that is not true for BigDecimal. If we think about some downsides to Decimal128: well, if you put some kind of limit on the space of numbers, then, yeah, some things are going to be left out, by definition. The question is: if we think about the intended use cases – and again, our original motivation is that these are supposed to be human-readable and human-consumable numbers – how often are these space limitations going to be an issue? There might be a complaint that in some cases we don’t need all this space. Think about the number 0.1 – do we really need 128 bits for that? The answer is, in a sense, no, but the counterargument is that we’re already using 64 bits when you write 0.1. That’s already heavy in some sense. Underflow and overflow are still issues in the Decimal128 world, true, but I would argue that they really arise only in rather extreme cases, again keeping in mind the intended use cases. You have to get some extremely big or extremely precise numbers for overflow or underflow to affect anything, although it is admittedly possible.
+
+JMN: BigDecimal, pro: unlimited precision!
+That means that all use cases are going to be supported, right? BigDecimal is basically just math. So, yeah, of course this is going to be the right solution for pretty much anything you want to do. It’s easy to polyfill, and the issues of overflow and underflow are not really there – not in the same way as they are with Decimal128. One downside of BigDecimal, in my view, is that some operations require specifying the number of digits [of the result of an operation]. Say you compute 1 divided by 3. Well, in the Decimal128 universe, that just works: there’s a simple answer, and you’re done. But here you have to give some kind of extra argument, or some context, to say after how many digits to cut off the calculation, because of course there is no finite decimal answer to this [that is, ⅓]. So the ergonomics might be bulkier.
+
+JMN: Complex calculations! In a world where you’re doing more than, say, the 70 cent phone call example, BigDecimal has the downside that the number of digits involved – especially when you’re doing multiplication and, even worse, division – can get very big. A programmer might write something fancy and complicated, wanting to show off to the boss, and it takes up a lot of memory: these very big numbers can get generated along the way. It will be slow and memory intensive. That’s not optimal.
+
+JMN: My understanding, in preparing for this presentation and doing some research on past discussions about decimal, is that there’s a question about whether it’s worth doing this [Decimal, that is] at all – whether decimals might be too heavy, whether this might slow down an engine in an unacceptable way. So here is one modest proposal that I think would satisfy developers and implementers. I think not everyone is going to like every detail here, but that’s the nature of compromise, right?
+
+JMN: One path going forward is that the underlying data model would be Decimal128, with values always normalized. This keeps the ergonomics really clean, and we don’t have to make our programs dirtier by specifying the number of digits all the time. There’s a new Decimal class. It has the operations for addition, subtraction, multiplication, and division, and that’s it – no trigonometric functions or all that stuff. This is a good match for the use cases coming mainly from science and business. We could have the new literals – that’s probably not too complicated – but what I would propose not doing is overloading the arithmetic operators. My understanding is that this is a very difficult thing for the engines to do; we don’t have a good story about overloading, so in the absence of that story, we won’t do it [for Decimal]. And we don’t add new primitives – these would be new objects, not new primitives. The motivation there is to keep implementers happy: adding primitives can be a costly and invasive change.
+
+JMN: That’s all I wanted to say. This is something on my mind: do we want to do decimal in any form at all? And if so, what form should that be? I sketched one kind of modest proposal here – I think something that many people can accept, perhaps as suboptimal. At least the data would be there, these decimal numbers, even if it doesn’t give us everything we need. So that’s everything I’ve got.
+
+SFC: I just wanted to raise a third data model, one that we’ve used in CLDR for representing unit conversions: rational numbers, a BigInt over a BigInt. You didn’t mention that in the slides, but it might be a third data model to consider. There’s BigDecimal and Decimal128, and also rational, which I think has some merit and may be worth listing as an alternative. That’s all.
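+A sketch of that rational data model (hypothetical helpers; denominators assumed positive), in which intermediate steps need no rounding and GCD reduction can be deferred to the end:
+
+```js
+// numerator/denominator pairs of BigInts
+const mul = ([a, b], [c, d]) => [a * c, b * d]; // no reduction per step
+const gcd = (a, b) => (b === 0n ? a : gcd(b, a % b));
+function normalize([n, d]) {
+  const g = gcd(n < 0n ? -n : n, d);
+  return [n / g, d / g];
+}
+// e.g. chaining two unit-conversion factors and reducing once at the end:
+normalize(mul([254n, 100n], [12n, 1n])); // [762n, 25n], i.e. exactly 30.48 cm per foot
+```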
+JMN: Thank you. That’s right, this is an interesting topic. There’s an issue where we discussed this whole idea of rationals as an alternative to decimal – not just as another data model, but as an alternative to it – which would be kind of an appendix to this proposal. One of the concerns there – maybe you have real-world data that shows this is not true – would be the cost of normalizing these things, of ultimately serializing them for human consumption, and of intermediate calculations involving repeated GCDs. That’s a theoretical concern. But if you have data that shows we can be clever about that, or that it’s not as much of an issue as we might think, I’d be happy to –
+
+SFC: I’ll respond to that before DE. They are very useful, especially for unit conversion, which is a big use case here: you can chain several rational calculations together and then, at the very end, realize the result as a normalized value. So when you’re doing calculations, you don’t necessarily normalize at every step; normalization can be the last step.
+
+CDA: So we only have a few minutes left for this item, and the queue is quite full. Jesse, do you want to take a look at the queue and see if there’s any particular item there that you want to address with the few minutes we have left?
+
+JMN: Sure. Just a moment. Wait a second – I lost my queue. Or somebody can flash it up.
+
+SFC: I can go through them quickly.
+
+DE: Maybe we should give each person on the queue time to talk, and then – maybe you could freeze the queue for potential further discussion tomorrow, possibly.
+
+CDA: DE, do you want to go ahead?
+
+DE: Skip me.
+
+CDA: Back to SFC.
+
+SFC: I just want to say I’m quite concerned about the assertion that normalization is something you want, because one of the biggest problems with floating point and internationalization is that trailing zeros are quite important: "1 star" and "1.0 stars" produce different localized strings. That’s one of the compelling reasons for decimals – it’s a capability we don’t get with floating point. In Stage 1, I would hope that we could explore that space, whether that’s something we can support.
+
+JMN: Sounds good.
+
+WH: I’m on the opposite side of this. I want 100 to be the same number as 100. If you allow significant trailing zeros, then 100 can be different from 100, even though they look identical. That can be a huge hassle. If you want to represent precision for internationalization, then a precision argument is the way to go. The other couple of things I have: I would want equal decimal numbers to be `===` to each other. And also, I see you don’t want to do any of the common functions such as trig, but things like exponentials and logarithms are used a fair bit in finance, so omitting those would be an issue for finance applications.
+
+DE: Which operations?
+
+WH: Exponentials and logarithms.
+
+DE: Can you –
+
+WH: We don’t have much time, so let’s go on to the next person.
+
+DLM: Okay. I just wanted to point out that the use cases here seem rather specialized and we have a large design space, so when this comes up for advancement, I’d like to see an argument as to why this can’t be done in user libraries – which would also give users a choice of precision – rather than in the language.
+
+SFC: Yeah, given that we don’t have much time, I’ll just point out that, regarding that, you might look at Intl NumberFormat V3 for how we were thinking a bit about arbitrary-precision decimals. I talked to you a little bit about that yesterday.
+
+CDA: All right. We are out of time and already running behind schedule, so we are going to have to move on.
+
+### Speaker's Summary of Key Points
+
+We sketched some use cases for decimal, investigated some competing semantics for them, and offered a middle path, so to speak, between some controversial design space points.
+
+## Next steps for RegExp escaping
+
+Presenter: Jordan Harband (JHD)
+
+- [proposal](https://github.com/tc39/proposal-regex-escaping)
+- no slides
+
+JHD: I don’t really have any slides because this is sort of a free discussion. The last time I brought regex escaping to the committee, we agreed it should be brought back at Stage 1 for further discussion. My position as champion and as a developer is that the thing I want in the language is the thing that pretty much every library providing this functionality does, and which everyone has been using for well over a decade: a function that takes in a string and spits out an escaped version of the string. There has been pushback in the past, in favour of a template tag approach that creates the entire regular expression at once, so that it has the full context and can do better escaping, and I believe a library offering this approach has existed on npm since this proposal was first brought (I forget the name of it; I can look it up later). It’s a template tag function that constructs a regular expression. So one direction that could be pursued here is to withdraw the proposal; another would be to pursue just the template tag approach; another would be to pursue just the prior-art approach; and a fourth would be to pursue both approaches. I want to get a sense of the room and what people think are viable directions to explore before I commit time to it.
+
+CDA: We’ve got KG in the queue.
+
+KG: Yes. I very strongly support having a regex escape method. It’s gotten trickier with v-mode regexes, because v-mode introduces a handful of punctuators that need to be escaped, and u-mode does not allow escapes of arbitrary characters – so we would need to modify those modes so that they allow these escaped characters as identity escapes. But with that change, it is perfectly possible to have a regex escape function that escapes a thing in such a way that it can be used in any context within a regex, except the repetition context. Of course it will mean different things in different contexts, but things will be escaped properly. And, like, that is a thing people have wanted forever, and we have been telling people for a long time “we’ll work on it”, and we can’t just continue not doing it while saying we will work on it. It is possible for us to say we are never going to do this – I would not be in favour of that, but it would be a better outcome than the current state, where we say we’re going to keep working on it. Because if we say we’re never going to do it, then Node is just going to ship the thing that everyone wants, and probably browsers will as well, and everyone will have the thing that everyone wants; it’s just that we won’t have specified it. That’s silly. We should just do the thing people want. We have got to deal with the extra complexity from v-mode, but we should just do the thing people want.
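+For reference, the user-land shape KG is describing is roughly the following (a sketch along the lines of lodash’s `escapeRegExp`; the proposal would also need to cover the additional v-mode punctuators):
+
+```js
+// Escape every punctuator that is meaningful in a pattern:
+function regExpEscape(s) {
+  return String(s).replace(/[\\^$.*+?()[\]{}|]/g, "\\$&");
+}
+const untrusted = "1+1=2?";
+new RegExp(regExpEscape(untrusted)).test("1+1=2?"); // true: matched literally
+```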
+JHD: My reply is just what he said: the function form will be shipped whether or not we decide to do something, because that’s what everyone wants.
+
+MM: Composing code by appending strings together is inherently dangerous. It is something that contributed to the early appeal of HTML, followed by decades of misery where injection attacks and various kinds of encoding confusion were just the bane of everyone’s existence – in exchange for a little bit of convenience up front. We should never make that mistake again. Template tags were explicitly motivated and put into the language to solve context-safe escaping. This is a perfect use of them. MSL and I were the ones who put the template tag proposal into the language in the first place, and MSL put together a high-quality template tag library for regex escaping. I’m glad to see there’s another one in use on npm. From the beginning I did not say we should not do anything; from the beginning I said we should do the template tag, the safe regex. I continue to have that position, and I hope that’s what we do. With regard to the other one: yes, the committee should say explicitly that we will never do it, because it is bad software engineering to take context-sensitive languages and compose code by string append.
+
+KG: So I have a reply to that, MM. I agree with you in general, and in fact the community agrees with you in general. For example, there are a number of very popular libraries for using template tags to compose SQL expressions, and those are a prime example of a place where you need to do that. And it works well. So the community is not opposed to using template tags, where appropriate, to compose strings that will be interpreted as code. But for regexes it’s just not necessary. You can just escape everything and it works great.
+
+CDA: Next we have EAO.
+
+EAO: I’ve gone looking for this function on MDN at least two or three times, because I presume that it exists. I’ve been looking for the `RegExp.escape` version of this, because the input I’m dealing with is untrusted input that I want to escape. Needing to go through a template tag in order to get at what I actually want seems clumsy. Strong personal preference for the `RegExp.escape` form.
+
+MF: How is a template tag clumsy? I didn’t understand that comment.
+
+EAO: So if you look, for instance, at the proposal itself, you end up needing to write the tag, then a slash, then dollar and curly brace, then the actual thing you’re operating on, then close the curly brace and close the slash. That’s three levels of wrapping around the thing that I’m trying to escape. It just feels really, really clumsy when you’re actually writing it.
+
+MF: So you’re saying that the untrusted input that you are trying to escape – you’re not embedding it within a regex. You just want the escaped result of that thing, and then later you’re going to use it in a regex?
+
+EAO: Or I want to use it, for example, in a template tag that I’m using to construct a regular expression as a whole. I don’t want to escape the whole of it; I want to escape this variable that’s coming in, but not that one.
+
+MF: I think you don’t understand how the template tag works, then. The parts that are not interpolated are not escaped – they can be control characters. The parts that are interpolated are untrusted and can be escaped. So if you’re saying you want to escape a thing and put it into the part of the regex where you’ve written trusted content, you’re using it improperly.
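+A sketch of how such a tag behaves (a hypothetical helper for illustration, not MSL’s library): literal parts pass through as pattern text, while interpolated values are escaped, or composed as sub-patterns when they are already RegExp objects, as MM describes below:
+
+```js
+function regex(strings, ...values) {
+  const escape = s => String(s).replace(/[\\^$.*+?()[\]{}|]/g, "\\$&");
+  let src = strings.raw[0];
+  values.forEach((v, i) => {
+    // RegExp values compose as sub-patterns; everything else is escaped text
+    src += (v instanceof RegExp ? `(?:${v.source})` : escape(v)) + strings.raw[i + 1];
+  });
+  return new RegExp(src);
+}
+const userInput = "a+b";
+regex`^${userInput}$`.test("a+b"); // true: the "+" was escaped, not a quantifier
+```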
+EAO: How, with the template tag format, can I construct a regex that takes two different variables, one of which I do want to escape and the other one I do not want to escape?
+
+MF: Then you’re already at a point where you’re mashing strings together. You’re saying that the variable you do not want to escape is a trusted input that you want to treat as a regex.
+
+MM: Can I jump in and answer that, now that you’ve clarified? MSL’s template library has an elegant solution: if the substitution value is itself a RegExp object, then it is already a well-formed pattern, and the library goes ahead and treats it as pattern composition. So it’s very nice and powerful that it handles both cases in a way that’s intuitive: the composition of RegExps as sub-patterns, as well as the escaping of untrusted input.
+
+JHD: Feel free to say this is not something we should talk about now, but I don’t know how that works with flags and anchors and other things. Is it extracting the source out?
+
+MM: I don’t know either, but I know that it’s the kind of thing that MSL would have thought deeply about. It’s been years since I’ve looked at his library, but we did go over it together, and we wrote extensive test cases for lots and lots of odd corners. So if we can agree to go forward with the template tag approach to this, I’m happy to take a look at that together, to bring Mike Samuel back into the conversation and explore that.
+
+EAO: A related point here is that, at least with the interface as proposed, the template tag makes the escaping implicit and magically hidden inside, whereas when I’m dealing with user-created input I want it to be clear that escaping is absolutely happening. This is sounding like we’re building a really complicated thing just to work around the possibility of someone combining two strings later on. The template tag approach seems way too complex for what’s needed here.
+
+RBN: This is actually a follow-up to the template tag discussion MM was just having. We’ve talked a couple of times in the past about the proposal for regex modifiers, which I’m still working on. One of the things in that proposal at the outset was this concept of prefix modifiers, and I’ve posted in the chat as well: I think there is a potential to blend all of these things together – regex escaping using a template tag, with prefix modifiers to supply flags that work only at the beginning of the string, allowing you to specify v-mode, or x-mode to handle spacing. That would, I think, be a wonderful combination of these various capabilities, because I’ve had a lot of comments in the past few weeks about x-mode and folks wanting some form of multiline regular expression, and a template tag might be the solution for that. A solution that combines these two approaches might be viable.
+
+MM: So for me, as long as it’s safe against the dangers of string append – which it sounds like what you’re considering would be – I’m certainly happy to consider it.
+
+DE: I just want to say that generally I’m glad you’re raising this, JHD. This is an important topic.
+I think either approach – the escape function or the template tag, or both – is reasonable, and we should do at least one of them.
+
+JHD: Can I ask, MM: if this proposal included both options, would that be acceptable to you?
+
+MM: Well, when you say the proposal includes both options – it is not acceptable to me that something that has the string-append dangers becomes part of the language. It is simply disqualified. For a Stage 1 proposal to include both things as options is not very much of a commitment; as a Stage 2 proposal, absolutely not.
+
+KG: Yeah, Mark, like I said, I get where you’re coming from with string composition being dangerous. But it is just not applicable to regular expression escaping. You can escape every punctuator that is meaningful and define those to be identity escapes, and then there is no danger from string append.
+
+MF: The thing I want to say is similar to what KG just said, in that in spirit I agree with MM: the general solution is to use a template tag, and the more complex the language – the language here being the language of regexes – the more important it is to do that. But regexes are simple enough that an escaping function can successfully escape for any context we care about, and it provides an equivalent level of safety. That’s the rationale I used to bring myself around to being okay with an escaping function, so I hope maybe that would work for you too.
+
+JRL: I’m trying to reason through this in my head. I think most things are representable; however, the template tag builder can’t represent all regular expressions. For instance, you can’t match a backtick in your regular expression, because it’s a special character inside a template literal: you can’t have a backtick without preceding it with a backslash, and then that backslash will also be represented in the output. So it’s impossible to match a lone backtick. Similarly, it’s impossible to match a lone backslash if it’s the final character of your regular expression, just because of the oddities of template literal parsing. So I don’t think it’s a 100% foolproof solution here.
+
+MF: I think you’re thinking about the raw template value and not the computed template value.
+
+JRL: The template tag has to use the raw value, or else you need a dozen levels of escaping.
+
+KG: A template tag that does not use the raw value is unacceptable. And if we use the raw value, then these two cases become impossible.
+
+MF: I don’t think that’s true.
+
+KG: If I put a backslash between my backticks, that backslash has to mean I’m escaping a thing within the regex – we shouldn’t talk about this live, though.
+
+MF: I’d like to talk to you more about that offline once I’ve figured it out, though.
+
+CDA: We do have a little bit of time left. JHD?
+
+JHD: My takeaway is that I’m hearing a lot of support for the function form and minimal opposition to the template tag form. However, there are strong concerns from MM about safety with the function form, and what I hear are ergonomics concerns if the template tag is the only form. So it sounds to me like we could ship both, if we can address the issue JRL brought up and the safety concern MM has about the function form. I’m going to proceed with the hope that by the time I seek Stage 2 advancement, I will have resolved those questions.
+
+KG: If MM is never going to be okay with a regex escape function, I would like to know that.
+If he is – like, if it is possible we might get a regex escape function, I would like to go forward with that and not try to design the much more complicated template tag first. I would just like to know if a regex escape function is possible in any world.
+
+MM: So, impossible is a very high bar. I can’t predict with certainty how I might change my mind in the future, but let me just say that today I find it very unlikely. Certainly I would reject any such proposal today, and it’s hard to imagine that I would change my mind on this one.
+
+KG: Why?
+
+MM: Why?
+
+DE: I can’t understand the reason, given what KG explained about how escaping doesn’t have this issue.
+
+MM: Let me double-check some things that I’ve heard in this conversation. This fully escaped form that Kevin is talking about – its meaning is not stable across different contexts; correct?
+
+KG: I mean, if you put a thing within a character class, then if you have two A’s, that doesn’t match an A followed by an A; it’s just that you have repeated yourself within the character class. But backslash left-bracket will mean a literal left bracket both within a character class and outside of one. So it is stable in the latter sense. It is not stable in the sense that character classes behave differently from the rest of the pattern. But everything that is escaped means “interpret this character literally” – the interpretation of the escape sequence is the same everywhere.
+
+MM: I’m willing to take another look at this offline. So I will not say now that I’m – okay, I withdraw the claim that I’m almost certain not to change my mind. I’ll approach this with more of an open mind. But I also think that the sense that there’s an awkwardness to using the template tag form is mostly based on misunderstandings of how to use it. If you think it’s awkward, you’re probably not holding it right.
+
+JHD: Okay. So what I now understand my direction forward to be: with KG’s assistance, we will attempt to convince MM that the safety concerns are addressed. Assuming that is successful, we can proceed, for now, with just the function form, leaving open the option of a follow-on proposal for the template tag form.
+
+MM: No. I would really not like a consequence of my agreeing to consider the function form to be that we de-prioritize the template tag form. The template tag form is just better, and it’s the right precedent for considering all other embedded languages as well. And that part we seem to have agreement on: it’s good for all other embedded languages – and I don’t think we can describe the regex language as simple; Kleene’s regex language was simple, but ours is not. The general way in which people should approach putting untrusted input into embedded languages should be template tags. So I think that should remain the priority of what to investigate going forward.
+
+JHD: And with that understanding, it sounds like, assuming you can be convinced of the safety of the function form, your preference would be that we go forward with both at the same time?
+
+MM: Go forward in the sense that neither one is yet disqualified. We have a choice to make, and even if I’m convinced of the safety of the escape form, I still think the template tag form is much superior, and I would like to convince everybody of that.
+
+JHD: Bearing in mind that, for Node at least, the only reason they haven’t shipped the function form already is this proposal.
+I can’t speak for browsers, but I suspect they might do the same. And then that functionality would still exist; it’s just that we wouldn’t have defined it. Okay – I have clarity on what needs to be explored moving forward. Thanks, everyone, for your time.
+
+### Speaker's Summary of Key Points
+
+- MM is concerned about composing embedded languages by string mashing
+- MM expressed the opinion that the tagged template form is the superior solution
+- Everyone else who expressed an opinion is happy with the escape function
+- KG agrees with the string mashing concern in general but thinks that this case in particular can be made fully safe
+
+### Conclusion
+
+- MM to revisit whether a single escape function can be made safe, but he still thinks that the template tag is unambiguously better and would not like it deprioritized even if the escape function advances
+
+## Type Annotations Proposal Update
+
+Presenter: Asumu Takikawa (ATA)
+
+- [proposal](https://github.com/tc39/proposal-type-annotations/)
+- [slides](https://docs.google.com/presentation/d/1OraKn6TUdmtug-lkgijq-43Xou_1lB1_DJ54x6ssnRY/edit)
+
+JRL: We do want to start by announcing how this is going to be moderated. Essentially, I imagine this is going to be rather contentious, and we’re going to have a lot of topics popping up. I’m going to be moderating the queue so that topics that are similar can get grouped together, so we can have each discussion quickly and move on to the next topic. Is anyone opposed to that happening?
+
+ATA: Thanks. Yeah, so my name is Asumu. I work at Igalia, and I am presenting with DRR and the other people listed below who have helped out with this presentation. We’re here to give an update on the proposal; we’re not asking for stage advancement. The primary topic will be addressing feedback that was brought up before. First I’ll go over the proposal’s context and history. The type annotations proposal got to Stage 1 in March of 2022. Just to recap, the motivation for the proposal is that static types are widely used in the JavaScript community these days, and most typed code can’t be directly run by web engines. We’d like to improve the ergonomics of using types by allowing engines to process typed JavaScript code. We can summarize this with the main goal of unifying JavaScript with typed JavaScript. This is another slide showing that static types are still popular: it’s from the State of JS survey a couple of months ago, which shows that static typing is still one of the top requested features. The last time this was brought to committee, there was an initial presentation focusing on erased semantics for the types, where the types would be ignored by engines, just parsed. We heard feedback that we should also investigate run-time type checking, where the types would have run-time effects. So that will be the main topic of the presentation after we give updates. I want to emphasize that we would like to focus the discussion on this topic, so if people can keep that in mind for the discussion, that would be great.
+
+ATA: So that’s the brief history and summary. We’ll talk a little bit about things we’ve been working on for the initial proposal. There is a tentative grammar for the type syntax included in the proposal repo, and you can see a link for that below. I’ve put a screenshot of it here – you’re not expected to be able to read it; it’s just to show that it’s a thing. The goal was to accommodate the existing type syntaxes in TypeScript, Flow, etc., while leaving room for forward compatibility.
+I just want to go through some examples to show a bit of this. Before the examples: recently we’ve also been working on refining this proposal, identifying how to extend the tentative grammar to cover the types in the type systems listed there. We’ve been putting together a syntax comparison table that we plan to add; that’s linked at the bottom. Some tentative conclusions we’ve reached are that the current grammar actually supports a lot of the constructs we looked at, but there are a few cases where we would need to expand the syntax, and I’ll give some examples.
+
+ATA: First, in terms of forward compatibility, the grammar is designed in a way that supports things that are not explicitly encoded in it. Wherever there are angle brackets, square brackets, or curly braces, the tentative grammar will ignore the tokens between the brackets. That lets you support Flow’s exact object types – the curly-brace-and-vertical-bar syntax shown there – or Flow’s read-only properties, the plus modifier on the key, or things like type parameter variance annotations, which is the bottom example. So that’s supported by the tentative grammar. Among the things that aren’t yet supported, one is type application. You need type application for instantiating generic terms, and it turns out the syntax is exactly the same in Hegel – the `f` and the angle brackets. That’s probably not supportable in JavaScript as-is, because of how the less-than operator is parsed, so there’s probably a need for a new kind of operator, maybe a double colon with angle brackets. This is a change that is very likely needed, and would be needed for a lot of these existing tools. And then there are some examples where we would need more discussion to figure out whether we could fit them into the syntax or not: prefix types, or Flow’s cast syntax, which is a little more complicated – it has parentheses and a colon operator. There are also recent operators in TypeScript that need more discussion.
+
+ATA: So the bottom line here is that we’re evolving the syntax based on the needs of the multiple type systems we’re looking at. We want to ease migration to whatever system we come up with, and the overall goal is in service of trying to unify JavaScript. These are the early stages, and the focus of this presentation is going to be mostly on the semantics of the types. That’s the bulk of this talk.
+
+ATA: So, on to the semantics. There are two paths to adding types to JavaScript in terms of what the types mean. One is what the initial proposal we brought last year proposes: types are erased, and engines don’t use them for anything. The alternative is that types are enforced at run time, interpreted as some kind of run-time check by the engine. This is what we heard feedback about last time – people wanted us to investigate whether this is something that could be done in this proposal. So let’s consider whether it is feasible or desirable. And just to be clear, I’m going to spoil the ending here: we, the people who are working on this proposal, don’t consider it feasible, but we want to explain our reasoning in detail and why we think that’s the case. Let me propose a rubric for evaluating whether run-time checks would match the broader goal of unifying JavaScript. We can break that into sub-items. We want this to be easily migratable for the ecosystem: many people already use types, as mentioned earlier, so we want them to be able to migrate to whatever system this becomes.
We want this to be extensible to support the different systems people use. And this should also be practical to use. These all fall into the "unifying JS" goal.

ATA: And the big question is: are these goals in conflict with doing run-time type checking as part of the annotations proposal? We argue they may be. First, run-time check insertion may be surprising to existing users. As to whether this supports different tools in the ecosystem, having these run-time checks will require us to fix on a particular type system semantics to use to create these checks. Finally, there is the question of whether it’s practical: it turns out these checks can have high overheads in many cases. I think the examples will mostly focus on the last bullet, how practical it is given the performance overhead. So that will be the topic: the challenges of run-time type checking.

ATA: So before I go into examples, let’s set some assumptions on how these run-time checks would work, so we have a shared understanding of how the examples work. The first assumption is that the compiler would insert run-time checks. The second assumption is one that maybe not everyone would agree with, but I have a slide about that later in the presentation: let’s assume that the inserted checks will do their best to uphold the guarantees. I’m not going to talk about soundness for the rest of this talk, for clarity, but that’s what it corresponds to if you care about that.

ATA: Here is a simple example. Let’s say we have a function `F` and we annotate the parameter `x` as having type number and `y` as having type string. What would happen in this run-time check scenario is that the compiler would insert a check for the number type on the `x` parameter, so when you call `F` it will do this type check. In this case it will produce an error because `"foo"` is not a number. For these first-order types, the checks are very straightforward; there’s nothing complicated going on. But it’s well known that, in general, run-time type checks can introduce performance penalties and require better compilers and language design in order to accommodate them in a practical way. There’s a bunch of literature on this. One of the most relevant things is "Safe TypeScript", which designs such a system for TypeScript, and they did see performance overheads on examples; I can provide more details. There’s also a bunch of academic work on run-time checks for this kind of language showing that performance overheads can be quite high and depend on the details of how your system works, but it’s especially hard for structural types, e.g., function types, object types, etc. One question that comes up a lot in this context is: why can’t we use these types to make the code faster via optimizations that would somehow offset whatever performance penalties we might get from run-time checks? It’s unclear if engines can actually take advantage of these TypeScript or Flow-style checks and do optimizations that would compensate for the checks. We definitely welcome implementer feedback on this topic and generally on the topic of performance overheads.

ATA: Let me go into examples of why type checks can be difficult. Here’s a function `F`. This time, instead of taking number parameters, it takes a parameter `O` with an object type that has a field `x` of type number. And so the compiler needs to insert some kind of check for the structural type.
So you might think, well, we can just put in a check that whatever object is passed for `O` has a field `x` that holds a number, sure. But the problem is that it’s more complicated than that. Imagine if `F` in turn calls `G` and passes `O` to it, and that function tries to mutate it. The initial check that you do on function entry is no longer valid, and so you actually could still get a type error. So with this kind of thing you need to do checks in a lot more places; you need to rework the program in general to insert checks on writes or reads, or some combination of those, in order to make sure that mutation, for example, can’t go unchecked in a way that will violate the type invariants in your program.

ATA: And so one thing you might say is “I don’t care about mutation”; you just want that simple check that it’s an object with this field, and that’s fine. Well, that doesn’t work when it comes to things like function types. Here function `F` takes a function parameter `G` instead, and this has a function type that’s number → number. The issue here is that this type can’t be trivially checked on entry to `F`, unlike the previous example where you could at least check some properties of the object. You can’t check anything here on entry to `F`. You really do need to do some rewriting of the program in order to insert checks to verify that `G` will return, you know, the right type. And there are other implementation strategies you could take. Perhaps you do some tagging scheme to ensure that functions are tagged with the right type, as Safe TypeScript did. So you might be able to do some kind of check. But in general there are more run-time dynamics that you need to introduce.

ATA: It continues to get more complicated once you introduce more advanced features, whether it’s control-flow features and so on. So going back to the rubric that I was talking about, these examples show that there are a bunch of challenges with this kind of approach. First of all, you saw in all these examples that we were translating the run-time checks from the types, and the types have a specific meaning that we have in mind. It’s unclear how we could allow flexibility for different meanings of types to exist. For example, if you have Flow versus TypeScript, maybe there are types that have the same syntax but different meanings. It’s unclear how you could cope with that. The other issue is that the types people use in practice have high checking overheads, and it’s a significant amount of work for implementers to make this practical.

ATA: So one thing that you might push back on: you might say that, well, a lot of this problem comes from one of the assumptions that we’ve made, that these checks will do their best to uphold the guarantees of a static type checker.
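
To make the mutation example above concrete, here is a minimal sketch in plain JavaScript of a naive compiler-inserted entry check and how mutation defeats it. The function names and error messages are illustrative, not from the proposal:

```js
// Naive desugaring of `function f(o: { x: number })`:
// check the structural type once, on entry.
function f(o) {
  if (typeof o.x !== "number") {
    throw new TypeError("o.x must be a number");
  }
  g(o); // g is free to mutate o after the entry check...
  return o.x + 1; // ...so this may still compute with a non-number
}

function g(o) {
  o.x = "not a number"; // invalidates the invariant f checked
}

f({ x: 1 }); // passes the entry check, yet returns "not a number1"
```
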

ATA: So I think no matter which path you take on this assumption, either you’ll run into these performance issues, and that’s the reason for not wanting to do it in the type annotations proposal, or we should do it in another kind of proposal that’s not type annotations. I’ll talk more about the alternative for run-time checking, for these simple checks that don’t try to uphold all the type guarantees that I was talking about.

ATA: The key question here is: why do we want to couple run-time checks with the type annotations proposal? Is that the best way forward? So stepping back a bit, let’s think about why developers would want these run-time checks anyway. We want convenient syntax for writing down checked, type-like program behavior. But this doesn’t have to be the exact same syntax as static types, because static types already have an established meaning in systems like TypeScript or Flow, and developers already depend on these meanings and care about them. It could be really confusing if you wrote down things that are similar to TypeScript or Flow syntax but got checks that have nothing to do with what those tools enforce.

ATA: Instead, we think it would be more attractive to have extensions of existing proposals that could provide syntax that is pretty close to what you’d want from run-time checking. I have a concrete example here of a lightweight syntax for checks. Instead of writing down type annotations with colons, here we have what looks like a `Number` call – it’s not a call, but a `Number` pattern with a parameter `x` – and then similarly `String` for `y`. What these are, actually, are extractor objects. So this is basically what the extractors proposal and the pattern-matching proposal, both of which are at Stage 1, would together provide. And this is effectively doing that simple lightweight check that I was talking about. This inserts a check for `x` being a number, with no complex compilation involved.

ATA: And so I want to go through a bit of how the extractors proposal works and how it provides a protocol for doing this. Pattern matching specifies a custom matcher API and then defines a bunch of custom matchers for all the primitive classes. In particular, the `Number` primitive class gets a new property with a symbol matcher key that the proposal defines, which is a special custom matcher that matches and destructures numbers. These custom matchers are used by both extractors and pattern matching to destructure values. So if you have a custom matcher, you can use it with the extractors proposal’s special binding forms to match and destructure values. In this first example you have `const Number(name)` equals some expression. This is using an extractor object, that’s the `Number` thing, to first match on whatever result is produced by this number-producing expression, check that it’s a number, and then bind it to the bound name. This is using it in a variable declaration, but you can also use it in function parameter position. You can also define your own extractor objects using this custom matcher protocol to implement arbitrary type-like checks. You can define this `const MyType` object with its matcher defined in the "..." in the code sample, and then you can use this `MyType` name in function parameter position to basically act like a type.

ATA: And so the whole point here is that there are a bunch of advantages to this kind of approach. It lets developers express type-like checks without being constrained by the meaning of static types.
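
As a rough illustration of the kind of lightweight check being described, here is what an extractor pattern in parameter position might desugar to in today’s JavaScript. The extractor syntax in the comment is from the Stage 1 proposals and may change; the desugaring below is an assumption for illustration only:

```js
// Hypothetical extractor syntax (Stage 1, subject to change):
//   function f(Number(x), String(y)) { return y.repeat(x); }
//
// A plausible desugaring into today's JavaScript: a simple,
// local check on entry, with no whole-program rewriting.
function f(xArg, yArg) {
  if (typeof xArg !== "number") throw new TypeError("expected a number");
  if (typeof yArg !== "string") throw new TypeError("expected a string");
  const x = xArg;
  const y = yArg;
  return y.repeat(x);
}

f(2, "ab");   // "abab"
f("2", "ab"); // TypeError: expected a number
```
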
ATA: It also leverages existing proposals that are already in the pipeline, despite their being at Stage 1.

ATA: And this is just one example of how to build on existing proposals. The key point is that it’s possible, and that it has these other advantages, like not being explicitly tied to static types in a confusing way. It may also make sense in the future to bring in other proposals such as decorators for more complicated things, for example if you want to annotate function returns, or you want to specify more complex types or different combinations of types.

ATA: And so the big takeaway from this section is that we really think run-time type checking can be done in a better way with a different syntax, without coupling it to the type annotations proposal, and that that would be the least confusing thing for everyone.

ATA: So let me summarize the discussion, sorry, the presentation so far. The big takeaways are that we, the people who are working on the Type Annotations proposal, have taken this committee feedback and investigated the advantages of erased types versus run-time types, and especially looked into the feasibility of doing run-time type checking. In our opinion it’s really hard to meet the proposal’s goals to unify JavaScript with its typed variants while also having run-time types. And we think that run-time type checking can be done better, and in a more intuitive way, by a combination of other proposals. We are also working on iterating on the syntax, and we had a very quick presentation on that in this talk; we plan to present more details on that in future presentations. I’d be happy to take feedback or Q&A at this point and would especially appreciate discussion on the erased versus run-time types portion of it.

MAH: I think I’m first on the queue. So I also believe that run-time type checks are better served by other proposals. I think decorators are a good fit for most cases. However, I don’t think it’s an either/or answer here. One possibility is that the typing information that was added there could be made available, for example, to decorators, which could decide to use that information to do run-time enforcement.

DRR: Well, I see we have a reply. I’d like to let DE make his point eventually because it’s part of that, but I will point out that we’ve been in touch with the CPython implementers, and one of the regrets they had was that by default types were reified all the time, in that every single time you need to reference a type, that actually has to be a run-time operation of some sort. It has to be in memory. It has to be retrievable. So there’s a bit of caution there. If you do that by default, you can’t put the genie back in the bottle, which is the situation they found themselves in in Python: they tried to say we’ll do this on demand, or you had to opt into it, and they had to roll that back. So now you have to opt out when you don’t want that, or you have this `from __future__` import or something like that. So there’s a little bit of caution. I’m not saying that there’s not something that we could do down the line.
But I would say that primarily what we’re trying to do with type annotations is have them erased, have them not contribute to the runtime behavior, and find other capabilities where maybe some other solution like that could reflect – the wrong word – could have a symmetry between what you would write in type space and what you would write in value space, run-time space.

DE: There are several things here. Briefly, to make concrete what DRR was saying: a minifier should strip away all the types, but if they’re reflectable, it can’t, unless it does some complicated analysis. Also, this is an important question that MAH is raising, because the TypeScript mode to emit decorator metadata, which emits types as reflected metadata and such on classes, is a very popular flag. So even though it introduces efficiency issues, it’s a clear demand from developers. But it’s really unclear what they should emit. If we wanted to keep this as a language thing, which has to be, like, universal, we can’t make any decisions that might be kind of ad hoc. I don’t want to be too critical here because it’s a useful feature. But the only thing I could think of that could be there is the string of what the type is. How do you resolve any variables? Type systems use a different namespace. If we could think of a way to do this feature, I would like it. But those are my concerns.

DRR: Right. And quickly adding to that, you may think that you’re just trying to get one type, and yet the underlying types that that thing has to be reified as are this whole monstrosity that you don’t realize you’re bringing into the runtime until you see what bundle you end up producing. That’s a tough tradeoff that people don’t realize they’re making.

SYG: Hi. So what did I say? Oh, yes. The presentation may have suggested that perhaps implementations would be okay with simple non-structural type checks. I think the bar is much, much higher than that. We have concerns with the type erasure proposal just from the parsing cost of the things to parse and then throw away. It could be structured; it’s not just a line comment or something. We have performance concerns about just that. Like, we are not okay with any kind of run-time type checking, even if it’s simple. That’s all.

DLM: +1 to what SYG said; we also share concerns about the performance of parsing erased types, and I don’t think we’d ever support a proposal for run-time checking.

MLS: Yeah. The only thing I would add is that even with true run-time type checking, I imagine that all the browser-based engines at least – I don’t know about XS – have a typing system in our higher tiers. That’s how we get the performance: we know this is a number or a string, or it could be a string or a number, and so on. This provides us no benefit even if we do type checking. It just adds overhead. And what’s the semantics if a type check fails? What do you do?

JRL: Okay. Unless there’s anything else on this topic, next up is MM.

MM: Okay, yes. So I was really struck by the idea of using extractors to express dynamic type-like checking. I know that I was very negative when extractors were first proposed, but they’ve grown in appeal in my head since then. In general, one of the criteria for anything that extends the language is that it should have very wide reach; it should solve a very wide range of problems.
And this use of extractors in this mode suggests that if extractors would generally enable sound static inference, because the things that you’re inferring start with something that was checked, then a language with integrated extractors designed to enable good, sound static inference might be one that’s just better than a language with type annotations.

DE: I don’t see why this is an either/or thing. Extractors can definitely check, at the moment they’re executed, that their argument has that type. That would be sound. If you mean soundness in a more nonlocal way –

MM: In a more nonlocal way: that the bindings are of the known type with which they were extracted, and that you know that if execution continued on the success path, the input did pass the extractor check.

DE: Making a mutable binding which has checks when it’s written to is a much narrower thing than I think people often want with type system soundness checks. You want to check tricky nonlocal properties and structural properties like the ones that ATA explained. I think it would be a way to do a check at a particular moment in time. I think that would be useful, especially at API boundaries, the kinds of places where you want to use private fields and methods. This would be analogously useful to make a high-integrity boundary; they play well with each other. But I think this serves a different need from type systems.

DE: So if you had the use of an extractor, I think it would be possible for type systems to easily infer: okay, you mean this argument is this type. When YK and I were thinking about this area, about guards, a few years ago, he made an example program that used TypeScript’s way of asserting types to build a guard system that executed at run time and that TypeScript was able to check. I don’t see any reason why this wouldn’t map over to extractors.

DRR: And to add on to that point, they’re really complementary. There’s a feature where people often ask us to be able to more specifically type the variable of a catch clause. It’s a very common complaint. Most people want to be able to say that the thing that you’re catching is always an error. That’s not in practice always enforceable. So often you would want to be able to say: yeah, I really don’t know what this is, but provided that it passes this check, now do something there. So an extractor would be able to validate: yes, I have an error; yes, I have another object. Or there are cases where you sort of seek a blend, where your public API really wants to enforce those run-time checks, but internally, for all intents and purposes, you consider yourself sufficiently consistent.

DRR: And so they are really nice in that way, and you can actually build tooling to say: yeah, my public API always needs to do those sorts of run-time checks, you really are committed to that as well, but you have the option and they can compose.

DE: Mark, does this answer your question?

MM: I’m not sure. It’s all intriguing. I see Ron is on the reply thread here. I do want to invite RBN to take this as an opportunity to revise the extractor proposal. Like I said, it’s been growing in my head on other grounds anyway. If it can elegantly solve some of these problems, that would be great.

JRL: Let’s go through the queue items here. We have a couple of replies about extractors. MAH, if you’ve not spoken yet.

MAH: It’s more of a clarification, because I’m not familiar enough with extractors, but would that approach work with bindings that are reassignable?

RBN: The answer is no. That gets to the comment I was going to make about extractor limitations. I think extractors are a very interesting potential avenue for run-time type checking at the parameter level, where you’re only having to check the inputs once, at the transition from the call site to the callee, but they don’t perform any kind of ongoing action after the point where you’ve applied them. They are essentially just a form of destructuring. So they can provide a run-time check at those boundaries, but they don’t have any impact on reassignable bindings. You might say that `x` is a string, but then you can assign it a number in the body, because that’s all unchecked.

RBN: In those cases, a type system might be able to infer the type of the parameter, to say: you used this thing, and the only valid input will be a string – and I also apologize if a dog is making noise here. But we wouldn’t be able to use that at run time to perform any checks in the body. Extractors also have limitations in that they don’t provide any visible or externally reflectable information about those parameters, and you can’t use them on class fields. For one, there’s a conflict with method declarations because they would look the same. And separate from that, again, it’s a kind of destructuring, so even if they were feasible there, all you would be doing is checking the initial assignment, but, again, not if you mutate that property later on. Also, about your point about decorators solving those: decorators are interesting for providing rich metadata at run time that could potentially be used for run-time type checking, but even a decorator on a field can’t do a run-time check on mutable assignments; there’s no potential for that unless you mark it as an `accessor` field, which allows you to represent reads and writes. So there are limitations on what you can do with decorators, but they do give you the ability to provide those type checks at a boundary.

MM: That’s definitely clarifying. Thank you all for the clarifications.

ATA: I was going to say that most of what I was going to say was covered by DRR and DE.

EAO: Okay. So the sense I get overall of this whole proposal is that it’s more of a solution looking for a problem. A year ago, when this got accepted for Stage 1, the problem statement that was considered was only effectively made up, or made concrete, during the meeting itself. And what it looks like now is that since then, that problem statement has evolved into this current form of unifying or unforking JavaScript, with type annotations somehow then presented as a way of achieving this result. However, I’ve not been able to find any conversation anywhere, or any description, of how in practice this unification is supposed to happen as a consequence of accepting type annotations.

EAO: Now, I filed an issue about this specifically, but I think in general I’d like to give a little bit of feedback here, in that I would presume that we are supposed to actually solve the problem that has been identified, rather than looking at the technical specifications of one particular possible solution for it.

DE: So I have to say I have a little bit of trouble taking it seriously when people say, do you know what problem you’re solving? You can say you haven’t proven your motivation. But this came up with Records and Tuples as well. Anyway.
The way that things are explained sometimes evolves over time. Anyway. For this unification goal, which I personally consider very important and have considered important through the whole life of the proposal: we’re almost at the point of having all the language features that people have been using in the tooling space in JavaScript, and this has been kind of a unifying theme of what this committee has been doing since ES6. It was unifying JavaScript by adding built-in classes and subsequently built-in fields and these sorts of things. We’re filling in the gaps where the ecosystem has been solving language problems, things that are logically language problems, with language extensions. Type syntax that doesn’t have to be in a comment block is one of these popular language extensions. It would unify the ecosystem to have a built-in form of grammar for this. You can see similar dynamics taking place in the CSS working group: things like variables and nesting that CSS is adding solve deep problems that have been solved by tools. Although tools are not going away, neither in CSS nor in JavaScript, I think having it built into JavaScript gives a unified base for everyone to work off of. I think in a language it’s useful in many cases to have this common base.

DRR: I very much want people to take to heart what DE just said, and I want to frame this idea of where this proposal is going over time. I want everyone to take the five-year view, or the ten-year view even: where is JavaScript going to be in ten years? There are several extremely popular formats for type annotations around JavaScript. People have found them extremely useful. I’d say almost everybody here, maybe in some pocket of code, has used TypeScript in some flavor. Maybe not everybody, but the majority of the committee at this point, I would say. So there’s this clear utility, and there’s a question of: can we make the use of that less friction-y, can we make it easier for people to use, can we make it this low-configuration thing, at the very least? And part of answering that is actually discussing it here in committee over time, trying to understand, trying to come up with a shared vision. Can we find agreement on those things? Over time we might have to come to different shared understandings from what we originally came to. These ideas took a while, in, you know, little laboratories, to try to see if people actually liked the ideas and to see what worked and what didn’t work. Some of the features are not used very much anymore and could probably be dropped a little bit. And some of them are extremely useful and are shared between different type systems and things like that too. How much can we explore here? So today we came with an update to ask: can we even agree on no run-time type behavior, right? No run-time checking. Because if we can’t come to a shared understanding there, it’s very hard to go somewhere in the future. But it seems like we’re not seeing a ton of pushback on that point. Maybe there are other directions that we can explore. Maybe we find other places where the language can provide the same facilities people are looking for. But even by moving to Stage 1 here, we had an avenue to discuss that, and that was the biggest thing that we really were seeking last year. I think this conversation has been extremely helpful in building some shared context.

DRR: So when we talk about unifying JavaScript: in ten years, is this going to be a viable thing, can we come to a shared understanding, can we have those conversations? And we will try to do that tastefully, because it will have to satisfy the criteria that we outlined in the presentation. It has to agree with the existing type systems, but it also has to agree with everybody in here. You have to build consensus and feel good about where we’re going with this. It has to be tasteful. If we end up in the XKCD situation with 14 standards, that makes it harder for people; that’s not a place where we want to be, where it feels bad and there’s more friction and the tooling is not helping with any of the problems at all. I don’t know if that answers your question, but I hope it gives some outline of how to frame the ideas here.

JRL: We have one minute remaining, if the champions want to ask for any consensus. I don’t think you’re going for advancement, but you may want to ask about the run-time question. JHD, I believe we briefly discussed your topic; if you want to say anything last before we wrap up.

JHD: Yeah. I mean, I think, to what you were saying, DRR: if you think there is a timeline, whether that’s one, five, ten years, whatever, at which TypeScript et cetera would no longer need to do any downleveling at all and would just be a type checking tool, I think that is an important vision to convince the room of. Because without that vision, the proposal, with or without run-time type checks, is a very large surface syntax change to the language. And if we never were to get to that point, then how much have we really unified?

DRR: You’re asking about if – I think downleveling behaviour can be viewed as independent of –

JHD: TypeScript or Babel or whatever. And I think this is somewhat different in that usually we ship small proposals that are independently motivated and that add up to large pictures as well. This is not a small proposal. It’s primarily motivated by the wide vision.

DRR: It is a long-term vision. I do not expect us to rush this in the next month, year, several years. We will have to build consensus.

JRL: We are at time now. I encourage you all to continue talking about this after the meeting today, but we’re still packed for the rest of the schedule. Do the champions want to ask for any consensus before we move on?

DE: The question was: does anybody have any concerns about, to the extent that this might happen, using the erased semantics rather than the run-time type semantics? This isn’t to express endorsement or strong resolution; that would be out of place for Stage 1, but it would be great to hear these briefly. Nobody wants to advocate for non-erased types?

JRL: I think beyond that there was strong pushback from browsers that runtime types are very difficult. Let’s wrap here. We can add anything else into the notes for the topic.

### Speaker's Summary of Key Points

- The type annotations proposal has continued to evolve on syntax, and a detailed presentation explained why the approach of the champion group is for the semantics to be based around type erasure, rather than runtime type checking.
- There were significant questions from the committee about the motivation for the proposal.
- The three browsers expressed that runtime type checking in type annotations would be unacceptable.
- No one advocated for semantics other than type erasure.

### Conclusion

The proposal remains at Stage 1; no consensus was requested of the committee.

## Type Annotations Delimiting Concerns

Presenter: Waldemar Horwat (WH)

- [slides](https://docs.google.com/presentation/d/1TLGdvGfOn2wl-_i_HfrfpgFkdffrhCnisowdkOiebB8)

WH: Since this proposal is mostly syntax, I wanted to raise some issues about syntax. Let’s start with an illustrative example. Here you have a function which does a bunch of things. Is this a valid function? What does this do? Well, at first glance it seems to be okay. Looking closer, there is this little syntax error, so you might think that this whole thing would be a syntax error. But in fact, with something like TypeScript-style type annotations being proposed, this can parse as something completely unexpected, and this has to do with “token soups”. At a high level, a token soup is something that starts with a delimiter, consumes a bunch of tokens, and ends with a matching delimiter. And this has rather surprising consequences in a program like this. If you’re looking carefully, you might spot a couple of other surprises, which I’ll discuss later. But this presentation is mostly about the consequences of having token soups in the parser. So, again, token soups just skip arbitrary sequences of tokens, only matching parentheses, square, and curly brackets.

WH: It’s unknown how lexing slash as division versus regular expression would be handled in a token soup, which is a problem. Token soups are used liberally in the proposal to skip past various kinds of type expressions. Here I outlined a bunch of them. All the ones in orange are token soups. And the question is: can you tell that something starts a token soup before the token soup starts? And that’s a crucial question. Without a correct answer to this question, the consequences are fatal. There are a couple of possible answers you can give.

WH: Option A is: it’s unambiguous to the parser whether it’s starting a token soup based on whatever precedes its opening delimiter. Option B is: the parser might not know whether something is a token soup until it sees what follows it.

WH: Let’s explore option B first. So here we have `… {a:b} => …`. Whether the `{a:b}` is a type token soup or something else depends on the `=>` after it. But what happens if you have a token soup which contains `{yield / 3}; a = "4/}`? Well, it depends on whether you treat the slash as a division symbol. The token soup ends in one place if you treat the slash as starting a regular expression; it ends someplace else if you treat the slash as a division operator. So you can’t reliably skip past the tokens. Now, you could say that you will always decide one way or the other, or ban slash inside token soups. All three of those have other fatal consequences. So those are not good options either.

WH: So let’s go with option A: it’s unambiguous that the parser sees a token soup from what precedes its opening delimiter. Here I already gave a counterexample to that at the beginning of the presentation where, because of a cover grammar, you cannot tell whether something is an async function with some type parameters or whether it’s a less-than operator. And, in fact, because it’s a cover grammar, the identifier doesn’t even need to be `async`. It happens with any identifier like `foo`.

WH: Here’s another example, `a: (type) => (foo())`. You could interpret the `(type)` as a token soup, because this could be an arrow function with a type followed by a function body. Or this could be a label `a:` followed by an arrow function with an argument. So here you can’t tell either.

WH: It gets worse.
The arrow is used both for arrow functions and for types of functions. So depending on whether there’s another arrow following this thing, this `(foo())` could be a function expression body or it could be a token soup. And there’s no possible cover grammar you could write that accepts both.

WH: So if we have anything similar to TypeScript syntax, then option A is also infeasible. So both options are ruled out. This seems like a fatal flaw.

WH: The other problem you have is you can’t embed constant arithmetic inside token soups because of the division problem. The problem also extends to `<`. Let’s say you want to have a constant expression whose value is `a < b ? a : b`. The token soup will misparse that because it will think that `<` is a delimiter, which it isn’t. These issues were filed a year ago. There has been no substantial activity in the last year.

WH: There’s a bit more. Since this so closely follows TypeScript syntax, it also introduces additional operators which are not reserved words. I don’t know how this works in TypeScript — I have not been able to find a TypeScript spec that’s recent. It’s possible that it has some conditionally reserved words in some contexts, but without a spec, I just can’t tell. One common context is `as` followed by parentheses, square brackets, or curlies, which are interpreted and skipped as token soups. Okay. So far so good. So let’s say we adopt this syntax. And then the proposal gets accepted and somebody writes `module as {…`. Now is that a token soup or is that a module body?

WH: So this will introduce a lot of conflicts with other future proposals, leading to a death of a thousand cuts. And the result is, if we try to adopt a syntax which is very close to TypeScript, we find that we just can’t. It’s like we’re trying to square a circle. So we must adopt something different. And if we adopt something different, then we’re introducing yet another incompatible type standard when there are already a bunch of perfectly good ones. And that’s the end of my presentation.

JRL: We have five minutes left. I screwed up the queue, so I don’t know how to fix it. But RBN is up first. The queue is already fixed.

RBN: So you had an example that shows a parsing ambiguity with `async` arrows, and as I understand it, that’s still an issue that’s being worked out. I would like to point out that you also had an example, this one here with `Foo` and an angle bracket, as a potential source of ambiguity. And I did want to point out that this was already addressed in the slides in the previous topic, in that we’ve been discussing something like Rust’s turbofish operator, the double colon angle bracket operator, for anything that would look like a type argument that’s passed to a call. So this, I think, is under discussion as not being valid syntax within the type annotations proposal, although I think it is true that we need to look at the previous example regarding `async`. That’s being looked at.

WH: I think you misunderstood the example. The example does not have type parameters of function calls. This example is just the async cover grammar `CoverCallExpressionAndAsyncArrowHead`.

RBN: I see. So the issue is that it’s using a member expression and then looking at type parameters. Thank you. That’s fair.

DRR: So I do appreciate the name for this, “token soup”. It is kind of funny.
So I think when we put together the grammar, the tentative one, the one that is really supposed to be a starter for ideas, it was not something we were necessarily committed to as-is. It was an idea. The biggest thing we were trying to do is accommodate the existing languages, basically make room for TypeScript syntax, Flow syntax, things like that. Over time, there’s a question of how much language stability there is across these checkers. And there’s definitely been this question: could we make that more concrete over time? And in some cases we could add things to TypeScript where you would need to use a more unambiguous syntax, like a double colon operator when you’re doing type arguments for a function call.

DRR: Now, I appreciate example 1. I mean, it’s not something that I’ve had a chance to give much thought to at this point, to be honest. Partially because, until we got a sense of what the appetite was on other things (for example, the run-time checking today), it’s a lot of work to resolve the grammatical issues if we didn’t see forward progress on other fundamentals. So I think that’s why you didn’t see much response there. We can start thinking about it a little more deeply and understand what we need to do going forward. I’m curious to get a sense of other people’s thoughts here though. Dan, you have an item in the queue.

DE: I really like the token soup concept. I understand DRR is willing to compromise on it, but I think it’s really important for the extensibility of the type system. If we tried to give productions for all the kinds of things we consider a reasonable set for the various type systems to have, then that would seriously limit extensibility. It’s helpful, WH, to have this set of grammar issues set out, and I think we can use this to iterate on the proposal. It’s hard for me to see it as a fatal flaw, definitely for the proposal overall; even for the specific concept of token soup, I don’t yet understand why this can’t be made to work out.

WH: Yeah. I tried to explain why this is a fatal flaw, and one of the problems is that a token soup cannot contain an arithmetic expression as a part of it. Which means you cannot ever have a type system which uses constant expressions.

DE: Yeah. Because this wasn’t put on the agenda beforehand, the slides, I haven’t really been able to look at this closely enough to understand. And I had trouble understanding it online.

WH: The basic issues were filed a year ago. There has been no motion. In this presentation I’m purely focusing on the grammar. I do have opinions, which I did not cover, on all the other issues from the earlier presentation about whether we should be doing this at all. Here I just wanted to give a short presentation, illustrating some of the grammar issues.

JRL: I think this is still Stage 1, so there’s time to work through this, and hopefully we’ll be able to solve the fatal flaws here in the grammar so this is acceptable. We are now one minute over. Shu, your topic is the only thing left. Do you want to quickly state it?

SYG: Sure. I also confess to not appreciating the full details of the grammar problem here.
But my takeaway from WH’s presentation was that at the end this has to have a divergence point: regardless of the fatality of the current thing that’s tentatively on the menu with token soup, because type application is a thing in TypeScript and because less-than is a thing in JavaScript, it seems like there has to be a divergence point, and RBN floated the idea of the turbofish operator. Because you necessarily have to diverge the syntax, you then have this N-plus-one standards problem. That can’t just be worked around. I would like to hear that more seriously addressed by the champions than just “TypeScript will try its darndest to get people to move to the new syntax”.

### Speaker's Summary of Key Points

- WH presented grammar concerns with the “token soup” approach of skipping bracketed type annotations: it is unclear how to lex `/` as division versus regular expression inside a token soup, whether something starts a token soup cannot reliably be determined from what precedes or follows it, constant expressions using `/` or `<` would misparse, and non-reserved operators such as `as` could conflict with future proposals.
- DRR acknowledged that these issues had not yet received much attention and that the champions can start thinking about them more deeply.
- DE found the token soup concept valuable for extensibility and did not yet see the issues as fatal; WH maintained that they are.
- SYG observed that a syntax divergence from TypeScript (such as a turbofish-style operator for type application) seems unavoidable, which raises the N-plus-one standards problem, and asked the champions to address it more seriously.

### Conclusion

The proposal remains at Stage 1; no consensus was requested of the committee.

## Await Dictionary for stage 1

Presenter: Ashley Claymore (ACE)

ACE: So this is a new proposal in that it’s Stage 0, but not new in that it has been around for a while. I recently took over ownership of it. I don’t believe it has been presented at a plenary before. The problem that this proposal is looking at, seeing if there is a problem here and whether we can do something about it, is the following. The code starts out great. It needs to get something asynchronously, so it gets it and awaits it. The code needs a second thing, so it gets it, awaits it. We’ve introduced a waterfall, because we could have done these things in parallel, but we didn’t. This doesn’t get merged: whoa whoa whoa, there’s a waterfall, do those two things in parallel. This is great. Then we need a third thing. The code is using `Promise.all`, so the person doesn’t introduce a waterfall; they keep expanding the existing fork-join that we’ve now started. Time passes, requirements change, more and more things come along. The application becomes more complex, and we end up in a situation where, for me personally, the code has now become harder to read. The complexity of reading this over time is worse than linear. One of the problems here is similar to a function call where you just have ordinal parameters: you have to make sure everything still lines up. So here, when I’m looking at the code, and if I’m looking at feature flags and want to be sure that a feature flag is coming from the right place, I have to count: one, two, three; one, two, three. Ideally I want to be able to control-click in my editor and jump to where that’s coming from. But I really can’t. I have to start counting, and this problem gets worse the harder it is to do that counting.

ACE: So the problem isn’t a technical one. It’s a human one of reading this code, being confident in this code, navigating this code. So what would you do today? Maybe someone that is confident with promises would rewrite it like this. So they would launch all of the tasks, holding onto the promises, and then await them. Or instead of awaiting them individually, await them as a `Promise.all`. I think this is fine. This is what I’ve done in this situation.
It can be a little bit annoying to rewrite code to switch over to this. I also find, for me personally, that it’s annoying to have to introduce so many more variables into scope. This becomes annoying in a larger function, when I start typing `session` and it doesn’t autocomplete `sessionP`. I really don’t want these promises to be in scope anymore. I effectively want them to be treated as something you shouldn’t have to think about at all. It’s not clear in the code that they are no longer used, and they are all sharing this one big bag of scope.

ACE: So a potential way that we could solve this is having an API like `Promise.all` that is a nominal version. So you pass in a bag, a named bag of promises, and it awaits each promise and gives you back a promise that resolves to an object that you can then destructure. This way, you’d use it just like you use `Promise.all`, but you don’t have to worry about keeping these things in the right order; it’s all based upon names. An alternative possible approach is focusing on something where you say: I want a promise from entries. And maybe there are other approaches; this is mainly asking for Stage 1, so I want to focus on the problem. I’m not saying these are the only two solutions, but these are the solutions that jump to mind. The motivation for the entries version is avoiding the can of worms of what it means to get all the things from an object: are we talking about all the enumerable properties, just the string-keyed properties, the prototype chain? That is a whole design space, and a change of what the user expects. The entries design sidesteps all those things, because it uses a more well-understood protocol, with the downside of being more verbose.

ACE: So what are people doing in the ecosystem? Bluebird had `Promise.props`, which you can also find on Sourcegraph. There are also npm libraries for this API, `combine-promises` and `p-props`, and together they have about 180,000 downloads a month. These are like the first of the two approaches. So there seem to be APIs like this that naturally emerge in the ecosystem. There’s also the waitbox proposal. I’m kind of only mentioning this to say I don’t think that proposal precludes this one, or vice versa. So if that proposal went ahead, then potentially it would also have this method added to it. One of the questions that comes up is: why look only at a nominal version of `Promise.all`? Is this actually adding a new dimension that all of the combinators would gain? I think the answer to this would be no, but again, maybe that’s something that would be looked at in Stage 1, if we went to Stage 1, just to be sure what’s better here: being consistent and adding more things, or going for a much slimmer API. I think it’s a natural question to ask. As I said, I think the answer is probably no, we don’t want lots and lots of new methods.

ACE: One other question is: can’t people just write this in userland? And, of course, yes, they can. Potentially this is maybe the shortest way of writing it; probably not the most performant, but probably the shortest in lines of code. It’s not very long. But even though it’s not a lot of code, I think you have to be very comfortable with promises to quickly write this out. If you were writing code and felt like you wanted this, didn’t want to use a library, just wanted to do it yourself, I’d be surprised if a lot of developers would immediately know how to write it.
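
For reference, a minimal userland version of the named `Promise.all` variant might look like the following sketch (the name `allProperties` is illustrative, not a proposed API):

```js
// Await a plain object of promises; resolve to an object of values.
async function allProperties(obj) {
  const entries = Object.entries(obj);
  const values = await Promise.all(entries.map(([, value]) => value));
  return Object.fromEntries(entries.map(([key], i) => [key, values[i]]));
}

// Usage: destructure by name instead of counting positions.
// const { session, user, featureFlags } = await allProperties({
//   session: getSession(),
//   user: getUser(),
//   featureFlags: getFeatureFlags(),
// });
```
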
ACE: I think a lot would, but the majority of people would not just spit this out. I think it requires a certain level of comfort with promises, and as I think we’ve discussed before, not all developers feel super comfortable with promises.

ACE: So that’s the presentation. It would be great to get people’s thoughts, feedback on whether this is a problem, and if it is, whether we should try to see if there’s a way we can solve it.

JRL: 12 minutes remaining, so I think we should have enough time. First up is SFC.

SFC: There are a lot of other great topics as well, so I’ll be quick. I definitely think it’s a problem, and I think we shouldn’t limit ourselves to just an API solution. I think we should explore a syntactical solution as well. There are a couple of other entries on the queue with ideas for that, so I’ll yield my time.

KG: I support this proposal in the shape that it’s in. And particularly the own-props form, not the entries form. The entries form is too awkward to use. I also want to speak to why I think this is worth doing. `Promise.all` is kind of different from most things in the language in that there are these sort of two lists that you are keeping in sync with each other: the bit you get out, which you’re destructuring 90% of the time, and the bit that you put in. Unlike with, say, `Array.prototype.map`, the values are not necessarily particularly related to each other. So it doesn’t really make sense as a list. The only reason that it’s a list is because that’s the structure that we have. The structure of this function is different from most functions, because in addition to putting things in, you’re getting things out that correspond to the things that you put in; these are like two collections of heterogeneous objects, and you have to keep these collections in sync. That’s a unique thing. As for exploring syntax: I think syntax is a thing we’re exploring in other proposals, and it is best explored in other proposals because it is sort of more cross-cutting. Since we already have `Promise.all`, even if we did have syntax for this, I would want to have a library form for it as well. So I think it makes sense to pursue the library form of it prior to pursuing any additional syntax, which can happen later.

SYG: This may be just my naivety as a non-JS programmer who doesn’t program that much JS, but when I was reading the proposal on the slides: is the basic problem that positional things are hard to keep track of? Like, if so, why is it worth solving here for `Promise.all`? Is it because `Promise.all` is special in the way KG said? Why aren’t positional function parameters super confusing?

ACE: So I think, yes, you’re right that I’m positing that the primary problem is the tracking of all the things. Personally I feel like in some ways it is a solved problem for JavaScript, in that we have a solution, which is: instead of having a function that takes many arguments, it takes one argument, which is an object that’s destructured. But to use that, you have to be the author of the function; it’s a breaking change to just switch to that. So I guess we’re in a position to write that function into the language, and we’re in a position where we can make a named function argument using the kind of existing pattern that the language allows. And, yes, as to whether it’s solving it specifically for `Promise.all`: I think it is an API pattern that’s popular
where it’s collection in, collection out, where the person who is calling it cares way more that it’s named than the person receiving it.

JHD: So I think my response also builds on what KG is saying. The items in the list of a function’s arguments usually are related to each other; they have semantics that attach to each other and they mean something. But while you can pass a list of similarly related things to `Promise.all`, often one is passing unrelated things that one simply wants to parallelize and not have block each other. And at that point, yes, it is hard to count them, because there’s no inherent meaning in the code. It’s literally just: I want to `await` all these things together and then get them back in variable names, and it’s just hard to keep track of that. I’ve got lots of places in code where I have these two lists that I keep in sync, and it’s just annoying boilerplate that’s easy to get wrong.

KG: And there’s no inherent order. If you switch two arguments to a function, that’s a different thing. If you switch the order of these things, they are ordered because you have to put them in in some particular way, but they’re not –

JHD: `Promise.all` is ordered in the sense that it rejects with the first one that fails, but usually when I’m doing this, I expect them all to succeed. If I cared about the order, that would be different.

SFC: I have two replies. One is that when I am wearing my hat as a JS user or programmer, I find, when I’m writing complicated code, that I use a library that gives me the ability to name promises or name callbacks. I find that to be very important when organizing my code. Being able to do this natively would finally be the last thing I need before I can drop my library and use native JS promises to do everything I need.

SFC: My other topic is regarding arguments. There is some precedent already in Intl for using an options bag, the analogue of your named promises here, and it makes code much more readable. I don’t need to go on more about that, but yeah. Not everything needs to be positional, and we already have precedent that when things get unwieldy positionally, we turn them into an object.

HAX: I support Stage 1, because I think it’s worth exploring the problem. But I hope we don’t limit it to await dictionary as the only solution. For example, Swift also has async/await but does not use promises. It has the underlying mechanism TaskGroup, which has a similar API to promises, but most of the time developers do not need to use that. It has a syntax like `async let session = getSession(), user = getUser();`. It can be used to solve most of the use cases that `Promise.all` or this proposal are trying to solve. I don’t mean the Swift solution is better, but I think we could explore different solutions for this problem. Thank you.

WH: This seems like a solution to a subset of what I think the problem is. I view the problem as having a dynamic dataflow graph, wherein these things can be doing an operation and then feeding the results to the next operation and so on. So I would hate for us to spec this library and then have to do yet another library to solve the dataflow graph problem. As part of exploration I’d like to see if we can do a more generic solution.

ACE: Off the top of your head, are there any other languages that could be inspiration for me to look at?

WH: Dataflow languages. And there are languages which do that by default, Haskell and such. But that’s inherent in the language.

JFI: I had a related comment, if my microphone is working. Yeah, I’ve noticed this in other places. I think DE had expressed interest in whether there is something around signals that the language could look into: the ability for one task to depend on the result of another, and whether that breaks down. So with Waldemar’s question of whether there is a dataflow library for promises that might go in parallel or in sequence: does that relate to signals at all? Nothing concrete. Just pointing that out.

DLM: So when we discussed this within our team this week, we were all skeptical about the motivation behind it. I think Shu’s point around that resolved it. I see this as focusing on the API first and then looking at syntax as a separate portion of that.

JRL: Okay. We are right at time now. Would you like to ask for consensus?

ACE: I would like to ask for consensus for Stage 1 to explore this. Yeah. My preference is to explore this as an API.

JRL: Do we have consensus – I think we’re at Stage 1, so we’re not tied to a particular solution yet. But consensus to explore this at least as an API solution. Explicit support, please?

KG: I explicitly support this.

RBN: I also support this.

JRL: Okay. And does anyone object?

MM: I do not object, but I want to – I do support it as a Stage 1 investigation, but I want to register my reluctance. I think that the feature in any of its forms doesn’t pay for itself; the inconvenience that it’s overcoming, once you pull the launching off into separate variable declarations, is minor, and it’s a specialized solution that then gets multiplied by four once you take into account the orthogonality with the `Promise` combinators.

KG: Only two. `race` doesn’t make sense here. It’s only the things that return lists; that’s `all` and `allSettled`.

MM: Okay. That helps.

ACE: Thanks, Mark, your reluctance is registered and won’t be ignored during Stage 1.

JRL: I think you have several explicit supports and no one objecting. I think you have Stage 1.

ACE: Thank you very much.

### Speaker's Summary of Key Points

- The ordinal aspect of `Promise.all` can make the code harder to follow
- An API that allows a named variant could potentially still encourage parallel data requests while remaining readable
- There were some suggestions to also explore a syntactic solution; however, there were also points raised that a syntax-based solution would not be favourable
- It was raised that there could be a wider space that should be explored, citing dataflow languages and ‘signals’
- MM registered reluctance as to whether this problem justifies new additions to the language; the existing solutions seem reasonable
+ +### Conclusion + +- Stage 1 +- Explicit support from RBN, KG, CDA on behalf of IBM diff --git a/meetings/2023-03/mar-23.md b/meetings/2023-03/mar-23.md new file mode 100644 index 00000000..c240ee23 --- /dev/null +++ b/meetings/2023-03/mar-23.md @@ -0,0 +1,1102 @@ +# 23 March, 2023 Meeting Notes + +--- + +**Remote and in person attendees:** + +| Name | Abbreviation | Organization | | ---------------- | ------------ | ------------------ | | Chris de Almeida | CDA | IBM | | Istvan Sebestyen | IS | Ecma International | | Waldemar Horwat | WH | Google | | Daniel Minor | DLM | Mozilla | | Yulia Startsev | YSV | Mozilla | | Ashley Claymore | ACE | Bloomberg | | Daniel Ehrenberg | DE | Bloomberg | | Nicolò Ribaudo | NRO | Igalia | | Richard Gibson | RGN | Agoric | | Sergey Rubanov | SRV | Invited Expert | | Jordan Harband | JHD | Invited Expert | | Jesse Alama | JMN | Igalia | | Justin Ridgewell | JRL | Vercel | | Linus Groh | LGH | SerenityOS | | Mark Cohen | MPC | Netflix | | Michael Saboff | MLS | Apple | | Chengzhong Wu | CZW | Alibaba | | Justin Grant | JGT | Invited Expert | | Philip Chimento | PFC | Igalia | + +## Shared Structs update + +Presenter: Shu-yu Guo (SYG) + +- [proposal](https://github.com/tc39/proposal-structs) +- [slides](https://docs.google.com/presentation/d/1Qhrn6w3hcD4_uD9ebKfz4Fd_tJqQyPOI-IlH8keWIfQ/) + +SYG: So this is an update on the structs and shared structs proposal. Before I get into the main thing, a quick overview of the talk: it’s more of an experience report on the work we have been doing in V8 to see the feasibility of implementing shared memory multi-threading, some of the interesting challenges we came upon, and how we solved them. This is not going to be a talk about the design space; my intention here is really just to give an experience report on the feasibility study. The plan is to come back later in the year and talk about the design. If you’re not really interested in the implementation space, sit back, relax, and enjoy a quick tour of some of the interesting inner workings of JS engines. If you are another implementer, this might be interesting to you to see whether some of the challenges are shared or unique to V8; I would be interested to hear afterwards. With that, let me start with a quick recap of what my mission is with this proposal. The overall mission, of which this proposal is one part, is to raise the performance ceiling of advanced JS web applications by giving them access to shared memory and improved concurrency primitives. The operative word is _advanced_ JS web applications. The current concurrency mechanisms, single threading, communicating message loops, serializing and deserializing, are all fine for applications that don’t need to squeeze every bit of performance out of the machine. But there are a handful of web applications, like office suites from Google and Microsoft, that, while few in number, have users in the billions, and they do want to squeeze out every bit of performance possible. So why can’t we squeeze out every bit of performance possible today? The main reason is that boundary costs really kill you. And by boundary costs, I mean the costs of serialization and deserialization across worker boundaries for JS values. 
And here, multiple previous investigations and failed attempts by advanced web applications to utilize workers show that the boundary costs come from not being able to share state: you have to copy. And this cost really kills you because it makes it hard to scale your application. Maybe you can do some tricks with one worker, but you can’t really go beyond that, because for every boundary crossing, between workers or between a worker and the main thread, you have to pay the boundary cost. You might wonder why SharedArrayBuffer doesn’t solve the problem. With SharedArrayBuffer you need to choose your own object and primitive layout. So while you don’t have to copy, you still have to serialize and deserialize, which is equivalent to reintroducing high boundary costs. It is hard to maintain, and it is really only a solution if you’re compiling from a language whose toolchain takes care of object layout for you, like C++ compiled to WebAssembly; there SharedArrayBuffer is fine. For web applications, ArrayBuffers don’t help you with general programming. They help you if your data is byte-buffer oriented; then of course they do exactly what they were designed to do. This proposal is about fixed-layout objects, which I’m calling structs. Structs are objects with a fixed layout at construction, i.e., closed objects rather than the open objects JS has today, more like Java. That is the right ergonomic choice for some objects, and it happens that for shared memory, and possibly for performance, there is a use case for fixed-layout objects. It’s attractive for a bunch of things: shared memory, which is what motivates me primarily, but also WASM interop, FFI, compact memory layout, predictable performance, and it would be helpful for value types like Complex. It gets more speculative lower down the list. This update focuses on the implementation experience around question number one, which was a big open question: when I proposed this, it was possible that it was simply not feasible to do in engines, in which case we should just give up before getting too far. So number 1: is it even feasible in production engines, both for shared structs and for sharing strings? Number 2: even if it is feasible, is it worth it for the performance? If, once we implement something shareable, the hoops we have to jump through to share things kill the performance, there is no point; the only reason we’re doing any of this is performance. Number 3: after we do that, can developers leverage it if we build it? And of course we need to iterate on and discuss the design; as I said earlier, the plan is to save that for a later agenda item in the year. So this update is about the first three things, about what we have learned prototyping shared memory in V8, around architecture and performance. The table of contents: I will go through implementability as a case study of V8, covering architectural challenges, challenges in the runtime, strings, and exposing synchronization primitives to JavaScript, which I think is interesting; then look at a quick synthetic benchmark that is the start of the performance investigation; then I’ll go through some feedback from partners we have already engaged and outline next steps. Implementability. To start with, this is exploring the implementability of the things that we prototyped in V8 and Chrome. This includes shared structs, which have a null prototype. Shared fixed-length arrays are like JS arrays but shared; they also have null for the prototype and a fixed length, and they can’t be resized, because they’re shared. 
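+
+As a rough illustration of what was prototyped, the experimental V8 surface looks along these lines (the names here are from the prototype and may change or disappear):
+
+```js
+// Declare a fixed layout up front; instances are sealed.
+const Point = new SharedStructType(['x', 'y']);
+
+const p = new Point();
+p.x = 1;      // fields are mutable and visible across threads...
+p.y = 2;
+// p.z = 3;   // ...but the layout is fixed: new fields cannot be added
+
+const arr = new SharedArray(16);  // fixed length, null prototype
+
+// Zero-copy: a worker receives the same objects, not a copy
+// (assumes `worker` is an existing Worker).
+worker.postMessage({ p, arr });
+```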
And the JS-level synchronization primitives that were prototyped are mutexes and condition variables. For architectural challenges in V8, the two big ones are isolates and the garbage collector. The question is how you share pointers in the JS engine. Because JS is very single threaded, and has been single threaded from the beginning, it’s very reasonable that all VMs have been engineered with single-threaded semantics in mind: they have isolated units of execution with separate GC heaps. I imagine that even in engines where this is not a built-in assumption, a separate unit of execution designed to run a single thread does exist. The core problem is basically that sharing things makes isolates not so isolated. In particular, V8 has a notion called "pointer cages". We do this for two reasons: for heap size optimization via pointer compression, and for security. What pointer compression is: on 64-bit machines (there’s an ASCII representation of a pointer on the slides), divide the pointer into two halves; the upper bits act as a base and the lower 32 bits as an offset. Each isolated unit of execution is given a 4-gigabyte GC cage within which all of its heap pointers must be kept. This is the JavaScript GC heap, not the malloc heap; it is what the garbage collector manages. If you do this and give each unit a separate 4-gigabyte cage, you can chop off the top half: you know every pointer has the same upper 32 bits, so you only need to keep the lower 32 bits around. This is a good memory optimization on 64-bit machines, and that’s what we do. The problem is that each separate unit of execution has its own cage: a worker has a different cage than the main thread’s isolate. If you want to share something, you can’t, because you don’t know what the other side’s base is; you can only share full pointers, or share things within the same cage. The solution for the prototype is that we moved to a per-process shared cage: the entire process, across all threads, has a single 4-gigabyte cage for all GC heap pointers. There are pros and cons to this. The pro is that it unlocked other benefits independent of this prototyping effort: it simplified code, and it led to sharing of read-only stuff, which was kind of interesting; that sharing of read-only stuff is a nice optimization that should benefit all executions. There were also some security benefits: with everything in a process in the same cage, you can put guard pages around that cage, inside an even bigger "uber cage", and sandbox other things that live off the GC heap, like ArrayBuffer data. That was hard to do in the per-isolate cage world; it is easy to do in the per-process cage world. The main con is that it limits your heap pointers. This is not all pointers and not all memory; some things are allocated off heap, like ArrayBuffer data. But for the GC heap, this imposes a 4-gigabyte-per-process limit. That is a fine limit for web use cases; it might not be fine for server-side use cases, and we have ideas to relax it to an 8-gigabyte cage if it proves too restrictive. The second part of the architectural stuff is the garbage collector. This is a big one. Following from the design that JavaScript is single threaded, it is natural to design the garbage collector such that each unit of execution, a worker versus the main thread, has its own heap. Those heaps are independently collected, and their object graphs are closed: there are no edges between them, so you can just independently collect them. 
The problem is: if they have separate heaps, where do you put the shared objects? Recall the design here has a constraint that shared objects cannot point to local objects. This is a good restriction, because letting shared things point willy-nilly to local objects is not thread safe. So where do you put the shared objects? You have two broad choices. (A) Coming from an implementation that already has separate heaps, you can have a shared space: a special heap out of which all shared things are allocated, which sits next to the local heaps. Or (B) you can have a unified single heap across everything. We chose (A), because having a unified heap across everything is not an incremental software engineering task; it is years of rewrite, and there’s no reason to do that up front. So a special shared space, owned by the main thread, is the solution V8 chose: basically a separate heap out of which all shared things are allocated, sitting next to all the thread-local heaps. This shared space supports concurrent allocation. Non-shared spaces are still independently collected. Because this is a shared space, all the other thread-local heaps can point into it, but it cannot point outward. So to know the full liveness set of the things in the shared space during garbage collection, we basically have to stop the world with a global checkpoint: for a major collection we have to stop the mutators. We applied some of the usual garbage collection tricks to make stop-the-world not too painful: marking is still incremental, and we have a shared remembered set that is analogous to a new-space remembered set. There are pros to this. It is an incremental solution: it extends the existing architecture, and its performance is probably good. It has the desirable performance characteristic if most garbage is local. If you think of the generational hypothesis for garbage collection, that most objects die young, so a separate young space is fast to collect because most of its objects are short-lived, you can squint and think: maybe most garbage is not just young but also local; it doesn’t escape your local thread. If that hypothesis works out, this is probably a good design. The con is that, on paper, it’s not maximum performance. You have to stop the world; it’s very difficult, in the separate-heaps world, to have a GC that runs concurrently with the mutator if you have shared stuff. You also have the problem that, because everything shared has to go into a separate heap, you might allocate stuff in the wrong heap to begin with, and then have to perform a copy when you find out it escapes the local thread. This may be okay: that con is also analogous to the generational hypothesis, where things that survive the young generation get copied to the old generation, just as things in a local heap that end up shared get copied into the shared heap. Maybe that’s okay. We need real-world cases to see whether this design decision makes sense for performance. Those are the major architectural challenges and how we solved them. On the runtime side there are challenges as well. The proposal was designed from the get-go to piggyback well on the existing JS object runtime, restricting the design of structs such that you could mostly use what was already implemented, minus the things that would make it not thread safe. 
The reason was incremental implementability: the proposal is going to die if the requirement for implementations is to write a completely new VM, so it was designed to piggyback on existing runtimes. There is some work, but overall, for V8 at least, I think that hypothesis pans out. The key issues are hidden mutability, the shared value barrier, and the publication fence. The hidden mutability issue is this: structs and these shared fixed-length arrays are designed to have immutable shapes, like sealed objects; they have a fixed layout. Extending the layout of an object is an inherently mutating operation, since you need to rejig the layout. The thinking was that if you declare the layout ahead of time, that is enough to get you easy-to-implement thread safety for the most part: we’re not messing with the hidden internal state of the VM, and we only do read-only operations on the VM internals, because the layout was declared up front. But some operations that look read-only in the language are read-write on the objects inside the VM. The two examples I want to highlight in V8: first, being used as a prototype changes an object’s layout, because V8 optimizes objects used as prototypes, putting them into dictionary mode among other things; second, we have the for-in enumeration caches, which I think were added for SunSpider and which other engines might also have, that end up being hidden points of mutability. Luckily these are mostly easy to solve. One solution is to do the mutating work up front: if you care about the for-in enumeration cache, populate it at the time the struct type is declared instead of just in time when you do for-in. The other solution is to just not do it. These are optimizations; it’s not a correctness issue if you don’t do them. If that becomes a performance issue, that’s more fun later for VM engineers. The shared value barrier exists to enforce the design constraint of restricting shared objects from pointing to local objects. Again, this is a core design constraint: you can’t have edges from shared things to local things. So how do you enforce this? The solution is to apply, on assignment, what I’m calling the shared value barrier. This is not a write barrier in the garbage collector sense; it is a write barrier in the sense that it must be triggered on every assignment. I had worries in the beginning that I would need to proliferate the shared value barrier everywhere, but in the design of production engines there are a few choke points (not one, but a few), because of the unification around inline cache paths, where you can insert this barrier and have good confidence of correctness. And you can have a fast path and a slow path, which works out fairly well. Some things you know to be shared or not just by their type: shared structs are always shared; ordinary JS objects are never shared. For the rest, the slow paths jump into the runtime and ensure the right barrier runs, and so forth. The gotcha is that you do have to do a bit of work in the JITs, because they have to know they might be assigning into a shared object and emit the shared value barrier. The final piece I want to highlight on the runtime side is the publication fence. This is a bit of memory model arcana. 
The design of racy shared memory is that observable races must only be on things that are observable to the surface language. But there are of course a bunch of VM-internal data structures, and even though the design of shared structs allows races at the JS language layer, we must never allow data races at the implementation layer, because those mean VM crashes. So the bar (it could be higher if we iterate on the design, but at the least) is that the VM must never crash. And that boils down to this: when you allocate a shared object, it can become visible to other threads without going through a synchronizing API like postMessage. If it goes through a synchronizing API, the synchronization takes care of making the VM-internal state visible to other threads; it’s published and you’re okay. But it might become shared by simple assignment into another object that is already shared; it can become visible very fast. You have to ensure that if it becomes visible to other threads in this fast way, it does not become visible in a way that causes the VM to crash because we didn’t publish all of its internal fields correctly. The solution here is "allocation is publication". This is basically what the JVM does: when you allocate a shared thing, you have a store-store barrier before the allocated object is returned by the factory. You allocate it, initialize all the internal fields, and emit a store-store barrier, which in the C++ implementation is a release fence. We have convinced ourselves this is okay. This is memory model arcana. + +SYG: I’m moving slightly higher up the abstraction layers now. The other huge challenge that we did foresee is strings. Strings are these simple, nice things in the surface language that are extremely hard to implement and understand in production engines, because the performance requirements on them are actually fairly high. So over the years, all the engines have grown many different internal representations of strings; those representations are mutable, and strings transition between them. I’ll go through some of this; if you haven’t seen it before, it’s terrifying. Just in V8 (and I think a lot of these string representations carry over to SpiderMonkey and JSC as well) we have the following string types: sequential strings, which you might think of as C strings, modulo the null-terminating character, i.e. contiguous character buffers; sliced strings, which are views into another string via an offset and a length; and cons strings, sometimes called ropes, which are pairs of other strings representing concatenations. Why do we have cons strings? We didn’t spec a Java-style StringBuilder, so people use `+` to build strings. If you flatten and copy strings into larger character buffers every time you append, that is too slow, so we made an internal, invisible string builder. We have external strings; this is a V8-specific thing where the character data is owned by the embedder. Blink, in Chrome, might say: this is my string; you should have a JavaScript object representing it, but I own the characters. Or Node might say it; I don’t think Node does. You have internalized strings, strings in the intern table; once internalized, they can be compared by pointer. And you have thin strings, which are pointers to internalized strings. 
Thin strings exist because when you internalize a string, you want to convert it into an internalized string, but you can’t do it in place, because you don’t have access to all the references to that string. Instead you add an indirection and turn the string being internalized into a pointer, which we call a "thin string". The problem is that these representations transition in place, and some transitions overwrite character data. What does the state transition graph look like? That’s what it looks like. [hideous graph on slide] It’s extremely complicated. The solution for dealing with all these extremely un-thread-safe transitions that overwrite character data is the following. Strings can be shared in place or by copy. Ideally you share in place so you don’t incur a copy. Sequential strings, the actual character buffers, are allocated in the shared space, so they can be shared in place. That doesn’t mean they’re always shared; it means they are allocated in the shared heap that I talked about earlier. The other string forms, the wrappers like slices and ropes and that kind of stuff, are never shared: when you need to share them, you flatten them into a character buffer, copying them. String sharing occurs in the shared value barrier. That’s the representation part. Now for interning: interned strings are canonicalized so they can be used as property keys; when you do a property lookup (is this the name of the field?) you do a pointer comparison instead of a character-buffer comparison. Interned strings therefore need to be shareable for performance, the intern table needs to be shared and thread safe, and in-place interning is important for performance: you don’t want to incur a copy every time, even for single-threaded execution. The first time we intern something, on a string table miss, we basically flip a bit on the string that says: I am now in the intern table, and I am the canonical representation. On subsequent hits, we convert equal strings into pointers to the canonical version; those pointers are the thin strings. The in-place part happens on the string table miss: the first string inserted becomes the canonical string, and that conversion is done in place. Making this thread safe is luckily not too bad, because the string table is already thread safe: production engines support things like off-thread compilation, where a program is being compiled on another thread and its property names need to be canonicalized; V8’s table supports lock-free hits but lockful misses. And we allocate all in-place-internalizable strings in the shared space. Now, there’s another wrinkle, which is that some string transitions overwrite character data. Take the string table hit I talked about: when you transition a character buffer into a pointer, how do you do it? You actually overwrite the character data. You overwrite the contents of the string, saying: I am no longer a character buffer, I am a pointer, and my first word is the pointer to the actual string instead of the start of the character buffer. The problem is that this is of course super un-thread-safe: you can’t overwrite character data while another thread is reading it. The observation behind our solution is that while you do this overwriting of character data, you never actually change the character contents. Indeed you can’t, because JavaScript strings, at the JavaScript language level, are immutable primitive values. 
The character contents never change. So what we do is another indirection: we have a separate thread-safe forwarding table, and instead of performing these unsafe transitions in place, we reuse a field (the hash field, in this case) to store a forwarding index that says: I am a forwarded string; instead of looking at the indirection in my own memory, go look me up via this index in the separate thread-safe data structure. With that, the state transition graph is even more complicated. [even more hideous graph on slide] But I guess it was already impossible to understand. + +SYG: The final bit before I get into performance is an interesting lesson from surfacing synchronization primitives. With multithreading, to get a higher performance ceiling than pure message passing, you block the thread, using mutexes and condition variables and so on. You can’t block the main thread, but on workers you can, and you can already do this today. We’re currently forcing users to write their own mutexes and condition variables, which is an art and not easy; it would be good for the engine to provide them. So I think this proposal should also include mutexes and condition variables; I think these should be in scope for the proposal, and I’ll present a detailed API in the future. There may be `Symbol.dispose` synergy with the resource management proposal. The interesting realization is that under the hood, blocking your thread and doing wakeups ultimately requires some kind of OS support. POSIX provides an abstraction layer if you use pthreads mutexes and condition variables, and those are implemented on each operating system by OS-specific things: Windows has slim reader/writer locks, macOS has its unfair lock, and so on. They all bottom out in the OS; you have to cross that boundary. And OS-level primitives all require some structure with a stable memory address: you can’t move it, because if you move it, you can no longer address the thing you are waiting on. But production engines use moving GCs, generational and compacting, that move the contents of JS objects. If you surface these primitives to a language like JavaScript whose objects move under the hood, they must be moving-GC safe. The lesson is: add an indirection. These objects can point to another data structure that is not moved, if you want to use, for example, pthreads to implement them. And while you’re at it, you might as well implement these manually: OS primitives are extremely fat for no good reason, and if you implement your own, you can tune for performance. WebKit blazed this trail with ParkingLot, which I see as Linux futexes in userland, but more ergonomic. It is great for building synchronization primitives and applies straightforwardly to implementing condition variables and mutexes in a JS engine. + +SYG: The upshot is that yes, structured shared memory is implementable. There is a dev trial: a highly experimental feature that is not guaranteed to stay, behind a flag that is off by default, and that could be removed at any time. If you’re interested, there’s a flag you can manually flip to try these things out. If you open the slides and click on the dev trial link, there’s a document describing how to turn it on and what the APIs are. + +SYG: That’s the implementability part. Now we move to performance numbers. The only reason to do any of this at all is performance; there’s no reason to introduce a harder programming model if it’s going to be slower. You might as well not do it. We need some verification that this is worth it for performance. 
So the dev trial is still relatively new, and we’re engaging partners to come up with realistic use cases. In the meantime we have a synthetic benchmark for the most widely applicable case, which is zero-copy message passing. That doesn’t require mutable shared memory, just some kind of shared memory: if a thing is shared, message passing it, even if it is immutable, should be zero copy and much faster than what we do today. So we created a synthetic benchmark. What does it look like? The Y axis is in milliseconds. The X axis has a weird-looking equation: M(F, D) = C describes a tree of nodes, where F is the fan-out (how many children each node has), D is how many levels the tree has, and C is the total node count. The benchmark creates these trees to be passed by message, in several configurations. You can put short strings in them; these are randomly generated strings guaranteed not to be internalized. There are numbers, a mix of floating point and integral values. "POJO" is ordinary JS objects, and "struct" is the dev trial. What you see here is message creation time, and it shows that structs take longer to create. Why? Because shared structs need to be shared: they go through the shared value barrier, especially for strings, where you have to ensure the strings are in the shared space, and that takes time. So we’re not off to a good start; creation is actually slower. But when you postMessage it, we get the unsurprising result: no matter how big the payload is, if it’s zero copy, the time is constant and basically free. For ordinary objects, this scales very badly with the payload size and object complexity. This [chart on screen] is log scale; I have no idea how to fix the Y axis, but the actual top is a hundred milliseconds, whereas the bottom is some microseconds. All the flat lines along the bottom are the shared struct configurations, and the lines that are growing exponentially, but look linear in the picture because of log scale, are the ordinary objects. This is what we expect; that’s good. And this is the total time. The POJO lines grow pretty fast, and the struct lines are better: transfer is free, as you expect, but still nice to confirm. Not completely free, because some costs have moved into the shared value barrier, but it still nets out to be worth it. This is just scratching the surface of the performance ceiling. We need more sophisticated examples, something that fully leverages shared-memory multithreading in the traditional way rather than just using it to improve message passing (although it can be used for that). We’re awaiting more partner engagement with the dev trial. + +SYG: What do the partners say? We have engaged some partners. Notably, folks from Microsoft have been trying the dev trial to see if it can be used to speed up Babylon.js, which does rendering. The current design of shared structs is data only: we don’t know how to share functions, because it’s very difficult to share code. That was punted on and left out of the proposal. It turns out that having a data-only proposal makes it difficult to adopt incrementally. 
If you have an existing code base and want to see whether multi-threading would speed it up, you’d do what Babylon.js did and take one class. But since structs can’t have methods, you then need to refactor all the use sites to use static functions, which makes it hard to swap implementations and is a big pain. Ergonomically, it is difficult to adopt locally in an existing code base. The other big piece of feedback was that this requires the site to be cross-origin isolated. On the web, any kind of shared memory requires the site to opt into cross-origin isolation. This exists today: you can only use SharedArrayBuffer, high-resolution `performance.now()`, and other such features if you opt into cross-origin isolation, and it is extremely onerous to set up. Partners basically ask: can I use this without cross-origin isolation? There’s a pretty unequivocal answer there: that’s a categorical no. This has exactly the same kind of Spectre risk as SharedArrayBuffer. However onerous it is, it is working as intended: this level of scariness needs this annoyingly high level of opt-in intent. I’m not sure if server-side folks have looked at this since the web added it, and whether they would consider a similar level of opt-in intent necessary for server-side applications to use shared memory, especially in multi-tenant setups. It’s annoying, but it’s not going away; we have to have this level of opt-in. + +SYG: So what are the next steps, now that we have done a bunch of implementation work and have a prototype? We’re going to explore more with partners, such as office suites like Google Sheets, Babylon.js, and other advanced web partners that want to squeeze out every bit of performance. There are plans for a method-sharing design. I think it has become clear that it is pretty unergonomic not to have method sharing, and a few ideas have been kicked around that are kind of analogous to primitive wrapping. I won’t go into detail here; it doesn’t involve fully sharing code, it involves ensuring that the same set of code gets associated with the right shared structs across multiple threads. There have been good design discussions with folks like RBN and DE and MAH and ATA. A big one is WASM GC alignment: WASM GC adds objects to WASM, multithreading will come to it in the future, and alignment with that multithreading capability is a hard requirement. The plan is to ask for Stage 2 sometime this year; look for a future presentation on the actual design. We have a working call, roughly monthly, if you are interested; look for it on the calendar. A big shout-out to the V8 team, to the partners, and to the Microsoft team making things thread safe on the Blink side, and thanks to the other delegates on the regular working calls. That’s it for the presentation; I’ll take questions with the last few minutes. + +WH: What did you mean by release barrier? + +SYG: Talking about the publication fence? + +WH: Yeah. + +SYG: It’s a release fence. You allocate a thing and write into its internal fields; before it’s returned from the factory and can escape to script, I do a release fence. + +WH: What is a release fence? + +SYG: It’s like a `std::atomic` release fence, one of the things that C++ has. + +WH: That doesn’t do anything unless you have an acquire on the other side. + +SYG: A release write doesn’t; a release fence does other things. It’s a separate thing. + +WH: It still doesn’t do anything unless there is an acquire on the reading side. 
+ +SYG: We can discuss the architecture offline; the folks involved are convinced it’s okay. We can look at the generated machine code. The JVM, if you look at it, doesn’t use the C++ facilities; it does something per-architecture that basically prevents store-store reorders at the compiler and ISA level. + +WH: Okay. I have a different question. Does this work for all primitives or only some of them? Does this work for BigInts? + +SYG: In the dev trial I’m not sure if I implemented BigInts. It is designed to work with all primitives, by copy if nothing else. + +WH: What do people generally do if they want to share a RegExp or a Date? + +SYG: They can’t, because this must be an opt-in; transparently making some things shared would be too unsafe. The fact of the matter remains that a lot of the standard library is going to remain single threaded, and the shared part is not going to be as expressive. If they need to share a Date, they would need to convert it to an interchange format that is primitive; for a RegExp, similar. How big a problem that is in practice is an empirical question to figure out with partners. + +WH: I imagine a big one, which you already mentioned, is functions. + +SYG: Yeah, functions, yes, also a big one. + +WH: Okay, thank you. + +LCA: I assume SABs also can’t be shared through shared structs, like other JS objects; is that correct? + +SYG: Yes, that is correct, because SharedArrayBuffer objects are themselves single-threaded JS objects; it is only the backing store they point to that is shared. When you share a SharedArrayBuffer today, you clone the SharedArrayBuffer object but not the memory it points to. We can’t change that behaviour, so you can’t do it. This will be awkward and should be solved in some way, maybe with a new kind of SharedArrayBuffer object that wraps the same underlying memory; the memory is already aliased, so it’s not worse to alias it again. I don’t have a nice, elegant solution here. + +LCA: Thank you. + +YSV: SYG, thank you very much for the very informative presentation. That was really interesting to go through. I’m wondering, and you don’t have to commit to anything with this, whether you have a preview of what you’re thinking of in terms of the design for shared methods? I’m curious what the search space is for the design there. + +SYG: Sure. The high-level current idea, and there are different variations of it, is analogous to the wrapping of primitives. When you have a primitive that is not an object, a string is a good example, and you want to call a `String.prototype` method on it, at the time of the call it automatically gets wrapped in an object whose prototype is the `String.prototype` of the current realm, which has the right methods on it. We can think of sharing methods in a similar way: when you do a property access on a shared struct that is not an own property (since it has a null prototype, it doesn’t have the method), where does the lookup go? With an additional mechanism, it could get automatically wrapped with some thread-local object that contains all the code, which you can then use to resolve and call the methods. The open space here is: how do you ensure the same set of methods ends up on these objects in each thread? How do you set it up and bootstrap the workers to have the same code? What is the right mechanism, and how do you communicate the sharing needed to bootstrap it? 
Folks like DE and MAH have been thinking about sharing modules, maybe using module specifiers in some way to ensure that the same copy of things gets loaded across multiple workers. But the preview is basically: still punting on actual sharing of code; for ergonomics, though, this wrapping-of-primitives analogy already exists and may work well for shared structs. + +YSV: Okay, thank you. The wrapping was what I thought might be the direction, but I’m interested in hearing about modules as a vehicle by which we can share methods. I heard a little bit about this. Okay, interesting. Thank you. + +USA: Next up we have DE. + +DE: I’m very happy to see all of this progress on this proposal. The benchmark results were impressive, and hopefully motivating for various groups of people. I am also happy that you’re treating future WASM-GC alignment as a requirement, and looking into the usability issues like methods. In the methods case, thanks for being open to suggestions and having these regular meetings. Overall I think the project is going well. When SYG refers to partners, I take it that this is still open to other major websites that might make use of this, that you might be open to working with more groups, and I wonder if you want to advertise that briefly, because this kind of partnership model is supposed to be open. Anyway, that’s up to him, not to me. I’m hoping for class syntax and single-threaded structs. We’ve talked about structs being analogous to Records and Tuples (although we don’t know what the future of Records might be), but this concept of having certain properties of objects fixed when they are allocated, as opposed to freezing them later, is very powerful, and I think it’s good that we think about this as language designers and use it in more cases where it’s helpful. + +SYG: Thanks DE. A response to that. First, I should thank you for the method-sharing idea. I was stubbornly still thinking about whether we could have special magical functions or something; this current direction is much more tenable. Thank you for bringing the idea up. As for the partners: it is open, and I’m open to partnerships. I don’t want to advertise it widely because the quality of the dev trial is not there yet. The current partnerships require high touch, and I and the team only have the bandwidth for so many high-touch partners, which is why we’re limiting it to advanced applications backed by large corporate teams. But as it becomes more stable, I would certainly welcome partners of all sizes. It wouldn’t be enough to work only with large sites; I would like to get more experience with how useful this is across the spectrum of developers, even though it is designed initially with advanced web partners in mind. + +USA: Thank you. Next up we have MM. One quick reminder: we are about 7 minutes from the end of the timebox. + +MM: Okay. First of all, my compliments on the careful engineering and the quality of the explanation; I appreciate all of that. I have a question about the relative contribution to performance of two different aspects of what you’re doing, and before you answer, I would also like to explain why I’m asking that particular question. 
So the question itself: when you do the message passing, if the structs that you’re passing were transitively immutable, but you were still able to share them between threads by pointer passing, I take it that would be compatible with the experiment you actually did, the one you showed the results for. And obviously the full proposal has read/write structs and the ability to do fine-grained sharing and locking. The reason I’m asking is that all of the impacts on the programmer model, and the contagion of needing to deal with concurrency through the ecosystem as people release libraries, all come from the need to coordinate shared-memory multithreading by locking: the spread of locking disciplines into user code. If all you are doing is passing transitively immutable structures between threads by message passing, that would not affect the user model at all. + +_transcription service interrupted_ +_switching to bot_ + +SYG: That doesn't mean we're not going to make these structs available. It probably means that, by default, if you don't opt in (if your server does not opt into cross-origin isolation), you get these things as immutable at construction time, and they can still be zero-copy message passed. You can't use the escape hatch of fine-grained locking, but you can still do zero-copy message passing, and they can still be shared; because you don't have cross-origin isolation, you just cannot have mutation. Okay? + +MM: That's a very useful answer. + +SYG: But I also want to double down on this: in our exploration, and in talking with these advanced web partners, we believe that the escape hatch of mutable shared memory with fine-grained locking is required to move the needle on the performance ceiling for these advanced web applications. + +MM: For these advanced web applications, do you have any quantitative sense of how much additional benefit they get from fine-grained read/write sharing versus read-only sharing? + +SYG: Quantified? No. This is a bit of a chicken-and-egg problem: we have to build something so they can prototype something. And because the traditional sense of shared-memory multithreading doesn't exist on the web, no advanced web application was written that way, unless it was already written in C++ or something. So they basically have to rewrite stuff, which will take a long time to re-architect. But the idea is, for example, for something like a spreadsheet application, you might have multiple workers that do calculations on the spreadsheet. Currently, if you have a giant spreadsheet, that spreadsheet model in memory needs to be duplicated per worker. And that model is highly mutable: different cells are being written to, inputs and calculation updates, etcetera. If you want to share that data in memory instead of duplicating it, you need fine-grained locking in the traditional shared-memory multithreading sense, and that is just a different architecture that will have very different performance characteristics from how folks are doing it today. I can't quantify that until I can convince folks to build that experiment, which is the partner exploration bit. + +MM: Thank you. I got a lot out of that answer. + +USA: Great. Well, we have one final reply from Waldemar, but I wanted to ask if we can take this offline; we are at time. What do you folks think? 
+ +WH: As a follow-up to MM’s question, are there issues with hidden mutability when sharing transitively immutable objects? + +SYG: Are you asking about JavaScript objects? + +WH: Yes, I believe that’s the case that MM was asking about. + +MM: Yes. + +WH: Are there hidden mutability issues with sharing POJOs (plain old JavaScript objects) in general? + +SYG: Sharing immutable POJOs can run afoul of hidden mutability. Another thing I didn't highlight is that POJOs allow you to reconfigure configurable, writable properties to non-configurable and non-writable. That is not even hidden; that is explicit mutation, since it changes the property descriptors of an object, and POJOs allow that. Well, actually, I take that back: that doesn't apply if the object is deeply non-writable anyway. So I take that back, but there might still be hidden mutability concerns for POJOs. My sense is that this is really only a problem for production engines that have had decades to build in optimizations at the object representation layer. Those are just optimizations, and in a new, opt-in space for sharing, we might disable the ones we're not sure are safe. + +SYG: Yeah, thank you for the questions and for listening. + +### Summary + +Shu-yu Guo of Google has been progressing on a prototype implementation, as a dev trial in Chromium, of the Stage 1 Shared Structs proposal, in collaboration with large partners, e.g., Microsoft. +This system enables faster execution of advanced web applications through shared-memory multithreading of JavaScript programs, including GC heap-allocated structures (something impossible with SharedArrayBuffer). +Various implementation challenges were discussed, and benchmark results showed strong improvements. +Regular external calls for TC39 delegates take place to iterate on the proposal, and the V8 team will be open to additional partnerships to prototype shared structs more broadly as the current prototype matures. + +## Async Context + +Presenter: Justin Ridgewell (JRL) + +- [proposal](https://github.com/tc39/proposal-async-context) +- [slides](https://docs.google.com/presentation/d/1LLcZxYyuQ1DhBH1htvEFp95PkeYM5nLSrlQoOmWpYEI/) + +JRL: This is AsyncContext, for Stage 2. There have been no significant changes in the proposal since it was last presented back in January; all we have done is written spec text. We have a series of open questions, but we’re trying to go for Stage 2. The reason is that a lot of this proposal is going to involve integrating with the larger web platform, the Node.js ecosystem, and all of these other environments. In order to really have discussions with these other standards bodies, we need a tangible API. We want to demonstrate what AsyncContext is capable of and how it integrates into browser features. So we are seeking Stage 2, even with a couple of open questions, in order to have something tangible to discuss with these other implementations. + +JRL: Let’s start with a use case. This is plain user code, something I might write to handle some kind of fetch, do a postMessage, or just run ordinary code. One use case is to measure execution time: how long until I POST data to my server to save state, something like that. We want some insight into the events happening in our application. So, taking this user code, how might we integrate something like OpenTelemetry? 
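+
+As a sketch, the kind of plain user code in question looks roughly like this (a reconstruction for illustration, not the actual slide code):
+
+```js
+// Ordinary application code with no awareness of tracing.
+async function handleSave() {
+  const res = await fetch('/data');
+  const data = await res.json();
+  await fetch('/save', { method: 'POST', body: JSON.stringify(data) });
+}
+button.addEventListener('click', handleSave);  // `button` assumed to exist
+```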
The current foolproof way is to modify your code. First, we have to do a couple of wrapping operations to make other things aware of tracing contexts. But we also need to modify the dev code: my handler needs to be aware of spans and propagate them across the code into any entry point that might finally terminate the operation. Doing this is cumbersome; I have to write code with tracing in mind. If we have AsyncContext, then my user code doesn’t have to be aware of it at all. I can set up the tracing to happen and patch global event listeners to make them aware. That’s the extent of it. + +JRL: AsyncContext, and I will show how this is implemented in a second, takes care of propagating this tracing context for me. My dev code doesn’t need to be completely rewritten with tracing in mind; it’s a seamless integration of library code into my application code without a bunch of rewriting. + +JRL: The implementation of tracing is something as simple as this. We create an AsyncContext instance, and we have a wrapping function that performs the necessary creation of a span object in OpenTelemetry and runs the AsyncContext with the current span value, propagating it all the way through the call stacks so that everything can immediately get the current span for whatever fetch or operation is happening on your server. + +JRL: But this only happens at the integration point, the wrapping point where we enable tracing. It does not happen in the developer code, which is completely unaware and doesn’t need to change or keep any of this in mind. It simplifies my ability to integrate other libraries and frameworks into my application code without making changes. +If we think about this at the very highest level, what does AsyncContext give us? There’s a `run` function, a `get` function, and a `wrap` function as well, which I will go over in a minute. +At the highest level, AsyncContext allows us to store a value in a global storage state during the execution of a particular call. +We can go through the code sample and get a sense of what is happening. During the first invocation of the function, I know what `expected` is, because I own this function: it will be the value 1. We need to determine what the value of `context.get()` is; it will be whatever was set via `run` further up the call stack. So if we step into `context.run`, we are inside run’s call stack. We invoke this function, and it inherits the value placed by the current run. At line 6, it should be very easy to prove to yourself that `context.get()` is going to be whatever value was placed by the run, and obviously `expected` is the value 1. +But then we hit this `await promise`, which pauses the synchronous execution of this function. Now we step out: line 2 is done, it’s synchronously complete. Now line 3: we do a second invocation of this `fn` function in a separate call stack. We enter the context with the value 2 and invoke `fn` with an expected value of 2. Line 6 should again be easy to prove to yourself: the context will hold the value 2, and `expected` is obviously the value 2. Then we hit the promise again. It’s the same promise, though it could be any promise really; we are just pausing the synchronous execution of this function. At some point one of our asynchronous contexts will resume: one of the functions will have finished waiting on the promise and will resume execution. 
After the promise, we continue with the `fn` function, and we would like to see `context.get()` return the value 1 for the first execution. `expected` is still 1, because it’s a closure-captured parameter, and we need to ensure that `context.get()` returns the value that was placed by the run for that first execution. Likewise, for the second execution, we need to ensure that `context.get()` holds the value 2, placed there via its run. We’ll get into how this happens with a little bit of pseudocode. + +JHD: I have a clarifying question. Do you want me to wait until the end? + +JRL: I can take it now if it’s quick. + +JHD: Would the same assert work if I did `promise.then` or `.catch`? + +JRL: Yes. You’re talking about a change on line 1. If you do anything like that, the expectation is that the promise will preserve your async context for you. We will discuss that at the end, because it is one of the open questions, but the expectation is that the promise will propagate the value for you. +How might this work? + +JRL: This is not perfect pseudocode, but it is 99% correct. This is actually how I have implemented this using a native add-on. Essentially this code is the majority of what is necessary to implement a full AsyncContext. There is a global storage value maintained by the currently executing agent, and that storage value can only be manipulated by AsyncContext instances. + +JRL: We have a `.run` method here. Looking at lines 6 through 8: the context instance clones the current storage value and then sets a new value (because I can’t clone and set a value in a single step). Then it sets the agent’s storage value to this brand-new clone with my mutation added. After that point, this map is essentially immutable; I can’t express that in JavaScript, but I never mutate it further. During the synchronous execution of the callback, the global storage value held by the agent is that mutated map. After the synchronous execution of the callback, we restore the previous global context back into storage. +And so it continues on without any further changes. + +JRL: Using the `get` method, we are able to access whatever is currently stored in the global storage value for this particular AsyncContext instance. So, guarded by these methods, only AsyncContext can access the global storage value; it’s a very proper API for it. + +JRL: The only code not shown here is how this gets propagated, and that happens at the moment via host hooks. There’s `HostMakeJobCallback`, and a paired hook called `HostCallJobCallback`. The first is responsible for snapshotting, which is just taking a reference to the current storage state and storing it in your job callback. Then, whenever you call that job callback, you restore that snapshot to the global storage state. That takes care of almost all of the complexity of this problem: the propagation of the async context happens at the language level, automatically, for you. + +JRL: The final fundamental API exposed by AsyncContext is called `.wrap`, a static method. All it does is take the snapshot (a reference to the current storage state) and store it in a brand-new callback for you. When you invoke that callback at a later point, we restore the snapshot as the current global state, execute your wrapped function, and then restore the prior state. +That’s the extent of the three APIs that we need. 
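+
+A minimal sketch of the three APIs in use, assuming the shapes described above:
+
+```js
+const context = new AsyncContext();
+
+let wrapped;
+context.run('request-42', () => {
+  context.get();  // 'request-42' during this synchronous execution
+
+  // Snapshot the current storage state into a new callback.
+  wrapped = AsyncContext.wrap(() => console.log(context.get()));
+});
+
+context.get();  // back to the outer state (undefined here)
+wrapped();      // logs 'request-42': the snapshot is restored for the call
+```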
And that’s all that really has to be done for almost all cases. + +JRL: This kind of API allows us to implement all kinds of features. We discussed performance analytics using OpenTelemetry; if you have an APM, using Sentry or DataDog or something like that, this allows those libraries to accomplish the performance monitoring they want to implement. We have logging use cases, both on client and server, where you augment a log with additional data: the request ID, maybe the DOM element that caused the click event to happen, and so on. This allows nice framework APIs without imposing costs on your users. This is one I am excited about: for the same reason that OpenTelemetry works seamlessly, frameworks can work seamlessly, because the framework doesn’t need to pass the user a context value or some state. It can be propagated through the framework code into the user code and back into the framework whenever user code calls the framework APIs, and have that happen automatically for the users, which is magical. There are browser features, not my code running in a browser, but browser features themselves, that can be implemented on top of this. There are the TaskAttribution APIs that Chrome is working on, and the ability to detect long tasks for those tools: attributing work to whenever the click, or your fetch, or your postMessage started. There are cases, particularly for something like INP, where the time between your click event listener and the paint when you mutate something in the DOM tree can be measured on top of AsyncContext. In the browser itself there are task scheduling APIs, where you can schedule some work as a background task that only executes when there is time for it, and have that background priority propagated to other things: if that task invokes a fetch or a POST, those also inherit the background priority automatically, without you having to do anything. I wrote normal code, scheduled it for the background, and everything continues to happen in the background, which is amazing. + +JRL: We have open questions, and we are hoping to answer them between Stage 2 and 3. None of them are huge changes to the fundamental API; they are about how to integrate each of the features. I will start with the easy questions and get to the hard questions as we go through the presentation. + +JRL: The first one is about constructor extensions, specifically for DevTools. At some point, DevTools will want to display the global storage state, so you can see what your async contexts are holding. If I just have an anonymous context pointing at a value, that is not helpful in DevTools: I can’t tell which value is held by which context. If instead I had a named mapping, say a context named "theme" pointing at a value, I could look at it and see: this context is set to this theme. That makes the DevTools experience nicer, and it doesn’t affect the overall API. + +JRL: The other possible extension is having a default value. If you’re familiar with React context, it allows you to initialize your JSX context value with a default, so you don’t have to constantly recreate it if it isn’t set. This is for something like propagating a theme or a colour scheme that has a bunch of colour values, for individual properties or text nodes or DOM element styles, where you want initial values automatically propagated if the context has not been directly set. 
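+
+For illustration, the two extensions under discussion might look something like this (the option names are hypothetical; nothing here is settled):
+
+```js
+// A name, purely so DevTools can label the mapping:
+const theme = new AsyncContext({ name: 'theme' });
+
+// A default value, returned by get() when no run() has set one:
+const scheme = new AsyncContext({ name: 'scheme', defaultValue: 'light' });
+scheme.get();                              // 'light' outside any run()
+scheme.run('dark', () => scheme.get());    // 'dark'
+```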
So the overall picture here: the name doesn't affect the overall API, but the default value affects the way `.get` operates. If there's no set mapping in the global storage state, then the default value is returned for you instead. Neither of these is super high priority, but they are nice-to-haves we could add anyway.

JRL: There is also a question of higher-order functions and ergonomics. Node has `AsyncResource`, which is mostly equivalent to what we have as `AsyncContext.wrap`. It's a class; you construct an instance (I am extending it here because I took the code from the Node docs), and you have a method called `runInAsyncScope`. It performs exactly the same thing that `wrap` does: it snapshots at the time of construction, and `runInAsyncScope` then lets you run a callback within that snapshot, after which you go back to the prior async global storage state. This is the way Node exposes the API. With the way we expose this API, for someone migrating from Node onto AsyncContext it's not 100% straightforward. It's possible to get the exact same API, but you have to use a higher-order function to do it. I don't think we in this room would have trouble doing that, but we're not typical developers; we're extremely technical and understand how to do these kinds of things. I don't know that a beginning dev would figure this out if they are trying to do this migration. People don't immediately think of a higher-order function that immediately invokes its callback; that's not the first step a beginning dev would take. So the idea is to expose a `snapshot` API, which is equivalent to a `wrap` that invokes its callback parameter with a bunch of arguments. This just makes it easier for beginner devs to migrate onto the AsyncContext API.

JRL: Let's take questions for those two, if we have anything, and then get to the big questions that are going to be posed later on.

USA: There are 3 questions in the queue. First up, JHD.

JHD: Yeah. So when I make a new function, I don't have to declare which variables in scope I want to close over. I have done that in PHP, and it's annoying. What if we automatically did `wrap` for every function?

JRL: If you are using promises, it is automatic for you. However, there are more queuing primitives that can be written. For instance, you could have a batching function that schedules a bunch of functions to be run in a background task or all at once. Those individual tasks lose their context when you put them into a single setTimeout or setInterval or something like that; they inherit the context at the time the setInterval is scheduled, not the context at the time I asked for the function to be batched. `AsyncContext.wrap` allows you to integrate async context into these other queuing primitives, like a batcher.

JFI: I wanted to chime in on that point, because I was on the Dart team when they introduced zones, and this is a very similar issue. Any time a callback is stored in userland code, the library storing the callback has to take great care to capture and restore the context when it runs the callbacks. It ended up being a game of whack-a-mole: people would go to every library out there, over time, and say, I am trying to use your library and it doesn't restore the correct zone. It was very difficult. I haven't worked on Dart in a while, and I don't know where they are with that situation.
But I don't know if there's any way to get around that. Given the size of the library ecosystem in JavaScript, it might take a very, very long time for libraries that store callbacks to become async-context compatible. If there's an automatic way around this at all, it would be greatly appreciated.

JRL: I don't think there's a way to automate this beyond the natural continuation points for promises and await, plus what comes out of the discussions with the web platform folks around addEventListener and other registration-time events. It is impossible for us to do this automatically in 100% of cases; in 95% of cases you don't need to worry about it. The closure creation time is not the context that is propagated; it is call time. I could invoke the closure within a different `context.run`, and it inherits that.

JFI: For most end-user behaviour, imagine addEventListener was not context-aware, so it doesn't restore or capture the current context; that's a stand-in for a library that has not been updated to do this. Could you capture the current context on the closure, on the arrow function you're passing to addEventListener, so that even if the library hasn't been updated yet, you get the behaviour you would have wanted 90% of the time?

JRL: I don't think this is possible to do automatically at closure creation, because it would be an extremely expensive operation to do constantly. For the points where you're interacting with code that does not properly propagate, you can manually call `wrap` before you pass the callback to that code. I don't think this can be done in 100% of cases, as I said. Now, I think we will talk about the performance penalty that would incur –

DE: This is a semantics question, not a performance question. In cases where an API takes a callback, you don't always want it to close over the immediately enclosing context. That's why we can't make this automatic. At the same time, there's the scenario that Justin outlined where async shifts –

JFI: I meant that as a stand-in for userland – I know the platform could wrap all callback entry points.

DE: Yeah. I mean, the Node ecosystem already has this, and that's where some of the batching thing happens, and the experience has been that things get their bugs fixed over time. I am not sure.

JRL: I think the case here is that during performance optimization you hoist all of the functions out as far as possible. I might have the same handler function reused in multiple promise chains, declared in my module scope. That function can't close over the module-scope context, because that is always empty. It has to be the context at the time that I registered the `.then` on my promise, or awaited my promise and invoked the function.

SYG: I think that's my confusion. I did not understand what JFI was saying: if you could do it at closure creation time, then use the binding, right?

DE: Yeah. I don't see how JFI's idea would work, and I think we should discuss this more offline. But there are several circumstances where you don't want those semantics.

JFI: It's a question of defaults. I am just relaying the struggles of an ecosystem that is orders of magnitude smaller than this one trying to adopt a similar API over time, so I am putting that out as one of the potential speed bumps here.

DE: I want to acknowledge that I agree with Justin that there are integration costs to this proposal.
Certain things do have to be updated, or they will not combine well with this feature.

API: I wanted to add that, for exactly what Justin is proposing, we have been doing this on a non-web platform, and if people are not using the provided primitives that maintain the chain, the values will not propagate correctly. In practice for us, that hasn't been a big issue. It's very apparent to someone when the value is not what they expected; they just go find the place where they were passing a callback without the proper API and fix it. You're not going to prevent bugs from existing in userland libraries.

JRL: Right. I do want to move on to other questions. CZW, do you mind keeping it quick?

CZW: I would just like to mention that a user function traced with OpenTelemetry requires not being wrapped, so that outer spans can be propagated into the function. So wrapping every function in a context would be a no-go for OpenTelemetry.

JRL: Okay. Next up is MM. You are going to talk about something that is my next open question. Can we pause that one?

MM: I was thinking of using my slot to ask a question that is not what I wrote down. I think it's quick.

JRL: Okay. Go ahead.

MM: I appreciate the notion of the snapshot as a non-higher-order capturing of the current dynamic state. But what is the API of the returned snapshot object?

JRL: It's exactly the same. Let me switch back to the slides. It is exactly the same as what is in the comment here: it internally does `AsyncContext.wrap` with this arrow function.

MM: Ah. Okay.

JRL: Line 2 there and lines 3 through 5 are equivalent. It just saves the user from having to think of a higher-order function.

MM: Okay. So now let me open a possibility that I might hate, but it's worth putting on the table. What do you think about attaching a reified dynamic context, a snapshot, to errors at error construction time, which is exactly the same time that errors capture the stack?

JRL: I think it's a possibility. We need to consider this more. I don't know how that would operate with abrupt completions through all the values; I think it would work fine. I don't think a wrapped function would be the appropriate thing to attach to an error instance; it would need to be an object of some type that allows you to extract a context out. But I am not 100% opposed to it. I just don't know how it would work at the moment.

MM: Okay. That's an adequate answer. Thank you.

JRL: SYG?

SYG: Yeah. I want to clarify something about what this unlocks. You said some browser features would be unlocked, and I read that as meaning things like priorities for tasks are going to be layered on top of this, which is not correct. Those things, task attribution and other platform APIs that mechanically can be explained by async context, can progress and are progressing independently of async context.

JRL: Yes. We're essentially reimplementing the base layer for all 3 proposals. For tasks and long tasks, you have to have something that is similar to async context. It would be nice if we could layer this so async context is the thing that does this for you. However, the folks at Chrome are building it with task attribution. The exact way you layer this is up to the implementation, but the same core problem has to be solved for all the proposals.

SYG: Agreed that on the same core, the mechanics of the propagation need to be solved.
The second part of my item is that what async context really adds, the delta between it and the other stuff, is that those web APIs are not programmable, and making this programmable is a different design space. It pulls in considerations on the implementation side that non-programmable APIs like task attribution don't need to think about. So agreed on the mechanics: there is a core that ought to be shared, in implementations and, even nicer, also in spec. But async context is more powerful, and we should give it more consideration, because the current concerns are things like: you can have an unbounded number of objects that are propagated, where the platform features only have scalar data to propagate.

JRL: Yeah. Thanks. We have DE, and then I am going to pause the queue, because we still have a couple more items to get through, and they are going to involve more discussion.

DE: Okay. I want to do both of my queue items. I think another way to view async context is that it makes this part of the platform programmable, in the spirit of the Extensible Web Manifesto. We are in agreement about the relationship, and we are working closely with the people on the Chrome team who are designing task attribution. These are not efforts proceeding without coordination, and nothing has to block anything else. So before we move on in the presentation, I want to ask, because this is a confusing proposal in a lot of ways: does anyone have any questions about the basics, just clarifying what the proposal does? How is it used, and what are the cases we're talking about? Does anyone want some more background? Shu?

SYG: Okay. What is the semantic difference between this and incumbents?

DE: I was hoping that incumbents could be modelled through this. That is still to be worked out.

JRL: I don't know what that is.

DE: If you look at the host hooks for jobs, they save and restore the incumbent and other things, but I was hoping to do those through these variables.

JRL: We're taking advantage of the same thing. HTML's incumbent, or the entry and settings objects, it's 3 different things; we're taking advantage of the same mechanism HTML added to the spec to propagate those values. The same host hooks, `HostMakeJobCallback` and `HostCallJobCallback`, that propagate the incumbent around are what we take advantage of to propagate async context around.

SYG: That's what I expected; you want to make sure of that by how the feature is designed. The second part of my question that I am not clear on is: what is the lifetime? Forget about precise analysis; what is the shortest lifetime you can give to an async context value?

JRL: An AsyncContext itself is just an object, so it lives until the user releases it. `run` is the important one, because it mutates the storage mapping and then executes a function. The shortest lifetime for that mapping is when all asynchronous operations and sync calls that descend from that particular `run` have finished. Say we have a value of 1: the mapping points to a value of 1 and is held in global storage. During the synchronous execution of the `fn` function, that context mapping is held.

DE: JRL, in the last meeting, there was a question about whether developers should expect that such a variable will be cleaned up if it is not going to be accessed along –

JRL: I am trying to answer exactly that.
DE: Sorry.

JRL: During the synchronous execution of this function, the map is held. It is also held by the pending settlement of the promise; until the promise settles, that mapping needs to exist. If the promise itself were deallocated, the mapping would no longer be held and could be released at that point. If the settlement does finish, if we get to lines 13 and 14, then that context is still held by the synchronous execution of the resumed call stack, which happens when the paused async function is put back on the execution stack. So during lines 13 and 14, that context is still held. After that, nothing holds the mapping anymore, so we are able to free the global storage mapping that was created by the `run` on line 2. As long as nothing snapshots your mapping, it's freed immediately. If something does capture your mapping, via calling `context.wrap` or `promise.then` or awaiting a promise or something, it's held for as long as that thing exists. After that thing no longer exists, your mapping is unreferenced and can be deallocated. I am hopeful this can be optimized so that a lot of things can be freed very quickly.

USA: You have two replies on the queue and 20 minutes left for the presentation.

JRL: Is this polyfillable? No. Async/await is not polyfillable no matter what we do; you have to transpile to promises in order to polyfill this. Dan, ask yours.

DE: I don't think it's reasonable for an AsyncContext user to expect that the implementation will have some magic to figure out sooner that a variable will not be accessed. People who use AsyncContext have to expect that the object they put in the async context will stay live until that asynchronous thread of execution completes.

SYG: Let me see if I have that straight. By "async thread of execution completes", do you mean –

DE: Callbacks that wrap, that end up closing over that run's value.

SYG: But when it no longer has any future tasks?

DE: Yeah.

SYG: Okay.

JRL: I need to pause the queue, because there's more to get through. We will address more at a break at some point in the future.

JRL: The next question is generators: async generators and synchronous generators. This is MM's question. Let me go back in the slideshow. The language has defined behaviour across `await`: for the `await` on line 4 here, I know what the context is before that `await` and after that `await`. However, I don't know what the answer is for the `yield` boundary of an async generator. There are a couple of different answers we could give, so in your mind, I want you to formulate some kind of solution to this question. We have a `context.get` on line 3, an `await`, and a `get` on line 5. I am going to assert that those two values must be equal, because that is the semantics of AsyncContext. However, I don't know what the context is between line 5 and line 7. We need to come to a solution for this question. There are 2, maybe 3, answers that I think are possible here. We could capture at init time of the async generator: on line 11, I construct the generator by invoking it, and we could lock in the context value at that exact time, a value of 1. Alternatively, we could take the approach that the rest of the language takes, where it's set at the call time from your call stack: on line 14, the value 2 is propagated into the context. I will explain how this doesn't violate the await semantics in a second.
With init-time locking, on line 11 the context has a value of 1. Any time this generator executes, it holds the value 1 in the async context global storage state; all the reads see 1. I asserted that lines 3 and 5 must always be equal, and obviously that is the case here, and it happens that between lines 5 and 7 they are also equal: the generator holds on to its creation context. The other logically consistent answer is that it is the call time of the call to `next` that propagates. It doesn't matter that there was a value of 1 during construction; what matters is the value on lines 14 and 17. Lines 3 and 5 are still equivalent; no matter what choice we make, those two lines have to be the same. So in this case, it captures the value at the call to `next` on line 14: the value 2 is there on line 3 and still there on line 5. The `yield` escapes the current execution context and goes back to line 17. On line 17, I re-invoke this generator with a new context value, and that value is seen when the generator resumes execution. This is still logically consistent for `for await`. And this is the choice we would want to make for generators, because we don't have to do any extra work: this is already the way the spec is implemented. This also affects synchronous generators: whatever choice we make for an asynchronous generator across the yield boundary is the same answer for a synchronous generator. We could go with the init-time-locked value of 1, or continue with the current call-time behaviour, where you get a value of 2 and then a value of 3. Either is consistent with how async generators would work, but we have to be mindful of it. Additionally, we have other things that look like generators but are iterators, for instance an array iterator. If you think of such an iterator as if it were implemented as a generator, then we have to pick init-time or call-time construction for it too. If we choose call-time construction, we don't have to answer that question for all the iterators that exist in the specification. If we chose init time, does a `next` call on an array iterator see a value of 1? That would mean I need to add context capture to every iterator in the spec. If we choose call-time semantics, nothing needs to change; there is no change in the larger semantics of the language. Whatever the context value is when you call `next` is the context the generator sees in its body. That is the first question. Let's go back to the queue.

JRL: My opinion is that we should use call-time semantics; I would like that. If we have a strong preference for init-time construction, then this proposal is going to get very, very large.

USA: First up we have Andrew.

API: I have a question to understand your thinking. If you did capture at init time and called `it.next`, it would see the value 1. Would that preclude you from –

JRL: Correct. If we chose init time, the generator can only ever see the init-time context. If we chose call time, it's easy to wrap the generator in something that preserves the init-time context for you automatically. This is explained a little bit in the attached issue thread; I did not include it in the slides, I'm sorry.

DE: I don't think we should resolve this today. It's good to have visibility into the issue, and we have to resolve it before stage 3, but I want to focus on: do we have a good draft? I think we do.

DLM: (from queue) +1, we can discuss this in stage 2.

JRL: Let's pause this and go to the issue; a reminder that it is issue number 18 on the repo.
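To make the two options concrete, here is a reconstructed sketch, for illustration only; the line numbers discussed above refer to the slides, not to this snippet, and the `run`/`get` semantics follow the pseudocode shown earlier:

```js
const context = new AsyncContext();

async function* gen() {
  console.log(context.get()); // (a)
  await Promise.resolve();
  console.log(context.get()); // (b) must equal (a), per await semantics
  yield;
  console.log(context.get()); // (c) init time: 1; call time: value at this next()
}

const it = context.run(1, () => gen()); // generator constructed with value 1
await context.run(2, () => it.next()); // first resume, under value 2
await context.run(3, () => it.next()); // resume past the yield, under value 3
// Init-time semantics: (a), (b), and (c) all log 1.
// Call-time semantics: (a) and (b) log 2; (c) logs 3.
```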
The next open question is unhandled rejection. I have 10 minutes left. `unhandledRejection` is the only web platform event we specify here, because we need to answer what context is captured when the unhandled rejection listener is invoked. On line 6, there's a `get` inside the unhandled rejection listener; what is it supposed to see? There are essentially three possible answers. It could be the registration time of the event listener, which I think is a silly answer for reasons I'll get to in a moment. It could be the throwing context, the context that was active when the throw happened. Or it could be the call time, when the rejection actually happens, on line 14. Which of these three answers do you think is the one that should be propagated? There's more clarification that needs to happen here, but for the majority of cases it doesn't matter what our choice is, unless we choose the registration time of the event listener, which is an awful choice. If we choose one of the other options, we have an essentially consistent answer: the ABC context here will be propagated to the unhandled rejection for all the promises no matter what happens. You can go through a series of call chains, an async function that calls another function, that calls another function that eventually has an unhandled promise rejection; all of them see ABC, because of the way the semantics of the proposal work. We don't have any issues there. But it really comes up with this specific setup: we have a promise whose rejection function escapes the `context.run`. On lines 9 through 14 here, I create a variable called `reject`, then invoke a `context.run` and smuggle `reject` out of it. The rejection function has leaked out of my context; that's the important point. What is the context at the time this rejection happens on line 16? At the moment, the answer is call time: whatever context the `reject` function itself is invoked in. That is a side effect of the `then` being the thing that captures the context, not the promise. If we want to change this so that it is the init context, then I have to go through and make a larger set of changes, so that the promise allocation stores the context rather than the `then` handlers. This is the next hairy question. I will jump back to the queue.

MM: Yeah. My sense is that unhandled rejection handlers are there for diagnostic purposes, which is the same reason why errors capture stacks. You can have a promise that is rejected not with an error but with some other value, and in that case it doesn't capture a stack. So my suggestion is that the unhandled rejection handler be bound to the default context, and that if the handler wants to extract the dynamic context associated with the reason the promise was rejected, it can do that via the error, using the option we discussed of attaching a snapshot to errors.

JRL: I am going to give you a hypothetical answer to this. Say I run a platform, and my platform runs user code that I don't control. I need to ensure that a rejection is associated with the request whose handling caused that rejection. In that case, I could only successfully associate them via the context; binding the handler to the default context would break my platform for certain cases.
So I am not –

MM: What are the uses other than diagnostics?

JRL: I need to show the user the query params of the request that caused the rejection, so they can do the diagnostics. And I can't force my users to throw an error in every case.

MM: Okay. I am certainly happy for this question to proceed during stage 2, and the same for the earlier question about yield. And I will take the opportunity to say that I am very supportive of this proposal going to stage 2.

JRL: Awesome. Thank you.

USA: Real quick, you have no more than 4 minutes.

JRL: Yeah. The last question we will have to digest over lunch, because I can't do it justice now.

DE: I would really like to focus on: do we like this proposal? Should it go to stage 2? We need to discuss that, or do we just keep it at stage 1 for now? Do people want to add to the queue with thoughts on that?

JRL: I can ask for stage 2 and present the final question afterwards?

DE: Yeah. Let's do that.

JRL: Okay. Then, yes: I don't think any of these questions are blockers for stage 2. In fact, I need to get it to stage 2 to answer some of them with the larger web community. I would like to ask for stage 2 with the current semantics and spec text, with the expectation that we will have the larger discussions with the web platform folks and other implementers after stage 2.

KG: So I am fine with stage 2, but with the understanding that I am basically taking it on faith that this is useful in the ways you said it would be useful. I can't visualize how it's useful in all of those cases, and as you say, we need to talk to the rest of the platform to ensure it actually works in some of them. I would like to spend a lot more time sitting with examples of: here is why it is useful in this case, here is how we expect it to get used, here are the problems it solves. Right now we talked through the logging example, and that is neat, but I need to see a lot more examples for a feature of this size to be happy adding it to the language. That said, that can happen during stage 2. I am fine taking it on faith that those examples exist; I would just like to see them when this comes back.

JRL: Okay. Yeah. We can work on that. I do think all of the use cases listed in the slide at the very beginning are cases that are solved directly by the presence of AsyncContext. I will flesh out those examples in the repo after today.

KG: Okay. Thanks.

USA: All right. You have a number of other comments with support. CDA says IBM supports stage 2. KG already spoke, so I will delete that.

SYG: I think the utility of this particular proposal is in making this programmable, which is not as wide as the slide suggested, since the other web APIs can progress independently of it. That said, the user-programmable part is useful; there are frameworks that would benefit from this. The big if is the acceptability of the performance. I think the implementation techniques will work for both the existing web features and user-programmable async context, but it's still an if, and historically there's been resistance in V8 to shipping things that are not performant enough.

JRL: Okay.

SYG: Yeah. Completely fine with stage 2.

SRV: (from queue) support for stage 2

JRL: Okay. I will answer your thought in just a second, but I am officially asking for stage 2 so we can wrap this up in the timebox.
And I am happy to talk over lunch. I am getting several explicit statements of support in the queue. Does anyone object to reaching stage 2 today?

JHD: I don't object, but I want it to be explicit, if possible, that it is a goal of the proposal, as much as performance allows, for the defaults to be safe. Meaning, I want it to be really easy to preserve async context and really difficult to accidentally lose it. I understand performance issues may interfere, but if that's an explicit goal, with that caveat, then I am happy with stage 2.

JRL: That's a discussion we will have with the web platform folks about how this works across a large body of code. That's my intention as well, to make this automatic in most cases.

JHD: Thank you.

MM: I just want to say that I disagree that capture should be the default, on semantic grounds, independent of performance.

JRL: Okay. I am going to pause right there; let's not discuss that at the moment, because I want to make sure I get to stage 2 before this timebox is up. There is strong support, and I didn't hear any objections. I think that's official, I am stage 2. Correct?

CDA: Yes.

JRL: Perfect. Thank you. On Mark's point about closures capturing at creation or registration time: we can talk about that in the web platform discussion, which is issue 22, about registration-time capture for addEventListener and other platform APIs.

MM: I don't disagree. I was thinking specifically of closure capture.

JRL: Yes. I agree it should not happen for closures. For the platform APIs, I hope it can happen, because that's the only way it will happen across the ecosystem. On Shu's point about performance: I think this will be more performant than the current solutions that users have to hand-roll. And that's all I have time for.

### Speaker's Summary of Key Points

- The proposal reached Stage 2.
- Future presentations (and edits of the proposal README) will need to elaborate on the use cases, as the committee does not understand these beyond logging.
- Open questions will be discussed on repo threads and in regular calls, which will be advertised to the committee.
- Ecosystem integration needs investigation.
- Also to be investigated: the implications of automatic `context.wrap` capture of functions, as suggested by JHD.

### Conclusion

- Stage 2
- Explicit support from KG, SYG, CDA (IBM), SRV, DE, DMR

## Promise.withResolvers

Presenter: Peter Klecha (PKA)

- [proposal](https://github.com/peetklecha/proposal-promise-with-resolvers)
- [slides](https://docs.google.com/presentation/d/18CqQc6GfZJBWmT7li2nqfvrSFhpNwtQWPfSXhAwo-Bo)

PKA: Okay. Hello, everybody. My name is Peter, I am a new delegate with Bloomberg, and I am here to give a brief presentation on Promise.withResolvers for Stage 1. The idea is probably familiar to a lot of us. The plain Promise constructor works well for most use cases: we pass in an executor, which receives the resolve and reject arguments, and inside the body we decide how the Promise gets resolved or rejected, by passing them into some async API, like in this case. That works well for most use cases, but sometimes developers want to create a promise before deciding how or when to call its resolvers. When that situation arises, we have to do the dance of scooping resolve and reject out of the executor body, binding them to variables in an outer scope, and going on our way, like in this case. This is a really common pattern; I don't want to oversell it, it's not every day that you write this, but it's fairly common.
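The dance in question, as a sketch (the variable names are illustrative, and the method name is still subject to bikeshedding, as noted below):

```js
// Today: scoop resolve and reject out of the executor scope.
let resolve, reject;
const promise = new Promise((res, rej) => {
  resolve = res;
  reject = rej;
});

// With the proposal, one call returns all three on a plain object:
const { promise: p2, resolve: res2, reject: rej2 } = Promise.withResolvers();
```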
This is a wheel that gets reinvented all over the place: as a utility function in the TypeScript codebase, in Deno as `deferred`, and in all kinds of popular libraries.

PKA: The proposal is very simple: a static method that does away with the need for developers to write this, by simply returning a promise together with its resolve and reject functions on a plain object. This idea has been in Chrome before: it used to exist as Promise.defer, and many people know it under that name. The name can be bikeshedded in future stages, but it's clear that there's a need, or at least a desire, for this functionality, and it would just be a nice thing for developers to have an easy way to access it. So, yeah, I am ready for feedback.

DLM: Yeah. We discussed this, and the team agrees this is a common problem; we support it for Stage 1.

PKA: Great.

CDA: Okay. KG?

KG: Exactly the same comment.

CDA: Okay. Yeah. I would say the pattern is extremely common, and IBM supports Stage 1 as well.

MM: +1

RBN: I just wanted to say that I'm definitely in favour of this proposal, though I am not sure I agree with the naming; I apologize if I missed too much, as I said in the chat I had to step away for a minute. Definitely in favour of this. It exists quite a bit throughout the ecosystem: jQuery, whose promises were a forerunner of the ones adopted by the language, had this as its Deferred. Plenty of libraries implement it, and it's extremely valuable. As for naming, that is something we can bikeshed in Stage 1.

PKA: Absolutely.

RBN: I like that this subtly mirrors the `Proxy.revocable` API, where you get both the proxy and the thing that does the revocation as properties.

JRL: I have written this in every one of my applications.

LCA: Deno has a standard library function for this. +1 for doing this built in.

CDA: Next we have JHX.

JHX: Yeah. I also support Stage 1. My question is: do we have the history on why ES6 did not include `Promise.defer`?

MM: Yeah. I was there. The original proposal actually had `Promise.defer`; that was in the proposal as I originally wrote it. And I was the one who then pushed for the change to the current API, where you pass the executor to the constructor. I am glad that the executor and the constructor made it into the language, but the perspective we had at the time was to be as minimal as we could, so it was really one or the other. I think that, although I am certainly still very much on the side of erring towards minimalism, being redundant in functionality by adding this common API is fine.

CDA: Okay. We have nobody else on the queue. Give that a moment to stir . . .

JHD: Yeah. So I jumped on the queue. First, I was curious: would this follow the same species pattern that Promise.all happens to? That is something for Stage 2; I was just curious if anyone had thoughts on it.

PKA: It's an open question. The spec as I have it written would have subclasses produce instances of the subclass.

JHD: For now, that answers my question; we can discuss it further in later stages. Thoughts on naming: there are a lot of websites that ship `es6-shim`, which deletes `Promise.defer` because Chrome shipped it for a while and it wasn't in the spec. So that name may or may not be an option. Throwing that out there; also something to deal with in later stages.

CDA: All right. Another moment here . . .
in case anybody wants to jump on the queue with anything. Failing that, I think you have already got several explicit messages of support, and we have another one. I guess if you want to repeat your call for Stage 1 consensus?

PKA: Yeah. Do I have consensus for Stage 1?

NRO: +1 for stage 1

### Speaker's Summary of Key Points

- General support.
- This was only omitted from ES6 for minimalism.
- Name to be bikeshedded; "defer" has the problem that `es6-shim` deletes it.
- Symbol.species behaviour to be discussed.

### Conclusion

- Stage 1
- Explicit support from MM, NRO, CDA (IBM), JHD, KG

## Quick Regex Escaping update

MM: If this is a bit of dead time, let me give an update on something that KG, JHD and I resolved over lunch. KG did a slide show with a really quite exhaustive analysis of the safety of `RegExp.escape` and an enumeration of the unsafe cases, and that convinced me. So, given where the rest of the room was in the previous conversation, we will be going ahead with `RegExp.escape`.

### Summary

MM’s security concerns have been addressed; it looks like we can progress with a `RegExp.escape` API.

## Temporal nanoseconds precision follow-up

Presenter: Philip Chimento (PFC)

- [proposal](https://github.com/tc39/proposal-temporal)
- [slides](https://docs.google.com/presentation/d/1b74GI-zHrG0wDzmwFs_yPWRli24KyVUNx3GeZt8JouA/edit#slide=id.g2227767b447_1_6)

PFC: After the presentation on Tuesday, I got some feedback that it would be useful to go into the difference between nanoseconds and microseconds precision here, and also to elaborate a bit on the current opinion of the Temporal champions, which I didn't mention explicitly in the presentation. I made a couple of slides, if you want to follow along: the last three slides of the deck from the presentation I gave on Tuesday.

PFC: First I wanted to go into the assumptions we made when designing the solution that eliminates unbounded integer arithmetic, which I discussed on Tuesday. We're assuming that the range JavaScript Date covers right now, which is 100 million days before and after the Unix epoch, should also be the range that Temporal covers, because we don't want people to have a `Date` object that cannot be ported to a `Temporal` object. The other assumption is that we'd like the non-calendar units of `Temporal.Duration` to at least be able to handle subtracting the earliest possible `Instant` from the latest possible `Instant`. Concretely, this means that math with Durations requires one more bit than arithmetic with exact times, because you can have the difference between the earliest and the latest, or between the latest and the earliest, so you need a sign bit on top. I made a little table of the required bit widths for each scenario. The first column is the proposal as it currently stands, the middle column is the situation after we set an upper bound on Duration, and the third is the case where we set an upper bound and also change the precision of the proposal to microseconds, so no nanoseconds. Duration storage is a problem we consider already solved: you store the ten unit fields as JS numbers, which are essentially 64-bit floats, and that doesn't change. 640 bits seems like a lot of storage, but optimization is possible, and I recommend it as well, for a duration with only a few units, like 5 seconds, or one hour 30 minutes.
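A back-of-envelope check of the figures discussed here, assuming the stated Date range of ±10^8 days (this calculation is mine, not from the slides):

```js
// ±1e8 days from the epoch, expressed at each precision:
const DAYS = 1e8;
const NS = DAYS * 86400 * 1e9; // ±8.64e21 ns
const US = DAYS * 86400 * 1e6; // ±8.64e18 µs

// Bits for a signed integer holding a given magnitude (+1 for the sign bit):
const bits = (magnitude) => Math.ceil(Math.log2(magnitude)) + 1;

console.log(bits(NS));     // 74: Instant storage at ns precision
console.log(bits(2 * NS)); // 75: Duration spanning earliest-to-latest, ns
console.log(bits(US));     // 64: Instant storage at µs precision
console.log(bits(2 * US)); // 65: Duration spanning earliest-to-latest, µs
```

These match the 74/75-bit and 64/65-bit figures quoted in the discussion below.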
Duration calculations are where the unbounded integers that we want to get rid of come in. If we follow the plan I outlined on Tuesday, doing duration calculations afterwards requires somewhere between 75 and 84 bits, depending on where you place the upper bound. My feeling is that if you're using more than 65 bits anyway, you may as well place the upper bound at Number.MAX_SAFE_INTEGER for seconds and a billion minus 1 for subseconds (or a million minus 1 if we go to microsecond precision); that's the 84-bit case. The 75-bit case is if you place the upper bound at exactly the number needed to hold the difference between the earliest and the latest instant. If we also limited the precision to microseconds, this would be a range of 65 to 74 bits, depending on where you place the upper bound. Notably, you could not fit it in 64 bits unless we reconsider the assumption that Durations must be able to cover the entire range from earliest to latest instant, because 64 bits is exactly enough to store the signed integer that is the internal storage of an Instant with microsecond precision; you need all 64 bits. The storage you need for an Instant, and the integer size you need to do calculations with it, don't change from 74 and 65 bits when you place an upper bound on Duration. If you also limit the precision to microseconds, then you can use 64 bits for the internal storage of Temporal instants, but you still need the extra bit for duration calculations, which won't fit into 64-bit integer arithmetic. I put three asterisks there: maybe there's a special case for durations that exceed 64 bits, but you do need more than 64 bits in any case for other duration calculations.

PFC: I made this last slide about where we seem to have arrived in the Temporal champions meetings: an upper bound on the time units of Duration. For a long time we weren't convinced of that, but recent discussions have, I think, changed everybody's minds. Given that we solved the unbounded integer problem with that, additionally going to microseconds precision doesn't seem like a clear win, because the only gain, being able to do something in a 64-bit integer that previously needed more bits, is the internal storage of the time since the epoch in Temporal.Instant and ZonedDateTime. That is, as far as I can distill from the discussions, how the champions group is thinking about it. From other discussions, for example with SYG, I know others place a different weighting on those two things relative to each other. So I would like to have a short discussion to hear opinions from other delegates: are there any other concerns? Can we get a signal on how people are generally thinking about this? Let's use the rest of the time for discussion.

WH: I had to step out for a moment, sorry if I missed it: what upper bounds were you using for the bit calculations?

PFC: I had a range of bits on this slide, depending on the upper bound. The highest number of bits comes from using an upper bound of MAX_SAFE_INTEGER seconds, and the lowest comes from using an upper bound of just enough to hold the difference between the earliest and latest instant.

WH: And what is that in time?

PFC: It's on the first slide here. It's the same range as Date.

WH: 10^8 days? Okay.

CDA: We have DE on the queue.

DE: So I agree with the champions group here; I've been in touch with them. The nanosecond precision has some benefit in terms of future-proofing and utility.
You can see from Darryl (???) that you don't need to go more precise than nanoseconds. Microseconds would work for a lot of applications, but it would be unfortunate, given all the work that's been put into making sure that Temporal can represent all kinds of things that come up, if we were missing this piece. I think it's a bit of a future-proofing issue as well: we know that some data is in terms of nanoseconds, and even if we don't want to do calculations on it now, we may want to in the future. Given the chart of bit widths that PFC showed, it doesn't seem like microseconds give much benefit in terms of any meaningful kind of memory usage or performance; or maybe they do, for a reason I can't quite understand. So yeah: it would not be a complete killer of the feature if we used microseconds, but it seems preferable to use nanoseconds.

SYG: Thanks, PFC, for doing the work on the bounds here. There is benefit: I see Instant storage fitting in 64 bits as a benefit. We may disagree on the degree of the benefit, but it is a benefit. The V8 position is that we can live with the bounded durations and nanosecond precision. We still prefer microseconds, but we can live with the champions' preference. For background, the anti-nanosecond position is partly performance-driven, as all feedback from V8 is, but some of it is philosophical disagreement, and I am a little out of my depth here, relaying the opinions of others on the team. The feeling is that nanoseconds, for the computations people want nanoseconds for, are just not useful: computer clocks will not get to good nanosecond accuracy. Future-proofing sounds good on paper, but people are not convinced it is going to happen in a useful way for nanoseconds. I don't know what kind of clocks the people who want this are using today; are the financial exchanges using their own atomic clocks, what are they doing to get nanoseconds? In any case, that kind of future-proofing bar is not something that has really been applied to other proposals. So it's not just performance that sets us against nanoseconds, but performance takes priority, and we are happy with the current solution. We just don't think nanoseconds are useful.

CDA: Okay. Next on the queue is APK. A quick note, we have just a few minutes left for this item.

APK: Yeah. This is real quick, responding to SYG. It's not so much about the accuracy of the clock, because a lot of the time nanoseconds are used for exact sequencing: it doesn't matter what the absolute value is, because it's a single clock generating the numbers, so it's consistent within the data set. To me the near-term benefit is not some hypothetical future situation, but that literally every other system and programming language has by now either started out with or upgraded to nanoseconds. So when you're ferrying values through JavaScript between different systems, you don't want the case where, just because a value happens to pass through the JavaScript language as a Temporal type, you lose precision, or have to somehow carry the extra precision out of band just to get it through JavaScript from one source to another, like from a database to a service or another database.
There is a clear benefit to doing it now, because nearly every other system already supports it.

PFC: This is maybe a hot take, but I guess if we didn't have Duration as a first-class type in Temporal, I would struggle to see the utility of nanoseconds; but since we do, I think that's mainly how I see it being used.

WH: Nanoseconds are used for interop and round-tripping with other systems; they are quite common. One minor concern is that Abseil uses quarter nanoseconds, which actually causes a lot of friction at the boundaries.

CDA: Okay. Last on the queue is a +1 on preferring nanoseconds from DLM.

PFC: Yeah. Thanks for the input, everyone.

### Summary

The committee weighed the pros and cons of nanoseconds vs. microseconds, concluding to stick with nanoseconds as the granularity of all Temporal time/instant types, to enable interchange with other systems.

## Time Zone Canonicalization for Stage 1

Presenter: Justin Grant (JGT)

- [proposal](https://github.com/justingrant/proposal-canonical-tz)
- [slides](https://docs.google.com/presentation/d/13vW8JxkbzyzGubT5ZkqUIxtpOQGNSUlguVwgcrbitog)
- [summary](https://github.com/justingrant/proposal-canonical-tz#handling-time-zone-canonicalization-changes)

JGT: Thanks so much for having me, and I just want to say before I start that I am just so happy and grateful to be here. I've been working on Temporal with you for 3 years now; I work in startups so there's often a bunch of downtime, and I learn something every time. So thank you. Today we will talk about the canonicalization of time zone identifiers. There's a Stage 0 repo up here; I just cloned the Temporal repo for now, and if we get Stage 1 we will clean it up. Richard Gibson has graciously agreed to work with me as a co-champion, which is awesome. Let's just dive in.

JGT: These are fairly complicated topics, so I'm going to provide some context, then talk about the problems and some solutions, and if that goes well, ask for Stage 1. The first piece of context is the IANA Time Zone Database (TZDB), used by everybody in computing. A quick introduction: the core of the TZDB is rules. Rules work like a function that accepts a time zone identifier and a UTC timestamp, and returns the offset of that instant in that time zone relative to UTC. A Zone is a collection of rules for a particular time zone. For the location of this meeting today, the Zone is America/Los_Angeles, and the offset would be -7 hours from UTC. Zones are named by identifiers. An identifier is the name of a continent or ocean, a slash, and the largest city in the time zone. There's also a special prefix, Etc, for special time zones like UTC that do not correspond to any geographic location. There are around 600 of these, and the list grows slowly, for instance when a country decides to split a particular region's time zone. Finally, there are Links, which are redirects from a non-canonical identifier to a canonical identifier. Links don't have their own Zone records; they're only for backward compatibility. The final player in this space is CLDR, the localization data repository used by ECMAScript implementations, Android, and others. When you call Date.toLocaleString and there's a "PDT" at the end of the output, that "PDT" comes from CLDR, which needs to know about time zone identifiers because they are the keys used to look up the localized time zone text to display. When identifiers change, CLDR picks them up in a later release, after some delay.
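For example (the formatted output is illustrative and locale-dependent, and the divergent canonicalization in the second snippet is the behaviour JGT describes next):

```js
// The localized "PDT" text comes from CLDR:
new Date().toLocaleString('en-US', {
  timeZone: 'America/Los_Angeles',
  timeZoneName: 'short',
});
// e.g. "3/21/2023, 10:00:00 AM PDT"

// Canonicalization is observable via resolvedOptions():
new Intl.DateTimeFormat('en-US', { timeZone: 'Asia/Calcutta' })
  .resolvedOptions().timeZone;
// 'Asia/Kolkata' in Firefox; 'Asia/Calcutta' in Chrome and Safari, as of this meeting
```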
JGT: The next piece of context is why Links are created; Links are the focus of this proposal. There are three reasons. One is to deal with legacy identifiers: US/Alaska, from before the Continent/City naming scheme was created, was the name for the Alaskan time zone; now it's America/Anchorage. There are never going to be any more of these, so they're not a problem for ECMAScript. The next kind of Link is controversial: merging zones that have had identical rules since 1970. The maintainer of the TZDB wants to reduce the size and maintenance overhead of the database, so in the case of, say, Iceland, which has had the same rules as Ivory Coast since 1970, those Zones are now merged in the TZDB. As you can expect, this is controversial, because there are lots of use cases, including in ECMAScript but also in Java and elsewhere, where it's helpful to hold on to the original identifier: it's entirely possible that at some point Ivory Coast and Iceland will diverge again, and the merge would then mean data loss for the end user. Thankfully, this is not an issue for us, because ECMAScript implementations work around the problem. Firefox uses particular makefile options when building the TZDB, so we are not worried there. V8 and WebKit use CLDR, which never changes an identifier once it has been published in a CLDR release; it's like a roach motel for canonical identifiers. So they ignore those merges, because changing an identifier after it goes in is against CLDR policy. Both of those cases are not an issue. What is an issue for us today is the third case: changes to the English names of cities, like when Asia/Calcutta was renamed to Asia/Kolkata, or Asia/Saigon to Asia/Ho_Chi_Minh. These renames are rare: one per year in each of the last three years, and before that the previous one was in 2016. But when they do happen, they are problematic, and that's the focus of this proposal.

JGT: So, switching from context to problems. One problem, as I mentioned before, is that because Firefox uses different data sources than V8 and WebKit, userland behavior varies between implementations. If I create a time zone with one of these identifiers, Firefox will give me back Asia/Kolkata, but Chrome and Safari will give me Asia/Calcutta. This is a challenge, and needing to deal with it is annoying, and it creates other problems that I will talk about later. The reason this variation exists is that the spec is vague about what implementers are supposed to do: it points at the time zone database, but it doesn't narrow down how to canonicalize; the information about, for example, Firefox's build options just is not in the spec. Furthermore (thankfully nobody does this), nothing prevents implementations from doing worse things, like mapping Stockholm over to Berlin, or neighboring countries' zones over to Serbia; developers in those countries would not be happy about that. But even in the current state, developers are unhappy. The problems I'm showing are from a quick Google search: a small selection of bugs filed by developers complaining about the current state of things. Almost all of the complaints are that V8 and WebKit have not caught up with the identifiers in the current TZDB the way Firefox has. Here's one person complaining about an identifier that's been out of date since 1993.
Another person's piece of network equipment depends on a time zone ID that browsers haven't caught up with. You can find dozens more complaints like this over the years. Understandably, if you live in a country, you have an opinion about what your cities should be called, and it's annoying as a software developer when your computing platform gives you a colonial-era name instead of the one you prefer.

JGT: Beyond people being upset for political or cultural reasons, there are software engineering issues too. For example, in the Firefox case, canonical identifiers change over time. This means that static data, like in test automation, can behave differently over time. I could run a test suite, pass in Asia/Calcutta, and get it back; then upgrade Node, run it again, and I don't get the expected value back, I get Asia/Kolkata. This is a specific instance of a general problem: with time zone identifier strings, you can't use triple-equals to determine whether two time zones are the same. An example: you take the current time zone and save it; one year later you load it back and compare the two by string identity, and they don't match. That's the one lesson that emerged as I dug into the problem: the core issue to solve is that using `===` on string identifiers to compare time zones is not reliable, because those strings can change. OK, these problems are bad, but with Temporal they will get worse. Today, before Temporal, you have to dig to find the identifier of a time zone. The first 4 lines of the top code sample are a localization example for a date. You will notice that in its output, "3-10-23 India Standard Time", there's no time zone identifier. From the end-user's perspective, the identifier is invisible; the only way you can get it is to call the resolvedOptions method and pull out the timeZone property to get the ID. You will never see it if you `console.log` a date, never see it when you hover over a date in the debugger, never see it in browser devtools. However, Temporal.ZonedDateTime has the identifier in its `toString` output: any time you call `toString`, any time you serialize a Temporal.ZonedDateTime to JSON, any time you console.log it, you will see this identifier. So it's a safe bet that maybe 100x, maybe 1000x more developers will see identifiers after Temporal is out than before. That was essentially the impetus for this proposal: this problem will get worse than it is, and there's a fair amount of evidence that it's already pretty bad.

JGT: So let's talk about proposed solutions. There are two groups. The first group makes small changes to the spec text to reduce the divergence between implementations, and between implementations and the spec: tighten up the spec to prevent future divergence, and work to see if we can converge on a consistent approach now. The second group, complementing the first, is small API changes that make it less disruptive when canonicalization does change; this will really help for the inevitable next time we have a Kiev-to-Kyiv change. We split these solutions into 6 steps, on the theory that some might get blocked on technical issues; we didn't want the perfect to be the enemy of the good, and we can move forward on the rest. The first category is the spec changes.
The first step is to simplify the abstract operations that deal with time zone identifiers; this makes every other change easier. Following this is to clarify the spec to keep the divergence from getting worse: not a normative change yet, but saying, if you are making choices here in the future, please don't do these harmful things, and please head in this direction. Then, work with the implementers of V8 and WebKit, and their upstream dependencies like ICU, to get a solution for the out-of-date identifiers. These are the current, pre-Temporal identifiers, and they are the biggest source of problems today. There are only 13 of them, which is really good news: in the absolute worst case, there's a hard-coded mapping table that gets updated once a year. Ideally we want a single upstream place where this happens; we're talking about whether CLDR can be that place. If nothing works out, the worst case is not bad, because the rate of change is low. If we can get through steps 1, 2, and 3, we will be in a good place to add normative spec text that prevents these problems in the future: as implementations move forward, they will not diverge.

JGT: Now let's talk about the API changes; we have two in mind. Today, the main reason that code behavior changes when canonicalization changes is that we canonicalize eagerly. If you provide an identifier, both today and in Temporal, it will be canonicalized before anything else is done, and you lose the original identifier that you passed in. An example: I pass in Australia/Canberra and I get back Australia/Sydney. So if the canonicalization changes from one release to the next, the code's behavior changes. The proposed behavior is to not do that: to keep the identifier the user provides instead of immediately canonicalizing, and when the user asks for it back, give them back the same identifier they provided. That fixes the test automation case and makes things more predictable in the face of changes. In almost all cases, code behavior will not change. There are a few exceptions: if I call Temporal.Now to get the current host environment's time zone, it is always going to return a canonical identifier; and if I am doing the thing I said you shouldn't do several slides ago, using string IDs for comparison, then I'm going to run into this. For Temporal there's not a lot of legacy code, so it's easy to adjust the behavior. For DateTimeFormat, it's up to TG2 whether to retain the existing behavior or snap to the behavior we are proposing here; there's an issue in the repo to discuss this, so please comment if you have an opinion one way or the other. There is also a proof-of-concept PR describing the changes required to implement this; it's pretty small, as there are only a handful of places in the spec where identifiers are dealt with. The second API change: the previous change introduces a problem, in that developers need a way to know whether two different identifiers refer to the same time zone. That's the final step: add a `Temporal.TimeZone.prototype.equals` method, which solves the problem of comparing identifier strings, because you can use code to determine whether Calcutta is the same as Kolkata. You can do this today as a workaround using `ZonedDateTime.equals`, but as you can see, it's not particularly ergonomic or discoverable. That's the proposal, and I would love your feedback.

CDA: Thank you, Justin. We have a number of items in the queue.
+This is the proposal, and I would love your feedback.

+CDA: Thank you, Justin. We have a number of items in the queue. First up, DE.

+DE: So I think you have made a strong case that ECMA-402’s insistence on normatively referencing the IANA time zone database is not good, that we need to allow tailoring, and that we need to coordinate among implementations as well as possible, possibly through CLDR. Overall, I would like to defer to TG2 to make a judgment. Even though TG2 doesn’t own Temporal, they are the most qualified body in this area.

+DLM: Okay. I would say, thank you for the presentation. I think it’s very well explained. And overall, I think what you’re proposing is well motivated, and SpiderMonkey supports this.

+FYT: I wonder whether this is the right place to talk about this time zone thing. It’s a complicated issue. So I have to throw this topic out here . . . do we really think TC39 is the right body to discuss this issue, or should it be deferred to some other place?

+JGT: Just as a quick clarification, I was actually requested to do this proposal by TG2. And to answer the question, I do think what we can do in TC39 is design the API to be less sensitive to these changes. There are two parts: one is to define in the spec how ECMAScript will deal with this. That’s codifying existing practice; implementers have already decided this, and our goal is to align on something and lock it into the spec to prevent further divergence. The other is that there are changes we can make that are unrelated to any standards body: how do we make things easier for ECMAScript developers? To my mind, that is the core reason why TC39 needs to deal with this, because we want to decrease the impact of these changes, whatever the upstream standards happen to be.

+SFC: I will reply to this, which is that . . . it’s the job of this body, TC39, to provide interfaces that developers can build software against. Like, that’s our job. And there are certain assumptions that developers can make and certain assumptions developers cannot make.
+If we think that certain assumptions are needed to build certain types of software, then those are things that should be codified in the specification. And JGT has shown quite a lot of evidence that having a consistent story across the web platform about how we deal with time zone canonicalization is one of those things. Therefore, I think it is the job of TC39 to codify this. I would also clarify that CLDR and IANA have their own priorities and needs when it comes to how they recommend you do this.
+For IANA, JGT explained that what they are optimizing for is data maintenance cost, so they want to merge as many time zones together as possible. CLDR is optimizing for stability. The reason they continue to use the obsolete time zone IDs is that they have data files that go back a long time, and they want software that has been working for 20 years to continue working for another 20 years.
+So those two bodies have different priorities when it comes to time zones. And in TC39 we have other requirements for what our developers need, which is not necessarily what IANA or CLDR users need. This is appropriate for this body to explore. I am also one of the people who recommended that JGT make this proposal, so I am really happy with the way the slides came together. I think JGT did a good job of motivating Stage 1.

+DE: If TG2 is in favor of this proposal going forward, then I am as well. This endorsement was previously unclear to me.

+SYG [from queue]: Say more about "implementers have already decided"?
I'm not familiar with this space.

+JGT: I think I understand this question. In the absence of the spec being clear about what the expectations are, both sets of implementers — WebKit and V8, which rely directly on CLDR, and Firefox — have independently come to a relatively similar conclusion, which is that resolving Iceland to Ivory Coast is a bad idea and should be avoided. That’s what I meant by "implementers have decided". They have coalesced, except for the 13 outdated zones that CLDR has kept for essentially implementation reasons. Otherwise they are very similar. So it does feel like, if we resolve the 13 mismatched zones, we could get pretty good consensus across implementers on how the specs should behave going forward, to prevent this kind of bifurcation in the future. Does that answer your question?

+SYG: I want more detail. Treat me like a golden retriever, et cetera. What is the . . . let’s see, how do I phrase this . . . Firefox deciding it’s a bad idea to do this: is that written down anywhere, or is it just converging behaviour that you . . .

+JGT: I think in the case of Firefox, it was an explicit decision, as far as I understand.

+SYG: Sorry. Is this written down somewhere else, like in WHATWG or W3C, as a spec already agreed upon by web browsers?

+JGT: No. It is accidental evolutionary behavior over time, with each implementer just picking what seemed sensible.

+SYG: Okay, I see. So this proposal seeks to codify that convergent evolutionary behavior, plus the 13 extra ones?

+JGT: Ideally, to fix the 13 so we end up with one behavior across implementations, and then to codify that.

+SYG: I am still confused. Is the convergent evolutionary behavior a subset?

+JGT: The things that have not converged are there because of the implementation of CLDR: they never update those zones. Therefore, over time, Firefox has diverged from V8 and WebKit. The goal here is to respond to customer feedback saying that behavior is bad; to see if we can find a way to converge V8 and WebKit to Firefox’s behavior, lock that in, and write down some principles that will guide how we do this in the future.

+SYG: And is it also in the scope of this proposal to write down the behavior that has converged?

+JGT: I believe so, yes.

+SYG: Okay. I think that answers my question. Thank you.

+CDA: Okay. We only have a few minutes left; I don’t think we will be able to get through the entire queue. Justin, is there any particular item that you want to address, or should we keep going down the list for a minute or two?

+JGT: Yeah. I would love to respond to MM’s item first.

+MM: Yeah. So there are genuine territorial disputes with strong feelings on both sides; they don’t have an objective resolution. I would hate for us to be in committee arguing between the partisans of both sides. Is it our practice that all of these resolutions are delegated to a standards committee?

+JGT: The time zone database owns those thorny problems. It’s more a matter of: the time zone database has a makefile, which is a misnomer; it’s code. You run it, and that code spits out data files at the end. There are options to that makefile.
+Our decision is essentially which options we want to pick, which defines how the output is going to look; but the actual decisions about what data goes in there and what the names are, that is not our problem at all.
It’s purely about how we use the options that are made available to us by somebody else.

+MM: Whose problem is it? What standards organization are we delegating these political decisions to?

+JGT: It’s IANA. The decision to admit and name the identifiers in the time zone database comes from IANA.

+MM: Okay. Thank you.

+WH: Looking at the database, I do see separate entries for Iceland and Abidjan even though they have the same offset. I am curious: what happens when large zones bifurcate, like might be happening in the US nowadays if Congress decides to repeat the 1974 year-round DST experiment. Suppose Los Angeles goes to year-round DST and Seattle does not; how would that affect the concept of time zone equality?

+JGT: What happens in that case is that a new zone is created; that recently happened for a portion of Mexico, and they added a new identifier for that portion of Mexico. That’s how it works. And yes, if I have the old identifier for that part of Mexico but I am living in the new one, it gives us the wrong time. There needs to be a way to say, "hey, I live in this time zone". That’s one of the consequences of using the time zone database at all, and it’s not unique to us.

+RGN: I will elaborate. Bifurcation of zones is a concern. IANA is backward-looking, whereas we want to make sure that code that depends on Temporal and ECMAScript works going forward. For Iceland, there’s no guarantee that, because it has agreed with Ivory Coast since 1970, it continues to do so in perpetuity. That’s the reason we want to maintain the distinction by default. I mean, it motivates browsers to do so. But codifying it is an appropriate characteristic of this proposal.

+JGT: And I can quickly answer the previous question about web compatibility. Today we make no guarantee that this won’t change over time; that’s why Firefox has been able to diverge from V8 and WebKit. Also, because of that, before you can worry about backward compatibility, I think you first need current compatibility, which we don’t have today. So one part of this is to try to make more explicit in the spec where developers can expect compatibility over time and where they can’t.
+Looks like we're out of time.
+Can I ask for Stage 1?

+MM: Support

+CDA: We have explicit support from MM, CDA (IBM), DE, DLM. Okay. You have Stage 1.

+JGT: Haha, I feel very popular. Thank you, everybody.

+### Speaker's Summary of Key Points

+- The committee discussed various cases where time zones are canonicalized incorrectly by web browsers, and differently from each other, with a proposal for how to fix this and ensure common semantics.
+- The biggest open issue is what should happen to DateTimeFormat.prototype.resolvedOptions().timeZone: whether it should continue its current behavior or snap to the proposed behavior for Temporal. That’s the biggest area where I am looking for input, because I honestly don’t have a strong opinion about how it should be resolved; I would sort of follow the crowd.
+- Everyone in the committee agreed that we should snap to the modern identifiers of Kolkata and Ho Chi Minh City and Kyiv; please get in touch with the champions if you disagree.
+
+### Conclusion

+- Proposal reaches Stage 1
+- Explicit support from MM, CDA (IBM), DE, DLM

+## Class constructor and method parameter decorators

+Presenter: Ron Buckton (RBN)

+- [proposal](https://github.com/rbuckton/proposal-class-method-parameter-decorators)
+- [slides](https://1drv.ms/p/s!AjgWTO11Fk-TkodwnfnGJ4--QyAsrw?e=c7blVv)

+RBN: So we shipped the TypeScript 5.0 release, including support for the Stage 3 decorators proposal. That gives the ecosystem a path to native decorators, and it also gives us the opportunity to turn our attention back towards the ecosystem's use of TypeScript decorators to find both innovative use cases and potential migration blockers. Yesterday we saw the metadata proposal, which is one of the several capabilities that was enabled in our legacy decorators experiment and has evolved from concepts that I discussed with YK and EA in the original design as far back as 2013. I am excited to see the progress; the use of metadata is extremely broad, and that proposal goes a long way toward removing hurdles to migration. Another component of the 2013 design was the potential to decorate function declarations, function expressions, arrow functions, object literal methods, and parameters. Given the complexities of hoisting and circularity issues for function declarations, by the time decorators reached Stage 2 we focused on classes and class elements and left the others for future proposals. As TypeScript looks at existing use cases of decorators, one of the heavily used experimental capabilities not yet covered is the ability to decorate parameters. So the question becomes: what exactly are we talking about when we talk about parameter decorators?

+RBN: Parameter decorators currently have experimental support in TypeScript and are logically similar to parameter attributes in C++ and Java. As I mentioned, this is part of the initial design that we had been investigating with TypeScript experimental decorators as far back as 2013 and 2014. The motivations, which were first considered when we started putting the proposal together so long ago and still exist today, are varied and align with the metadata proposal as it stands currently. One major case, heavily used in the ecosystem, is constructor-parameter-based dependency injection. Another common case used today is web API request routing: binding request headers and form fields to parameters of methods of a class. These motivations can be extended into a number of other interesting areas that aren’t currently viable with TypeScript’s decorators, or aren’t easy to do because of limitations: argument validation such as null/undefined checks or range validation; issues I have seen across multiple object-relational mapping systems, like TypeORM, related to entity construction; method parameter marshaling for foreign function interfaces, WASM, and native libraries, where you need to easily marshal strings or other data types, including potentially things like fixed-shape objects; and obviously metadata, which is heavily used by many of the scenarios just discussed.
+Another motivator, as I mentioned earlier, is that there are 3 remaining blockers for migration within the ecosystem. This one, the lack of the ability to do class constructor parameter decoration, is a high-priority item. It’s heavily used by a lot of large projects that use TypeScript.
And it’s what I will talk about in this proposal. Another is the lack of metadata; again, this is one that is very high priority within the ecosystem for that type of migration, and it is addressed by the metadata proposal at Stage 2. And the third is that currently there’s no ability to decorate entangled get/set pairs. This is a low-to-moderate priority (there is less incidence of this in the various repositories that I have investigated), and it’s potentially addressed by a proposal at Stage 1.

+RBN: I would like to spend time talking about each of the cases I mentioned in the motivations and where this has value in the ecosystem. So, dependency injection. If you’re not familiar, it is a version of the inversion-of-control design pattern, used in large-scale applications and deployments. It's used in VS Code and Angular; Angular has used dependency injection since Angular 1.0, before TypeScript got involved with decorators. The advantages are that it facilitates breaking down applications into parts, components, and services, and it helps facilitate unit testing by isolating dependencies, so you can inject mock and fake implementations. Constructor parameter injection specifically is used when a part or component in a dependency injection system needs to perform initialization. An example I have used in some places: an editor like VS Code might register language monikers to be interpreted by the Electron runtime, and do that as part of injection. In that system, a component might receive at construction time a service with which it does its registration, and it performs the registration during initialization. Without this ability, you have to tag on a specific class or have a specific design for your class to indicate when initialization happens, and that overcomplicates the process of setting up tests and mocks. This is used heavily throughout many projects. Angular has had constructor parameter injection since 1.0. They were using specially crafted parameter names that only work if you’re in an environment with a usable Function.prototype.toString; back when this was being used with Angular 1.0, that wasn’t guaranteed to exist, and it’s potentially not viable today, given that an embedded JavaScript engine might not retain source text. So the mechanisms employed by Angular back then were not up to Angular’s needs when it released 2.0. In addition, DI is used heavily in the TypeScript ecosystem; DI systems such as InversifyJS use parameter injection within TypeScript. These are large projects; on the proposal repo I have links, and there are thousands of these in the ecosystem today. As an example of what dependency injection looks like: a class injects services using a service identifier. In this case I am using strings, but these could be individual decorators. There is a service identifier that you define that describes a service, and that is a decorator function, so you annotate the parameter with it. The constructor receives these values when the DI container does its work to satisfy dependencies and determine what it needs to create the objects. And you can see at the bottom of this example: if you had a DI container that contained the service, and you requested an instance of the customization service, it would allocate and fill in the things necessary to create it.
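+A rough sketch of that pattern (hypothetical names throughout; this combines the proposed parameter decorator syntax with the Stage 2 decorator metadata proposal, and is not spec-sanctioned API):

+```js
+const INJECTIONS = Symbol('injections');
+
+// A parameter decorator that records which service satisfies each parameter.
+function inject(serviceId) {
+  return (target, context) => {
+    // For parameters, target is undefined; context.index is the position.
+    const list = (context.metadata[INJECTIONS] ??= []);
+    list[context.index] = serviceId;
+  };
+}
+
+class CustomizationService {
+  constructor(
+    @inject('storageService') storage,
+    @inject('keybindingService') keybindings
+  ) {
+    this.storage = storage;
+    this.keybindings = keybindings;
+  }
+}
+
+// A hypothetical DI container would read
+// CustomizationService[Symbol.metadata][INJECTIONS] to learn which services
+// to construct and pass in when asked for an instance.
+```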
+
+RBN: Another example: a clip from actual usage within VS Code. It uses service identifiers that are essentially the name of the interface or service being injected, and each one is an actual decorator. You can follow the link in the slides to see where this exists within the VS Code source code. VS Code has hundreds and hundreds of cases of this; it makes construction very easy. Another use case is web request routing: methods are endpoints, and parameter decorators are used to bind specific request values to the parameters of a method. That model allows you to abstract away complexity, such as handling coercion of strings for, say, a numeric ID that you pass in a request route. This exists today in the TypeScript ecosystem in packages like LoopBack 4. What this looks like at run time: you have something like a books API that binds GET methods to various endpoints. In the first case, we bind query string parameters to get the page number and size that come in. When requesting a single book, we can bind a route parameter to get the name of the book. And when posting data, we might bind values from the route, or the current user in that context (which could utilize async context), or form fields, or a JSON-serialized body from the form body.

+RBN: One example of something that isn’t feasible with TypeScript decorators today, which ties into annotations and run-time type checking as well, is argument validation. Things you do today as asserts in the body of the method, you could instead expose on the parameters themselves.
+In a way, this promotes reuse and also makes the validation read as if it’s part of the documentation. And because parameter decorators are evaluated at declaration time, when the user manager class is defined and made available within the runtime, these things can be statically applied and can attach metadata. You can use this information to perform binding elsewhere: say you have an HTML template where you want to generate form fields; you could inspect the metadata assigned by the "not empty" and "required" decorators and turn the parameters into actual HTML form validation. This wires up the glue for these types of capabilities, which really wouldn’t be viable if you were only performing asserts in the method body. In the case of object-relational mapping, ORM systems allow you to define entities that map to a table in an RDBMS. Many ORMs use Object.create to hydrate an entity: the result has the prototype methods you use to work with the entity. However, Object.create doesn’t work once you start introducing private fields, so the ORM has to call the constructor. Calling the constructor to hydrate is feasible, but then you have to ensure the constructor specifically accepts a single options-bag parameter, when you might want to provide a different parameter list. That makes it harder for ORMs to know what is best to do. Parameter decorators would allow binding database fields to specific parameters, making hydration more compatible with what you want from the end-user's perspective. We want the code in this case to be easier for a human to read and write, because humans are the ones likely to interact with these objects, while the metadata is leveraged by a machine.
+In the ORM example, we might have an entity for a user class that stores the ID, and the password hash as a private field. Because the password hash is a private field (its name includes a hash mark), we can’t use Object.create.
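+A minimal illustration of that limitation (hypothetical entity; standard JavaScript semantics):

+```js
+class User {
+  #passwordHash; // private field: only installed when the constructor runs
+  constructor(id, passwordHash) {
+    this.id = id;
+    this.#passwordHash = passwordHash;
+  }
+  checkPassword(hash) {
+    return this.#passwordHash === hash;
+  }
+}
+
+// Hydrating without the constructor yields the prototype methods…
+const user = Object.create(User.prototype, {
+  id: { value: 1, writable: true, enumerable: true },
+});
+
+// …but #passwordHash was never installed on this object, so any
+// private-field access throws:
+user.checkPassword('abc'); // TypeError
+```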
+And we need to handle how construction works. We might bind each of the constructor parameters to a database field, leveraging the parameter name to handle that; or, in the case of renaming, pick up the field definition that exists in the database and match it to the parameter without having to specifically match on the parameter’s name. It also allows us to have initializers to specify defaults, without needing something like a bag of properties.

+RBN: One of the other motivations we’re looking at with this proposal is providing capabilities to better interact with foreign function interfaces for other language runtimes. Packages like ffi-napi provide FFI interop, but you want FFI marshaling behavior that sits adjacent to the function. I have an example here. If you try to expose a JavaScript function to native code, you have to write a definition where you specify the return type and the parameter types on essentially one line, and then on the next line define the actual function. This has the same issue we were discussing previously: it might be fine for one or two parameters, but once the list is longer than that, you have to eyeball the order of the parameter types versus the parameters in the second list. It makes it harder to read, to know what the differences are, and to make appropriate changes. Whereas if we had something like parameter decorators, we could instead mark each of the parameters with what we expect to come in: expecting this to be an int, the next to be a string; and specify marshaling behavior, for example, is this a null-terminated string, and how is it handled? This example uses classes instead of functions because this proposal is specifically focused on the existing syntaxes that work for classes; I will get into how it relates to a potential future for function decorators in a bit.
+The next thing I want to get into is, if we consider this, what the syntax looks like. I am looking for a Stage 1 proposal, meaning we are open to alternate behaviors and to the syntax possibly changing; that is acceptable. One thing I would like to state is that we should be very careful about changing the syntax of how decorators work.
+Consistency matters between what a method decorator looks like and what a parameter decorator looks like. I think it’s best to stick with the syntax that we already have. Essentially, as illustrated here, a function rest parameter would allow an optional decorator list before the element, and a formal parameter would allow it before the binding element.
+We have restrictions in the static semantics on where they can be placed: these are currently only valid on class constructors and methods.
+As far as potential semantics, the goal is to align with the evaluation order and application order already present in the decorators proposal. This is the order that TypeScript uses in its experimental-decorators version of parameter decorators, and the TypeScript legacy implementation and the native support are essentially the same when it comes to ordering. As with other decorators, evaluation is always in document order: the expression that is part of each decorator (the A, B, C, D, E, F here) would be evaluated in the order they appear on the screen. But the order they are applied follows a specific prescribed order that is not document-order based.
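+A sketch of the ordering just described, using a hypothetical logging decorator factory and the proposed parameter decorator syntax (the step-by-step walkthrough continues below):

+```js
+function log(tag) {
+  console.log('evaluate', tag); // runs in document order: a, b, c, d, e, f
+  return (target, context) => console.log('apply', tag, context.kind);
+}
+
+class Example {
+  @log('a') @log('b')
+  method(@log('c') @log('d') x, @log('e') @log('f') y) {}
+}
+// Application order: d, c (param x), f, e (param y), b, a (the method).
+```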
+In the case of method parameters, the parameter decorators would be applied before the decorators of the method; constructor parameter decorators would be applied before the decorators of the class itself. And each parameter's decorators are applied independently.
+As with decorators on individual declarations, application is in reverse order. If we step through the application order of the decorators here, we start with the first parameter in the parameter list, meaning that the first decorator to be applied is the one closest to the parameter declaration.
+When that is applied, we move on to applying the one that is slightly further back (earlier in document order in this case), to match the same reverse order that we see with decorators today. Once we finish with parameter 1, we move on to parameter 2, again starting with the decorator closest to the parameter declaration and then moving to the one previous to it in document order.
+And then we move on to the decorators of the method itself, in the same fashion. This order is important because, when we are applying the parameter decorators, they are applied to the initial method declaration itself, before any method decorator could potentially replace it with something whose parameters no longer match.
+The next question is what kinds of things you might be able to do. TypeScript has limitations with its parameter decorators, but they are not terrible limitations: you can still do interesting things, as I have shown with constructor parameter injection and even with request parameters. That could be achieved today with something like access to context metadata and access to at least the ordinal index of the parameter; that would essentially allow you to emulate what the parameter decorators in the legacy support are able to do. There are other interesting things in here that we could potentially do.
+This is an overview; I will go into detail on each of these in the upcoming slides.
+Like any decorator, a parameter decorator has the same API: it accepts a target and a context. Parameters, like fields, don't have a representation, or at least not one that exists at the time of the declaration. Thus the expectation is that parameter decorators, like field decorators, receive undefined as the target, because there’s nothing to wrap or replace.
+As for the second parameter, the context, we would again need some way to differentiate it from other kinds; we are using “parameter” as the kind name here.
+At the least, we expect an ordinal index, because that’s the only thing guaranteed at the time the decorator is applied. You can’t get at a parameter’s name in TypeScript's legacy decorators, because you would only have Function.prototype.toString to see that. Names are optional, because binding patterns don’t have a name to refer to. But having a name, even if it is optional, is useful: with parameter binding for HTTP routes, not having to repeat the name of the parameter is extremely useful, as in the example of binding to database fields in ORMs.
+One thing that TypeScript doesn’t have, and that you can’t do today with parameter decorators, is annotating rest parameters. It’s useful to know, where there’s an array, whether it represents multiple arguments or a single array.
+Another possibility is that we could have something like addInitializer, adding initializers that apply to the class, but not to the function body.
It’s important because this aligns with how all decorators are defined for methods, fields, and other declarations.
+Another thing that is important, and ties into the decorator metadata proposal, is that one of the two key things parameter decorators are useful for is associating metadata with that parameter. This is necessary for the DI constructor parameter injection case, it’s necessary for most FFI cases, and it’s extremely necessary for HTTP route parameter binding.
+Another thing this proposal is looking into is the ability to look at the function that the parameter belongs to. When you decorate something like a field or method, you are attached to the class as well: you get context on whether the method is static or non-static, a.k.a. instance or prototype. This is important when defining metadata on objects, because you need to differentiate what things you are describing. If you have two decorators on two fields or on two parameters and you need to differentiate them, then you need to create an object graph within the piece of metadata to distinguish which field this was attached to, which method this was attached to, and, in the case of parameters, which parameter this was attached to at the time.
+Here we have chosen to use the name `function`, as opposed to `method` or something else, to maintain a consistent API and to allow us in the future to support parameter decorators on functions.
+One thing that is really interesting with this design, and with the design of decorators at Stage 3, is that there is a potential capability that we didn’t build into the legacy decorator support in TypeScript. Legacy parameter decorators are very limited: they can only be used to collect metadata;
+they are only observational. They are not designed to return a function that replaces the function they are attached to. We didn’t want a parameter decorator to have that type of capability; it was too complex and would cause problems with decorators applied to other parameters.
+But the field decorators in the Stage 3 decorators proposal showed us there is room for investigation into what we could do if we intercept the binding of an argument to a parameter. There is the possibility that we could allow a decorator to have this type of capability, assuming the performance characteristics are viable, and do things like parameter validation: the @required decorator might be able to say, here is the value coming in; if it is undefined, throw an exception. That way you don’t have to have some other mechanism attached to that method, reading metadata and intercepting calls, to do that kind of work; you could do it inline. For FFIs: marshaling as a string, where you need the string to come in as length-prefixed or null-terminated. Having a decorator describe how that behavior should work means the end-user can just accept the string and not have to really worry about it.
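+A speculative sketch of that interception idea (nothing here is settled API; the `@required` name and the filter-function shape are illustrative only):

+```js
+// A parameter decorator that, hypothetically, returns a function used to
+// filter the incoming argument before the method body sees it.
+function required(target /* undefined for parameters */, context) {
+  // context might look like:
+  //   { kind: 'parameter', index, name, function: {...}, metadata }
+  return function (value) {
+    if (value === undefined || value === null) {
+      throw new TypeError(
+        `Argument '${context.name ?? context.index}' is required`);
+    }
+    return value;
+  };
+}
+
+class UserManager {
+  addUser(@required username, @required password) {
+    // If we get here, both arguments passed the check.
+  }
+}
+```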
+RBN: So what we are looking for with Stage 1 is just to investigate the syntax and API design; to gather feedback from implementers about the performance concerns related to intercepting bindings; and to determine: is this viable? Are there changes needed to make this work? Is this the direction we want to go, or should we find other approaches?
+One thing that I do want to point out: this proposal does not bring function decorators into scope. That is something we plan to bring to committee in the future. It was a sticking point for decorators for many years as the proposal progressed, because it was really hard to nail down what the semantics should be for a function declaration that is decorated; because of hoisting and circular imports and other things, it was cut from the current decorators proposal at Stage 3. For the time being, we’re keeping function declaration decorators out of scope, which means function parameter decorators are out of scope as well. Also out of scope for this proposal are object literal method decorators.
+We want to make sure this design is forward-thinking.
+So, I have gone through a lot of this; I want to go to the queue and start talking about things. Again, there are thousands of examples of this in the TypeScript ecosystem. This is a capability that we provided with our legacy decorator support, that has had broad adoption, and that has been extremely valuable, and I would like to see that value and those capabilities come to JavaScript as native decorator support.

+CDA: Okay. We have under 15 minutes left and a number of items on the queue. Let’s jump in.

+MM: Yeah. This looks expressive enough that you could use it to express the special case of extractors: extracting from the argument to the parameter. And if the parameter is a destructuring pattern, you get the functionality of a record bound to the elements of the destructuring pattern.

+RBN: That’s not quite accurate. This is only at parameters, at the top level. One of the advantages of the extractors proposal is that it’s essentially an extension to destructuring patterns: you can put it deeper than just the top level of a parameter. So there is a little bit of overlap with the capabilities at the highest level, for parameters as they come in. Parameter decorators give you the ability to attach static metadata and observe that outside of the function, but they don’t give you the depth of intercepting destructuring the way extractors or any other pattern-matching extension might. Extractors, conversely, can appear in the middle of a parameter and are evaluated at run time; you can’t observe the information they attach without doing Function.prototype.toString tricks. You can place them within destructuring, but they don’t give you the ability to do static analysis outside of the function, or even run-time analysis of information that was attached statically. So again, they touch a little bit at the very high level, but as far as the specific use cases go, they are different.

+MM: Okay.

+KG: Yeah. The first thing is that most of these use cases don’t have anything to do with classes. Validating parameters does not have anything to do with classes. Routing does not have anything to do with classes. DI? Maybe a little bit. But most of the cases don’t have anything to do with classes. In terms of problems to solve, most of the things that you talked about are not specific to classes. I should say it more explicitly: I don’t want to see a world in which people start using classes just so that they can have methods with parameter decorators on them. I am opposed to reaching that state. So -

+RBN: Let me address those points. One of the reasons this proposal is scoped as it is, is again that sticking point on function declaration decorators. That is something we haven’t solved, and no solution has been brought to committee.
I am trying to leverage this new interest in switching to native decorators to bring more of the ecosystem over. And while it will take time for this proposal to advance through the staging process, decorators as it is was delayed for a while because of the function declaration sticking point.
+As I said, everything I am doing here is designed with function decorators in mind. However, I have scoped this specifically to class constructors and class methods for several reasons. One: we have limited the scope of decorators currently at Stage 3 to classes only; they do not support object literal methods, and the support in TypeScript is only for class constructors and methods. What is interesting, however, is that I have gone through and done a bunch of different Sourcegraph searches, thousands and thousands of cases, and found no cases of someone using a class with static methods just to declare something solely to make decorators useful. I am building this off real-world examples built up over the past 8 years; there has been a lot of adoption within the community. This has a lot of benefits even with the narrow scope that it has, while keeping an eye towards what this looks like with functions. We could not currently support decorators on a function parameter without supporting them on the function declaration itself, because they are statically evaluated at definition time, which runs into the same hoisting issue. So I am trying to advance this focused specifically on class constructors and methods, and the parameters of those; again, with an eye to avoiding the blocking concerns. When a solution for function decorators is presented and makes it through, we can easily extend this over to that.

+KG: I understand the reasons that function declaration – well, I understand some of the reasons at least that function declarations don’t have decorators. Function declarations don’t have decorators in part because of the hoisting complexity. There’s another reason, which is that they are less obviously a good idea. However, none of that is relevant to the fact that most of these use cases are not particularly about classes. While I get that you want to restrict the scope to a narrow thing and advance it and later add function parameters, that assumes we are definitely doing function parameter decorators, and I think that is far from a foregone conclusion. I should say this is not a Stage 1 blocker, but I am not okay with advancing to Stage 2 with only class methods having parameter decorators. I just really don’t think we should be in that state. If we’re solving the problems that you are laying out, we are not solving them just for class methods; we are solving them for functions in general. We can’t just do 50% of parameter decorators. We just can’t.

+RBN: My only comment to that is that I think we were in the same boat with decorators. And I think function decorators are valuable. In my first experience with TC39, I was invited by Luke, who was the PM, and he brought me in to present the decorators proposal; this was back in, I think, late 2012 or early 2013 — I would have to look at the email thread.
+These were all things we considered. We went through years of discussions with the Angular team, and part of that design was function decorators. It’s been hoisting, and issues around hoisting and potentially other factors getting in the way if you add a decorator.
That stymied the entire proposal for a while.
+I will get back to it here . . .
+I plan to take all of these things into account in the design. If we end up adopting function decorators, in whatever form that takes, this should work with that. And I agree: I don’t see this advancing to Stage 2 if we make it so that these could never work with function decorators. I do, however, want to avoid the same type of issue where this just can’t advance because we can’t figure out function decorators. There’s too much value in these capabilities; the ecosystem shows they are worth having, even in a limited space. We need to design to support functions, but I would be concerned about blocking this purely on "we haven’t figured out function decorators yet, so we shouldn’t take this".

+KG: That is still my position.

+KG: I did have the next thing on the queue as well.

+KG: The previous thing was concerns about advancing class method parameter decorators outside of parameter decorators more generally. I have a very strong objection to advancing class method parameter decorators without having function parameter decorators. Stage 1, yes; not Stage 2. I wouldn’t be okay without function parameter decorators. I understand the difficulties, but I don’t think we can solve this halfway.

+KG: So that was a specific concern about the current shape. But more generally, I just don’t think that this makes sense in JavaScript. I appreciate there are lots of TypeScript users that like it, but we don’t need to add every syntax that anyone has liked. Yes, if it existed people would use it; it exists in TypeScript, so people use it. But I am sure you are aware that people are validating arguments without using decorators. Decorators are more concise, but generally worse for readers in most cases. The brevity is just not worth the cost in terms of language complexity and additional difficulty for readers. I will not stand in the way of this advancing to Stage 1 by myself. But I don’t think it’s a good idea.

+DE: I want to first say, I really appreciate the time that RBN has put into this: the huge amount of endurance through the ten-year span, and the work to evolve the language despite the huge difficulty of the decorator transition. I hope that we can continue this pattern of having the run-time parts of the language be decided at TC39. I hope that we can continue evolving the language such that, as was said in the chat, we can eventually fill in the gaps for the things people are doing with widely adopted tools which we think are good, and find versions of those. We have to make judgment calls about what things are good.
+I do agree with KG that we should be considering a broader scope: the interaction with things like function decorators and function parameter decorators, as well as extractors and pattern matching. This might be a good time to use the same approach we have been using for class proposals, where we have, you know, five or more class proposals currently under development and a meeting every two weeks to talk about how they interact. There’s a reasonable question of how we want to prioritize the multiple proposals in this space. Personally, I am excited by extractors, function decorators, and pattern matching.
+I do think it’s important that we eventually do something in this space.
There’s also a little bit of a difference in terms of ecosystem adoption here. The other decorator features — class, field, and method decorators — were in both Babel and TypeScript, but parameter decorators were not in Babel; they were just in TypeScript.
+That doesn’t mean this is not valid and important, but the part now at Stage 3 was filling in a cross-implementation feature, making it even higher priority ecosystem-wise. Not to minimize this. Also, some uses of these decorators, like some uses of other decorators, depend on run-time emit, which is not part of this, and I would like to understand that a little better: how it relates to the adoption and upgrade story. If we want to say no to this proposal, we should say no. That would let TypeScript say: you would have to keep using experimental decorators; you couldn’t upgrade to standard decorators and still have parameter decorators. If we didn’t clearly say no, they might make this a TypeScript language extension designed to work alongside standard decorators. If we leave this proposal in limbo for five years, I think that would be bad for the ecosystem and make adoption of standard decorators more difficult, possibly standard decorators plus an extension for parameters that is not misaligned with TC39 decorators. This is an important part of the equation to consider.

+DE: DRR or RBN, do you disagree with any of that?

+DRR: If I can quickly respond: I would like to discuss further. Obviously we don’t want to be in that state for five years, but I would rather have those conversations during Stage 1 than right now . . .

+DE: No, no. I don’t think we are in a position to say no right now.

+DRR: Sure.

+DE: But . . . I would like people who have strong concerns to think about whether this is really going to be a fatal concern. How do we determine that?

+CDA: We are over time. But . . . I think we can use the last few minutes here to hopefully get through the last items in the queue, if that’s all right.

+RBN: I want to respond to DE, and there is a response to KG in this as well. I fully intend to bring a proposal for function declaration decorators and object literal decorators, most likely this year. This is something I’ve been discussing with CHG and others over time. However, because of the existing TypeScript ecosystem and the potential for migrating everybody to native decorators, I wanted to get this out here first, to have these discussions first, regardless of what state function declaration decorators end up in. I plan to have those as part of the discussion, but bringing the class constructor and method cases is higher priority than bringing function decorators themselves.

+DE: Yeah. I think that’s reasonable. But I also think we will have to have the bigger picture together before we can advance these things beyond Stage 1.

+RBN: That’s fine, all right.

+SYG: Yeah. I want to give RBN some direct feedback on the implementation stuff.
+First, personally, I agree with KG on the readability: it will harm readability. But those are my personal feelings, and I will certainly not block Stage 1. My general sense on things that I disagree with on a feeling level, in terms of readability and code organization and such, is that if there is demand — and I think the TypeScript ecosystem has shown demand — who am I to stop people typing what they want to type? Where implementation runs against that is if there are significant performance concerns for users. Not the developers.
The developers want to type the thing because they want it. But if that has negative downstream consequences on the broad population of web users, then it’s a problem we care about, particularly for decorators. With this whole style of metaprogramming, and the runtime having to support it, there are a lot of performance land mines, which is why V8 gave the strong feedback that put decorators in the state they are in.
+So when you think about this space, the advice is: V8 will be evaluating it through the lens that things that look declarative ought to perform declaratively. That’s the lens through which we will evaluate things. If it looks declarative but hides a lot of magic and performance overhead underneath, we won’t agree with it.

+RBN: That’s fair. I have some interesting thoughts about decorators, the ability to bind things, and performance, but I won’t go into detail right now. It’s something we can talk about offline or on the proposal repo.

+DRR: I want to say: there have been comments on the taste of what we presented. A lot of it comes with the context of years of building applications; we ripped examples out and presented them as-is to show the motivating use cases. Over time, we can present the use cases with less time pressure, to build more understanding. That’s all I wanted to put out there, so keep that in mind.

+JHX: Okay. I am not sure how parameter decorators . . . in my opinion, any strong feature could have costs. But if something is difficult, it may be because the structure is bad or just wrong, not because the feature itself is necessarily bad.
+Okay.

+CDA: Okay. We are slightly past time. RBN, did you want to ask for Stage 1?

+RBN: Yes. At this point, I would like to ask if we have support for Stage 1?

+JHD: I’m sorry, I am not on the queue. The title is "class method parameter decorators". Could you phrase that as a problem to be solved?

+DE: I don’t think proposals at Stage 1 need titles that are problems to be solved.

+JHD: It’s been asked for a number of times in the past, since that’s what that stage represents. I am not necessarily asking you to retitle the proposal, but as has been commented on a lot of titles, these were goals. If we are coming at this as "we want this syntax feature, and here are all the use cases" . . . that is putting the cart before the horse.

+SYG: Stating the goals and the problem is different from what the title is.

+JHD: Sure. Forget the title. What is the problem statement?

+RBN: Essentially, there were two. One is that we would like to enable more flexible metaprogramming capabilities that allow the motivations I listed: request routing and so on; these are hard to do today. I think I showed, in the example of FFIs, the eyeballing and disconnect in the current FFI APIs. These are hard to do, especially with class methods, which have the same issue with method decorators: we don’t have the ability, at definition time, to inject and intercept and make these changes and do this type of decoration.
+These are things that are really hard to do right, or that become more complicated because of the eyeballing you have to do. I mean, you could emulate parameter decorators using normal method decorators — which is how they have effectively worked forever, a wrapper around what the parameter decorator would have been — but you have to eyeball what the index is.
If I refactor and move a parameter around, I have to figure out what the index has changed to; it’s complicated if you want these benefits. So we would like to make this a lot easier; this is a feature to make these capabilities easier. The other problem statement is: we have a large TypeScript community that we would like to migrate to native decorators, hopefully without rewriting their code to switch. We understand there’s a likelihood that, if this makes it to Stage 1 and beyond, there might be changes that result in limitations; the same thing kind of happened with field decorators, with the need for the `accessor` keyword and the limitation that we no longer have the ability to have paired or entangled get/set. We have shown there is a broad community interested in this that has used it, and we would like to bring that capability here.
+Again, we are trying to solve an issue around improving the developer experience, and we have evidence backed by years of users showing this is a great way to do that.

+JHD: Thank you.

+CDA: We are way past time. Asking for consensus: you have explicit support from MAH, DE and JHX.

+RBN: Do we have anyone objecting to Stage 1?

+KG: I don’t object to Stage 1. But you heard my thoughts on advancing in the current form to Stage 2.

+CDA: Okay. You have Stage 1. Thank you.

+RBN: Thank you very much.

+### Speaker's Summary of Key Points

+- Parameter decorators were part of the initial draft of the decorators proposal and have been under consideration outside of plenary since 2013.
+- They have existed as part of TypeScript’s `--experimentalDecorators` for 8 years.
+- Broad adoption within the TypeScript community (Angular, NestJS, InversifyJS, LoopBack 4, and others).
+- Enables decorator metaprogramming targeting parameters to assist with operations that are hard to achieve today (FFI type marshaling, DI constructor injection, HTTP route parameter binding). Alternative approaches require divorcing parameter-specific meta-information from the parameter itself, which can be a maintenance headache (see `ffi.Callback` example in slides).
+- Makes some operations more capable, such as moving asserts out of the method body and onto the parameters, allowing that information to be used outside of the method (such as binding a method and its parameters to HTML form validation).

+### Conclusion

+- Stage 1
+- Explicit support from MAH, DE, JHX
+- Important to consider Function Decorators before this can advance to Stage 2.

+## Import reflection discussion continuation

+Presenter: Mark Miller (MM)

+- [proposal](https://github.com/tc39/proposal-import-reflection)
+- no slides

+MM: So I think I did express the issue in the next presentation, but it does apply to this one as well: there is a whole bunch of proposals related to modules that are not orthogonal — they are coupled to each other — and they need to be considered as an organic whole to maintain the coherence of the language.
+So I would like to hold everything back from Stage 3 until we can coordinate across the set of module proposals.
+Because otherwise, once something goes to Stage 3 without that, we might have painted ourselves into a corner.

+GB: It’s important to try to define what we mean by coordination here.
There has been a tremendous amount of coordination in the modules group, which meets every two weeks: a 1½-hour meeting across all the module specs to ensure we are maintaining alignment between them. We have to be careful about holding anything back, because otherwise it will be very difficult for anything to progress if we can always make this argument. So I wonder how we strike a balance there, to make sure we’re continuing to build the foundations.

+MM: First of all, I admit that I might be underestimating the degree of coordination there is, because I have not been attending these meetings. I waived this consideration during Nicolò’s presentation because I know how well he understands the overall considerations of modules, and I understand you do too, from what I have heard in other conversations.
+But I am certainly not oriented enough right now as to how import reflection relates to the overall set of module concerns. I mean, you’re not proposing it for Stage 3 right now. That’s correct, right?

+NRO: That is correct.

+MM: So this doesn’t immediately come up, but I have a bunch of open questions about how it relates to some of the other module proposals that are fuzzy right now, so we can take them offline. But I do want to express the concern that there is that coordination. It’s much like what we faced in the early days of classes: it was very important to figure out how to break things up and have pieces go forward such that we weren’t painting ourselves into a corner.

+GB: Yes. If we can come up with a plan to work through those concerns, that’s great, so we don’t end up in a situation where there are surprises when we seek Stage 3. Would it help to bring some of these questions to the SES meeting and move some of this discussion into that context?

+RPR: Then maybe MM could be invited to the module meetings.

+DE: There’s an open invitation if you want to join. It’s at the same time as the SES meetings, on Tuesdays.

+MM: Yeah. Attending more meetings is not something that is necessarily easy for me to do at all at this point. But I very much appreciate that it has been raised at the SES meetings. What I saw in your presentation was quite different — and by the way, in ways that I am very much attracted to. I really liked the orthogonality, in terms of the five phases and how you could have a reflection into each of the five phases. I thought that was really beautiful and elegant. But it was different from anything we discussed in the SES meetings.

+GB: We can certainly come to the SES meetings to explain this and express what the layering is for the proposals.

+MM: Okay. The other thing — let me just say, it’s not fair as a consideration to hold things up, but it practically is a consideration for me — as you know, Kris Kowal has been on paternity leave, which extends through April 27; that’s the day he’s back. And I have been hampered in understanding the overall ecosystem of the modules epic in Kris's absence.

+NRO: [inaudible] bringing forwards the learnings [inaudible] the SES meeting.

+MM: Yeah. That all sounds good.

+RPR: SYG, do you want to go with your question on the queue?

+SYG: What was my question on the queue? I see a screenshot. Yes. Yeah. I guess that wasn’t answered by Mark’s question. Just mechanically: this has to be a standalone proposal still. So what is the layering?

+NRO: Mechanically, the layering is that there are no dependencies. This can land as-is by itself.
+
+RPR: Good enough for me. DE is not in the room. If we move . . . back to SFC. Is SFC in the queue?

+SFC: I added this two days ago. I wanted to say that I like this way of expressing the five phases of module loading. This is the clearest of the various module loading proposals, and I just like the direction this is going. I think it’s very clear.

+DE: Yeah. At first, when import reflection was being discussed, I was pushing for import reflection to be part of the same thing as import attributes. Part of that might have been nervously trying to build the case for import attributes, which we have now agreed on collectively. Part was an argument we also made that we should not have too much blowup in the syntax space used by the import statement. The explanation of what phase the modifiers operate at was very persuasive to me: the syntax and the specific semantics say which phase we’re talking about, and the different features, each independently motivated, turn out to fit within the phase scheme.

+DE: So at this point, I am convinced that this is a reasonable way to evolve the import grammar without becoming too complicated, and with a reasonable separation of concepts.

+LCA: Okay. That’s it for the queue. Is there anything else? Then I would once again like to ask for Stage 3 reviewers: NRO, Chris, and DE. I am not sure DE will make the review for the [inaudible]; we would very much appreciate it so we can ask for advancement.

+MM: The next meeting is at the end of May. Correct?

+[various]: Yes. Yes. Mid-May.

+MM: Let me say, I like the overall direction. A very positive one. I want to make sure it’s all coordinated well.

+LCA: Thank you.

+### Speaker's Summary of Key Points

+- We talked about how, going forwards, module reflection can be coordinated with the other proposals related to modules.

+### Conclusion

+- Plan is for import reflection to come back at the next meeting in May to ask for Stage 3.

+## Async Explicit Resource Management again

+Presenter: Ron Buckton (RBN)

+- [proposal](https://github.com/tc39/proposal-async-explicit-resource-management)
+- [slides](https://1drv.ms/p/s!AjgWTO11Fk-Tkodu1RydtKh2ZVafxA?e=yasS3Y)

+RBN: I am going to bring up where we left off. The slides I am about to show are the same deck as before; I have added some slides for additional discussion. Let me get those shared. This is the slide we left off on, and I am going to go into a little discussion here. The consensus we had on Tuesday was to move forward with the syntax. I did an investigation into this and put up the PRs I mentioned; at least SYG has looked at it, and I talked it through with him on Matrix yesterday. To give an example of what we are looking at in introducing a cover grammar for `await using`, I will zoom in on the slides. This is modeled on the existing use of cover grammars, specifically the cover grammar for parenthesized expressions and arrow parameter lists, and the cover grammar for call expressions and async arrow heads. Essentially, rather than producing an await expression, we produce a cover grammar that covers the same thing that an await expression covers.

+RBN: But then it wouldn’t be refined until a later point, when static semantics are applied.
+Again, this could be covering `await` UnaryExpression, identical to an await expression.
RBN: Where this matters is as you bubble up out of assignment expressions to expression statements: you cannot have an identifier name follow an expression on the same line. That is invalid today, and it wouldn't trigger ASI, because everything is on the same line. That is specifically what we want to opt into for `await using`: in that case we would have an `await using` declaration. This uses the CoverAwaitExpressionAndAwaitUsingDeclarationHead cover grammar, which matches exactly the case where the expression parse fails, because you cannot have something following the expression in that position. Here, we require no line terminator and then parse a binding list that does not include patterns. The specific production parameter shown here is relatively new; I just merged it into the resource management proposal because of an editor comment about using the parameter in two different ways, so this is more consistent, and I am presenting it here as well. We successfully parse this, and the static semantics verify that it is a valid cover; `await`, with no line terminator, followed by `using`, slots into the space we had before. Now, the implication of this is that CoverAwaitExpressionAndAwaitUsingDeclarationHead will eagerly consume what would be the content of the AwaitExpression, and an expression followed by an identifier name is illegal.

RBN: So, a number of example cases for this. If you have `await x` today, this is the cover await expression, later refined to an AwaitExpression because it is a valid AwaitExpression.

RBN: If you have `await using` on its own, this is just an AwaitExpression; `using` is not reserved, so it's an identifier in this case. If you have `await using` followed by a new line and `x = y`, these are two separate expression statements because of ASI. This would fail to parse in both cases, and I will show that more on the next slide; in the expression case, the parse fails because the next token is an identifier, and because of that failure we reparse as if we had inserted a semicolon at the end of the line.

RBN: The reason this fails in the `await using` declaration case is that the `await using` declaration requires the binding list on the same line as the keywords, so it wouldn't match. With `await using x`, again, we parse `await using` via the cover grammar. We successfully parse the `await using` declaration; however, we have an early error for lexical bindings not having an initializer when the binding is const-like, so this results in a syntax error during the early error static semantics. And the other case: if you have `await` followed by a new line and then `using x = y`, this fails to parse in both cases. Parsed as an AwaitExpression, it fails at the expression level, because you again have an identifier on the same line after `await using`; and it fails to parse as an `await using` declaration, because you have a new line between `await` and `using`. So that is not valid in JavaScript. By contrast, `await using x = y` on one line fails to parse as an expression, but we back up until we reach the level where we are parsing expression statements, having already consumed these tokens, and successfully parse it as an `await using` declaration.
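The cases RBN walks through above, restated compactly (a sketch of the intended behavior as described in the slides, not normative spec text):

```js
await x;           // cover grammar, refined to an ordinary AwaitExpression

await using;       // AwaitExpression; `using` is just an identifier here

await using
x = y;             // two statements via ASI: `await using;` then `x = y;`
                   // (the binding list must follow on the same line)

await using x = y; // fails as an expression, reparsed as an `await using`
                   // declaration binding `x`

await using x;     // SyntaxError: early error, lexical binding lacks an
                   // initializer

await
using x = y;       // SyntaxError: fails as both an expression and a
                   // declaration (new line between `await` and `using`)
```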
RBN: One of the other things that we looked into – I don't have this in the slides, but I investigated `async using` as a grammar. It has the same level of complexity, except there's one small benefit to the `await using` grammar: in both the AwaitExpression case and the `await using` declaration case, you are in a +await context, so it's fairly easy to restrict those cases. In the `async using` case, an `async using x = y` style declaration is in +await, but `async using` as an async arrow function head is not necessarily parsed in +await. So there's a little bit of a discrepancy there.

RBN: On the parser complexity side, I did implement this in TypeScript three weeks ago, trying to test each of the possible cases. TypeScript is not strictly LR(1); most of the parser is, but we have a couple of cases of effectively unbounded look-ahead in how we deal with generics and arrow functions. But we primarily stick to LR(1) and LR(2); we might max out at three-token look-ahead in cases that are not specifically handling arrow functions. In TypeScript, this requires two tokens of look-ahead to disambiguate: if we see the `await` token in a statement context that could allow an expression statement, then see `using` as the next token with no line terminator in between, and then an identifier with no line terminator in between, it's definitely not an AwaitExpression but an `await using` declaration, and we parse it as such (a sketch of this check follows below). For a parser that permits two-token look-ahead, performing that look-ahead keeps this simple.

RBN: So . . . again, I talked about this with SYG. I haven't really had feedback from anyone else about the cover grammar, so I am not sure if there are any issues with it. I think this is feasible, and I would like the chance to go to the queue, see if anyone has feedback or concerns, and potentially see if this is enough to advance to Stage 3.
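A minimal sketch of that two-token disambiguation in a hand-written recursive-descent parser (hypothetical scanner API; `peek`, `value`, `isIdentifier`, and `hasLineBreakBefore` are illustrative names, not TypeScript internals):

```js
// Called when the current token is `await` in a statement position that
// could begin an expression statement. Returns true if what follows must
// be an `await using` declaration rather than an AwaitExpression.
function isAwaitUsingDeclaration(scanner) {
  const next = scanner.peek(1); // token after `await`
  if (next.value !== "using" || next.hasLineBreakBefore) {
    return false; // `using` must follow on the same line
  }
  const afterUsing = scanner.peek(2); // token after `using`
  // A binding identifier on the same line commits us to the declaration;
  // anything else falls back to parsing an AwaitExpression.
  return afterUsing.isIdentifier && !afterUsing.hasLineBreakBefore;
}
```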
WH: I am curious why you have the restriction against `using` followed by `await`.

RBN: Part of it is copy-paste, and trying to maintain similarity and consistency. Another part of it is that if this advances to Stage 3, once we have merged the proposals, I plan to come back and request removing it, to avoid the complexity around `using await` orderings. Right now it's just sitting there, but I wanted to keep the restrictions on `using` for the time being.

WH: Yes. I looked at it some yesterday; that was the only change I request: get rid of that restriction. I am still thinking through all the cases. For example, `await` can be used as an identifier in −await cases, except that it can sometimes also be used as an identifier in +await cases. I haven't thought through all the possibilities yet.

RBN: I agree we don't really need the look-ahead restriction. An AwaitExpression and an `await using` declaration only occur in the +await case, so you wouldn't use `await` as an identifier in those cases.

WH: The BindingIdentifier grammar allows use of `await` as an identifier in +await cases. It's counterintuitive.

RBN: That would be interesting, because I thought that was specifically restricted to prevent that from happening.

WH: It is doing it. There is a note that explains why.

RBN: And the other thing is that the look-ahead restriction would only have affected the next identifier in the binding list, not all of them.

WH: Yes. There is a static semantic there. I am still working through cases such as `await of` and `await using`… I need to convince myself this will work. I just need to think about this more.

RBN: One of the cases with `using` declarations in a for-of: because `using` is not a reserved keyword, if you have `for (using of` – is this a `using` declaration named `of` that you are binding to the result of whatever is on the right? Or are you iterating with a plain identifier named `using` over a thing named `of`? I think we still have the same restriction there. With `await using`, you have entered a two-keyword case that takes you out of that ambiguity: this has to be `await using`. `await x of` isn't valid, because the thing before `of` has to be a left-hand-side expression and `await` is a unary operator, so it would be wrong for that to parse. We don't need an `of` restriction for the for-declaration case, as sketched below.
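To illustrate the for-of cases RBN describes (a sketch of the ambiguity and its resolution as discussed, not verbatim from the proposal text):

```js
// Ambiguous with a single keyword: is `using` a declaration keyword, or a
// plain variable being assigned each element? The discussion above suggests
// a restriction keeps this as ordinary iteration over a variable `using`.
for (using of x) {}

// Unambiguous with two keywords: this can only be an `await using`
// declaration, because `await using` cannot be a left-hand-side expression
// before `of` (`await` is a unary operator), so no `of` restriction is
// needed here.
for (await using x of xs) {}
```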
WH: Yeah. No red flags, but I haven't convinced myself fully that it works yet. I just need more time.

DE: The cases you presented in your slides, dealing with new lines – that looks great. Also, making the normative change to allow `using await` in a non-+await context seems fine; also fine if we didn't do that. It's pretty hard to read the cover grammar. I think it looks right. Have the editors thought about moving away from cover grammars to some other way of notating this stuff?

KG: No.

DE: Okay.

KG: I mean, it might be a good idea, but we haven't thought about it.

SYG: Happy to defer to WH on a closer reading of the grammar. Talking with RBN, I am convinced this is easily implementable in a handwritten recursive-descent parser with limited look-ahead – and TypeScript's parser is of course written that way. That's the thing I was really after, and I am convinced of it. So, happy with Stage 3.

RPR: Yes. That's the end of the queue.

RBN: So, knowing that – the conditions for this are a bit interesting. The proposal is at Stage 2. In the January meeting, the sentiment and the consensus were to achieve Stage 3 in March; the terminology that was used was "no later than". That said, it is true that the specific cover grammar was not provided in advance of this meeting, so if someone does have concerns and would like to block consensus, that is a potential option. But I would like at this point to ask whether the condition on the `await using` syntax has been resolved to the committee's satisfaction. Is there a possibility we could advance to Stage 3 with this syntax?

RPR: Are you asking for Stage 3?

RBN: Yes, I am.

WH: I will support, unless I find problems in the grammar in the next few days.

RBN: From my investigations, it's 100% feasible. The question is whether the cover grammar I provided matches what is needed to make this work. I have put a bunch of different cases, not just the ones I have shown, on the slides here to verify that a cover would be successful for this. So I think it's definitely something that we will be able to solve, even if it requires amendments to the cover grammar.

RBN: So, given that condition from Waldemar, and a couple in the queue as well . . .

RPR: Yeah. +1 from DE, MM, and CDA. So I am only hearing support, and WH has given his conditional review. It sounds like people are confident.

RBN: To clarify, are there any objections, given that the specifics of the cover grammar were not available in advance?

WH: Can I ask you to remove the restriction against `using await`?

RBN: Do you mean the look-ahead restriction against `await`?

WH: Yeah. Just delete the look-ahead restriction.

RBN: It's definitely not needed in the `await using` case, because we already validate after the fact. So I don't think that would be a normative change, but on the off chance it is, if we have Stage 3, even conditionally, I will ask for consensus on that change now.

WH: Okay.

DLM: +1 to the conditional advancement based on WH's review.

DE: It was sort of suggested in that exchange that we maybe ask for consensus on allowing `using await` in a non-await context, and I would explicitly support that, even as much as it's weird and ugly. Are we calling for consensus on that, or considering it for the future?

RBN: There are two changes. One is removing the look-ahead restriction I put in these slides for the `await using` declaration, since it's not really necessary; that's more of an editorial question. The other is removing the `using await` restriction. I would ask for consensus on removing that restriction; it's no longer needed given the change to the syntax.

DE: The first is an editorial fix.

RBN: I concur with that.

DE: Good to hear if anybody has concerns. Now that we're switching from `using await` to `await using`, there's no reason, if you're not inside an async function, to prohibit `using await` for cleaning up something called `await`. What do people think?

RPR: SYG +1.

RBN: Any objections to that change?

RPR: Nothing from the room. No objections on the queue.

RBN: It sounds like we have conditional advancement based on WH's review, then.

RPR: I agree.

RBN: The last thing I wanted to bring up, as a reminder: when we reached consensus on the condition in January, one of the things we also had consensus on was that the async and sync versions of the proposal will merge down to one proposal for the course of Stage 3 to Stage 4, in part to make my life easier when it comes to specification changes, since I would otherwise be maintaining them in parallel and writing tests for both. To clarify, this was a prior consensus, contingent on reaching Stage 3. I wanted to remind the committee that I will do the merge once this final condition is met.

WH: This will make everyone else's life easier too.

### Speaker's Summary of Key Points

- Consensus on day 1 to use the `await using` keyword ordering, given a working cover grammar.
- Cover grammar provided via https://tc39.es/proposal-async-explicit-resource-management/pr/15/
- Parser complexity: only two-token look-ahead is required in parsers that are not strictly LR(1).

### Conclusion

- Stage 3, conditional on final review of the cover grammar by WH.
- Consensus on the normative change to remove the `[lookahead != await]` restriction for sync `using` declarations.
- Support from WH, SYG, DLM, MM, DE.
- WH, SYG, and MF were already the reviewers back in Stage 2.

## Incubator Call

RPR: SYG, did you want to do anything with the incubator call?

SYG: No. I really would like to get these started again, but I haven't been successful in galvanizing other folks to run them, and I have not had time myself.

## Non-violent communication course (NVC)

RPR: This was not on the agenda because I forgot. We have budget that we had reserved from Ecma for the non-violent communication course (NVC). There were requests from the ExeCom, and we haven't done all the items that were asked of us. As it stands, at the next ExeCom meeting, happening in three weeks' time, we are planning to withdraw our funding request, unless someone valiant wants to step up and really do the work of arranging it.
DE: The work of arranging it means putting an item on the agenda for this meeting and getting two-thirds support of the committee for continuing it, after having presented the syllabus. Chris actually looked into the syllabus in more detail, but none of us had the energy to put it on the agenda and argue it through the committee. Do you have any thoughts?

CDA: Yeah. That wasn't the significant burden. We didn't have a chance to meet again as a group (the inclusion group) prior to plenary to discuss this, but the ask was beyond that. It was, first of all, that we raise in plenary the types of issues that led to needing the training. I did go back and forth on whether to add this as an agenda item, but I was concerned that it could manifest as airing grievances, and I really didn't want finger-pointing or anything like that.

CDA: So I think that it's still worthwhile to pursue, but if we renew that effort, we should try to predicate it on something a little less focused on combativeness and acrimony, and really frame it as something more positive. Does that make sense?

DE: Yeah. This seems like meaningful immediate feedback that we can bring to the ExeCom while withdrawing the request for now, based on the previously agreed-on deadline.

CDA: Yeah. Here is the verbatim text: "A discussion be held among TC39 membership during a regularly scheduled plenary session to surface the types of instances and issues that have encouraged newcomers to stop attending as well as other communication issues that led to the request for this NVC training and it should . . . during the training along with the proposed training schedule. Discussion should seek consensus that the training is appropriate and expected to address the improper communication techniques and concerns that initiated the training request."

CDA: It goes on. But I just felt that the initial ask – to surface the types of instances and issues that have discouraged new attendees, and other communication issues – may not have been a fruitful endeavor.

DE: Yeah. It was very important when we were setting up the Code of Conduct in the first place not to have it be oriented around blaming any delegates in particular. And to work on improving this stuff going forward, we will need to do something. I agree with you, CDA. Thanks for looking into this.

CDA: I will say, I've been pleased with how this plenary has gone. I really can't think of a single issue or problem in this sort of area. So I am very happy with the collegial atmosphere that we have had for this plenary. I think the last one went well too, but I wasn't able to attend much of it.

RPR: We are out of time, but the initial request went in four years ago, and I think the committee has done well since then.

DE: That's not to say that such a training would not be helpful.

RPR: Agreed – it could still be helpful.

JHD: Perhaps just less urgent than originally believed.

DE: Sure.

RPR: This is the end of the meeting. Thank you to our meeting host, F5!

_END OF MEETING_