# 26 January, 2021 Meeting Notes


Remote attendees:

| Name | Abbreviation | Organization |
| ---- | ------------ | ------------ |
| Ross Kirsling | RKG | Sony |
| Ujjwal Sharma | USA | Igalia |
| Waldemar Horwat | WH | Google |
| Bradford C. Smith | BSH | Google |
| Krzysztof Kotowicz | KOT | Google |
| Sergey Rubanov | SRV | Invited Expert |
| Ron Buckton | RBN | Microsoft |
| Rob Palmer | RPR | Bloomberg |
| Robin Ricard | RRD | Bloomberg |
| Daniel Ehrenberg | DE | Igalia |
| Jason Williams | JWS | Bloomberg |
| Philip Chimento | PFC | Igalia |
| Chip Morningstar | CM | Agoric |
| Caio Lima | CLA | Igalia |
| Devin Rousso | DRO | Apple |
| Michael Ficarra | MF | F5 Networks |
| Jordan Harband | JHD | Coinbase |
| Dan Clark | DDC | Microsoft |
| Leo Balter | LEO | Salesforce |
| Jack Works | JWK | Sujitech |
| Richard Gibson | RGN | OpenJS Foundation |
| Mary Marchini | MAR | Netflix |
| Guilherme Hermeto | GH | Netflix |
| Kaylie Kwon | KK | Netflix |
| Ben Newman | BN | Apollo (fka Meteor) |
| Cam Tenny | CJT | Igalia |
| Frank Yung-Fong Tang | FYT | Google |
| Felipe Balbontín | FBN | Google |
| Shu-yu Guo | SYG | Google |
| Justin Ridgewell | JRL | Google |
| Zibi Braniecki | ZBI | Mozilla |
| Istvan Sebestyen | IS | Ecma |
| Rick Button | RBU | Bloomberg LP |
| Yulia Startsev | YSV | Mozilla |
| Chengzhong Wu | CZW | Alibaba |

## Intl.DateTimeFormat for stage 4

Presenter: Felipe Balbontín (FBN)

FBN: To start, I would like to take some time to remind everyone about the motivation for this proposal. It is very common for websites to need to display date ranges or date intervals, for example to show the span of an event, the duration of a trip, or the time period covered by a graph. So how can we format an interval in a really concise way? A naive approach would be: okay, let's instantiate a DateTimeFormat with a specific locale and set of options, format the two dates independently, and then join the strings together using some kind of delimiter, like in this case a "-". This would work, but it has a couple of problems. The first problem depends on the kind of calendar fields that you're trying to display; by "calendar fields" I mean the month name, the day of the month, or even hours, minutes, and seconds. Depending on the kind of fields you're trying to display and the dates that compose the interval, you may end up repeating some of these calendar fields unnecessarily, because they're not adding any new information. For instance, in this example, given that the two dates you're trying to format are in the same month, you're actually repeating the month name. The second problem is that the way in which you format or represent intervals is locale dependent. In English it's perfectly fine to use that "-", but there are other locales that would use a different string or a different symbol. Along the same lines, the order in which the dates are formatted is also locale dependent: there may be locales where it's customary to display the second date first.
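A minimal sketch of the naive approach described above (the locale, options, and dates are illustrative):

```js
// Format each date separately and join with a hard-coded delimiter.
const dtf = new Intl.DateTimeFormat("en", {
  year: "numeric",
  month: "short",
  day: "numeric",
});
const start = new Date(2007, 0, 10);
const end = new Date(2007, 0, 20);

const naive = `${dtf.format(start)} - ${dtf.format(end)}`;
// "Jan 10, 2007 - Jan 20, 2007": repeats the shared month and year, and
// hard-codes an English-centric delimiter and ordering.
```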

FBN: Because of these issues, we put together the formatRange proposal. In this proposal we are adding two additional methods to DateTimeFormat: formatRange and formatRangeToParts. Both receive two dates as parameters, the dates that compose the interval. The first one returns a string that contains the most concise string representation of the interval for the selected locale and set of options. The second one, formatRangeToParts, returns a list of items where each item represents a different part of the formatted interval. This is really close to the other formatToParts methods that have been added in the past to DateTimeFormat and NumberFormat, and we'll get into more details about that one later.

FBN: It's important to note that this proposal is based on (?) on the intervals. To give you an example of how we would use formatRange: you instantiate a DateTimeFormat, providing the locale and options, and then you just call formatRange with the two dates. It outputs a string, as I mentioned before, with the most concise string representation for the date interval or date range. In this case, because the two dates are in the same month, the month name and the year are displayed just once in the formatted string. formatRangeToParts outputs a list of items, each of which, as mentioned before, represents a specific part of the formatted interval. Each of these items contains three fields. The first two are the same ones currently returned by the regular formatToParts: the type indicates the calendar field associated with the particular item, and the value is a substring of the formatted string. We added a third field, source, which indicates which of the two dates a particular item is related to or comes from. As you can see here, for instance, the month name and the year are shared between the two dates, while the first day displayed comes from the first date and the second day comes from the second date. The reason for adding a formatRangeToParts method here is the same as for the other formatToParts methods: to allow a more flexible display in the UI.
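Roughly, the two methods behave as follows (a sketch; the exact literals, delimiter, and part order depend on the locale and the implementation's locale data):

```js
const dtf = new Intl.DateTimeFormat("en", {
  year: "numeric",
  month: "short",
  day: "numeric",
});
const start = new Date(2007, 0, 10);
const end = new Date(2007, 0, 20);

// Shared fields (month, year) are emitted only once:
dtf.formatRange(start, end); // "Jan 10 – 20, 2007"

// Each part carries a source: "startRange", "endRange", or "shared".
dtf.formatRangeToParts(start, end);
// [
//   { type: "month",   value: "Jan",  source: "shared" },
//   { type: "literal", value: " ",    source: "shared" },
//   { type: "day",     value: "10",   source: "startRange" },
//   { type: "literal", value: " – ",  source: "shared" },
//   { type: "day",     value: "20",   source: "endRange" },
//   { type: "literal", value: ", ",   source: "shared" },
//   { type: "year",    value: "2007", source: "shared" },
// ]
```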

FBN: Now that I have gone over the motivation for the proposal and some examples, I would like to present the biggest updates that have happened since the proposal reached stage 3. One of the biggest user-facing normative changes is that previously, when either the start date or the end date was undefined, we were throwing a RangeError. After some feedback we received, as well as a discussion at the ECMA-402 working group, the conclusion was that it was probably better in this case to throw a TypeError, because a TypeError better represents the kind of error we want to convey here. It's also important to note that this makes the behavior more consistent with other parts of the 402 spec, where we throw a TypeError in similar cases when an argument or an option is undefined. The other two big updates to the spec text: we now support two additional options that were added to DateTimeFormat and recently advanced to stage 3. The first one is fractionalSecondDigits. This one is simple: the option allows whoever is using DateTimeFormat to display fractional seconds when formatting a date, and now we do the same when formatting date intervals. Similarly, we now also support formatting intervals when the user sets dateStyle or timeStyle.

FBN: So what's the current status of this proposal? The proposal was implemented in JSC and SpiderMonkey, and was shipped in Chrome. Also, there are test262 tests, and we got editor sign-off.

[queue is empty]

FBN: I would like to ask the committee for approval to advance to stage 4.

RPR: Okay. So are there any objections to stage 4? [silence] No one is on the queue. So, yeah, congratulations, you have stage 4.

### Conclusion

Stage 4

## ResizableArrayBuffer and GrowableSharedArrayBuffer updates

Presenter: Shu-yu Guo (SYG)

SYG: I am not asking for advancement. This is just a few updates to let folks know of some changes I am making to the proposal, and for discussion around those changes.

SYG: The first change: currently the proposal has the behavior that once a view goes out of bounds, it stays out of bounds. This was a kind of weird mental model and also involves some extra implementation complexity, so I'm planning to change that on a recommendation from Maria, a fellow V8 team member. The second change I wanted to discuss with folks who have opinions here is to maybe allow implementations to do implementation-defined rounding of the max size, and possibly also of the size passed to the resize method. The motivation for the second change is that one of the original motivating use cases for this proposal is WebAssembly, which only allows resizes of its linear memory in multiples of its page size, which I think is 4K. So you can imagine that for a large buffer you might want to resize, especially with the in-place growth strategy, where you reserve some pages but don't actually commit them in the OS, you want to round up to a full page size. And the question is: should we build the API to make that kind of rounding observable? I'll go into that a little bit. So first up, the out-of-bounds behavior. Currently the out-of-bounds behavior is something like this: suppose we have a resizable array buffer that is initially 128 bytes and has a max size of 256 bytes. I have a view on it of 32 uint32s starting at offset 0, so it's a fixed window view from byte 0 to byte 32 × 4. Initially you can view everything, because everything is in bounds. It's possible for this u32 view to become partially out of bounds, for example if I resize the buffer to be only 64 bytes. The view technically can still look at bytes 0 to 64, but since it's a 32-element uint32 array, part of it is out of bounds. The current behavior is that when that happens, it behaves as if its buffer were detached: any access to the elements returns undefined, the length reports as zero, and importantly, it can't ever go back in bounds. If it was ever observed as being out of bounds, like this one has been, then even if the underlying buffer were resized to a size where the entire window view becomes in bounds again, it never goes back in bounds: it still reports its length as 0 and still reports all elements as undefined. I thought this would make things easier to reason about, because you have more guarantees: if you ever saw something out of bounds, you don't have to worry about it ever coming back into bounds. But the weirdness is that, one, this only happens if you actually observe the view going out of bounds. Suppose I deleted these two lines that I have highlighted. That is, I resize the underlying buffer, I don't actually touch u32, and then I resize the underlying buffer again so that it goes back into bounds. In this case the current/old behavior is that u32 never goes out of bounds, because I never observed it going out of bounds, so it keeps working forever. Upon thinking about it some more and getting feedback from other V8 folks, this is a problem. Imagine you have a debug build and a release build of your app, and you're poking at the buffer in the debug build. Because you poked at it, you might observe it out of bounds, and in the release build you never observe it out of bounds.
SYG: This possible divergence in behavior between debug and release is hard to think about, and it's just weird that the view only gets into this detached-like state, which it can't come back from, if you look at it. Secondly, the current behavior makes things more complicated for implementations as well, because each instance now has to track "have I ever been observed to be out of bounds?", and if so, treat all accesses as undefined, and so forth. That extra bit is not really necessary if we just change the behavior to be what you would expect of bounds checks, which is that you check on every access. So the new behavior is basically the same except for the last three lines: if I make the underlying buffer back in bounds, because I'm just checking bounds on every access, it is possible for an array buffer or typed array view to go back into bounds. What has not changed in the new behavior is that if a typed array is partially out of bounds, you can't access any part of it. But if you resize the underlying buffer such that the entire typed array is back in bounds, then you can access it again. Of course the newly resized memory will be zeroed, just like with a regular resize. The mental model is basically: on every access, whether a length access or an element access, you check the length of the underlying array buffer and whether the entirety of the typed array is in bounds of it. If so, you can access; if not, everything is undefined.
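A sketch of the slide example, using the constructor and resize method as proposed at the time (the proposal has since evolved, so treat the exact API shape as illustrative):

```js
const rab = new ResizableArrayBuffer(128, 256); // 128 bytes now, max 256
const u32 = new Uint32Array(rab, 0, 32);        // fixed window: bytes 0..128

rab.resize(64);  // the view is now partially out of bounds
u32.length;      // 0 - behaves as if detached
u32[0];          // undefined

rab.resize(128); // the entire view fits again
// Old behavior: length stays 0 forever, because the view was *observed*
// out of bounds. New behavior: bounds are checked on every access, so the
// view simply comes back:
u32.length;      // 32
u32[0];          // 0 - newly resized memory is zeroed
```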

SYG: There is an alternative where we allow the typed array to be partially accessible when it is partially out of bounds. I reject this alternative; I don't think it's a good idea, because the main way typed arrays can go out of bounds to begin with is if you give them a fixed length. Remember that as part of this proposal I am also proposing auto-length-tracking typed arrays. An auto-length-tracking typed array can go out of bounds if it started at a non-zero offset and the underlying buffer is resized to be smaller than the offset. But other than that, if a typed array is partially out of bounds, it's because it was given a fixed length. And if we allow a partially out-of-bounds typed array to be partially viewable, that seems really weird. Do you report the length as only the part of the array that is in bounds? If so, that seems to break the intention and expectation of the API: if I create a typed array with an explicit length, and then I view it later, because it's partially out of bounds I get a typed array of a different length? That seems not good. So I would rather not have this alternative and keep the behavior that if any part of the typed array is out of bounds, the entire typed array is inaccessible.

SYG: So that is the first change I am proposing. Before I move on to the second change, since it's kind of unrelated to this one: is there anything in the queue for questions? [no]

SYG: Then I will continue with the proposed second change. This is more of a discussion; I haven't really made up my mind on the extent of this change. The basic idea is: when you are making a new resizable array buffer, should we allow the implementation to do some implementation-defined rounding up of the maximum size? We'll start with the maximum size. The same question also extends to the length, but the max size is, I think, less controversial, and it's very reasonable to round it up, because if you are doing growth in place, the implementation strategy is that you would call something like mmap, or your OS's equivalent of mmap, to get some virtual memory pages, but not actually have them backed by physical memory until you need them. And any pages you reserve are going to be a size that is a multiple of your operating system's page size. So even if, as in this example, I request four thousand bytes, the implementation will probably round that up to some multiple of the page size, which is often 4K. So just for the sake of the example, it's rounded up to 4K.

SYG: The pros and cons of allowing this: like I said, for large buffers you want to align on page boundaries anyway, and implementations have one fewer field to track. The max byte length is currently observable, because it's reported by maxByteLength, and resize is specified to throw if you provide it with an argument larger than the max byte length. If we keep the current specification draft, where the implementation does not do any observable rounding, it can of course do its own rounding under the hood, but then it has to track both "what is the size that I actually reserved" and "what is the size that I must report to script". If we say in the spec that rounding is allowed to happen, so that if you request a size you might in fact get a slightly bigger size, then we don't have to track two separate lengths. A con of making this rounding behavior observable is that we might expose the engine's allocation strategy. For example, for very small buffers: if I want a 15-byte buffer, which is much smaller than a page, an engine might reasonably choose not to allocate a whole page for it; it might just malloc some memory instead. In that case you can observe that maxByteLength was not rounded up, and that gives you some information about what the engine is actually doing, which might be undesirable. As a final note, implementations are of course not required to round for small buffers, or for any buffer; most likely you would not want to round for small buffers.
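A sketch of what observable rounding of the max size would mean (this shows the behavior under discussion, not the current spec draft; the constructor shape is the proposal's at the time):

```js
const rab = new ResizableArrayBuffer(1024, 4000); // request a 4000-byte max

// Current draft: the requested max is exactly what script observes, even
// if the engine reserved a full 4K page under the hood.
rab.maxByteLength;   // 4000
// rab.resize(4096); // would throw a RangeError: 4096 > 4000

// Under the discussed change, an engine using 4K pages could report the
// rounded reservation instead, and the resize above would then succeed:
// rab.maxByteLength; // 4096
```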

SYG: A related question is: should we also allow the size, the argument to resize, to round up for similar reasons? This is more directly motivated by the Wasm use case. When I expose Wasm linear memory as a resizable array buffer, I now have the ability to grow the Wasm memory from JS. In Wasm you can only resize the linear memory in multiples of the page size. In the current specification draft of resizable array buffers, the way you would handle that restriction on the JS side is probably just to throw if you pass a size that is not a multiple of the page size. And this is allowed, because resize can basically always throw, due to out-of-memory conditions or whatever else; in this case Wasm memory just has this restriction. But a more ergonomic thing would be to extend this round-up allowance to resize as well, so that if I request a size that is not a page multiple, the implementation will just round it up to the next allowable multiple and do that for you.
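The ergonomics question, sketched for a buffer vended by a Wasm memory (the accessor below is hypothetical; how such a buffer would be obtained is an open part of the Wasm/JS API integration discussed later in this item):

```js
const buf = getWasmBackedResizableArrayBuffer(); // hypothetical accessor

buf.resize(100_000); // not a multiple of the Wasm page size
// Option A (current draft): throw, since resize is always allowed to
//   throw, e.g. under out-of-memory conditions.
// Option B (discussed allowance): round the request up to the next
//   allowable page multiple and grow to that size instead.
```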

SYG: The pro here is Wasm alignment, and the con is that it may be surprising for developers who are not used to the concept of page sizes. Basically you might expect, as with regular arrays, that whatever size you ask for is the size you get back if the request can be fulfilled, not anything more. In addition, if we do the rounding for you on the length, there may be use cases that need to track the exact length. I struggled a bit with coming up with a reasonable example here, so I suppose it's more of a theoretical concern. It seems like developers who need the exact length would have to track their own length separately if we do the rounding for them, and that seems undesirable, but I don't know if that's actually the case. If you create a buffer and ask for size n, and I give you n plus some extra space, most of the time you probably don't need to track the original n; you'd be happy that you got more than you asked for. But I don't know if there's some legitimate use case there.

SYG: Before I continue with the rest of the presentation, which is informational, is there anything on the queue to discuss these proposed changes? How are folks feeling about this rounding-up behavior? Brad?

BFS: Sure. You discussed two different modes for the rounding: one where, for small buffers, you don't expect rounding to occur, and one where, for larger buffers, rounding occurs in order to align with WebAssembly, mostly, or with some implementation strategies. What's the expectation if you give WebAssembly something that isn't a proper page size for WebAssembly?

SYG: What do you mean, give WebAssembly something?

BFS: WebAssembly has, I think it's WebAssembly.Memory, which can be passed around into WebAssembly modules.

SYG: I don't know what happens now. Maybe it just errors. It's possible, right, because right now you can give it a regular array buffer of some unaligned size and -

GCL: To clarify you can't create a webassembly memory that isn't page aligned.

BFS: So I guess the question is: does that mean we're trying to align with WebAssembly in a way that won't work for actual interoperability? Because if we can create unaligned stuff, we have to solve for what happens if we're able to provide it to WebAssembly.

SYG: The question I don't know the answer to is: what happens if you pass an unaligned thing now? I didn't know there was the ability to create a memory on the JS side to give to a Wasm module.

DE: WebAssembly doesn't allow that. It's the Wasm/JS API that lets you create a WebAssembly.Memory object, and you can get the ArrayBuffer out of that. So I've been assuming, with the development of this proposal, that we would have something similar, where the Wasm Memory constructor would take a new option that would give you back a ResizableArrayBuffer instead, and those ResizableArrayBuffers are always page-aligned. There were two questions: one was about whether the start location is aligned, and another about whether the size is aligned. The WebAssembly API doesn't let you create anything misaligned by construction, because the APIs are all in terms of the number of pages. So I think we can make all this fit together, but we should do some explicit thinking about how it fits into the WebAssembly API.

BFS: Yeah, that's all I was trying to bring up: if we're making these kinds of decisions based upon Wasm interop, it seems like we need to actually go and investigate that.

SYG: The point is that I'm not adding any new avenues to introduce buffers. The existing avenue in this case is the JS API, where it sounds like a restriction is already in place, and that restriction will be kept in place. The new avenue is that when you get this array buffer out, because it is resizable, I can now resize it to non-aligned sizes. The ergonomics question is, for Wasm-memory-vended array buffers, do you just make the non-aligned resize requests throw, or do you make them automatically round, or something?

BFS: Yep, that would be fine.

MM: Yeah, I'm actually unsure; as the discussion has continued I'm unsure of my stance on this. Let me just state a general preference: observable differences between platforms are costly, so that's a significant con for making this rounding observable. If the pros are not strong, I think we should avoid making platform differences observable, but I don't know how I feel about how strong the pros are.

SYG: Yeah, I share that concern. It could be people depending on implementation behavior, the thing we always feared; it could be a possible fingerprinting vector as well. Yeah, it's definitely a con.

MM: Yeah, and to make a completely bizarre analogy: when I grew up as a programmer, C had a sizeof operation that was used a lot to write sort-of-portable code, where the code itself was responsible for adapting to machine differences, because the word size differences between different machines were such an important thing to optimize for. These days we don't do that. We just ignore the performance differences between different word sizes on different machines and force everyone into a platform-neutral common behavior.

SYG: Yeah, I think that is a compelling argument to me, in that a big value of the web platform is consistency across platforms. And the pros here are mainly implementation complexity: we have fewer size fields to track. And that is not just "oh, we care about an extra word or two of memory per buffer instance". I think the main pro of implementation simplicity is a higher chance of getting this right and fewer chances of security bugs, given how popular an avenue of attack array buffers are. The more complex these new kinds of buffer implementations are, the more likely there will be bugs.

MM: You're certainly speaking to my heart there.

SYG: If you have a security question, let's hold that discussion off, because the second half of this presentation covers the security reviews that we've undertaken in Chrome, and I'll present some feedback there.

SYG: So, the second half of this: we requested some security reviews, not of the implementation, but of the design and the implementation strategies that we had in mind, from two teams within Google. One is Chrome platform security, the people who gatekeep the security reviews of the actual features that we merge into Chrome, and the second is Project Zero, who of course have great experience in actually developing exploits against web stuff and browser stuff. And I'm happy to report that they were both satisfied with the security risk mitigations that we laid out. To wit, the three known risks we laid out to them were the following. One, the typed array length can change, whereas it couldn't really before. The mitigation is that, in reality today, lengths can already change, except they can only change to one value, zero, due to detach, and we have confidence that over the years we've put the detached checks in all the right places. All that this new kind of array buffer and these new kinds of auto-length-tracking typed arrays add is a generalization of the detach-check logic, and we already have the set of points where we know we need to audit, to make sure they account for changes beyond just changing to zero. The second known risk is that the array buffer data pointer might change. One possible class of bugs is that a JIT might incorrectly cache a data pointer as if it were constant, and if your implementation strategy actually moves the data pointer on resize, you might now be pointing to freed memory, and that leads to arbitrary exploits. The mitigation here is explicit in the design: for implementations where the in-place growth implementation strategy makes sense, the design explicitly allows it, by requiring a max length, so that the data pointer does not have to change. Then finally, there's the risk that, because this kind of overloads the typed array constructor, implementing the new features might cause vulnerabilities in the existing typed arrays, which are much more widely used and much more security- and performance-sensitive. There's really no magical way to mitigate this other than careful auditing and trying to reuse the battle-hardened paths that engines already have for typed arrays. This is probably the risk with the biggest chance of failure due to human error, but I think it is a risk worth taking for the expressivity that we gain.

SYG: Before I go back to the queue: we are planning for stage 3 in March. The reviewers were Moddable, Mozilla, and Apple. Mozilla and Apple are of course browser vendors, who also have security concerns, and Moddable have a very different environment and would implement with a different implementation strategy. I think they said they always want to reallocate and compact, and it would be good to have them review and make sure that their use cases are met as well.

PHE: Going back to the rounding question: I'm a little uncomfortable with the idea of an API that gives a surprising result. I think it could result in code that isn't very portable. Before I understood the security concerns, I thought you could get similar behavior by providing a separate call that could perform the rounding, so that script could opt in to getting the optimal rounded value and then just pass that to resize or to the constructor; that way everything would be predictable. But if the main motivation is security, then that doesn't help, because you'd still have to support both of those.

SYG: Sorry, the motivation for security for rounding? I don't -

PHE: Well, if you went down a route where you explained that rounding the buffer sizes would simplify the implementation of the growable buffers, which would have some security benefits, and if that's the main motivation for rounding, then making the call separate doesn't help. I guess there's maybe a question of priorities.

SYG: Yeah, but I take the portability concern very seriously, and that's the main trade-off.

PHE: Okay. In the embedded case, if there were rounding going on, we would want the extra bytes to be available rather than sitting there unused, which is very different from the browser. But anyway, that's all I had to say.

WH: Would this spec allow an implementation that always rounds up the size to the nearest prime number, and what implicit assumptions would such an implementation break in what people are doing in scripts?

SYG: I had not thought out the spec draft for the rounding mode. I would probably put additional restrictions on it, rather than just saying it's implementation-defined; at least rounding to a power of 2. But I haven't thought more deeply than that.

WH: That’s one possible source of confusion. Another thing that might happen is that the implementation might decide it rounds everything to 2^56.

SYG: Right. Yeah: is there a bound, and what's the largest delta with respect to the originally requested size? There are restrictions we could certainly put into place if we choose to go with the rounding behavior. It shouldn't just be implementation-defined.

BFS: Sure. I just realized a little bit ago that there may be some concerns about usability if we do rounding, because if you create a buffer of an unexpected size, then if you use an API that writes all the bytes of a buffer, you're going to be writing unexpected bytes after your intended length.

SYG: This is a great example, because I was looking for an example where, if we did a round-up on the resize request, the developer would have to track the length manually, and this I/O case is a good example of that. If we did the rounding in an observable way, where the byte length was rounded, and you want to write exactly X bytes, you would have to remember to pass X to your I/O function. That would be an ergonomics hit.

BFS: In particular, my concern is that people are going to pass buffers to the typed array constructors and then pass around those typed arrays, which automatically get the full length, because that pattern does exist in the wild.

SYG: That's a good counter example.

DE: The justification for rounding seems good. The idea of it being implementation-defined worries me a bit, though. I think it's okay if we have certain different classes of implementations, like some kind of embedded JS-only engines versus engines that want to support this for WebAssembly. The important thing is that, at least for certain classes of engines, for example JavaScript engines that run on the web, we have interoperability. I think that's at least a baseline. There's lots of precedent for this in the world of standards. For example, the WebAssembly/JS API sets resource limits for WebAssembly environments that are common across different JS+Wasm implementations. That's one idea. So let's keep going with this and think about how we can make sure that there's interoperability between implementations and that it works well with WebAssembly.

SYG: Sounds good.

JLZ: My biggest concern with returning larger buffers is the potential that we hide a class of programming errors where, you know, somebody's unit tests all pass but they're overrunning a buffer by one, just because they got lucky. Is that concern clear?

SYG: I would understand a buffer overrun to mean you are writing into memory that is not your own and it conflicts with something else, perhaps like overrunning the stack; but if you round up here, that memory is still your own.

JLZ: It's about what would have been a buffer overrun if you hadn't rounded up.

SYG: I see, but these are also bounds-checked, right? So you're saying that what would have been an undefined read, something that would have been clearly out of bounds, becomes not out of bounds, so it delays you finding your bugs. Yeah, I see. Okay, I can see that as a concern. I'm not too concerned about that case, I guess, relative to the portability concerns; I weigh those the heaviest, and the I/O example that Bradley gave was good as well. So after this discussion I'm currently leaning slightly towards not allowing rounding, but I'll think more about it, because there are still a lot of use cases on the other side.

SYG: All right, that's the queue. Thank you very much. I'm not sure I've been good about opening issues on GitHub for these changes, but I will, and then let's please engage there. It seems like the bounds-check change is uncontroversial, so I will just merge that change, and then we will continue discussing the rounding issue. Thank you very much.

### Conclusion

Was not seeking advancement

## Dynamic code brand checks for stage 2

Presenter: Krzysztof Kotowicz (KOT)

KOT: Eval is evil. ECMAScript already has hooks to disable it; the problem is that this did not result in eval eradication for a large class of JavaScript programs. What happens instead? The larger a program gets, the more dependencies it has, and the probability that a single eval call exists in them rises; in the end people just continue to run applications with eval enabled. In the case of CSP and web applications, the 'unsafe-eval' keyword is used. There has been research done over the years on how large the problem is, and it is significant: vastly over half of the web applications that do try to have a Content Security Policy struggle with dependencies and are blocked on their dependencies to lock down eval. In practice we end up with a security control that is too restrictive, too high a barrier to actually use to improve the security posture of a given program. There are practical examples of eval being used, usually through dependencies, that are not very easy to replace. One of those exemptions is a polyfill for globalThis. Another one is checking whether a given syntax is supported, in this particular case async functions. Performance penalties have been mentioned as a blocker for removing eval; some third-party dependencies do magic things that are simply harder to do without eval. Sourcemaps are generated with eval(), and development environments very commonly use eval all over the place, so not enforcing eval restrictions on dev but enforcing them on prod may introduce production breakages. On top of that, the most scary example I have seen in my work on web application security is the AngularJS example. The AngularJS framework wanted to work in a mode that blocks eval, in Chrome extensions. The code pretty much introduced a meta-evaluator of arbitrary code: you could reach the window object and then execute from there. This particular control, blocking eval globally, pushed AngularJS into introducing this workaround: "I still want some sort of arbitrary execution (expressions), but without using eval; how can I do it securely?" This is what Angular came up with, and in the end it introduced a whole class of problems that we call script gadgets.

KOT: Can we do better? I think we can. The roots of the problem is that libraries have large install bases, they have code size constraints, they need to support the interpreters like in the angularjs case - and those dependencies have little incentive to move off eval. Moving off eval is a cost for third party code most of the time, whereas it does bring benefits to applications, to the integrator. And the status quo seems to be after years that eval with no guardrails is the standard. The applications cannot effectively move to a more locked down environment because they have one, two or more instances of eval and the problem is growing, because it's not easy to stop introducing eval if eval is allowed.

KOT: And eval is not necessarily directly the same thing as cross-site scripting. In fact, there are intentional uses of it, and there are cases in which user-controlled strings never reach eval. Sometimes eval usage is still required by the web application, and the application, from the security point of view, would be completely OK if eval were to continue to be allowed. The proposal here is to allow a gradual migration path for such applications by constraining eval, and there's a full account of the history behind our path towards restricting eval in this particular way on the GitHub.

KOT: This is a good moment to make a short segue to Trusted Types on the web platform side, because this class of problem already has a solution in the web platform. There, too, we have functions or setters that are risky, that should never be called with user-controlled data, but are nevertheless omnipresent. They are everywhere: every web application, or most of them, uses innerHTML. Some applications want to have rules on the inputs to those functions, and those rules might be local. For example, maybe the script loader component in your application should be capable of loading scripts dynamically, but maybe other parts of your application shouldn't even have the capability of loading additional scripts. Similarly, your HTML templating library that renders your web components should be able to create pretty much arbitrary HTML and write it to the DOM, but your Markdown parser probably shouldn't be in the business of rendering anything, because most likely that very Markdown parser is processing user-controlled data. Same for a webmail application. Trusted Types solve for these examples by introducing brand-checkable objects (trusted types) that wrap over strings, and then putting guards on how those objects can be created. The objects can only be created through factories. Those factories are called Trusted Types policies, and policies can enforce security rules; for example, this code snippet shows part of a policy that calls some sanitization logic inside, so even if untrusted user input is passed to the policy, the resulting object will wrap over a sanitized string. And of course it hooks into CSP. It is possible to limit how the policies (those factories of trusted types) can be created in a web application, and it also allows you to enable this final thing, which is to make sure that the sink functions only accept trusted types and not strings. On top of that there's a default policy object, which is meant to convert strings into trusted types for code that didn't get migrated to Trusted Types and still writes a string to an HTML sink. [The default policy] is there to enforce some kind of baseline security rules and let you go on with your Trusted Types migration even if you haven't yet migrated everything to it.
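A minimal sketch of such a policy (the sanitizer is a placeholder for whatever sanitization logic the application uses; `element` and `untrustedInput` are assumed to exist):

```js
// A factory (policy) that enforces a rule: everything it mints is sanitized.
const sanitizing = trustedTypes.createPolicy("sanitizing-html", {
  createHTML(input) {
    // Hypothetical sanitizer call; the policy's rule runs on every creation,
    // so even untrusted input comes out wrapped over a sanitized string.
    return myHtmlSanitizer(input);
  },
});

// The resulting TrustedHTML object passes the brand check at DOM sinks when
// Trusted Types enforcement is turned on via CSP:
element.innerHTML = sanitizing.createHTML(untrustedInput);
```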

KOT: What happens with that approach is that we can ensure those DOM XSS sinks, those risky functions, can only ever be called with values that have passed through one of the policies' rule functions. The policies are also guarded, and they are a burden to create: it's an additional hurdle to create a policy and have it allowlisted in your CSP, for example. So in practice, in web applications that have enabled Trusted Types, we see developers instead move to secure alternatives. For example, very commonly a style element's innerHTML property was assigned some style text; instead, textContent is now used. What happens is that even in a complex web application there's a small number of policies at play overall, and those policies form the security-relevant surface. When they are locked down, for example in a closure or in a module, this very much bounds the security reviews. To reason about a web application otherwise, one needs to look at the entirety of its code, which could potentially call innerHTML with a user-controlled string. [With Trusted Types] I only need to look at the policies and make sure that the risky policies don't leak into application code, or that the policies themselves make types secure by construction, such that the security rules that convert to HTML work in a way that I deem safe; and the default policy on top of that enables a gradual migration. What is important here is that with Trusted Types, after enforcing the rules, there are no regressions. Even if your web application was using a couple of eval calls that have been transformed into using a trusted type, you know for sure that no new dependencies that use eval will be introduced, so it stops being a problem forever. Right? The problem becomes bounded and approachable, and you can reduce the "evalness" of the application as you go.

KOT: Here's where the proposal comes into play. We would like this approach to also be used for eval and the family of functions that compile code. The way we propose to introduce it is by having a host-defined slot called [[HostDefinedCodeLike]] on pretty much arbitrary objects. What is very important is that the value of this slot is set by the host, and the host guarantees, or promises, that it is immutable: once it's been set on a given object instance, it doesn't change. Trusted Types satisfy that condition. Then, of course, there's an is-code-like check that checks for the presence of the value in the slot. Once we have that primitive, we can allow code-like objects in eval, such that objects blessed by the host can simply be passed either to eval or to new Function. That requires some hooking, and the real hooking is mostly done in one host callout: I propose to replace HostEnsureCanCompileStrings with a HostValidateDynamicCode hook. What is different here is that, on top of passing the realms to the host, I also propose to pass the string extracted by ECMAScript from the object back to the host for validation, and to pass a flag on whether the input to eval or to the Function constructor was composed of only code-like objects. ECMAScript actually does the stringification of the code-like objects and informs the host whether something was originally a code-like object. Additionally, there's some context for the host to make a more precise check, for example to distinguish the Function constructor from an eval call. What is also important is that the hook integrates with the default policy behavior. This particular host callback returns a string, and that string is eventually what gets executed, whereas previously the host hook could only reject the value. This one can reject the value, but the value can also be turned into a modified one, if the host so decides. This is pretty much hooking into the default policy of Trusted Types.

KOT: How does this work in the specific algorithms? In eval, we need to change eval's early return: eval is an identity function for non-strings, and now this needs to be aware of code-like objects. New Function is a little bit more complex. I propose to modify the algorithm to stringify all arguments from code-like objects and compute a flag that stores whether all of them were code-like. Currently the host check in CreateDynamicFunction is done before the function body is constructed, but the browser implementations don't follow that, because CSP requires putting the assembled function body in the security policy violation reports, so the hook happens later. This presented code should not throw; the Function constructor should reject the value before the object is stringified. That is, for example, how Safari behaves, whereas all the other browsers, at least Mozilla and Chrome, behave in a way that satisfies CSP.

KOT: A practical example: there's the d3-csv CSV-parsing library, which uses an eval function. It's completely safe provided the column names in the CSV are safe, and it's only using eval to avoid a performance penalty. How can we disable eval in the entirety of the application, but allow this one specific use case that, for example, we have security-reviewed? That's very simple. We create a policy. We give it a name, "d3-csv", and let's imagine that this policy doesn't actually try to enforce any security rules: it's just an identity function, and whatever is passed to it will create a code-like object. The only thing that needs to change is that we need to trust those two values in the Function constructor. With that, we have essentially converted the d3 library into being Trusted Types compliant.
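A sketch of the migration described here (the policy name follows the example in the talk; the generated function body is a stand-in for what the library actually builds):

```js
// An identity policy: it enforces no rule of its own, it just mints
// brand-checked code-like objects out of the strings passed to it.
const policy = trustedTypes.createPolicy("d3-csv", {
  createScript: (s) => s,
});

// Inside the library, the two string arguments to the Function constructor
// are wrapped, so this one reviewed compilation site keeps working even
// with string compilation otherwise disabled:
const parseRow = new Function(
  policy.createScript("row"),
  policy.createScript("return [row[0], row[1]];") // stand-in generated body
);
```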

KOT: How does this improve things? First of all, we disable eval over strings. We allow a policy called "d3-csv" to be created in the application, and we allow it to be created once. We bundle our script, and this one particular eval instance used by the d3 library is allowed, because code-like objects are being passed to it. However, no other policies can be created.

MM: There are a lot of points I have to make; because of limited time, I'll stick to one central one. The interpreter example from Angular is very instructive. The way you state it, the way these are often stated, the problem seems to be Turing universality, but that's not the actual problem. The problem is excess authority. If Angular had incorporated the interpreter such that the interpreted code had no access to anything it was not supposed to have access to, there would not have been a danger. On the flip side, if you build a thing that acts on data that is far less than Turing universal, but where the data can direct it to damage, say, the global object, then you've got the same excess-authority problem. The problem is how much authority the interpreted thing has, not whether the interpreted thing is universal. The other thing I'll raise is that, overall, I think some of what you're trying to accomplish here is good, but the overall system seems too much like ad hoc whack-a-mole rather than principled design based on consistent principles.

KOT: To address the first thing: the proposal is not intending to solve for the AngularJS case. That was just an example of what happens if you require applications to work without eval.

MM: I'm using it as an example to contrast excess authority versus Turing universality, in order to frame the problem correctly.

KOT: And to the second point: the shape of the Trusted Types design on the web platform side is the result of many design constraints and backwards compatibility issues, and there's a lot of thought put into it. I'm not proposing to discuss the Trusted Types side of things here. What I'm putting forward is a proposal to modify the dynamic compilation host hooks; these would be the minimal changes required to integrate well with Trusted Types.

MM: I've already given feedback about how the hook for eval needs to have extra parameters so that the host can distinguish, for example, a direct eval versus an indirect eval. This, as presented, has clearly not reacted to that feedback, and remains flawed in that way.

KOT: Yes, I considered your point, and I asked you a question on the review: do you see hosts actually implementing this one? Nevertheless, I'm open to suggestions; I think this is a stage 2 issue to discuss. It feels like a fairly minor change, just changing one of the parameters in the host callout and allowing one more value to be passed to it. It is not needed for Trusted Types; it's indifferent there.

MM: Perhaps narrowly. All together, I'm very reluctant to see this thing go forward to stage 2 in its current state, because it just seems too messy and ill-thought-out.

KOT: Which particular thing would you consider messy?

MM: I think that the extra slot you're proposing to add to all objects, and then having an is-code-like check on arguments, has all of the same problems as isTemplateObject. In order to be meaningful, it needs to be eval-relative, and it needs to be eval-relative for exactly the reason that we've been arguing about with isTemplateObject, and which the interpreter example helps highlight: foreign evals should be treated like foreign evals in a meta-interpreter. You shouldn't allow crosstalk between foreign evals that gives each one power over the other, because any one of the evals is then completely threatening to all of them.

KOT: But that's not how the web platform operates. Even the Content Security Policy propagation rules say that the moment you create a blank iframe, it inherits the CSP rules, right? So that means you can eval across realms, pretty much.

MM: Since there are other people in the queue and there's limited time, let's take our disagreement on this offline. But I'll just say that in order for something to make it into the language, it needs to be more widely applicable than the web platform.

BFS: I'm actually coming at this from the Node side of things. I helped write a document for a possible security model for Node, and I helped implement the policies. One thing that seems to recur in a lot of these discussions is that, at least the way it is currently being framed, this is basically a trust-or-no-trust kind of ability being added to hooks. But in things like your d3-csv example, the privileges allowed by that trust, on the second parameter in particular, seem a little bit questionable, at least from our side of things. In particular, if you mutate any of the primordials, through another eval or something like that, it will most likely invalidate parts of your trust, unless you also have an integrity check on that second parameter. In your example, for instance, `.map` could be overridden and start to inject arbitrary code. I think instead of just having trust or no trust, it may be fruitful to think of things as a unique privilege per call site where these hooks are invoked, and I think that also feeds into Mark's concern.

CP: The situation is: if you are using any library from npm, let's say you want that library to be able to use eval, how do you do that? You have to modify that code; you now have to introduce Trusted Types into it, and that's not the way people develop. I understand that Google does that: they control the entire stack, the entire code of the application, and that's why Trusted-Types-like systems work pretty well for Google. But the reality is that people are installing packages from npm every day, and this doesn't solve that case. So we believe the flip side of this is: okay, let's figure out a way that we can evaluate some code and give some capabilities to that code, so it can do what it needs to do in a constrained manner, rather than having these global settings that, when you turn them on, just don't work for many of the situations that you have on the web today.

KOT: Yes, but isn't that a problem that is already being tackled? Isn't this a strict improvement (modulo the complexity) over the current state of things, even if it doesn't solve every use case?

CP: My opinion is similar to Mark's opinion. This is too messy. Now we have all these other things, and this in turn has a lot of moving parts. I would rather prefer to find a solution that helps us have the level of control that we want without having to introduce this extra complexity.

MM: I am reluctant to let this thing go forward to stage 2 without it having a much more narrowly stated purpose that is within the conception of what works for the language across hosts. There are some aspects of this that I think are good purposes that can be abstracted, but it's all together in a larger package of stuff that I think is just confusing. I would want more clarity first, before even going to stage 2.

### Conclusion

Not advancing

## Realms update

Presenter: Shu-yu Guo (SYG)

SYG: Up front, I'd like to give an update on the Chrome position on Realms and how we're working with the champions, like Caridy and Leo and the Salesforce folks. We've had some progress there, but since I don't have slides or anything technical to really talk about, let's not go into technical details too much; I think that might be a non-productive discussion.

SYG: Very high level first: Realms has had internal pushback from Chrome for quite a while as it has developed. Notably, there's disagreement there. We think that Realms does solve a use case; there is value in Realms, in particular even for some Google properties like AMP. It would help AMP run their amp-script in a more synchronous way: currently AMP runs its amp-scripts in a worker and has to deal with asynchrony for no good reason. If we had Realms, there wouldn't need to be any asynchrony. There is a valuable use case to be solved where you want to run some JS code that's kind of trusted: you trust it to run, or you at least trust it not to be actively malicious, because the realm is a synchronous, in-process thing. So you partially trust it, and you want it not to be exposed to the effects of mutations in the outer realm; you want it to have a fresh set of built-ins. This use case we think is important. The con of the current Realms proposal, as we debate this out internally, is that, as we have seen with the development of Spectre, and as we have seen in common misunderstandings from possible users of Realms, folks tend to misunderstand the isolation guarantees that are given by Realms. Now, this is somewhat nuanced. In a post-Spectre threat model, if you care about side-channel attacks, then Realms do absolutely nothing, because they are in-process. The state of the art for isolating your code from side-channel attacks via Spectre-like gadgets is at least a process boundary. So it depends on how hard-line you are in thinking about what kind of security guarantee is even possible with Realms. From the security folks' point of view, when we spoke to Chrome security architecture, they were very concerned that users might treat this as if it had isolation guarantees, because it certainly seems like an isolation primitive: you spin up this new thing that you run some code in, and it doesn't have any access to the outside world. Whereas in fact it cannot be implemented as securely as the security architecture folks want.

SYG: so that's I think the main push back is that there are there's a foot gun here that makes it easy to reach for this this isolation like mechanism that in fact does not give you the isolation you think it gives. Their possible ways forward here. The simplest one is, you know, rename it to something real ugly, like with with the word "insecure" or "unisolated" in the name or something, there's compromises here. During this discussion with various teams internally in the various Chrome Architects security architecture groups internally and with Domenic Denicola, who has a lot of as you know web expertise, one of the interesting middle paths here is that we came up with an alternative that I won't officially present here because we're still working through the details, nothing really is pinned down and Caridy and team are also considering just how well it works for their use case internally - that there's a middle path to kind of get ahead of the foot gun that we're mainly worried about. One of the foot guns in addition to Spectre, which is you know, I think we have to just basically admit that Realms is a thing that is vulnerable to Spectre, there's no way to get around it, the whole motivation for the proposal is in-process sync, so that kind of that means you will be vulnerable to Spectre. But one of the other foot guns that we're also worried about is - suppose like Specter didn't exist. Even in that world it is still difficult to use Realms correctly because you have to pass stuff into Realms to give it the initial set of capabilities that you want, if you you don't properly intercept kind all paths, it's very easy to get at something in the outside realm that breaks whatever application Level isolation guarantees that you wanted to give. It just seems it is difficult to use correctly this kind of API. And to get around that kind of foot gun one idea we were playing around with with is, imposing a actual hard boundary at the realm that you do not let references pass back and forth between the inner and the outer realm, you do not allow the two object graphs to be intertwined and instead you have some kind of synchronous message passing API to do your message passing back and forth out of the realm. This could be something like using structured clone except it's synchronous. It could be a new API that there's some kind of copying thing, I know Caridy has some other interesting ideas to add kind of new levels of expressivity here that's not you know, just like copying that might be better the drawback to this approach.

SYG: So the pro of this approach is that, by construction, you cannot mix the object graphs, and because you cannot mix the object graphs, you have more guarantees. The con is that it is strictly less expressive than the current proposal. This is a subtle point. The way in which it is less expressive is that, because you can no longer actually mix the two object graphs with strong references as you normally would, if you have a cycle between the inner realm and the outer realm, you can no longer keep that cycle alive in a collectable way. You can make the cycle leak all the time, but you can no longer have it be garbage collected the way any intra-realm cycle would be. Meaning you lose this nice automatic lifetime management if you need a cycle of live objects across realms: like a proxy on one side that keeps its target alive on the other side, while the target side in turn points to a proxy that points back to a target on the first side, like a normal cycle. Because these are not actual references, the GC has no visibility into the liveness, into the reachability property; it just thinks there is no reference. So if you don't manually keep it alive, it'll just collect half the cycle, and the only way to really keep it alive is to pin it and make it live forever. So that's kind of crappy. It precludes a certain use case that requires live cycles across the realm boundary. I admit I don't fully understand the use cases around that, but Caridy and team tell me that that use case is relied on by (?). So that's the primary drawback.

SYG: So that's about it. I'm sorry I don't have more to report. We're still having internal discussions with several different teams, treating this in the usual PM problem-solving way: we identify the use case - in this case the amp-script-like use case, where you want to run some code that is at least somewhat trusted, because this is not an actual security isolation mechanism - and see what technology, perhaps the current proposal, serves that use case best. As we come to more actual decisions, hopefully I can give a more official Chrome position on Realms. We have maybe a couple of minutes left if anyone wants to add more details.

CP: First of all, thanks to Shu for all the follow-ups and for getting this rolling. We definitely need to put more time into this, but for me personally it seems a good compromise. As he mentioned, we have been looking into what solutions might be orthogonal and complementary to this counterproposal, so we could still do the other things that we want to do, like being able to do virtualization, being able to have these cycles between the two realms, and having more connected graphs that are in principle disconnected but connected via some user-agent artifacts that we haven't defined well yet. I feel that we're on a good track. We will probably spend time defining, or looking into, the APIs for how you interact with these realms if they are completely disconnected: in principle we were saying that if they are not disconnected and you have access to the internals of the realm - the global - then you can easily get into it and do evaluation and other things there, but if there is a hard boundary between them, then we have to figure out more API so you can interact with them. In principle it's looking good and we'll definitely put time into getting this rolling.

SYG: One thing I forgot to mention is that from the Chrome security architecture point of view - that team's point of view, the folks who care about hardcore browser security; these are the Spectre hardliners, if you will - the sync hard boundary is not any consolation. I think it is a consolation for our application-level ergonomic isolation concerns, but to the Spectre-hardliner security folks it doesn't matter. So for them, one possible compromise might be renaming, plus more vigorous education that this is not isolation in the way you might think it's isolation. And this is a generally hard problem, given that Spectre - I guess, from what I heard in some internal communication, a lot of folks are not yet convinced at a deep, emotional level that Spectre actually works, because it's just so magical. I suppose that's understandable: how many years has it been since Spectre came out, and have there been a lot of Spectre exploits in the wild? I don't know. But yes, unfortunately it definitely does work, and we have to be careful to separate out the level of security guarantee here and be very, very explicit and vocal about it.

CP: Yeah, as we have mentioned before, we are okay with looking for a new name; if anyone has any idea or any proposal for that, we're open.

MM: So first of all, I want to agree strongly that simply saying "security boundary" is very far from nuanced enough. I wrote a document to clarify these matters - a security taxonomy for understanding language-based security - that I'll put into the notes; it's already in some threads on this proposal and others. The key thing there is that confidentiality and integrity are very separate concerns, and rather than a simple binary "is or is not a security boundary": Realms is an integrity boundary; it is not a confidentiality boundary. Nobody at any point ever imagined that Realms would be a boundary for protecting against Meltdown or Spectre or other side channels. Okay. Now, the point about the object graphs leaking between raw realms that are in contact: that's absolutely true, and it's very, very tricky to get right. Agoric, at one point, when we were trying to do this in a more ad hoc way, repeatedly got it wrong, so I completely agree that it's very hard to get right. The right way to address it is with a membrane between the realms. But the problem with combining the membrane with the Realms proposal is that we don't have a universal membrane abstraction: a membrane creator is not currently understood as a reusable abstraction; it's understood as a reusable pattern, and that's why there are so many membrane implementations. So I think that if Realms don't ship any additional mechanism, there should be strong advice to only use them with an additional isolation mechanism, like membranes over Realms. The other thing is that the cycle problem is already solved by membranes. One of the reasons we separated WeakMaps from WeakRefs, when we first proposed WeakMaps, is that WeakMaps, as the mechanism for crossing membranes, still allow cycles that cross membrane boundaries to be collected. In fact, cycles can cross multiple membrane boundaries and still be collected, because of the way WeakMaps work.
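
[Note: a minimal sketch of the WeakMap-based membrane pattern MM refers to. A production membrane needs two WeakMaps (one per direction) and must implement every proxy trap; this shows only the core idea.]

```js
// target -> proxy. A WeakMap entry keeps the proxy reachable exactly as
// long as its target is otherwise reachable, so a cycle that crosses the
// membrane is traced as a whole and collected as a whole -- unlike a
// WeakRef (half the cycle dies early) or a strong Map (the cycle leaks).
const proxies = new WeakMap();

function wrap(value) {
  if (Object(value) !== value) return value; // primitives pass through
  let proxy = proxies.get(value);
  if (proxy === undefined) {
    proxy = new Proxy(value, {
      get(target, key, receiver) {
        // anything that escapes through the membrane gets wrapped too
        return wrap(Reflect.get(target, key, receiver));
      },
      // ...a real membrane intercepts all traps, in both directions
    });
    proxies.set(value, proxy);
  }
  return proxy;
}
```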

SYG: WeakMaps - let me see if I understand correctly; maybe I misunderstood your point. Neither WeakMaps nor WeakRefs can solve the cycle problem if there is a boundary across which you cannot have strong references, which the current --

MM: That's incorrect. What you're describing is correct for WeakRefs; it is not correct for WeakMaps.

SYG: The problem is the converse: there's no way to tell the GC tracer about a reference that isn't there, in order to keep the cycle alive.

MM: The thing that the GC tracer traces is the key-value association in the WeakMap on either side of the membrane. That's how -

SYG: You can't have that with the hard boundary. The point of the alternate proposal is that you cannot physically get at the reference --

MM: My point is that the membrane should be the hard boundary. It's built on the WeakMap mechanism, but the entire point of a membrane is that if it doesn't keep the object graphs isolated, it is already not a working membrane. The entirety of the theory behind membranes is that the first integrity criterion is that it completely isolates the two sides of the object graph from each other. There should not be any direct references across the membrane.

SYG: Mark, I appreciate your singular vision for computing and integrity and membranes, but I will be unequivocal and say that membranes are about as uncompelling for the Chrome use case and the web platform use case of Realms as it gets. We appreciate that you are able to layer membrane implementations on top to solve your use cases, but the foot gun we're worried about is specifically that using Realms for even these application-level integrity guarantees requires you to shim a membrane on top. That is a foot gun we are very, very worried about.

MM: Okay, I understand. There's nothing short I can say in response, but there's certainly a longer conversation which we should have on the side.

CP: Mark, just to add to that: we need to work on this, but I do believe there are options on the table that we can look at that might allow us to do membranes on top of this hard boundary between realms. Let's discuss this offline.

JWK: I agree with Mark; I think a built-in membrane mechanism is very important so developers can use this with less effort. Although Shu says this is a foot gun that developers might mistake for a security boundary, we can make the hard boundary the default to make it less error-prone. If we have to have some hard boundary mechanism, it should be the default but not an enforcement: by exposing some options we could still allow direct access, to enable more advanced usages.

LEO: Yeah, just to expand on two things that Shu and Mark have mentioned. I agree the membrane is not the immediate use case directly, but it is the path towards functional use cases. It is important, as Mark has mentioned, that membranes today are a pattern and not a single abstraction; I think one of the ideas here is that if we can work on top of that to create some abstraction that can be used in general, we should be discussing that as a step forward - like homework. In the same way, one of the use cases this would solve - and I don't think I've made it clear before, so I'd like to expand on it - is the one Salesforce has, and I believe we'd see the same use case from other, mostly enterprise, companies coming to the web: the app marketplace. The same way browsers use extensions, we have an app marketplace where clients can use a mixture of compartmentalized components in their application, and the company itself provides the system. This is a pattern that Salesforce uses today, and one that other companies will need as they operate at scale with a large set of clients, etc. We should expand on these use cases more. Even when we are not talking about security, integrity remains a really fundamental aspect of what we need and hope to achieve with Realms. Once again, as Caridy has mentioned, we need to work through how to tackle this; I believe there is a chance that we can get the best of both worlds here.

SYG: I think the marketplace use case itself is a spectrum depending on your threat model - how much do you trust the code? Even the components use case is a spectrum: if you bought your component, you kind of trust it, so maybe you are happy to run it in-process. If you are more concerned that it is possibly malicious, the answer the web platform security architecture folks would give is that you should run it async. Yes, there are ergonomic issues, but you owe it to yourself to have the security boundary at the process level.

LEO: Well, there are many things for which you can just go async, and that's most useful when you have specific processes to run - specific executions. What I mean is the case where you go to a whole system of compartmentalized web, or componentized web, where you have different applications in a single DOM. That's how browser extensions operate today: a browser extension is basically a whole application, with multiple parties having synchronous access to the same stuff. What I want to bring up is operating under one system, where the system lives in the same DOM tree. This separation requires some sort of integrity and synchronous communication, and for a whole application, just messaging - never getting a reference - would not suffice. But that's something that we're still working on. This is improvised; it's not something I prepared to present here today - of course, this is homework.

KG: Shu, I know you and I have talked about this, but I want to make sure that it's captured. I appreciate the Chrome architecture folks' hard line that the process boundary is the only security boundary. I want to emphasize that I think this view is extremely short-sighted. In practice, people's response to this is not that they are actually going to start running their npm dependencies in a separate process - it's just not happening. So I don't think that we should say: oh, if you don't run it in a separate process then you get no security guarantees, so we're not going to try to do anything for you. I think that's a terrible idea. I think we should recognize that people are going to continue to run their npm dependencies synchronously, and we should give them tools that make it possible to get at least some stronger guarantees - guarantees like "this code might have network access, but at least it's not going to Magecart your DOM". I think those cases are important, and to neglect them because of Spectre is a huge mistake.

SYG: I agree with that. I think you personally convinced me not to be a Spectre hardliner; I find those arguments compelling. The difficulty here is that I am not in the security world, and I've come to learn there is a spectrum of how seriously people consider security-in-depth to be actual security. I think there are philosophical disagreements among folks within the security world, and that may be where we're seeing some of this disagreement on the Spectre hard line. As a PL person, I personally would rather give folks expressive building blocks and wash my hands of the crappy things they do with them. Personally I'm fine with the Realms proposal as it stands today, and I also don't think it is dead as far as an official Chrome position goes. I am highlighting the amount of pushback we are getting because of the implications this has for the whole of the web platform, not just the V8 engine. Yes, it's in the JS language, but it has pretty big implications for the web platform. So I take the security-in-depth point to be important - moving the bar at least a little higher - though unfortunately I don't think that's a compelling argument to the actual security folks who speak for security and make security decisions. But I think the utility point is well received, and the integrity point can be spun as a utility point.

BFS: I'd be curious, Shu. We just had a presentation on dynamic code brand checks, and it seems like those were also taking this kind of incremental approach towards a problem that had previously been attacked with absolute hard boundaries, via CSP's unsafe-eval. Is there any expectation from the security folks that there's actually going to be any adoption of a security mechanism they propose? Or are they going to have to dial it back, just as Trusted Types is doing?

SYG: You're asking me if the security folks think Trusted Types should be adopted?

BFS: Well, no. One of the things that we were talking about with the code brand checks is that there was a mandate of a hard line with unsafe-eval when they created that feature of CSP. People have been unable to practically adopt that security boundary, and it was put forward as an absolute mandate at the time. I'm just very concerned that, with whatever mandate is being handed down to us, I don't see any path towards practical adoption by the ecosystem, and it would be good to know what they expect the adoption path to be.

SYG: Adoption of which alternatives? The sync import?

BFS: Adoption of whatever they are proposing, even a middle ground. I've got some slides on the memory leak problem from two years ago - exactly the one you're describing - from when we were hitting it in Node. I don't see a practical adoption path towards any of the alternatives they're providing, and I don't see them achieving their security mandate. So if this can never actually exist - if it ends up somewhat like unsafe-eval in the wild - what are they trying to get?

SYG: Well, I think this goes back to Kevin's point. Let me play devil's advocate: if you're a super hardliner and you think the only security boundaries are process boundaries, then everything needs to be done async, and if you can't rewrite your third-party dependencies, too bad. If that's your position, I guess what you're hoping for is that people don't do the thing at all - then you have security, because you didn't do anything. But the reasonable flip side of the argument is that the reality today is that people are just importing third-party stuff and running it synchronously without any boundary whatsoever. Is that the gist of your argument - that the alternative is no boundary at all, which is even worse? Is that what you are arguing?

BFS: I'm not arguing, I'm inquiring. I don't understand what they are trying to get, motivation-wise, when we saw a presentation taking the opposite approach an hour ago.

SYG: I'll let KOT chime in here. I think the key difference between the Trusted Types program and the concerns about Realms is that Trusted Types is explicitly about filtering inputs for code that you already trust to run, whereas Realms could easily be misconstrued as being designed for running code that you don't trust inside a realm - which, from the security folks' point of view, it in fact does not guarantee, because of things such as Spectre.

BFS: The code in the D3 CSV example did have the potential to be exploited, because you're trusting something that's enforced(?) by a potentially mutable API. We could get into more details here. I just don't understand the adoption path they're aiming for with this, and I don't understand the mandate of a security boundary except as a mandate for its own sake - there's no expectation that people can achieve it.

BFS: [new topic] It sounds like there was no contention with the use cases outside of isolation - or whatever you want to call it: integrity, security, all that. Is that true?

SYG: "No contention" is a good way to put it; I don't think there's enthusiastic support. It's just true that it is useful for AMP, and we consider that to be a use case; it is used by Salesforce, and we consider that to be a use case.

BFS: So one thing that may be a use case as well, which I haven't been hearing about, is hot module reloading in general, or customization of modules. The way people do it now, they do it in the same process, because they have no real way to do it across an encapsulation boundary. It would be good, while you're talking to them, to get some feedback on how they expect people to encapsulate that boundary for loading, rather than having every version of every app in a hot reload under the same realm.

DE: I think this cycle problem is very inherent; it comes up whenever you're bridging multiple different places where code is being executed. For example, when WebAssembly and JavaScript interact, they have different object graphs that can point to each other. Sometimes you can use WeakRefs - we decided to add WeakRefs to meet some use cases - but they don't handle the cycle problem. So we're working on the Wasm GC proposal to allow a shared object graph between WebAssembly and JavaScript, with cycle collection. Similarly, as we create multiple places for JavaScript code to run, it's important that they be able to have rich references to each other in some way or other. Maybe it's possible to do this over a hard boundary, but there needs to be a way to handle the cycle problem, because it continues to reoccur in different places.

Conclusion/Resolution

Discussion continues internally at Chrome and elsewhere.

Intl Locale Info for stage 2

Presenter: Frank Yung-Fong Tang (FYT)

FYT: Okay, so, my name is Frank Tang; I work for Google on the V8 internationalization team. I have three proposals to talk about at this meeting: two of them are back to back right now, and the other, a stage 1 proposal about time zones, will be presented tomorrow. The first one is Intl Locale Info, and for this one I would like to advance to stage two. (One sec, Frank - I had neglected to make sure that we have note takers.) So, the motivation of this proposal is to expose locale information - for example, the "week" data. What does that mean? It differs between locale systems; in particular, when you render a calendar, the first day of the week can be different - for example, in the UK I think the week starts on Sunday and in the US on Monday, or vice versa, one way or the other. In the US, people usually consider Saturday the start of the weekend and Sunday the end, but in a lot of other parts of the world - Israel and a lot of Muslim countries - the weekend starts on Friday and ends on Saturday. So those kinds of information; and also, within a particular year, which week is considered the first week, i.e. the minimum number of days in that first week; which hour cycle is used in the locale; and what kind of measurement system is used in the locale. This proposal was discussed at the TC39 meeting in September last year and reached stage one, and now I've come to ask for stage two.

FYT: Here are some prior arts listed. This just briefly shows that some JavaScript libraries are already doing this kind of thing, and that Java and C libraries already do it as well. In particular, Mozilla's internal UI library - which is not exposed to web developers, but is mainly for Mozilla's internal UI - has had this for a while, and there's a need to expose it to web developers. We can see that in several of these JS libraries; some of them do a pretty good job, some are not that great. But it seems like there's a lot of need for this.

FYT: Here's the progress so far. During stage zero and one we had some design choices for how to expose this information; I put together a comparison to try to see what makes sense, and the current draft uses design option two, which means exposing the information as getters on Intl.Locale that return objects. The other option, which we dropped, was to have a separate function call for each piece of information; that could be too much, so it was dropped - it can be seen at that particular URL. I'll show you here, at a high level, roughly what it looks like: the getter returns an object with these values, and depending on the locale it will return different values. For example, for an Arabic locale the text direction would normally be RTL; similarly, for the week data you may get different values. (Re: slides - apologies, I somehow didn't capture that; as I mentioned, this getter should be attached to Intl.Locale instead of Intl itself.) So this is the proposed change to Intl.Locale, as spec'd here. Also, for the unit info: in CLDR, in the Unicode standard, there are three different kinds of measurement systems. We could have some discussion about whether this part is useful or not, and whether we should change it - there has been some discussion in this area, but we think we should be able to resolve it during stage 2. And the week info is basically the change I described: we return an object exposing these values, so that, for example, a calendar widget or a calendar application can use it to render the calendar. The .defaults part there is not really clearly spec'd out yet; we're still trying to figure out what kind of information should be exposed there, and the right way to expose it, because it could be a burden for implementations to access different kinds of objects to get this information. We're still figuring that out, but the idea is that we have some way to expose it.
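
[Note: a rough sketch of what the proposed getters might look like, based on the slides. Exact property names and values were still draft at this point in the proposal.]

```js
// Draft shape only; names and values are illustrative.
const ar = new Intl.Locale("ar");
ar.textInfo; // e.g. { direction: "rtl" }

const gb = new Intl.Locale("en-GB");
gb.weekInfo; // e.g. { firstDay: 1, weekendStart: 6, weekendEnd: 7, minimalDays: 4 }

const en = new Intl.Locale("en-US");
en.hourCycles; // e.g. ["h12"]
```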

FYT: So, what happened at the January 14th ECMA-402 meeting: we agreed to bring this here for stage two advancement. As I mentioned, there are still some areas we're not quite sure about. The entrance criteria for stage two are that the materials include an initial spec, and the idea is that the committee expects the feature to be developed and eventually included in the standard; it doesn't mean that all the details are nailed down, just the high-level scope. That is my understanding of what stage 2 means, and I believe the criteria have been met, because we have the initial spec - though it probably still has some details which will need to be modified. And of course, here is what we showed for stage 1, which has already passed. Any questions or discussion about this?

MF: I want to preface this comment with: I'm certainly not an expert on this topic, and I'm also unsure whether this is a stage two concern or not. But as you mentioned, there's some work to do on the measurement API. From what I've seen, it appears overly simplified: we have, like, three results, which are US, UK, and SI units. Again, I'm not an expert on this, but I understand that in certain cultures - I'm somewhat familiar with Canada's - the unit you use depends on what you're measuring. Is our intent to say: I'm measuring gasoline, so it's going to be liters, but I'm measuring milk, so it's going to be gallons, depending on what they do in my culture? Or: when I'm measuring the height of a person it's going to be in feet, but when measuring the height of a building it's going to be meters? Is that something we want to capture?

FYT: Yeah, regarding this area about unit info: I think you're absolutely right. We have had some discussions with CLDR folks, and we have some issues in this area. So it is possible that during stage two we may want to reconsider, or reduce, the scope of this. You are absolutely right - this is an area where other people have expressed some concern. It could be subject to change during stage two, from my understanding.

MF: Yeah. I guess it's something that we could address in stage 2, but it also seems like what you've shown here is going in a direction that I don't see being too useful. I don't know whether you'd maybe want to split this part out and try to address it more deeply. What I'm trying to understand is what the goal is: are you trying to provide an API that has that kind of depth?

FYT: I think you didn't get my point. I'm fine with changing this so that this part is removed. That's what I'm saying.

SFC: So Frank’s proposal is based on what's currently standardized in Unicode Technical Standard 35, UTS 35, and UTS 35 only specifies these three sort-of coarse groups. Now, there's been a lot of work on this subject; there's already a stage 1 proposal called Smart Units and Unit Preferences. So I think one direction we could go here is to say: okay, we're going to go ahead and add the UTS 35 style, the coarse three-category measurement systems; or we can continue doing this work over in the unit preferences proposal that's already at stage 1. Because it is definitely a large and challenging space - I have teammates who have been working on this space over much of 2020, and it's definitely a challenging problem without any simple solutions. With a simple solution, there's a risk of being too simple and not being correct, and being too complicated could make it difficult to use correctly. I think that's the nature of this unitInfo getter. So I think that we could drop it from this proposal and continue working on it later; but I also think it has some value, since it conforms with the existing Unicode standard.

MF: The simplified API seems to make it very easy to do the incorrect thing and that worries me.

FYT: Yeah, I agree with you. That's why I'm saying it could be dropped during stage two.

MF: Is that something that we should consider before stage two, though? If we're generally unhappy with that API, other than for the consistency of providing the results according to UTS 35 or whatever, then we should just proceed without it by default - that seems like the right thing to do.

FYT: Are you proposing conditional advancement to stage 2, with that part removed? Is that what you're proposing?

MF: What I would do, if I were championing this proposal, is ask for the proposal to move forward to stage two without the measurement API, unitInfo.

BT: "ZB agrees" is on the queue, for what it's worth.

SFC: We discussed this in the 402 meeting in December and I also generally agree with the sentiment that unitInfo should be part of the unit preferences proposal. I have a slight preference for not including it.

[queue is empty]

FYT: Okay. If no one else has questions, the only concern about this is the preference to move to stage 2 without the unit info part - is that the idea? If we decide to drop that part and include the rest, would it be okay to move to stage 2?

BT: I'm not hearing any objections.

FYT: Can anyone second that?

SFC: I second.

FYT: Michael, how about you? How do you feel?

MF: I'm happy with that.

Conclusion/Resolution

Stage 2 without the unit info part.

Intl.DisplayNames for stage 2

Presenter: Frank Yung-Fong Tang (FYT)

FYT: Yep. So this is Intl.DisplayNames v2, for stage 2. A little background: Intl.DisplayNames was proposed and moved to stage 4 in September 2020, and this is version 2, which is an enhancement of it. Basically, what happened is that the first time around there were some issues we could not agree on, and some difficulties, so we dropped those parts from the first version to discuss later. So around fall of last year we put together this version 2 proposal to capture some of the more important issues left over from version one, and that same September it moved to stage one. Now I'm proposing to move it to stage two - but note that there has been a huge scope reduction between stage one and now.

FYT: So the motivation for this API is to enable developers to get human-translated display names of languages, regions, scripts, and other things on the fly, and to provide a straightforward API for this functionality, instead of some workaround way to get it done.

FYT: So here are the changes during the stage one period, which means from last September to now. First of all, we dropped and modified a couple of things; one key thing is that during the January meeting...

[killed the bot]

FYT: There was discussion about month, week, and time zone names. It's complicated, but we figured it could be addressed as part of Temporal. Instead of adding those here, we are just going to delegate to Temporal.

FYT: There is another thing proposed by IBM, from <?>. We're still considering it, but we don't have spec text; that will need to be discussed in detail. What we are changing, at a high level, is adding two new types:

FYT: One is calendar, which returns the name of the calendar (e.g. Gregorian); the other is unit, which returns the display name for the unit (e.g. if you pass meter, it will return the localized name of meter). The spec change to ECMA-402, based on DisplayNames, is scheduled to publish in 2021. It will also define the instances. Basically, the only changes for v2 are the additional support for calendar and unit. At the ECMA-402 meeting we agreed to bring this to TC39 after removing the other parts from the draft spec. Again, this is for stage 2, not 3, so we still have a lot of discussion ahead, as I mentioned earlier. Are there any questions?
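
[Note: an illustrative sketch of the two additions as described on the slides. The option values and outputs are examples only; details were still open at stage 2.]

```js
// Draft shape only.
const calendarNames = new Intl.DisplayNames(["en"], { type: "calendar" });
calendarNames.of("gregory"); // e.g. "Gregorian Calendar"

const unitNames = new Intl.DisplayNames(["en"], { type: "unit" });
unitNames.of("meter"); // e.g. "meters"
```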

BT: No one on the queue yet…

[Bot turned on]

SFC: I just wanted to address - after discussing with other 402 members (ZB in particular) - that there are still a couple of open questions about the exact scope of this proposal and exactly which types of items to include. I'm still interested in seeing date fields - not months and weekdays, but the date fields themselves - included in the proposal, and I think ZB has some concerns about units. Ideally those are concerns that we could resolve before stage 2, but in general I'm happy with this proposal, and I believe ZB is in general happy with it too, modulo the fact that the exact fields we're including could still be subject to some change. So yeah, I just wanted to get that on the record while we're talking about stage 2 advancement.

WH: [Referring to the slide about what to do if type is "calendar"] What is the type nonterminal on line 4a?

FYT: So in UTS 35 there is a grammar describing locale identifiers, and one of the tokens - one of the nonterminals - is "type". [displays TR 35: http://unicode.org/reports/tr35/#Unicode_locale_identifier]

WH: Okay, so it's an alphanumeric sequence of three to eight characters.

FYT: Right. In theory it can consist of segments separated by "-", each between 3 and 8 characters. I think there are a couple of calendars that have a dash in their identifier, though with short segments - I believe the Islamic calendar has multiple variants, and some of them have a dash. For example [shows screen - https://github.com/unicode-org/cldr/blob/master/common/bcp47/calendar.xml] - those are the possible values for the calendar.

WH: This table is what I wanted to see, and answers my question. Thank you.

FYT: So currently I think all the calendars we support fit that: each segment is between three and eight characters.

FYT: Any other questions?

BT: The queue is empty.

FYT: Okay, so I'd like to ask to advance to stage two.

BT: We have consensus.

Conclusion/Resolution

Stage 2

Chartering Security TG

Presenter: Michael Ficarra (MF)

MF: What I'd like to do today is ask the committee about chartering a task group focused on the topic of security. We've seen that this is a topic that we frequently discuss within the main group here, and one that also attracts a lot of contention, or a lot of different understandings. I would like to address some of those issues, as well as generally focus more directly on the topic of security, as I feel it's important. So if you're not aware (slide 2), Ecma has this concept of task groups. There are currently two task groups within TC39: TG1, which you are all attending right now and are a part of, and TG2, which works on the 402 stuff, which you saw just recently. So I'm proposing that we create a TG3 focused on security (slide 3). This is an important slide: these are the topics that I propose as the mission for this group. Hopefully everything that we discuss will be aimed at advancing one of these topics.

MF: So firstly, assessing the security impacts of proposals to TC39. Eventually I would like to get to a place where we can designate security reviewers, in the same way that we designate regular reviewers or editorial review for proposals as they advance; this way, we can constantly have a security focus on things that go through the committee. Number two: I would like to produce documentation on the JavaScript security model. Note I say "the" JavaScript security model - this is possibly not even a thing; we don't currently have a shared understanding of a universal security model for JavaScript. I think that the more progress we make toward that, and toward documenting our understanding of it, the better. Number three: I think this TG should work on proposals, in the same way that TG2 does, that help developers create secure programs. These would be newly introduced proposals - stage 0 proposals that come from this group and are presented to TG1. Number four: recommend a committee response to privately disclosed security issues. This also means triaging privately disclosed security issues and being the initial point of contact for such things. When topics like Spectre or Meltdown were brought forward, there was some uneasiness about discussing them with such a broad group, and maybe having this smaller security-focused group would make that easier and allow us to form a recommended response to these kinds of things before bringing them to the larger group. Number five, which I think is a super important one: creating best practices and recommendations for writing secure programs. It's very hard to write secure programs in JavaScript today; there are many pitfalls. There's no real place that is trying to collect all of these things, and there are not many groups of individuals who are qualified to evaluate them. Having this as an output, I think, would be great. And number six: monitoring the changing threat landscape for popular embeddings. As the security world changes, we should at least be aware of those changes.

[clarifying question from SFC]

SFC: Yeah, so my question regarding scope: we've raised the issue of privacy concerns to the committee multiple times - for example, fingerprinting. When adding new APIs, do they give a way to fingerprint users, or create privacy issues in other ways? I know that privacy and security should not be conflated, because they're two different problems, but do you envision privacy as also being in scope for this task group, or do you envision it being solely focused on security?

MF: At this time, I don't see privacy being a topic that this group should handle. I think that people who are in the privacy world have qualifications and skills in that topic, and similarly for the security world. Yes, the two interact a lot of the time, as a security compromise can also lead to a privacy compromise, but if the committee finds it important to specifically focus on privacy, we could possibly do a similar thing with a privacy TG. I'm not proposing that this TG be responsible for privacy in the same way that it is responsible for security.

ZB: I want to support what Shane suggested and counter a bit what you're saying. My motivation here is that I've been working on [audio cuts]

ZB: [audio resumes] So, at Mozilla we fairly often get into the position where we are trying to balance the design of TC39 APIs against resist-fingerprinting and the browser's interest in privacy. And from the proposed mission and scope, I would say that all six of those points apply to privacy as well as they do to security. I hear Michael's point that the communities of experts in those topics may not overlap, but - since I don't think we have enough momentum to start a TG around privacy - I would love to have an opportunity to work on things like point 5 from your list, creating and maintaining best practices and recommendations. I would love to also start building best practices and recommendations for privacy in JavaScript API design, and I think it's going to follow a very similar model of culture and organizational drive within TC39 as the security best practices.

MF: It may be the case that these topics even within the committee overlap a lot and need to have good communication. I don't know if that's the case though. I think maybe we could get through the whole presentation and revisit this topic after, not just within the mission section here.

MM: The term "security" is a broad umbrella term, and I think it's a fine umbrella term for this TG. As we go forward we will be making distinctions, because that's crucial to working things through - different mechanisms will serve one side of a distinction and not the other. I think privacy very much falls under that umbrella; in particular, it falls under the overall set of confidentiality concerns - preventing leakage of information - which is very different from integrity concerns. But yeah, if my privacy is violated, I consider my security violated, so I think it does fall under the umbrella term.

MF: I imagine that there are topics we would consider privacy topics that would be appropriate for this group to address. But I also imagine that there are privacy topics that I wouldn't consider part of this group's work. This is a complex relationship that I just haven't fully considered, so I don't know what the right thing to do is here.

MM: I think it can be the group to consider privacy topics that people want to bring to it, and to try to figure out whether we want to include them under our umbrella or not. I think that meta-discussion is in scope.

MF: Sure, I think that's fair. And as we get to later slides, I'll be talking about the leadership that I'm proposing and how we will be working. That might make it easier to have a concrete proposal here.

WH: I agree with MM’s position here. If you have privacy violations which can leak sensitive data, that's a security issue. So these two are very tightly intertwined.

ZB: I want to stress - because I think it has been (?) - that we can talk about the different domains of expertise, but one thing that I see in this proposal is that it proposes a certain cultural change in TC39: how a particular group of interest is meant to, per point one, assess the impact of a proposal, and, per point five, maintain best practices for proposals. Whether it's security or not, this is a new model of interaction with the dynamics of our proposal advancement and development of the language, and the security group may be the first one to do this, but the impact of that work is going to directly benefit privacy considerations, which would go through exactly the same model of interaction with TC39. On that level I think it's very much aligned; and on the level that WH and Mark were describing, the two do interoperate and impact one another. So I do think there is a strong reason to consider some privacy considerations within this group, and I would be very happy to see that represented. I understand that this is not an extension of scope that Michael was hoping for.

MF: I came into this very open to changes to this proposed mission. This is something I came up with and ran through some people on the reflector. I'm happy to have it evolve as we have this discussion, and to evolve as the group meets and understands its purpose better. The only thing I strongly feel is that, as the group continues to operate, we have a well-defined mission - even if it may change over time - something we can point to to say whether something is or is not what we should be doing or working toward.

MF: Before we move on from slide three, I just want to reiterate this is not a fixed thing right now, and I'm happy to change this as the committee desires during this presentation.

MF: (slide 4) I have some things I've listed as explicitly not in scope. Now, a few prefaces to this. I felt that it was important to give examples of things that would be not in scope, to make our understanding of what is in scope a little more solid. But also, if you read the note up here, it says these are not in scope for TG output; that doesn't prohibit the TG from considering or discussing them. What I want to avoid is the creation of new ECMAScript subsets, or contributions to existing ECMAScript subsets, that do not benefit ECMAScript - or, more generally, things that are out of scope of TC39 to begin with, which I think we would broadly agree are already outside the scope of TC39. So while we can discuss these things and discuss how they affect all JavaScript programs, I think that we should avoid going down the route of trying to subset the language, or advance subsets, or make changes to how the origin-based security model of the web works, for instance - that kind of stuff.

MF: On to slide 5. I've put together an example of what I would like to see as a high-level agenda. Now, this is just an example - the actual agenda would be set by the chair group, which I'll get into in a moment - but these are some topics that I think a security TG could focus on. We can understand and document adversarial domains in which JavaScript is currently and commonly used. Obviously, we all understand that the web is an adversarial domain, with different origins being potentially adversarial; but there are other domains in which JavaScript is used, such as in contract systems on blockchain technologies. So we should understand and document how the dynamics of those domains work. We should understand and document how JavaScript is currently used to write secure programs. As I briefly mentioned in the scope, I would like to document recommendations on how to write secure programs using JavaScript, but there are also patterns and strategies people are using today to try to write secure programs in JavaScript; there are frameworks meant to help you out, and tooling meant for this purpose. We should understand how those work, what they're based on, and document those things.

MF: Something I'm always reaching for when writing secure programs is: what kind of language invariants do I have? Something as simple as just calling a function ends up having all sorts of ways to be intercepted and made unreliable. I think all of us here are pretty familiar with those kinds of things. What language invariants are available to us, so that we can use them as building blocks to build secure programs? A big topic that Natalie from Google was interested in was collecting implementation defects that have led to security issues, which I guess encompasses both of these next topics: she brought up typed arrays and how those have led to many implementation errors, but there are also language features that can lead to other kinds of errors, not just implementation errors. There's the __proto__ accessor, which is often accidentally triggered by doing a computed member access with a user-input string. Mapped arguments objects break what we would otherwise think are invariants - sneakily. And of course, direct eval. Those are some examples, but not necessarily what this TG would work on to advance our mission.
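
[Note: two of the pitfalls MF mentions, in miniature; the second example assumes sloppy-mode code.]

```js
// 1. __proto__ reached via computed member access on user input: instead
//    of creating an own property, this triggers the inherited __proto__
//    setter and silently rewires the object's prototype chain.
const store = {};
const key = "__proto__";      // imagine this string came from user input
store[key] = { admin: true };
store.admin;                  // true -- inherited, and no own property was set

// 2. Mapped arguments objects (sloppy mode only): named parameters and
//    `arguments` alias each other, breaking an invariant you might assume.
function f(a) {
  arguments[0] = 99;
  return a;                   // 99, not the value the caller passed
}
f(1); // 99
```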

MF: So let's move on to slide 6. How would we organize this group? What I'm proposing is that we have these three roles. A chair - specifically a chair group, so that we don't block the progress of the group on somebody being too busy to do their chair duties; the chair would be responsible for prioritizing the TG agenda, communication, scheduling meetings, and refining our scope over time. The second role I propose is a speaker. This role would handle the main communication between this new TG and TG1 - creating presentations, etc. - so that our recommendations and understandings are clearly communicated to TG1. And a secretary, so that we record and document everything that the TG outputs. So those are the three roles I propose this group have.

MF: Slide 7. This is, again, like slide 5, an example of the process we could have - the chairs would set the actual process - but here's something we could do. We could have monthly meetings, gated on whether there are enough topics on the agenda. We need to figure out the duration. We need to figure out where we publish our notes, whether it's to tc39/notes or not. We need some mechanism for communication outside meetings; GitHub discussions might be one way. We should be doing regular status updates at TG1 meetings - remember, TG1 meetings are the ones you're in right now. I'm proposing yearly selection of the leadership positions from slide 6, coinciding with the TG1 election; I think this makes sense, as the TG1 editor group also proposed that their term coincide with the chair term in TG1. And of course, deference to TG1 on all matters: TG3, the security TG, would only be responsible for making recommendations, never for producing standards of its own or making unilateral decisions.

MF: And then on slide eight. I have a list of people who have, on the reflector thread, expressed an interest in participating in this group. I think this is a fairly strong list. I don't want to speak for any of them. They haven't all explicitly agreed to any of the content that's in these slides, just expressed a willingness to participate in the general concept of a security-focused TG

MF: So finally, slide 9. These are the concrete things that I'm asking us to agree to today. Number one: consensus from TC39 to create this TG, with the scope proposed on slide 3 and the roles proposed on slide 6. Number two: the chairs, following this approval, prescribing a process for selecting TG leadership. From there, the rest of the undecided things can follow. So that's it; that's all I have.

MM: Just a clarification. Jessie was listed as a language subset or variant, and it's important to understand that Jessie is an advisory subset, in the sense that there is no language mechanism that enforces that code stays within it. It's actually much more like advice on how to write secure code - and in fact, that's what it was designed to be, which matches the notion that we're going to be making recommendations on how to write secure code. Jessie is an advisory subset of SES. So as a recommendation on how to write secure code, I think it's in bounds. Otherwise, I agree with the point of the slide - just not with how Jessie was classified.

MF: That's a fair point. For mission item number five, maintaining best-practice recommendations for writing secure programs, that output may be in the form in which Jessie is presented, or it may be in the form of more typical documentation. You're right - I was calling out existing technologies and existing work, but yes, I understand that.

SYG: A process question, but first a comment. Given the initial size of the group called out on the slide there, three formal leadership positions seem like more bureaucracy than needed. I'm wondering if we should scale with growth, instead of having a chair plus a secretary plus whatever the third one was.

MF: Someone could have more than one role, I guess, if they chose to; there's nothing preventing that if we don't have enough people to fill all the roles separately.

SYG: Yes, that's of course possible, but I don't see the need for the upfront structure, I guess. That's not really a major concern, though.

MF: Each of the responsibilities would have to be filled either way, right? Even if we don't have somebody assigned as speaker, somebody would have to create presentations for TG1 and deliver them, whether or not we call them something.

SYG: Yes. I'm worried mainly about lengthening elections, but I guess once a year is not too bad. I just don't see the need for three separate roles right now.

SYG: Did you go into what the actual process for arriving at recommendations is? Is it also consensus? The output of this TG to TG1 - what is it? It seems like mostly recommendations, perhaps impact assessments on proposals, or one-time documents like the security model. How do we arrive at those? What is the process for working within the TG - is it also consensus?

MF: That's a good question. I had, I guess, implicitly assumed consensus up to this point.

SYG: I ask because, as you alluded to, the topic is very contentious. Different parties care about very different definitions of security, and I am concerned about our ability to produce a unified recommendation as output. There is something to be said for the fact that we do not have a clear and shared understanding of security in TG1 - if we pluck a subset of folks into a separate group and hash it out, are we going to come up with a different result?

MF: Well, I can't guarantee that we would but it seems like we should take the optimistic approach at first, of assuming that we can, and if that doesn't work we should move from there.

SYG: Sure. And part of the hypothesis here is that we haven't given security sufficient time in plenary itself to truly hash this stuff out - maybe we can be more productive if we block out separate time. But I'm kind of on the fence about that.

MF: Yeah, that is pretty much the idea. When the topic of security comes up, it's always in relation to some specific proposal, and it's based on the understanding of whoever is speaking at the time. We never actually address those differences of understanding directly in TG1, nor do I think it's appropriate for us to do so there, given the size of the topic and how contentious it is.

SYG: Yeah, that's fine.

PHE: I'm referring, or commenting, specifically on the agenda slide. I understand the slide is helpful in giving some examples of what the group might take a look at. I'm not at all comfortable with the places where it uses words like "current" and "common" and "today". Security is not a popularity contest, and security is not exclusively about what's being done right now. There is a lot of work being done in JavaScript that is on the edges - the work that we're doing in TC53 is certainly like that, and yet it has very real security considerations around the language. There's a great deal of research in academia around security and JavaScript that's relevant but wouldn't qualify as "current" or "common". So I don't particularly care for that aspect of the high-level agenda, in that it implies that the focus of this group is the web of today, versus the full scope of how JavaScript is used - and taking full advantage of all the knowledge and experience that's out there.

MF: Yes, I hear those concerns; we had a bit of this conversation on the reflector. Slide 5 here is showing how, if I were chairing this group and selecting agenda items, I would prefer to prioritize the work. I would prefer that our earlier work - the work over the first year or so - address things that are common and popular. This isn't to say that we couldn't address smaller topics or use cases, or language-theoretical security; just that I think the highest impact we could have, starting out, is in addressing the most used and most common issues.

PHE: Right, but security isn't a race; it's not about how quickly we can do a little something. It's about long-term planning: putting the right infrastructure in place in the language, and the right practices in place. If we start out with a narrowed perspective, the wisdom of the committee and the wisdom of the discussions are diminished. So I think that would be the wrong approach. And I understand that this agenda, as written, doesn't define what the TG will or will not be, but I wanted to put it out there in this forum that I disagree with that particular bias.

SYG: I would like to remind folks that we are a standards body, and the point of standards bodies is first and foremost interoperable technologies among a bunch of participants who cannot legally cooperate otherwise. I don't see any TG under the TC39 umbrella as a general research group to further any kind of vision. We have different viewpoints, and we all have our own visions of computing and security. The whole point of a security TG, I imagine, is to focus on some kind of interoperable agreement, so that the current stakeholders, who have current and common problems, get interoperable solutions that we all agree to adopt, making the standard more secure - not to do research and set a ten-year timeline for how we might want to evolve the language to be more secure based on academic works. We can consider academic input - there are security researchers, and they have great input - but I think this scope of "current" and "common" is correct.

PHE: Sorry if I wasn't clear - I'm not suggesting we should become an industry research group. I'm reminding this group that, from time to time, it hears from people in academia who have relevant input to our work, and there should be nothing in the chartering of this TG which suggests that we would do otherwise.

SYG: Then I'm confused. I do not understand the first bullet point to mean that we would not consider academic papers or anything like that, but rather that they would not be the motivating thing we do in the group - just as the motivating thing we do in TG1 is not to read papers and act on them, but to listen to the problems that our participants have and to solve those current problems.

DE: I was a little concerned about the framing of the chartering excluding things like the origin model, but it sounded like the goal was to say that we're not redefining the origin model or deciding whether the origin model is good or not - just considering it as we analyze, seeing it as the thing that's out there in the world, not something for us to decide on, as we consider how to analyze the security of JavaScript features. So to me the scoping sounds good. I think it's important that we expect to continue to have disagreements within the group. I don't think we should run an exercise to determine "the" JavaScript security model and put all the analysis of proposals on hold until we have agreement on that, because I'm not sure we'll ever have agreement about the JavaScript security model. I mean, it would be great if we could, but we just have these standing disagreements.

DE: About the amount of TG administration, just from my past work on internationalization and outreach groups: it's possible to bring more people into running these groups, but it's pretty hard to recruit a lot of people to do that work. Some groups have been taken over by somebody else, or somebody is co-leading them; in others that I'm involved in, that hasn't happened. There aren't groups that I started that have five different people all jumping to be in some management subcommittee. So let's just be realistic about that.

JHD: So the first topic says that we should try to, for example, understand the common cases of X, but I'm confused about why that seems to imply that we are refusing to understand uncommon cases. And in fact the bullet point which talks about how JavaScript is used today doesn't talk about common usage; it just says usage, which would include everything no matter how uncommon. It seems like it would ensure that we do in fact understand uncommon usages as well. Am I misreading that, or is that the intention?

MF: My response there would be to remember that this is what I put together as an example of an agenda for our first year or so of work, and it is intentionally prioritizing things that will have the biggest impact, in my opinion. Focusing on current and common technologies and domains was intentional. It does not say that we will discount things that are uncommon or never work on those things. But if, while working on one of these agenda topics, we come across a technology that is very niche or not very capable today, but maybe has promise in the future, it doesn't seem worth putting that much effort in now, because again the point of these topics is to have an impact that is tangible. So yeah, that's why these topics are phrased the way they are; it is intentional.

PHE: JHD, I won't say your reading is wrong, but I don't think the second bullet point expressly invites uncommon and un-current things. You could take it that way. I will give one very practical example, concisely, in the interest of time: in TC53, when we were looking for a security model relevant to serial ports and other I/O, we found it in the security work being done on blockchain. Neither of us was current or common. So my point is that there's a lot of interesting stuff going on in this space, and a lot to be learned from taking a broad look rather than a narrow look, and I just don't want to see this group start off with a posture that tends to diminish how we look at those things. Alright, I hope that helps.

IS: In Ecma we have two types of end products: one is the standard (thus “normative”), such as ECMA-262, and the other is the Ecma Technical Report (thus “informative”). Is my understanding correct that here we would see maybe some Ecma technical reports as outcomes? So basically we would describe certain best practices in the area of ECMAScript security. If the answer is yes, that it would be some Ecma technical reports, then: you mentioned the different types of leadership people in the group, including for instance the secretary, but next to the secretary you would also need editors. So my question is, is my understanding regarding the type of output correct or not?

MF: I think your understanding of the type of output is correct. I think that some, or possibly all, of the documentation that I mentioned we would produce, about best practices or about language features that have led to security bugs and how they led to them, would be good for technical reports. I did not consider the work that would need to be done editorially for producing those technical reports. I do want to again remind the group that I still think all of the output of this TG should be run through and approved by TG1 before being published, and that would include these technical reports. But yes, we would either need editorial assistance from the editor group of TG1, or we would need to do our own editorial work in this TG to produce those technical reports.

IS: thank you.

SYG: The folks who are on the initial slide, and most folks in TC39 itself, are not, in my experience, really in-the-trenches security folks who do security work as part of their day jobs. I would like some of those kinds of people to also show up, especially in the discussion of the second-to-last bullet point, on implementation defects. Large swaths of security bugs come from incorrect implementation. And while we implementers have some expertise in dealing with those bugs after they crop up, I doubt we have the same mindset as the people who are actively looking to exploit, to chain these exploits together, and so on. I will of course extend the invitation to Natalie if she would like to come as a representative of Project Zero, who has great expertise in exactly that kind of thing. But it would be good to widen our representation beyond the current TC39 delegates. I believe we skew a certain way.

BFS: I just want to be sure that we don't treat implementation security as the entire meaning of security in whatever working group we charter. At least in the past, Google has held a very strong line that the only security boundary of importance is the implementation: JIT exploits and things like that. While it may be true that we do not do that as our day jobs, some of us are actively trying to prevent various kinds of security issues caused by JavaScript coding patterns. I think having both is good, but we shouldn't try to categorize anybody as lacking expertise without specifying what kind of expertise we are seeking.

SYG: I think that's good feedback. I would certainly love to see folks in there who have expertise on supply chain attacks and so on as well. As for my position on the implementation security bug being the only security boundary: I wouldn't say it's the only one; I would say it's the prerequisite one. Without it, the other work that you do on top doesn't help, if you can just exploit the underlying implementation and break through whatever you built on top. So I think it is very important that that category be represented.

BFS: I agree it should be represented, but even if you give us a solid foundation, if APIs such as `__proto__` are abused, it doesn't actually help in practical matters. The serious amount of noise in security alerts, at least in the JavaScript ecosystem, on the application-coding side of things comes exactly from API usage.
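For context, a minimal sketch of the kind of API-abuse pattern being described (the `merge` helper and property names here are hypothetical, not any specific library):

```js
// A deep-merge helper with the classic prototype-pollution bug:
function merge(target, source) {
  for (const key of Object.keys(source)) {
    if (typeof source[key] === "object" && source[key] !== null) {
      merge(target[key] ?? (target[key] = {}), source[key]);
    } else {
      target[key] = source[key];
    }
  }
  return target;
}

// Attacker-controlled input reaches the shared Object.prototype:
merge({}, JSON.parse('{"__proto__": {"isAdmin": true}}'));
console.log({}.isAdmin); // true: every plain object is now affected
```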

SYG: I think we're agreeing. The nuance I'm pointing out, part of the tension that has become clear to me between implementation security and application-level security, especially around new language proposals, is that new proposals often ask for hooking points and the like to implement a certain security model for the application, and these extra hooks and this extra expressivity are in fact in tension with a simple and correct implementation; in fact, they raise the risk of an incorrect and insecure implementation. I think that tension necessarily means that both people who are experts in implementation security and exploiting implementations, and folks who are experts in application-level security and how to exploit application-level security bugs, should be present. But I am biased, and I do think that the implementation level has greater importance, in that it is the first line of defense; without it the other ones don't matter.

BFS: I strongly disagree, because we have outstanding security bugs in Node that V8 knows about and doesn't want to fix, because they don't use it in their applications. So I think there would need to be some definition beyond just implementation security here, because there are known issues that are not marked wontfix but are actively ignored. We can take that offline. I think that's all for me.

LEO: My opinion is neutral on the creation of this task group, but I have a general comment about task groups that we might eventually assemble. One of the things that is interesting here, because this task group does have an agenda, is that there are items that catch my eye, that I want to participate in, that I have direct interest in. What I feel would be a good indication for any task group that we eventually create is a more active agenda of people actually discussing these topics; the task group for Ecma 402 was created long after the Intl discussions started. I also have plans, eventually, to start discussions and find people who share an interest in, say, testing, to eventually become a task group. The items here are interesting, but what I would find more compelling for the creation of a task group would be active discussions about those items, like regular discussions. One of the first things that caught my attention was the contextual similarity with the SES meetings that we have weekly, though the content here is a bit different. I feel the lifetime of a group should be tied to active discussions. The topics are interesting, but I feel we should first establish active discussions about them; otherwise we can end up creating a task group that is not active, that might become inactive through lack of participation, etc. I have interest in this, and I think this one can get some traction, but I think traction is important before establishing a formal TG.

BT: The queue is empty and we have four minutes left.

MF: So the first ask is that we say that we would like to create this TG with the mission and roles that I've described here. I believe that on the mission we were fairly on the same page. With roles, I do understand Shu's reservation about the difficulty of filling them if we have limited participation, but I think if we're allowing people to have multiple roles, we should be able to do it. So I guess before asking for the second one, can I have consensus on the first point?

MM: I agree with PHE on emphasis, but I support creating the TG.

BT: Okay. So I think that's consensus.

MF: Okay, and then, just something directed at the chair group: would you add it to your chair group topics to decide on and prescribe a process for selecting TG leadership? Let us know, possibly at the next meeting, and then we'll proceed with that process.

BT: Yes.

MM: Whatever they come up with is a suggestion to bring back to the plenary, correct?

MF: I mean I think the chairs are well within their right to just choose this kind of thing. But if they want to run it by plenary, that's fine, too.

AKI: Don't worry, Mark, we don't get to do anything unilaterally.

MM: Selecting the TG leadership could become contentious; I can see scenarios where it's contentious and political and overrides the preferences of some of the stakeholders. I think that's unlikely, but I don't want to just give them a blank check to decide what the process for selecting the TG's leadership is.

BT: So I think, just as a practical matter, we'll come back at the next meeting with what we think the process should be, and I also don't think it'll be particularly surprising. If that's okay with you, Michael, then I think the next meeting would be great.

MF: Sounds good.

MM: Okay.

MF: Thank you everyone, and thank you to whoever it was that was running my slides for me; I couldn't do it without you.

WH: What are the Ecma formalities for forming a TG?

IS: Normally it would be good if, before this kind of decision, the concrete proposal were made available to the entire technical committee three weeks before the decision is taken. Here I was already a little bit hesitant about whether I should be very formal or not, because I saw the discussions on GitHub, so it doesn't come completely out of the air. So this is the only formal hesitation that I had, but if TC39 is happy to go forward and create this task group, then it is basically fine with me. Anything in relation to the creation of a TG is completely in the hands of the technical committee. So you practically have a tremendous amount of freedom: for selecting the chairmanship of the TG, for closing it down, for setting up as many TGs as you want, etc. That's the reason why I really do not want to play the role of a “standardization police” here too much, but according to my feeling it would have been much more “kosher”, if I may say so, if a concrete proposal had been put up and published three weeks before this meeting, when we took the decision.

WH: Does a TC need to take a vote to form a TG?

BT: We unfortunately need to move on from this topic. Istvan, if you have the standards-police concerns, there's a thread on this on GitHub; we should maybe take it there. Likewise, Waldemar, if you could chime in there. Maybe Michael could share the link to that in the chat. Unfortunately, we have to move on; we're already over time.

Conclusion/Resolution

Consensus on creating the security TG with the mission and roles as presented. The chair group will come back at the next meeting with a proposed process for selecting TG leadership. Questions about the Ecma formalities for forming a TG will be taken to the GitHub thread.

Do Expressions

Presenter: Kevin Gibbons (KG)

KG: I brought this up last year, but to recap, I'm picking this proposal up from Dave Herman who hasn't had time to do this sort of work recently. I am presenting a slightly different variant than he would so, please don't attribute my opinions to him. I presented this a few meetings ago and nothing has substantially changed since then except that there is now spec text which was an ask from delegates. By the way, I have a new URL where I have put the spec text just because neither Dave nor I could get me admin access to the actual repository. So please be sure to look here rather than at the actual repository for the next couple of days. Hopefully, I'll get that sorted soon.

KG: So the point of do Expressions is that they are an expression which you can put statements into. So for example if you need to scope a variable or - we'll see a few more examples later. This is the approximate syntax. There's a do followed by a block in expression position, and the value of the do expression is the completion value of the list of statements. Now, as a reminder, completion values are already a thing in the language. They are observable using eval. They are just not frequently observed, because who uses eval? They are in some cases quite unintuitive. Usually they are just like the last expression, or the last expression in statement position at any rate, that you evaluated. But there are weird corner cases and we'll talk a bit more about that later.
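A minimal sketch of the shape being described (the proposed syntax, shown for illustration; it is not implemented anywhere):

```js
// The value of the do expression is the completion value of its
// statement list, i.e. the last expression evaluated:
const x = do {
  let tmp = 6;
  tmp * 7; // x === 42
};
```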

KG: So just a couple of motivating examples. This is a very common thing that you want to do: you want to try to JSON.parse something, and if it fails you want to provide some fallback value, frequently null. You don't really care what the error was, and you don't really want to have to do the try/catch outside of the assignment to the variable, because, as a stylistic question, it is nicer to have the operation of setting up X be a fundamental unit, which this allows, and which you can't do in current style without an immediately invoked function expression, which is quite a lot of overhead. A similar example is that you might have some temporary variable that you need to create in the course of setting up some other variable, but there's no particular reason that this temporary variable should be visible for the lifetime of the containing block; it can just be scoped to the creation of that variable. So those are the motivating examples and roughly what it looks like.
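Sketches of the two motivating patterns in the proposed syntax (`input`, `readConfig`, and `normalize` are placeholder names):

```js
// Fallback on parse failure, as a single self-contained unit:
const x = do {
  try { JSON.parse(input); }
  catch { null; }
};

// A temporary scoped to the initialization of one variable:
const y = do {
  const raw = readConfig();
  normalize(raw);
};
```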

KG: There's a few cases that are perhaps surprising. If you have an if with no else, then in the else branch the value you get is undefined. That's, I suppose, not that surprising. Next, I am proposing that you would inherit the ability to await and to yield from the context in which the do expression occurs. This again I hope is not surprising, because you can already await and yield in any other expression. This is just a particular kind of expression.
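For instance, assuming the proposed semantics (`cond` and `fetchValue` are placeholders):

```js
const a = do {
  if (cond) { "yes"; }
  // no else: a is undefined whenever cond is false
};

async function f() {
  // await is inherited from the enclosing async context:
  const b = do { await fetchValue(); };
}
```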

KG: Now some things that I think are bad. The completion values for loops - people get really, really tripped up by these. And this is one of the places where the proposal has gotten the most pushback from people who have looked at it. The completion values for loops are extremely surprising to some subset of people, no matter what semantics you propose for them. Some subset of people are absolutely convinced that the completion value for a loop is naturally an array holding the completion values for each iteration of the loop, and that is not the actual semantics of completion values, and I am not proposing to change that here. Similarly, declarations have very strange completion values: you get the thing that was on the previous line. Like in this case you might expect to get the function; well, you don't, you get 1, because that's the completion value of the previous line. So in this proposal I am just banning them outright. I have spec text in the repository, which is also in the agenda, which says that if the block of the do expression ends in an iteration statement or declaration, which is formalized by a syntax-directed operation, then it's an early error. So under my proposal you simply would not be able to write this code, or for that matter this code.
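Under the proposed early errors, both of these sketches would be rejected at parse time:

```js
const a = do {
  for (const n of [1, 2, 3]) n; // SyntaxError: block ends in a loop
};

const b = do {
  1;
  function f() {} // SyntaxError: block ends in a declaration
};
```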

KG: So a few more cases. There is this question of what you do about var declarations: do they hoist out of the block? I have been convinced that the right behavior is that they should hoist out of the block, and that is the behavior that I am proposing, with the exception that if you put a var declaration in a do expression in a parameter position, either as a default or in a computed property for destructuring, this would be an early error. You can't add new variables to the parameter scope, just because it's too confusing. Also, one last edge case: you would not be able to do the sort of nonsense B.3.3 functions-in-blocks hoisting. This just would not hoist; it is just a function that is scoped to the block that contains it. There's no magical hoisting of functions. They're hoisted to the top of that block, of course, but not to the top of anything else.
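A sketch of the proposed var behavior (names hypothetical):

```js
function f() {
  const x = do { var tmp = 1; tmp + 1; };
  console.log(tmp); // 1: `tmp` hoists out of the do block into f's var scope
}

// But not out of a parameter position; this would be an early error:
function g(a = do { var oops = 1; oops; }) {} // SyntaxError
```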

KG: So yeah, I'm just going to go through the last couple of edge cases really quickly here. I'm proposing that the completion value for an empty do expression is undefined. I am proposing that break, return, and continue across the boundary of the do expression would be disallowed. I know some people have asked for this, and I have also gotten a lot of pushback for allowing it; a lot of people say it absolutely must not be allowed. Under my proposal it's disallowed: you cannot have a break or a continue that crosses the boundary of the do. I'm not going to go into this example, but you can read the slides if you care. I'm not going to go into this example either, but you can again look at it if you care. The thing I wanted to highlight here was that the engines are not completely in agreement on completion values, which implies that people aren't relying on them particularly much. I'm also, at a future point, going to propose async do expressions, but that is not currently part of what I am asking for; I'm just asking for synchronous do expressions. You can review the spec text. It's relatively complete; there's a couple of to-dos, but I am only asking for stage 2 at this time. It's up on GitHub. Yeah, that was everything I wanted to cover. Do we have a queue?
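Sketches of those two edge cases under the proposal (`items` is a placeholder):

```js
const a = do {}; // undefined: empty do expression

for (const n of items) {
  const b = do {
    if (n > 10) break; // SyntaxError: break may not cross the do boundary
    n * 2;
  };
}
```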

BT: Yes, we do.

BFS: Just wanted to check something. You said you're banning declarations and loops at the end of your do expressions. Does that mean you have to take into account any empty completions at the end as well?

KG: Yes, it does mean that. The current spec text does handle that. It was somewhat painful, but yes, you can't have for example a for loop followed by like 50 semicolons. That would still be an early error.

BFS: Okay.

SYG: I think my topic is up next. It was about the var stuff, where you have an early error for vars in do expressions in parameter expression positions. Var declarations can already be introduced in parameter expression positions due to direct eval. Is the reason we're not aligning with that just because it's too gross? I agree that it's super gross.

KG: Honestly, I had forgotten that we do that. Yeah, I banned them because they were gross, but you're right that there is no technical need and we could allow them. I don't have that strong of feelings.

SYG: Okay. Yeah, not a stage 2 concern.
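For reference, a sketch of the existing sloppy-mode behavior SYG mentions (names illustrative; this is current JavaScript, not part of the proposal):

```js
// Direct eval in a default-parameter initializer can already introduce
// a var declaration in a parameter expression position in sloppy mode:
function f(a = eval("var leaked = 1; leaked")) {
  return a;
}
f(); // 1
```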

MM: So I like the idea that we're starting off by saying that anything that looks hazardous, or that we can't agree on, or that people might be confused by, we start off disallowing. Most of those things we can always incrementally decide to allow if we start off with them disallowed. Let me add a suggestion to that pile, which is that some problem cases go away if we say that this construct is for strict code only. For example, your nested function. Anything you do in sloppy mode is going to cause confusion anyway, so I suggest that we start off with that being one of the restrictions and consider this a strict-only construct.

KG: I am very reluctant to do that. We have had this discussion for a number of new syntactic forms, and every single time we have come down on the side of making the syntax legal in both sloppy mode and strict mode. I think that B.3.3 function hoisting is the only place where there's a distinction between sloppy and strict that would matter here, and I think just saying B.3.3 hoisting doesn't happen within do expressions, at least unless you cross function boundaries, is enough to address that concern. And then we don't have to tell people they can't use this in some of their code.

MM: So we agree that this is not a gating issue for entering stage 2, but just because we've had the discussion before for other syntactic constructs, this is a different syntactic construct and I would like to have the discussion again. We can do that in stage 2.

KG: My position is that it should be legal in sloppy. But I'm happy to continue discussing this.

MF: Instead of disallowing it in sloppy contexts, could you introduce a strict context?

KG: I really don't like that solution. I just in general don't like mode switches that are not tied to function boundaries. I think they are surprising.

MF: Like class bodies?

KG: So we just had this debate. I think class bodies are much more - a class is more like a function than a do expression is but I agree it's not like that clear-cut. It could in principle just introduce strict mode again. My preference is that it not do that because it doesn't look like it does that.

YSV: We reviewed this as a team, and I want to read some of the comments that were brought up by the SpiderMonkey team. One concern we have is that this introduces new implementation complexity, namely around the fact that this is neither an eval nor an arrow function. We predict this will require a lot of complex error checking and verifying that things are working as expected, and it increases the number of test cases we need to check for explicitly. It would have been preferable if it had one or the other behavior. This would be a good discussion to have in stage 2, to see if we can reduce the divergence. I think that summarizes the main point that we had.

KG: It's a great deal like eval, in terms of - it creates its own scope for variables to be declared in, you get the completion values the same way you do from eval. It just restricts what you could write, but in principle, I believe that just switching this out with a strict direct eval would be pretty much identical semantics.

YSV: A few checks would be for forbidding things such as loops. I think that this is something we can nail down and determine just how significant this implementation complexity will be once we get into stage two.

KG: Yes. Thank you. And you know, I'm not dead set on having these restrictions. I just think that they will - well, I'll talk about that a little more later.

WH: I just find the restrictions to be very inconsistent. Some things which are not necessarily problematic are banned, while things which are problematic, like var, aren't banned. I strongly believe that you should not be banning loops at the end of this thing. Similarly, it's very useful to be able to return out of these things. Without support for return, this essentially duplicates the functionality of immediately-invoked function expressions. The thing that would make this construct carry its weight is the ability to just return. And I note that you have a proposal for a second do which does allow return…

KG: Async do is not intended to allow return.

WH: You wrote that it's kind of hard to ban (via yield) so you didn't ban it.

KG: No, I did not, or if I did that was a typo. I don't believe async do allows return. Certainly it was not my intent.

WH: Anyway, I'm not happy with the restrictions here, and I don't think it meets the bar with those restrictions in place.

KG: So let me talk about a couple of these. The loops restriction I think is kind of crucial, because it is too confusing to allow loops. I realize that there is a semantics which some people think is obvious, but just from following the discussions, so many people are surprised by the completion value of this that it seems allowing it trips up more people than would be served by it.

WH: For trailing declarations, I agree. For loops, I strongly disagree.

KG: Okay, so I could be persuaded to allow loops here if a lot of other people share your sentiment.

MM: I strongly disagree with WH. I think that banning loops is really essential.

WH: Why?

MM: because of the point KG already made which is that people will form conflicting intuitions of what the obvious answer is, and therefore will cause unnecessary surprises. If you ban them, then people will put an expression statement at the end to have the value be exactly what they intend. And the key thing about that is code is read more often than it is written and the expression statement at the end is a little bit more trouble for the author, but makes it very clear to all readers exactly what's being returned.

WH: Okay. So Mark, what is what would you expect the completion value of a while loop to be?

MM: Because the body might be executed zero times, I would expect it to be the value of the head.

WH: I can apply the same argument to say that you should not have an if statement at the end of it. A missing else clause might not be executed.

MM: I understand the form of the argument. I think some of this is aesthetics and our experience as programmers and as someone communicating to programmers. I just think that people will more often arrive at the wrong conclusion with while loops than they will with if statements.

WH: I disagree; if you're going to ban while loops because the body might not be executed at all, then you should also ban if statements without else clauses.

KG: My concern was less that they might not be executed at all than that people would expect to collect all of the results into an array: that every completion of the body of the while loop would be collected into an array of length equal to the number of times the loop body executed. Several people have said that this is a thing they want from loops in do expressions. I agree that's not what they do, but it seems to be what a lot of people think they should do, and that is why I am proposing to ban them.
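A sketch of the divergent intuitions being described:

```js
const xs = do { for (const n of [1, 2, 3]) n * 2; };
// Some readers expect [2, 4, 6], collecting each iteration's value;
// actual completion-value semantics would produce 6. The proposal
// sidesteps the disagreement by making this an early error.
```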

JHD: This is my queue item as well. My experience is that nobody has common intuition about this thing and it also varies across languages. Some languages, I believe, do produce an array. It's one thing to say that there is an intuitive answer and then decide to ban it or not and argue about that. But the issue I see is that there isn't an intuitive behavior.

WH: Are you allowing them inside async do?

JHD: I assume that anything that is allowed or banned in one must be the same in the other; anything else, I would argue, invites confusion.

KG: That's not precisely the case in general, but certainly for these syntactic forms the loops at the end are banned. The proposal is to ban them at the end of an async do just as for a regular do.

WH: Okay, and I'm still opposed to this due to the return issue.

KG: Yeah, I see both sides of that argument very clearly; it is a very useful thing. The thing that I will say is that there's a lot more complexity there than in the basic version of the proposal, because it allows you to do something that you genuinely can't do currently, which is put a return statement in, say, a for loop update. I've written this proposal in a way which would allow us to relax that restriction later, if people start using do expressions and then start reaching for this and finding that they want to write return in a do expression. So my hope is that we could advance this proposal with this restriction, and then, should it prove that this restriction is too burdensome, we could do the work necessary to figure out exactly what its semantics would be in all cases and allow it.

BSH: So I was just wondering if it would simplify things if you just said that the final statement of the block always has to be an expression; then it's obvious what the value would be.

KG: So both of my examples - in this case the final statement isn't an expression, it's a try/catch, in exactly the same way that the final statement of this one is a for-of. I think we should allow this case. This is one of the main cases that I want to write, so I am not excited about the prospect of not allowing people to write it.

BSH: Is it mainly the try catch case that you are concerned about?

KG: The other big thing that I hear from people when I tell them about this is that they're excited to replace their nested ternaries with an if/else-if/else-if/else.
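For instance, the kind of replacement for nested ternaries people describe (a sketch; `n` is a placeholder):

```js
const label = do {
  if (n < 0) { "negative"; }
  else if (n === 0) { "zero"; }
  else { "positive"; }
};
```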

BSH: Okay. Yeah.

LEO: I want to express strong support for this conservative approach with loops. We are limiting what it can do and we are limiting weird intuition. I think this is very good for a stage 3 experiment, and if we do find demand for anything that looks like a for loop at the end of do expressions, we can remove the ban on loops; if we allow them now, we are not going to be able to remove them later. And yes, there is confusion here: if we go check with regular JavaScript developers what they think this loop is going to do, people are going to be confused. There will be a mix of answers, and it is misleading. JavaScript can be misleading, and I think this is a conservative approach that removes potential churn from this feature. I was against it before, but I was convinced. This is a good approach.

KG: Cool. Thank you. Can I get a time check? What do I have left in my time box?

BT: Like one minute?

KG: Okay, great. Let's go to TAB real quick. Wait, TAB is perhaps no longer here. I don't know. I see a microphone - okay. Well, I will read the question from the queue and answer it to the best of my ability. "Is the return/break ban for technical reasons, or just style?" Technical reasons and style reasons. The style reasons are that a lot of people, myself included, think it will make code review much more difficult when do expressions get this extra power. The technical reason is that there are a lot of contexts where it's not obvious what should happen. If you put, for example, a continue in the update of a for statement, what does that do? I don't want to have to find an answer to that. We could find answers to all of these questions, but my preference is to avoid having to answer questions about what break and return in e.g. loop heads do. But it is mostly a style thing. I think it will make reviewing code more difficult.

TAB: I can see break and continue being more difficult in certain cases. It's still a little bit frustrating that they wouldn't be allowed, but return seems like it's no more problematic than throwing: you can already exit an expression anywhere because of a call throwing or something, and this seems like the exact same thing. I mean, returning is not an exceptional case, so it's more anticipated; but it still seems worthwhile to bring up the ability to early-return in the middle of these things.

KG: Like I said, I can definitely see the argument for it.

TAB: I wouldn't block anything over it. I disagree with the restriction, but I'm happy enough for this to progress with it in place for now, if it's a sticking point.

KG: So I think it is a sticking point for now. I think it is reasonably likely that if we put this out into the world people will come to us and say we want to do early return and then we should explore it at that point.

TAB: Right, okay.

KG: So I see from MM a clarifying question about yield implying return. I agree that it is technically true that in a generator you can already put a return in arbitrary expression positions, by having a yield expression and then having the generator's `.return()` method called on the instance to inject a return completion into the middle of the generator. I don't think this is a fact that JavaScript devs are likely to be familiar with, and it is also restricted to generators rather than arbitrary functions, so I am very hesitant to generalize from its example. I have a point about this in my README, in fact. If you want to say more, MM, go ahead.
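For reference, the existing generator behavior being alluded to:

```js
function* g() {
  const v = yield 1;
  console.log("not reached if .return() is called while paused above");
}
const it = g();
it.next();     // { value: 1, done: false }; paused at the yield
it.return(42); // injects a return completion at the paused yield:
               // { value: 42, done: true }
```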

KG: So I hope that we can clear the queue, but if someone has to leave and they really want to be here for this discussion, please speak up. Otherwise, I think this should hopefully be just a couple more minutes. The remaining things on the queue seem like they're just agreeing with the decisions in the proposal, so I'm hoping that we can go straight to asking for stage 2.

BT: Okay, any objections? Stage 2 for Do-Expressions?

KG: WH is objecting. Okay, is it specifically the lack of break, continue, and return?

WH: Yeah, it's the lack of return as well as the inconsistent treatment of trailing statements. A consistent solution would be to ban any trailing statement which produces an implicit undefined. So were you to ban both loops and if without an else, I'd be okay with that.

KG: That was not the reason that I banned loops; it was not that they could produce an implicit undefined. But if you say that it's necessary to also ban this case to make you happy for stage 2, I'm okay with doing that. Okay, so I guess I'd like to ask for stage 2 with this case that is currently on the screen ("if: a little weird") also being banned.

WH: And the other sticking point is return.

SYG: Okay. What is being proposed?

BT: We can't keep track of what's in and what's out; I think that's not going to work. But I also get the sense that there are those who would have objected to this proposal with the changes that WH was proposing. So I think maybe it would be good at this point to put together a table of what's in and what's out, and circle back with those who were vocal about what should be in and what should be out; if there's a tenable middle ground somewhere in there, maybe we can circle back real quickly and get stage 2 if we have time at the end.

KG: I don't have high hopes, but sure.

BT: Okay. All right. So it seems like it won't get stage 2 today, but we'll see if we can finagle something.

Conclusion

Proposal does not advance.

Reason: Conflicting requirements about return and loops need to be resolved.