
Figure-out what supporting Array-subclassing implies #345

Open
tobie opened this issue Apr 18, 2017 · 40 comments

@tobie
Collaborator

tobie commented Apr 18, 2017

The ES spec is now set up for things to actually be Array subclasses, but this hasn't been used in the platform yet.

Does this imply new syntax?

New bindings?

Are there specific requirements in terms of defining certain methods etc. that are required?

@domenic
Member

domenic commented Apr 18, 2017

Does this imply new syntax?

It might be a good idea. We could hijack : Array or similar, but that'd be special-casing things.

On the other hand, it's hard for me personally to start brainstorming new syntax without boiling the ocean, so maybe sticking with existing syntax is better?

New bindings?

Definitely.

Are there specific requirements in terms of defining certain methods etc. that are required?

We may want to figure out the story for isConcatSpreadable; that was added specifically so that legacy classes could subclass Array: http://stackoverflow.com/a/27024188/3191. E.g. if NodeList became an Array subclass, it would set isConcatSpreadable to false so that it wouldn't break code. However, NodeList can't really subclass Array anyway since e.g. it's read-only. It could do a fake-subclass via [LegacyArrayClass] + fixing isConcatSpreadable, maybe.

So new subclasses might want to just leave isConcatSpreadable alone.
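For illustration, a minimal sketch (NodeListLike is a hypothetical stand-in, not a platform class) of how Symbol.isConcatSpreadable changes concat() behavior for an Array subclass:

```javascript
// Hypothetical sketch: an Array subclass spreads into concat() by default;
// a legacy fake-subclass would opt out to preserve NodeList-style behavior.
class NodeListLike extends Array {}

const list = NodeListLike.of("a", "b", "c");

// Array subclasses are concat-spreadable by default:
console.log([].concat(list).length); // 3

// Setting the symbol to false makes concat() treat the whole
// object as a single element, as real NodeList behaves today:
list[Symbol.isConcatSpreadable] = false;
console.log([].concat(list).length); // 1
```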


I think our biggest issue is we need a consumer to help us shake out the bugs and edge cases. That used to be DOM's Elements, but that was removed since there was no implementer support (which was in turn at least partially because of the lack of Web IDL support).

The biggest question surrounds the fact that Array subclasses cannot control what type of items they contain. E.g. Elements could contain numbers or Nodes, not just Elements, via els.push(5) etc. So consumer APIs need to use sequence<Element> to get proper type-checking, and any methods defined on Elements need to properly skip or throw on non-Element members.

Resurrecting that spec might be enough, but having another spec as a second consumer would make me happier.

@annevk
Member

annevk commented Apr 18, 2017

(I'd revive Elements if IDL got this by the way.)

@bzbarsky
Collaborator

As I see it, we need the following bits here:

  1. A way to indicate that a named thing is an array subclass, not a normal interface. I would vaguely prefer dedicated syntax here, to prevent the problems we have with mixins and their syntax.
  2. Some decisions on what it means to do a JS-to-idl conversion for array subclasses. I suspect the answer is that there should be no such thing. That is, this would be the first type where there is some sort of meaningful idl-to-JS conversion but not a meaningful JS-to-idl conversion. I sometimes wonder whether we should have done this for FrozenArray too... In any case, the obvious way to use an Array subclass would be as a return value only, with sequence used as the corresponding argument type. What this means for writable attributes, unclear.
  3. Operations and attributes defined on an array subclass would need a somewhat different processing model from existing interface members. In particular, it's not clear that they would be able to do the sort of brand checks that interface members do, nor that it would be desirable to do them even if they could. My gut feeling is that such operations/attributes should be written in a way that works generically for any Array or maybe even arraylike "this" value.
  4. It's not clear to me whether we want to support some sort of additional internal slots on array subclasses. Not least because it's not clear to me how one would use them and how one would ensure they actually exist, etc.
  5. As @domenic says, we would want to think about how operations on array subclasses handle incorrectly typed elements. We could leave it up to every operation, or we could have some default behavior somehow or a few predefined options...
  6. We would want to think about how operations actually iterate over the array (length + indices, array iterator, something else?) and whether operations should define this themselves or whether we provide a default or a few predefined options.

NodeList can't become an Array subclass.

Between Elements and what CSS typed OM is doing I think we would have two consumers...

@domenic
Member

domenic commented Apr 18, 2017

A great list. I agree with all. Some extra thoughts:

For (1), to expand on my "boiling the ocean" comment: new syntax makes me think class X extends Array. But then I'm all like, "maybe we should switch everything to class ... extends ... eventually." And then I start thinking, "isn't it weird to mix 'old' syntax like readonly attribute with a JavaScript-aligned syntax like class?" So I dunno what the right path is here. Maybe it's best to forget about the larger aligning-with-JavaScript project in this context.

(3) sounds like a pretty disruptive change for implementations and spec-writing, but cool if we could get away with it.

(4) sounds like an OK "for now" limitation, but we'd need to check with our consumers...

(6) is quite interesting. I guess length + indices would be most consistent with existing array methods. https://tc39.github.io/ecma262/#sec-createlistfromarraylike is convenient but non-idiomatic in most cases; it would be useful if you need to snapshot some state before going "in parallel" though. In most cases you should probably perform work as you iterate, which definitely would benefit from a helper.

@tabatkins
Contributor

Any Array-like that doesn't let us type-check insertions is really missing the boat. With any other attribute type, I can ensure that the object is of a known type, and an author who messes that up will fail early, at the point they try to make the mutation. This makes it much easier to write specification prose, as I'm working with a known type, and it reduces the amount of type-checking that has to be done overall in the platform. (In particular, when passing the object to another API, that API can depend on the object's attributes all being of the correct type too, and doesn't have to repeat all the checks, recursively until you hit primitives.)

A non-type-enforceable Array subclass throws all this away. Spec authors have to be much more careful when writing their prose, which they will get wrong, and the platform has to recursively check that Array subclasses hold the right type every single time it passes into browser-land code. (Again, recursing all the way down to primitives, or at least to types that the engine knows ahead of time can't recursively-contain any array subclasses.) Page authors have a more difficult time too, because inserting a bad value won't fail immediately; instead it'll throw at some later point when the collection is passed to a platform API and fails its recursive typecheck.

It's seriously not worth subclassing arrays if we can't get this right. If this requires further ES changes (such as defining the [] hook that AWB proposed years ago), so be it, but let's not faff about with more half-solutions a la [LegacyArrayClass], FrozenArray<>, etc.

@tabatkins
Contributor

(For more detail on this recursive-checking failure mode, see w3c/css-houdini-drafts#239 (comment).)

@bzbarsky
Collaborator

I have to admit to being pretty confused at this point.

If someone wants typechecking on set, we have a mechanism for that: an interface with an indexed setter. That's a hook that catches integer-named property sets and does something. That is its whole point.

That object won't be an Array, obviously, because it has totally different semantics from Array, both in terms of indexed sets and in terms of length-mutation (it probably either disallows length mutation, or allows it but doesn't use undefined as the value for newly added entries). That's all fine: the desired semantics are explicitly different semantics from ES Array.

Now whether this is what web authors want, I can't tell you. AWB would claim that they want the "typecheck as late as possible" behavior, and that this is faster because you only do the typechecks when you really care about them and not at every API boundary, and further that any typechecks should be duck-typing, not branding. There are obviously philosophical differences about that...

But the point is, we have a tool for getting the observable behavior being described. If that's the behavior we want, and we have a tool for it, why not just use the tool?

@tabatkins
Contributor

But the point is, we have a tool for getting the observable behavior being described. If that's the behavior we want, and we have a tool for it, why not just use the tool?

Two reasons.

  1. I keep getting told that it's a legacy feature that no one should use.

  2. "It's not an Array because of some relatively arcane details that are barely even known among authors, let alone used" casually ignores all the other stuff that Arrays do that authors actually do use, and want to use on Array-like things. It's silly that we have to do [].slice.call(document.querySelectorAll(...)).map(...), just because the platform, as it currently stands, doesn't define map (and the slew of other Array methods) on Array-likes.

All I want is something that satisfies the following:

  1. Lets you iterate over it.
  2. Lets you use the various Array methods on it (map, forEach, filter, etc.) with reasonable (typesafe) semantics.
  3. Stretch goal: lets you do obj[1] = foo on it with typechecking.

Now whether this is what web authors want, I can't tell you. AWB would claim that they want the "typecheck as late as possible" behavior, and that this is faster because you only do the typechecks when you really care about them and not at every API boundary, and further that any typechecks should be duck-typing, not branding. There are obviously philosophical differences about that...

Given that this is contrary to the entire philosophy of WebIDL, I don't think AWB's opinion is particularly germane here. (Plus, his old proposal for a [] hook is precisely what would allow you to early-typecheck.) I refuse to entertain arguments that Array-likes are special snowflakes here that should behave differently from every other interface on the platform.


If the only way I can do all of this is to make this a Maplike with a k-v iterator that only uses integers for keys (imposing some semantics on what integers you can use), and manually define a mixin that duplicates the Array builtins, then by golly I'll do it. I'll be very unhappy about it, and I'm certain that y'all will be too, but I'm not going to define an API with a broken, slow, or needlessly-weak interface just because no one is willing to actually define something useful.

@bzbarsky
Collaborator

I keep getting told that it's a legacy feature that no one should use.

I think you are being told wrong.

It's silly that we have to do [].slice.call(document.querySelectorAll(...)).map(...)

You don't have to do that if you use [ArrayClass]. Yes, I know that's claimed legacy too.

That said, if you use [ArrayClass] and then use slice() or map() you will get an Array, not your thing. If you want to get your thing from slice() or map(), then you really do need an Array subclass, and then you can't control writes to it. Well, mostly. What ES allows you to do is to have your object be a Proxy whose target is an Array, and then you can still control construction of slice/map return values, plus of course you can control writes, because now you have all the proxy machinery at your disposal.

I know there are concerns about proxy performance being a problem, but we really can't have it both ways as things stand: either we have fast indexed writes that just blit memory and the JIT can optimize all it likes, or we have indexed writes that run some sort of verification code that the JIT knows nothing about conceptually, and then we end up taking slow paths unless the JIT is explicitly taught about this verification code and what its invariants are. That's all independent of whether we have a [] hook; the presence of such a hook would cause the JIT to deopt just as much as being a full-blown proxy does, until special fast paths are added there.

So in principle we could spec something like the maplike/setlike in the spec (which delegate to ES Map/Set after doing some sanity checks), by defining the object as a proxy, with an Array target, with Array.prototype on its proto chain. I'm pretty sure that will do the "right thing" when that object is this. What it does when the object is an argument, I'd have to think about. It's not clear to me whether this is a thing people want.

We could also try lobbying TC39 to have more of a carve-out in ArraySpeciesCreate for exotic objects that quack like array subclasses but aren't really, just like they already have for Proxy.

As far as your goals go, what do you consider reasonable semantics in this case for map, forEach, filter, slice, etc.? Or rather, do you consider the semantics of the corresponding Array.prototype methods to be unreasonable in this case, and if so in what way?

Is the point that you want IDL to predefine these methods for you, instead of you defining them yourself, with whatever semantics you want? (I'm not saying this is an unreasonable desire; just trying to figure out what the actual constraints are here.)

@tabatkins
Contributor

Sorry for the frustration coming thru in this reply, but this topic is getting me increasingly frustrated. I'm being given exactly opposing advice ("just use indexed getters/setters", "nobody should ever use indexed getters/setters") from the exact people I've been trained to trust on these matters, and it feels like there's a collective shrug about the terrible ergonomics this topic has inflicted and will continue to inflict on authors, which is extra-frustrating given the generally good ergonomics that have been granted to Maps and Sets in WebIDL.

@bzbarsky
Collaborator

You're not the only one frustrated by the state of this stuff, trust me. :(

There is definitely a very longstanding problem here in terms of disagreement about how arraylikes should behave, because different people seem to want very different things out of them. We've literally been arguing back and forth about this for years, and people can't even seem to agree on what "good ergonomics" means for arraylikes. :(

One thing we could do is have some way in IDL to just opt into behavior similar to what typed arrays have in ES. Those are not array subclasses, but they have various things like slice/map/filter/etc. defined on them, with special typed-array semantics that, unlike the Array versions, are not generic. They perform coercion (which can throw) on write. This sounds like more or less what you're asking for, right?

@tabatkins
Contributor

tabatkins commented Apr 27, 2017

That's all independent of whether we have a [] hook; the presence of such a hook would cause the JIT to deopt just as much as being a full-blown proxy does, until special fast paths are added there.

Presumably it would deopt into the same speed as any normal method call, yes? If that's the case, then I can stop worrying about this, and just use it. I've been given the impression, tho, that it actually de-opts to something substantially slower than an ordinary method call.

As far as your goals go, what do you consider reasonable semantics in this case for map, forEach, filter, slice, etc.? Or rather, do you consider the semantics of the corresponding Array.prototype methods to be unreasonable in this case, and if so in what way?

The Array.prototype semantics are "unreasonable" only in that they assume the object is an Array (with no arg-checking) and act accordingly. I want a type-restricted Array, with relatively obvious semantics (at least, other languages handle this pretty consistently):

  • forEach doesn't matter, it just consumes the Array-like and outputs nothing. It's fine as-is.
  • map should throw if the callback's return value doesn't type-check, and it should return a value of the original class
  • filter should return a value of the original class
  • etc.

In general, the semantics are "return an object of the same class, and if mutations/additions happen, typecheck and throw if necessary". In keeping with general WebIDL semantics, they should probably also brand-check their this value to be of the proper class, but I don't actually care about this; it should assume whatever answer is better for implementations.

One thing we could do is have some way in IDL to just opt into behavior similar to what typed arrays have in ES. Those are not array subclasses, but they have various things like slice/map/filter/etc. defined on them, with special typed-array semantics that, unlike the Array versions, are not generic. They perform coercion (which can throw) on write. This sounds like more or less what you're asking for, right?

I'd have to study the details, but the surface gloss you're giving me here sounds 👍.

@bzbarsky
Collaborator

Presumably it would deopt into the same speed as any normal method call, yes?

Normal DOM method call, as opposed to normal JS function call, right? It... depends. I can only speak for SpiderMonkey at this level of specificity, and there is a bit more overhead here in SpiderMonkey than a normal DOM call, but not a lot more.

Note that a lot of things that are DOM calls are not normal DOM calls, in both SpiderMonkey and other engines, in a variety of ways. But even the "slow path" vmcall really isn't that slow. As in, I'd need to see a use case to see whether the performance matters. Chances are it does not.

Your description of what you consider reasonable semantics sounds very much like what typed arrays do.

@domenic, @annevk thoughts on having a way for things with indexed getters/setters to opt into that sort of behavior? Honestly, we could just have iterable<V> on a thing with indexed getters induce all that stuff in addition to the things it induces now; we'd need to check whether we can define these operations in a way that works for existing consumers.

@tabatkins
Contributor

Normal DOM method call, as opposed to normal JS function call, right? It .. depends. I can only speak for SpiderMonkey at this level of specificity, and there is a bit more overhead here in SpiderMonkey than a normal DOM call, but not a lot more.

I don't particularly care about the difference? People are pretty happy to call JS functions all over the place; if [] get/set was at JS method speed I'd be happy.

Put another way, there's zero perf issues with the speed of Map.get/set function calls; if the Array-like's [] speed was similar, it would be just fine.

As in, I'd need to see a use case to see whether the performance matters. Chances are it does not.

Yeah, the perf case we're fighting against is "the UA constantly has to parse author-provided strings back into C++ objects" (the current CSSOM), so there's a lot of ceiling to work under.

Your description of what you consider reasonable semantics sounds very much like what typed arrays do.

Yes, looking at the MDN description, it does indeed sound like Typed Arrays have my desired semantics. They coerce rather than throw, but that's done via a pre-write hook anyway, and so can be changed to do whatever I need.

@bzbarsky
Collaborator

People are pretty happy to call JS functions all over the place

Right, JS (scripted) function calls are faster, generally, than either DOM calls or proxy hooks.

there's zero perf issues with the speed of Map.get/set function calls

OK. The performance of those is pretty comparable to what we'd see here.

@tabatkins
Contributor

Reading over the ES2015 spec, yes, Typed Array semantics are exactly what I want.

(Man, it's pretty hard to chase down where, precisely, the coercion semantics are defined. You have to go look up Integer Indexed Exotic Objects, which aren't obviously linked, and chase some definitions there before https://www.ecma-international.org/ecma-262/6.0/#sec-integerindexedelementset finally defines it.)

@tabatkins
Contributor

Circling back: so does this approach (copying the TypedArray spec, plus a few more Array operations that can change the length of the thing, which TypedArray didn't include for obvious reasons) sound good? If so, I'd like to put it on Tobie's todo list. ^_^

@annevk
Member

annevk commented May 10, 2017

I don't think there's agreement on adding more typed lists. We explicitly decided against that quite a few years ago.

@bzbarsky
Collaborator

I think we explicitly decided against auto-generating such lists via the old IDL Array mechanism.

I don't see why we shouldn't have interfaces that opt into looking like a typed array: more or less API-compatible with Array, but without actually being an Array subclass, so there is control over writes into the array. You can get there right now with indexed getters/setters and some manual definition of various methods; the only question is whether we should have a built-in IDL shorthand for this sort of thing.

@annevk
Member

annevk commented May 11, 2017

Because when we discussed those kinds of approaches with TC39, they told us not to and to just use Array. It wasn't just about []; it was also about FileList, NodeList, etc. And the problem with those was not that they lacked Array's methods; it was that they required a proxy.

And so then we investigated alternatives to current APIs and found out we could use Array and sequences (for input) just fine.

You could go back to TC39 I suppose and say you really want to do typed arrays again...

@tabatkins
Contributor

They don't need a proxy with this approach - that's the entire point of this thread's discussion. TypedArrays already intercept reads/writes done via the [] syntax; we'd be copying that for WebIDL array-likes.

@domenic
Member

domenic commented May 11, 2017

[] syntax interception is proxy-like in typed arrays as well. There's no way to intercept that syntax without proxies.

@tabatkins
Contributor

Whatever typed arrays do (and it's not defined Proxy-like in the ES spec), it's clearly Good Enough for the web platform, tc39, and engines for it to be specced, shipped, and used. The plan here is to lean on that exact mechanism.

Here are the alternatives:

  1. I say "fuck it" and just use indexed getters/setters, which definitely invoke a proxy. (And define a mixin that adds all the array methods.)
  2. I add .get(i) and .set(i,v) methods to the interface. (And define a mixin that adds all the array methods.)
  3. I use plain iterators, and users have to manually cast them to an Array, fiddle, then construct a new object from the Array, every time they want to mutate something.
  4. I use plain Arrays in an attribute of the class, and then have to insert at the top of every single function that takes a CSSNumericValue, CSSUnparsedValue, or CSSTransformValue "1. Check if the |foo| arg has any random crap in it, and throw a TypeError if it does.". Note that for CSSNumericValue this involves recursively searching an arbitrarily-deep tree: https://drafts.css-houdini.org/css-typed-om/#complex-numeric. (And as a result, the other two can also host trees, as they can contain numeric values.)

All of these suck to varying degrees. Copying Typed Arrays is the only option that doesn't have crap ergonomics, as it puts array-likes on the same footing as map-likes and set-likes, and every other interface in the web.

@domenic
Member

domenic commented May 11, 2017

It is defined proxy-like in the ES spec; they override the various internal methods.

@tabatkins
Contributor

Yeah, internal definitions can override each other; it's all spec-ese.

@bzbarsky
Collaborator

And the problem with those was not that they did not have enough methods like Array, it was that they required a proxy

Look. If we want to be able to do typechecking at obj[index] write time, we need a "proxy" in ES spec terms (or more precisely an object with non-default internal methods), because the default [[Set]] and [[DefineOwnProperty]] do not allow doing such a typecheck. So if we want to support this behavior at all, this is the way to support it. Telling people "just use Array" corresponds to telling them "yeah, you don't want to do typechecking on set". There are situations in which that's appropriate, and there are other situations in which it might not be. In the end, whether it's appropriate or not is an API design decision. There are some people in TC39 who think that any API that does such typechecking on set is automatically a bad API. I understand where they're coming from, but I'm not convinced they're right.

Now Web IDL already provides a mechanism for doing typecheck-on-set: indexed setters. The same people who think typecheck-on-set is a bad idea also think that indexed setters are a bad idea. This is not surprising.

I say "fuck it" and just use indexed getters/setters, which definitely invoke a proxy. (And define a mixin that adds all the array methods.)

To be clear, all I'm saying above is that we could have a simple Web IDL syntax for doing exactly this, so spec authors don't have to reinvent it from scratch (as @tabatkins notes we do just that for maplike/setlike). The typed array bit is just about the exact behavior of the mixins involved: we would have them behave like the relevant functions on typed arrays do.

@domenic
Member

domenic commented May 11, 2017

That's literally what it means to be proxy-like, is to override your internal methods (i.e. be an exotic object which does not use the default internal methods).

@tabatkins
Contributor

[] isn't magic; it desugars to some sort of property access just like .foo does. Full-on Proxies are more expensive than normal property access because they can do a lot of surprising things, and intercept all the traps. [] "proxying" like TypedArrays do is no more expensive than any other getter/setter pair, because that's literally what it is, just using [] instead of . for the syntax.

I feel like it needs to be reiterated, because y'all are pretending like this is something exotic and unheard of: Typed Arrays already do [] typechecking. The thing I'm asking for is already present in the platform and widely-implemented, and not, to the best of my knowledge, considered a mistake. What is making y'all so resistant to the idea of reusing that exact same mechanic?

@annevk
Member

annevk commented May 13, 2017

Typed arrays are considered a mistake of sorts. TC39 was certainly not happy with them, but it was also their own fault because they didn't add byte support in a timely manner.

@bzbarsky
Collaborator

My point is that people clearly want "brand check on set" behavior in some cases. The standard TC39 response is "you shouldn't want that", just like the standard TC39 response is "you shouldn't want that" any time brand checking is mentioned...

@tabatkins
Contributor

And, to be specific, we already very explicitly ignore that particular "you shouldn't want that" advice everywhere else in WebIDL; this one spot isn't specially privileged in that regard.

@tabatkins
Contributor

All right, finally sent the email to es-discuss to ask for this functionality.

@tabatkins
Contributor

Capturing for the future: https://lists.w3.org/Archives/Public/public-webapps/2009JulSep/1346.html is a 2009 thread where everyone in tc39 seems pretty uniformly in favor of allowing platform APIs to do "integer catchalls" (that is, Proxies that intercept get/set on integer keys). FileList and DOMTokenList are explicitly brought up as examples in that thread.

@js-choi

js-choi commented Jan 13, 2018

For history’s convenience, the es-discuss email to which @tabatkins referred above may be found here: “Intercepting sets on array-like objects”, 2017-06. Mark S. Miller, Allen Wirfs-Brock, Adam Klein, and Domenic Denicola participated in the thread. The thread does not seem to have resolved to a conclusion.

@tabatkins
Contributor

So at this point TypedOM is just using iterable<> and indexed getters/setters for its array-like interfaces, with the assumption that engines can optimize array-like accesses better than general proxies (which does seem to be the case in Blink and Gecko, at least).

This means that, per the current state of the world, the array-like interfaces still don't have access to any of the array methods - if you want to map over them, you still have to manually do [...foo].map(...), rather than foo.map(...) directly. This is less than ideal, and I'd like to get this sort of thing fixed sooner rather than later.

@tabatkins
Contributor

tabatkins commented Jan 31, 2018

To be specific here, the TypedOM use-cases are:

  1. CSSUnparsedValue - this is just a typechecked array-like, nothing more - the values must be DOMStrings or CSSVariableReferenceValues.

  2. CSSNumericArray - same, it just restricts its values to be CSSNumericValues.

  3. CSSTransformValue - this one's more complicated. It restricts its values to be CSSTransformComponents, but also has an is2D getter (which consults the is2D flags on its contents and combines them) and a toMatrix() method (which calls toMatrix() on its contents and combines them). This means it still doesn't actually have any state of its own besides the array-like contents.

All three of these are "dead" - updating them has no action-at-a-distance on anything else. (You have to actually assign the object to a property manually, at which point it "snapshots" the object into an internal CSS value disconnected from the JS object.)

Two of these (CSSUnparsedValue and CSSTransformValue) are mutable; CSSNumericArray is readonly, both in its contents and the bindings that it's assigned to (to prevent cyclic values and some other inconsistencies from occurring). This means it doesn't matter what sort of object is returned from .map() for CSSNumericArray, but for CSSUnparsedValue and CSSTransformValue we want to get the same type of object back, so it can be used directly again without having to manually pass the result to a constructor.

Wrt Boris's questions from earlier in the thread:

  1. I don't have any writable attributes yet, but two of these types are subclasses of CSSStyleValue, which is the type accepted by StylePropertyMap.set()/append(). That's the type that matters for us, not the array-ness of the value, so we do need a JS-to-IDL conversion story; we can't just rely on sequence<> auto-conversion. (And we will probably end up needing writable attributes at some point too, as we fill in more properties with special-purpose CSSStyleValue subtypes; as long as the types aren't recursive, there's no problem with making the lists writable.)

  2. The CSSTransformValue getter and method are currently written with the assumption that the contents are definitely going to be CSSTransformComponent objects; if they were applied to an arbitrary array the behavior is currently undefined. I'm thus fine with brand-checking still applying; I'm not really sure why we wouldn't brand-check in general.

  3. I'm not currently using any non-trivial internal slots on any of the classes. (That is, nothing beyond the implicit ones storing the contents and the original value of the methods, etc.) I'm not sure if that'll change in the future as we introduce new array-likes, so I can't help with figuring out how to deal with them yet.

  4. The operations obviously have to type-check. The constructors and indexed setters already do; there's no justification for not type-checking on any other mutator. There's an interesting question about when things are type-checked for .map() and similar - is it immediately as the callback returns each value, or at the end when an internal array of results is finally passed to the constructor? I have no opinion on this and don't think it matters for anything, so we can go with whatever's more convenient/sensible.

  5. As I've said elsewhere, I think it's most reasonable for array-likes to iterate the same way that arrays do. (I'd prefer the set-like iteration, but I think consistency is more important.)
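On point 4, the "check as the callback returns each value" option can be sketched by overriding map() on an Array subclass (TypedList and checkNumber are hypothetical illustrations, not anything in the spec):

```javascript
// Hypothetical validator for the list's element type.
function checkNumber(v) {
  if (typeof v !== 'number') throw new TypeError('expected a number');
  return v;
}

class TypedList extends Array {
  map(fn, thisArg) {
    // Validate each result as the callback returns it, rather than
    // deferring the check until a final pass through the constructor.
    return super.map((v, i, arr) => checkNumber(fn.call(thisArg, v, i, arr)));
  }
}

const nums = TypedList.of(1, 2, 3);
const doubled = nums.map(x => x * 2);  // ok: all results are numbers
// nums.map(x => String(x));           // would throw TypeError immediately
```

The observable difference between the two timings is only when the TypeError fires relative to later callback invocations, which is consistent with the comment that it doesn't matter much in practice.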


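For reference, on point 5, the difference between array-style and set-like iteration, shown with the plain ES built-ins:

```javascript
const arr = ['a', 'b'];
const set = new Set(['a', 'b']);

// Array-style iteration: entries() yields [index, value] pairs,
// and keys() yields indexes.
const arrEntries = [...arr.entries()];  // [[0, 'a'], [1, 'b']]
const arrKeys = [...arr.keys()];        // [0, 1]

// Set-like iteration: entries() yields [value, value] pairs,
// and keys() is an alias for values().
const setEntries = [...set.entries()];  // [['a', 'a'], ['b', 'b']]
const setKeys = [...set.keys()];        // ['a', 'b']
```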
Additionally, we probably want to require some consistency in the constructor/setter signatures - the primary constructor should always take a sequence<> (possibly with additional optional arguments), and the inner type of that sequence<> should be identical to the type of the indexed setter.

Related to this: what happens if the array-like currently has, say, three entries (for the indexes 0, 1, and 2) and userland code calls foo[4] = someValidValue;? In arrays this implicitly extends the length to 5, and asking for foo[3] will return undefined. I suppose things using indexed setters already have to deal with this; I need to go look at some examples to see what they do.
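For comparison, this is exactly what plain ES arrays do in that situation:

```javascript
const a = ['x', 'y', 'z'];  // length 3, indexes 0..2
a[4] = 'w';                 // writing past the end is allowed

// length is implicitly extended to 5, and the skipped index 3
// becomes a hole that reads as undefined.
a.length;  // 5
a[3];      // undefined
3 in a;    // false: index 3 is a hole, not a stored undefined
```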

@annevk
Member

annevk commented Jan 31, 2018

I think the main problem here is that you are very interested in type-checked arrays, but ECMAScript doesn't provide them and nobody else seems motivated enough to solve this problem. Furthermore, even a normal subclassed array would not help you, as some of these interfaces already inherit.

The other difference is that you are interested in having type-checked intermediate values, whereas the rest of the platform mostly performs that checking when values get applied.

Sometimes intermediate interfaces are available, such as Headers, but even then you can pass its constructor values straight to fetch(), Request, and Response too. If Headers were a list, it would have had a similar design problem to the one you face here. The advice I remember from TC39, "don't require a Proxy", still applies even in that case, and until someone actually does the due diligence of working it through with them it's hard to support this.

@tabatkins
Contributor

Note that my current design, which I'll be hopefully looking to standardize in some "suggested pattern" in WebIDL in the future, is to use indexed getter/setter/creator, all unnamed, with the creator only allowing creating the very next index (in other words, you can do foo[foo.length] = newThing, but nothing else). This is paired with an iterable<> declaration.

(See https://drafts.css-houdini.org/css-typed-om/#dom-cssunparsedvalue-length and the following algorithm block for an example of this.)
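The observable behavior of that getter/setter/creator pattern can be sketched with a Proxy (purely for illustration; as noted below, implementations don't actually use proxies for indexed properties):

```javascript
// Sketch of an array-like whose indexed setter accepts existing
// indexes, and whose "creator" only allows the very next index:
// foo[foo.length] = v appends, but any higher index throws.
function makeAppendOnlyList(items = []) {
  const values = [...items];
  return new Proxy(values, {
    set(target, prop, value) {
      const index = Number(prop);
      if (Number.isInteger(index) && index >= 0) {
        if (index > target.length) {
          throw new RangeError('can only set an existing index or list.length');
        }
        target[index] = value;
        return true;
      }
      // Non-index properties (e.g. length) behave normally.
      return Reflect.set(target, prop, value);
    },
  });
}

const list = makeAppendOnlyList(['a']);
list[1] = 'b';      // allowed: index === length (the "creator" case)
list[0] = 'A';      // allowed: existing index (the "setter" case)
// list[5] = 'x';   // would throw RangeError: skips indexes 2..4
```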

The advice I remember from TC39 is that even in that case, don't require a Proxy still applies, and until someone actually does the due diligence of working it through with them it's hard to support this.

I tried to go through TC39; currently the responses are "indexed getters/setters don't use proxies in implementations, they seem like the right way to do this" and "new APIs should use Arrays and not type-check", which contradict each other, and there was no response to my attempt at a follow-up. I'll ping the thread again.

@annevk
Member

annevk commented Feb 22, 2018

FWIW, I think unfortunately es-discuss is not representative of TC39 as many members don't participate there. You actually have to put your thing on the agenda and get someone to discuss it for you (or do it yourself, I saw you attended recently).

@bzbarsky
Collaborator

#840 might effectively resolve this...
