fix reducer scale override confusion #1367
Conversation
It’s still unfortunate that TypeScript can’t catch the problem with:

```js
Plot.barY(
  penguins,
  Plot.groupX(
    {y: "count", fill: {reduce: (values) => d3.mode(values), scale: true}},
    {x: "species", fill: (d) => (d.island === "Biscoe" ? "orange" : "green"), fy: "sex"}
  )
).plot()
```

The problem is that TypeScript interfaces are loose; you’re allowed to define additional properties, e.g.:

```ts
const r = {reduce: (index, values) => d3.mode(index, (i) => values[i]), foo: true};
const s: Plot.ReducerImplementation = r;
```

You only get the strict validation when the type is strongly implied:

```ts
const r: Plot.ReducerImplementation = {reduce: (index, values) => d3.mode(index, (i) => values[i]), foo: true};
```

Fundamentally, the issue here is ambiguity. We use a lot of duck typing in Plot, which is great for easy extensibility but leads to ambiguities. As we saw before, it’s easy to confuse an array (which has array.reduce) with a ReducerImplementation; there’s a similar potential confusion between an array and a MapImplementation (array.map). And #1310 introduced a confusion between a ReducerImplementation and a reducer scale override. Here are some ways we could reduce ambiguity:

1. Rename the reduce method to something more specific (e.g., applyReduce).
2. Use a unique symbol (e.g., Plot.reduce) instead of a method name.
3. Use a name other than “reduce” for reducer scale overrides.
4. Require reducer implementations to extend a base class (Plot.Reducer).
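To make the array ambiguity concrete, here is a minimal sketch (not Plot’s actual code) of the kind of duck test that goes wrong: an array also exposes a reduce function, so a naive check cannot distinguish it from a reducer implementation.

```javascript
// A naive duck test for a reducer implementation (illustrative only):
// anything with a "reduce" function passes.
function looksLikeReducer(value) {
  return value != null && typeof value.reduce === "function";
}

const myReducer = {reduce: (index, values) => values[index[0]]};
console.log(looksLikeReducer(myReducer)); // true, as intended
console.log(looksLikeReducer([1, 2, 3])); // also true: arrays have array.reduce
```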
Regarding backwards compatibility, we could continue to support the existing duck types but only declare the new types in our TypeScript declarations. And (3) is outright compatible because we haven’t released reducer scale overrides yet; however, if we don’t use the name “reduce” for reducer scale overrides, that will be inconsistent with the reduce option we use in other contexts (area, line, and auto mark options; imputing scale domains from channels; window transform options). Changing those would be very disruptive. I suppose we could simply break compatibility, too, given that reducer and map implementations are rare—but I’d like to avoid that to make it as easy as possible for folks to keep upgrading.

So, disregarding (3), how do the other options look? Let’s consider the simplest identity reducer. Today that is:

```js
const reduceIdentity = {
  reduce(I, X) {
    return take(X, I);
  }
};
```

Under (1):

```js
const reduceIdentity = {
  applyReduce(I, X) {
    return take(X, I);
  }
};
```

Under (2):

```js
const reduceIdentity = {
  [Plot.reduce](I, X) {
    return take(X, I);
  }
};
```

Under (4):

```js
const reduceIdentity = new class extends Plot.Reducer {
  reduce(I, X) {
    return take(X, I);
  }
};
```

(2) is attractive, especially with JavaScript precedent from Symbol.iterator, Symbol.asyncIterator, Symbol.toPrimitive, etc. But it means that you can’t mix-and-match implementations across different versions of Plot (because if you load multiple versions of Plot, each will declare its own unique symbols). And there will always be non-symbolic duck interfaces in Plot that we inherit from D3, such as intervals and projections; I don’t think we can or should expect that all duck typing will use strict symbols.

(4) has the same problems with multiple versions of Plot. And the syntax is clunky… it feels strictly worse than (2).

So I guess that leaves (1)? And (1) I think is fairly easy to implement. The additional name “apply” is a little clunky, but it’s not too bad.
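For context, the snippets above assume Plot’s internal take helper; a minimal stand-in (assuming the real helper is equivalent) makes the identity reducer runnable:

```javascript
// Stand-in for Plot's internal take helper (an assumption; the real one
// lives in Plot's source): materialize the values at the given index.
const take = (values, index) => Array.from(index, (i) => values[i]);

// The identity reducer under the current interface:
const reduceIdentity = {
  reduce(I, X) {
    return take(X, I);
  }
};

console.log(reduceIdentity.reduce([0, 2], ["a", "b", "c"])); // ["a", "c"]
```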
Yes, (1) seems like the way to go. In terms of words one could use: summary, summarize, aggregate, agg; with a more biological metaphor: condense, digest, extract, distill… with a picture/camera metaphor: snap, depict, describe. applyReduce (and variants such as reducer, indexReduce…) has the merit of not introducing a new concept.
I am now thinking “reduceIndex” because it is more explicit and has the advantage of a shared prefix with “reduce”, and hence better autocomplete.
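A sketch of what a custom reducer might look like under the “reduceIndex” naming (a hypothetical example; reduceMode here is not a Plot API): because arrays have no array.reduceIndex, the duck test becomes unambiguous.

```javascript
// Hypothetical custom reducer under the "reduceIndex" naming: computes
// the most frequent value among the grouped indices.
const reduceMode = {
  reduceIndex(index, values) {
    const counts = new Map();
    for (const i of index) {
      counts.set(values[i], (counts.get(values[i]) ?? 0) + 1);
    }
    let best;
    let bestCount = -1;
    for (const [value, count] of counts) {
      if (count > bestCount) (best = value), (bestCount = count);
    }
    return best;
  }
};

console.log(reduceMode.reduceIndex([0, 1, 2, 3], ["a", "b", "a", "c"])); // "a"
```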
* fix reducer scale override confusion
* DRY test
* more specificity for map and reduce implementations
* Update README
* revert mark duck test
* fix scope and label with deprecated reduce implementation
Fixes #1363.
Strictly speaking this is not backwards compatible because now a channel that is expressed as {value} will no longer be interpreted as a channel value spec (with an undefined scale); but, that means we can use a consistent test for both channel value specs and channel reducer specs. I’m a little worried that this is now not consistent with Plot.auto, though. 🤔

The alternative would be to check the length of the reduce function, which is what I did first…

Update: This is now backwards compatible.
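The arity check mentioned above (“check the length of the reduce function”) can be sketched as follows; the object shapes here are illustrative, not Plot’s exact types. Function.prototype.length reports the declared parameter count, which distinguishes the two-argument implementation signature from a one-argument grouped-values reducer.

```javascript
// Distinguishing the two shapes by arity (a sketch, not Plot's code):
// an implementation-style reduce takes (index, values); a grouped-values
// reducer takes just (values).
const implStyle = {reduce: (index, values) => values[index[0]]};
const valueStyle = {reduce: (values) => values[0], scale: true};

console.log(implStyle.reduce.length); // 2
console.log(valueStyle.reduce.length); // 1
```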