Inferring return type (`List<Object>`) for a function that actually evaluates to `List<Enum>` #3290
Good catch! (FYI, this is related to a broader topic and an old discussion, cf. https://github.com/dart-lang/language/issues?q=is%3Aopen+is%3Aissue+label%3Aleast-upper-bound). We're computing the least upper bound of the types of the two cases. This algorithm computes the shared superinterfaces of the operands. As far as I can see, the CFE gets an incorrect result and concludes that the list is a `List<Object>`. @chloestefantsova, WDYT?
This seems to be a repeat of dart-lang/sdk#51730. There is no bug; every implementation behaves according to the specification. They just don't agree on what the class hierarchy is. And since the upper-bound operation depends on the depth of a type in the class hierarchy, that means they produce different results.

I'm moving this issue to the language repository, because we might have to specify which behavior is actually correct, or prioritize our work on a better upper-bound algorithm. The introduction of switch expressions has increased the pressure on the upper bound computations, so the existing flaws show up more often. (Vastly more often, based on issues filed.)

(One possible small step: Remove inaccessible types from the sets before doing LUB on them. That would mean that the UP function behaves differently depending on which library it's used from, but it may also mean it can more easily be computed modularly, using only public summaries of other modules.)
This still sounds like the CFE should remove them.
We can do something special-cased for the enums. We could probably also add some unnecessary superclasses to confuse things further. Like:

```dart
import "dart:convert";

void main() {
  var r = DateTime.now().millisecondsSinceEpoch < 0;
  var l = r ? AsciiEncoder() : Latin1Encoder();
  print([l].runtimeType); // JSArray<_UnicodeSubsetEncoder>
}
```

Here we leak that two unrelated encoders have a shared superclass in their implementation.
That's a different topic, because that's regular Dart code. We might compute least upper bounds in such a way that private declarations in different libraries are always removed from the set of potential results. However, I doubt that we would actually do that. At least, we'd need to double check that we don't get results which are too weird if we do that. Anyway, that's one discussion; the discussion about the enum hierarchy is another one.
It's not specified which class the EnumImpl is, or what its depth is. It doesn't say "the same as `Enum`". That's just highly annoying when UP behaves differently depending on that implementation choice.

You might be right that the best solution to this problem in isolation is to formally specify that the EnumImpl class should be ignored completely for UP computation: not included in the superinterfaces, and not counted in the depth computation.

But there is still the general problem that the behavior of the classes that a library exports, wrt. UP, depends on internal implementation details, which may make it a breaking change to add or remove a private superclass. Something which should be completely invisible to anyone who cannot name the supertype.

Or, short: depth is part of your public API, and it shouldn't be. The set of public supertypes is part of your API; private ones shouldn't be, they should be implementation details.
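The point can be demonstrated with a small model. The following is a Python sketch (the graphs, helper names, and the `lub` function are my own simplification of the depth-based rule, not any actual Dart implementation): moving two public classes onto a shared private superclass, a supposedly invisible refactoring, changes the UP result that clients observe.

```python
# Each class maps to its direct superinterfaces.
# Before the refactoring: P1 and P2 implement the public interface I directly.
before = {
    'Object': [], 'I': ['Object'],
    'P1': ['I'], 'P2': ['I'],
}
# After the refactoring: both extend a private _Impl, an internal-only change.
after = {
    'Object': [], 'I': ['Object'], '_Impl': ['I'],
    'P1': ['_Impl'], 'P2': ['_Impl'],
}

def lub(graph, t1, t2):
    def supers_of(t):
        acc = {t}
        for s in graph[t]:
            acc |= supers_of(s)
        return acc

    def depth(t):
        return 0 if not graph[t] else 1 + max(depth(s) for s in graph[t])

    # Collect the shared proper superinterfaces, grouped by depth.
    shared = (supers_of(t1) & supers_of(t2)) - {t1, t2}
    levels = {}
    for t in shared:
        levels.setdefault(depth(t), []).append(t)
    # Walk from the deepest level down; pick the first level with a unique type.
    for d in sorted(levels, reverse=True):
        if len(levels[d]) == 1:
            return levels[d][0]

print(lub(before, 'P1', 'P2'))  # I
print(lub(after, 'P1', 'P2'))   # _Impl: the private class leaks as the bound
```

Nothing visible to clients changed between the two graphs, yet the computed bound did, which is exactly the depth leak described above.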
Right (and I should have looked it up ;-), but I think it would be useful to avoid that the type of an expression is implementation dependent, and we seem to agree on that:
This behavior is specific to
We could do that. However, it might actually be simpler to change the analyzer to take that into account. In addition to this, we also have the other discussion, about the observability of private classes in a hierarchy:
I think it would be a rather substantial undertaking to make it unobservable that public classes (or classes from the current library, which may be private or public) have superinterfaces that are private. We can, essentially, inspect the superinterface hierarchy by observing the response from tools like the analyzer about subtype relationships.

Next, it gets complicated if those private superinterfaces add new members (or override existing ones with a new signature) — in this case the developer needs to know that those new members (or new signatures) exist, and it will easily get out of hand if we start to pretend that those declarations are located in one/some of the public classes in the hierarchy. For instance, it would be weird if we pretend that a private class from the current library doesn't have the declarations that we can actually see in the source code.

As usual, I think we should admit that the world has a particular structure (including: this class has a superinterface which is a private class), rather than trying to stitch up a picture that might seem "more appropriate" based on some abstract principles.
The last case doesn't contain instructions as clear as the other ones. I think that the issue is related more to how an implementation applies UP than to UP itself. I'm wondering how the implementation proceeds given T1 and T2.
The actual type hierarchy is:

```mermaid
graph BT;
  E1["E1"] --> EM1["_Enum&M_1"]
  E2 --> EM2["_Enum&M_2"]
  E1 --> I
  E2 --> I
  EM1 --> Enum_["_Enum"]
  EM1 --> M
  EM2 --> Enum_
  EM2 --> M
  M --> Enum
  Enum_ --> Enum
  Enum --> Object
  Enum_ ~~~ I
  I --> Object
```
As you can see, the intersection (everything above the two lowest levels) has two types at each level before reaching `Object`. And that is why I'd recommend removing the inaccessible types from consideration.
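To see the collapse concretely, here is a small Python model of the diagram above (a sketch: the names follow the diagram, and `lub` is a simplified rendering of the depth-based rule, not an actual Dart implementation):

```python
# Each type maps to its direct superinterfaces, following the diagram.
supertypes = {
    'Object': [],
    'Enum': ['Object'],
    'I': ['Object'],
    '_Enum': ['Enum'],
    'M': ['Enum'],
    '_Enum&M_1': ['_Enum', 'M'],
    '_Enum&M_2': ['_Enum', 'M'],
    'E1': ['_Enum&M_1', 'I'],
    'E2': ['_Enum&M_2', 'I'],
}

def supers_of(t):
    """Reflexive transitive closure of the superinterface relation."""
    acc = {t}
    for s in supertypes[t]:
        acc |= supers_of(s)
    return acc

def depth(t):
    return 0 if not supertypes[t] else 1 + max(depth(s) for s in supertypes[t])

def lub(t1, t2):
    shared = (supers_of(t1) & supers_of(t2)) - {t1, t2}
    levels = {}
    for t in shared:
        levels.setdefault(depth(t), []).append(t)
    # Deepest level first; only a level containing a single type can be chosen.
    for d in sorted(levels, reverse=True):
        if len(levels[d]) == 1:
            return levels[d][0]

print(lub('E1', 'E2'))  # Object
```

Both depth 2 (`_Enum`, `M`) and depth 1 (`Enum`, `I`) contain two shared types, so every level below `Object` is skipped and the bound collapses to `Object`.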
Okay so I see two issues here:
This means that even if we devise a new algorithm, we still have to choose whether the new algorithm will exclude inaccessible entities from the process, because we cannot rule out the possibility that an algorithm:
The right thing to do here would be to keep every entity in the search space and allow the algorithm to return an entity if and only if the entity is accessible in the library where it's about to be inferred.

The current algorithm, as it stands, tries to select the type that is common to both hierarchies and has the largest unique inheritance path. This means that inference can and will fail in certain cases, which is OK. However, the part which is NOT OK is that failures of this approach will never be graceful. As it turns out, this is also the root cause of most of the issues we're facing. Of course you can choose to apply an easy fix and make the current issues go away if that's not a priority item.

Anyway, here's an alternative to LUB that I can think of:

```
class A extends T1, T2...Ti with M1, M2...Mj implements I1, I2...Ik
```
Note: there could be cycles that must be avoided in this part, and I know that you know that, but I just wanted to let you know that I know. Anyway, using this, we'll get a decent version of our actual hierarchy:
And given
Not sure if it's correct, but I'll consider it wrong if it requires a fix for a few special cases – efforts must be doubled, not steps.
Thanks for spelling out an algorithm idea, @hamsbrar! I should say that we have to be very careful about changing algorithms like this one, because it is used during type inference and promotion and typing in general, and any change is likely to be breaking in a way that is pervasive (because type inference etc. are used everywhere), implicit (especially type inference, but also promotion), and subtle.

A few things to note:

The repeated application of step 3 (that is, step 3 and 4) will yield a set of superinterfaces of the operands. Step 3 and 4 will terminate unless the superinterface graph is cyclic (which is an error in its own right), so there is no termination issue for these steps.

One possible interpretation of the ordering relation R is that it is determined by a breadth first traversal of the declared superinterfaces. However, that ordering relation is radically different from the ordering which is used today (which is the 'depth' of each of the superinterfaces), and it can actually contradict the subtyping relationship:

```dart
class D {}
class E extends D {}
class F implements E, D {}
class G1 extends Object implements F {} // "Object < F < E < D".
class G2 extends Object implements F {} // "Object < F < E < D".
```

So I think this illustrates that there are many opportunities to get this wrong. In any case, do keep an eye on https://github.com/dart-lang/language/issues?q=is%3Aopen+is%3Aissue+label%3Aleast-upper-bound in order to see where this goes.
The issue that the legacy LUB is trying to solve is: given two types, which can each have an arbitrary (even infinite) number of supertypes, choose one which is a supertype of both, with an optimization goal of making it as "close" to the original types as possible. And do so in a deterministic, consistent, and efficiently computable way.

The first step it takes is to ignore all the supertypes, and focus only on the shared super-interfaces.

Then, given the set of shared superinterfaces, it tries to find a measure for "closest" to the original types that it can optimize for. That was chosen, semi-arbitrarily, as the maximal depth of the type in the super-interface graph, rooted at `Object`.

Then it orders the potential results by their depth, and starting from the deepest potential solutions, it checks whether that depth level has a single type. If it does, it's the chosen type. If not, the entire level is skipped. The reason to skip the level is to avoid making an arbitrary choice, because "arbitrary" goes against the goal of "predictable".

Your design here is deterministic, but it still suffers from choosing arbitrarily, based on properties that should not have semantic effect.
So, if we were to just pick a type that both share, and do it predictably, consistently, and unaffected by arbitrary syntactic choices, instead of using randomness to choose, the original algorithm could say:
That's completely deterministic: it doesn't depend on how things are declared syntactically, and it doesn't depend on the order of the operands to the LUB operation. Its main issue is that it's arbitrary. And, heck, maybe that's OK. Sometimes it's better to get a result which is correct, and predictable, but not obviously the only possible choice, than to get a result which is useless in practice.

We can filter out inaccessible types first, if we want to avoid having them as result. (I'd want that.) (And it still cannot choose an anonymous mixin application, because any type which has an anonymous mixin application as superinterface will also have its singular subtype as superinterface, so we'll always find that instead.)

The biggest issue here is that it relies on depth, which itself depends on implementation choices which shouldn't be publicly visible.
I strongly disagree, and believe that ensuring that abstraction actually works, and prevents depending on implementation details behind the abstraction, is a cornerstone of providing a solid foundation for programmers.

The "public API" that I keep talking about is the totality of what your library is promising its clients: anything that they can depend on in such a way that changing it is breaking. The types, their subtyping behavior, their member signatures, and their dynamic behavior are all part of that.

Every part of your public API should be deliberate. And for that to be something authors can actually work with, it needs to be predictable. We generally say that what's inside a method body is invisible from the outside. That's why changing a method body isn't considered a breaking change.

A public class exposes its interface (set of member signatures), its static declarations, its constructors (and even whether some of them are generative, if the class can be extended) and its public superinterfaces. There is, or should be, no way for other libraries to detect the private superinterfaces. A library can break its own abstraction.

But if it doesn't, it should be able to rely on private types being invisible to other libraries, even their position in the type hierarchy. Refactoring should be safe. If it isn't, it's because we're breaking abstraction.

We should not be exporting the type hierarchy, just the superinterface relation. The complete type hierarchy is a red herring that is, itself, an implementation detail of our type system. It contains details that are implementation details to individual libraries. And depth breaks that abstraction. And it shouldn't. (Or we should stop using it.)
@eernstg I'll give another read to your message later. I just wanna say something about the example that you gave. Given the task:

```dart
class D {}
class E extends D {}
class F implements E, D {}
class G1 extends Object implements F {}
class G2 extends Object implements F {}
```

I'd turn this into:

```dart
class G1 extends Object implements F implements E implements D extends D {}
class G2 extends Object implements F implements E implements D extends D {}
//                      1.~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~ 2.~~~~~~~~~
// 1. Results from applying step 3 on F
// 2. Results from applying step 3 on E
```

And then remove all the noise:

```
G1 Object F E D D
G2 Object F E D D
```

And turn that into a list:

```
[G1, Object, F, E, D, D]
[G2, Object, F, E, D, D]
```

And then, applying the variant that I described, I will indeed get the same answer. I can choose to get all the possible results, too.
@lrhn wrote:
That sounds good, and I'd very much like to support that principle. However, I'd put forward an even stronger principle, which is that we should be honest about the technical properties of entities in the language as they are actually defined (and we may very well be unable to change them such that they fit any given abstract principle).

In particular, types that are private are not unobservable. First, they can be observed in the library where they are declared. I think it is at the very least a questionable idea that we should claim that they are unobservable in other libraries, if this causes such anomalies as the following:

```dart
// Library 'a.dart'.
class _Secret {}
class A1 extends _Secret {}
class A2 extends _Secret {}

bool b = true;
var a1 = A1();
var a2 = A2();
_Secret a = b ? a1 : a2; // OK, `UP(A1, A2) == _Secret`.
```

```dart
// Library 'b.dart'.
import 'a.dart';

void main() {
  a = b ? a1 : a2; // Error, `Object` is not assignable to `_Secret`.
  a = a1; // OK.
  a = a2; // OK.
  var aa = a; // OK, `aa` has declared type `_Secret`.
  aa = a1; // OK.
  aa = a2; // OK.
  aa = b ? aa : aa; // Error, `Object` is not assignable to `_Secret`.
}
```

You could say that 'a.dart' shouldn't export such declarations in the first place. The last statement in `main` shows that this would even affect an expression where both operands have the same private type.

Another issue is the ability for a private type to introduce public members:

```dart
// Library 'a.dart'.
class _Secret {
  void publicMethod() {}
}

bool b = true;
class A1 extends _Secret {}
class A2 extends _Secret {}
```

```dart
// Library 'b.dart'.
import 'a.dart';

void foo(A1 a1, A2 a2) {
  (b ? a1 : a2).publicMethod(); // OK.
}
```

If we acknowledge that private types are observable in other libraries, these behaviors are consistent.
That's a non-sequitur: "cannot name" is not the same thing as "is unobservable".

I actually think it's a good rule of thumb that public APIs shouldn't contain private types. However, the rules of the language should be based on the actual language, not on some subset of the language which is considered "good style". Moreover, I'm not quite convinced that this rule ("don't use a private type in a public API") is universally desirable. The fact that a type is private puts some restrictions on the ways in which it can be used, and there might be some programming idioms where it turns out to be useful.

In any case, if we had had the desire to ensure that private types are unobservable in other libraries then we should have had much more restrictive rules about private types. For instance, they should not be able to declare any public members, and it should not be possible to infer a private type from a different library as the type of any expression, etc. etc.
If there's a well-defined set of rules, and they have been implemented faithfully, and it is possible to express said API exactly as desired using those rules, then that's fine. In real life there will be cases where the rules don't quite fit the desired properties; for example, there may be a mechanism in another language that Dart simply doesn't support. The slack (that is, the difference between the ideal API and the actual API) may be a delicate balancing act between different trade-offs. You could say that this balancing act is always a deliberate choice, but I think it's equally fair to say that some API properties will be accidental (in the sense that "this is what we got", and we basically can't get exactly what we want).
That is not necessarily a violation of the abstraction. For instance, we can use a type alias to impose an extra discipline on the provision of actual type arguments (such that we always have type arguments of the form `X` and `X Function(X)`):

```dart
// Library 'a.dart'.
typedef FunctionHolder<X> = _FunctionHolder<X, X Function(X)>;

class _FunctionHolder<X, Y> {
  int Function(X) intFunc;
  String Function(X) stringFunc;
  _FunctionHolder(this.intFunc, this.stringFunc);
}
```

```dart
// Library 'b.dart'.
void main() {
  FunctionHolder<int> holder = FunctionHolder<num>(...); // Error.
}
```

This allows us to express the constraint that the type arguments must always fit that form. In other words, the ability to exert a certain amount of control over the usages of a private type from a different library can be useful, and hence the story doesn't end with "breaks the abstraction".
I want UP to depend on the context type too, so depending on the library is small stuff in comparison. 😁

As long as the superinterfaces do not contain any of your own library's private declarations, the behavior of such an UP would be perfectly predictable. And if they do contain some of your own library's private declarations, you'd be the one to know, and care, and not be surprised by their existence.

We can't know what users intend, but we can make generalized assumptions. A type being declared private is, 99.9% of the time, a type that other libraries shouldn't know about. Otherwise, why go to the effort of making it privately named at all? And absent a better hint, I think we should use that as a hint that giving such a type as the result of UP in another library is giving a useless result. And we should give a better result instead.
I was trying to see if we can do it more efficiently. The hidden version above was the first attempt; the second attempt lives as history in this message. The third version is the final variant that actually works (as long as we don't find any issues with it). It's capable of minimizing the number of valid results that the algorithm has to check. For instance, in the following example:

```dart
class A {}
class B extends A {}
class C extends B {}
class D extends C {}
class E extends D {}
class T1 extends E {}
class T2 extends D {}
```

The already proposed variants will try to find the best item from all valid result items (which in this example are `D`, `C`, `B`, `A`, and `Object`).

Algorithm
This variant ensures that the best item (the closest type which is common to the operand types) is part of L when we reach the last step (8), so the last step can be changed to one's liking or requirement. The idea here is that the best item, or the item that leads to the best item, is always part of the declarations of the operand types. The only way we can miss the best item using this approach is if we allow the best item to hide behind an invalid result item. By replacing an invalid result item with a valid one, we'll force the best item to turn up in L, the spot where we're dead set on hunting it down.

Note: this is assuming that there is no such entry already.

The question we're left with is whether this idea is sound (I think it is), and whether everything else works accordingly. If it's true, then this variant is capable of ignoring all the suboptimal results and selecting our best item in an efficient way. There's still room for micro-optimizations here and there, but those are the obvious ones, so I deliberately chose to ignore them.
@lrhn and I had a long discussion about this IRL. We still don't agree, but the discussion clarified some overall relevant points. The main reason why I want to treat private types like any other types is consistency. If we do that, then we should also do the following in order to be consistent:
We could also introduce the ability for a mixin to have no type of its own. For every member declaration D, it would then be forced to have superinterfaces such that there is a combined member signature with the same name as D and with the same types everywhere (same return type, no covariant specialization, same parameter types, etc). A mixin with no type could reasonably be ignored during the computation of upper bounds.

But we don't do that, and it would be a potentially massively breaking change. Moreover, the restriction on type aliases will gratuitously destroy a significant amount of expressive power. Surely there will be some useful designs relying on not having each of the other restrictions.

I'd very much prefer to have lints to support every developer who wishes to enforce these restrictions as far as possible. We already have library_private_types_in_public_api, and we could easily add an extra lint for public members of private classes, for inferred variable types which are private to a different library, etc. It should also be noted that if we had these restrictions in the language then the Dart 1 algorithm would be affected as well.

In short: let's use simple and consistent rules, and treat private types in a way that resembles the treatment they get elsewhere.
And my general argument against is that UP is not about being consistent. It's a heuristic function that tries to give a useful upper bound, while being deterministic, symmetric, non-arbitrary, and reasonably efficient. The "non-arbitrary" means that if there are two equally valid choices, it doesn't choose one of them. Not even if it could do it deterministically and symmetrically. And that might still be more of an artifact of being deterministic, symmetric, and fairly easily specified, than being an actual design choice. But it's a heuristic, which worked adequately when it was designed.

And so, heuristically, I think we'd be giving the user a more useful result in the very vast majority of cases if we omit types with inaccessible declarations from the set of possible results (and any anonymous mixin applications). Any type you can't name is more likely to not be a type that you can use.

The examples here, of private types that are usefully inferred outside of their declaring libraries, are code that would never pass code review. It's possible to write such code. It's just not a good idea, except perhaps in a very minuscule number of cases, and we shouldn't prevent ourselves from improving the majority of cases because of that.

And the added benefit would be that authors don't need to worry about their private declarations getting leaked in that way. They can still leak them in other ways, but that's on themselves. This was one way of leaking a private type that was mostly outside of the author's control, and not something lints would have much of a chance of detecting. Leaking them through your own API is much easier to detect.

(But I also think that a complete redesign, where we use the context type as well as the operand types, and go away from depth entirely, and possibly generalize to more than two operands, because UP is not associative, would probably be even better. In the design space, with smaller modifications to the existing UP, I'd remove the inaccessible types that usually just get in the way of finding another result. If we allow bigger changes, the sky is the limit!)
@eernstg, if we want to support both users, i.e. a user who is NOT OK with an inaccessible type and a user who is, then I don't think adding a simple lint will help. I understand that a user will be able to see the warning, but how are they supposed to fix it? Wouldn't there be a case in which the algorithm returns an inaccessible type because it was deeper than a type that is accessible, and now the user is seeing a warning that they can't fix?

I think this is where we'd have to provide a method by which a user can tell the algorithm whether they're okay with inaccessible types as results in their code. We could do that in analysis options or something, but then it'll get more complicated from here. What if a user is NOT OK with inaccessible types, but some of the packages that they're using are OK with them? Sure, we'll be able to find a way to proceed, but this is the point where more things, more features, will start depending on this behaviour. Which is OK TOO, but what if in future we find out that some of the features don't align with the user's ability to choose this behaviour, and our algorithm (which is pervasive) starts giving weird results? This is the point where we might want to roll back that decision and support just one user (could be any one), but now we're locked in a design that we cannot change because it'll break everything.

So it should be just one user (could be any one). Constraints don't travel through time, at least not as far as decisions do. In future, we might find ourselves in an environment where we have the flexibility to support both users, so this could be seen as a limitation for now.

(Also, I've added the optimized version #3290 (comment), just in case someone wants to see/evaluate the approach)
I think what is left here is the discussion about what to do next, and this is something I should stop meddling in. So I have no problem if the team thinks that they can provide assistance/support in both use-cases (the one where a user is OK with inaccessible types as results and the one where a user is NOT). I appreciate that team members (eernstg and lrhn) shared details with the public (like me), but it'll be okay too if the team decides to do the further discussion elsewhere (or turn it into an internal discussion). And I understand that such discussions can be time-consuming and may not always lead to a decision, so I'm perfectly fine with this issue remaining open, indefinitely.
I was trying to see if there is a way to proceed, but it turns out that my thinking got a bit muddled recently. The correct thing for an algorithm is to return multiple results if there are multiple results. If I'm trying to return a single result from an operation capable of producing multiple results, without consulting the user who initiated the operation, then my thinking is flawed. And if a user is expecting me to always guess exactly the result they're looking for, then their thinking is flawed as well. So if I allow users to supply types in a way that can lead to multiple results, then I should ask them which one to continue with whenever there are multiple results. I see that making an arbitrary choice for them is bad, but completely swallowing their results like some black hole and offering no explanation about their whereabouts is what I'd call "pretty damning" :)

Here's another approach: if there are multiple valid types that the algorithm can return, then it returns a type representing all of them. Additionally, a warning can help users avoid casting. Furthermore, I think the algorithm can decide to return a fallback in some cases.

Patchwork is for people like me who know nothing about designing these systems.
Here's one example where we're already making an arbitrary choice:

```dart
mixin M1 { void f() => print('f from M1'); }
mixin M2 { void f() => print('f from M2'); }

class A with M1, M2 {}
class B with M2, M1 {}

void main() {
  A().f(); // f from M2: the last mixin applied wins.
  B().f(); // f from M1
}
```

As you can probably see, the call to `f` resolves differently depending on the order of the mixin applications. I hope this isn't specified in the rule book, but if it actually is... Jesus H. Christ
It is specified. Mixin applications are ordered, so that later mixin applications can do `super` invocations of the members of earlier ones.
It just looked to me like the specification is OK with arbitrary choices if they're made for implementation convenience, but NOT OK if they're made for user convenience.
I still don't get why I'd want to do a `super` invocation from a mixin. Even if I add a type constraint on the mixin, this doesn't look like a feature to me, unless someone explains to me how it is one.
@hamsbrar wrote:
The lint isn't going to help a developer who is experiencing a typing which is an outright error, or perhaps just an inconvenience; it is intended to help the authors and maintainers of the types that we're computing an upper bound for. The assumption is that those authors and maintainers will want to avoid having certain structures in their type hierarchies. For instance, they might well want to make sure that no public type is a subtype of any private type. If that's true then we don't have to worry about how to deal with public types whose superinterfaces include private types.

If that's daily life, and we have a ruleset with no exceptions, then it's not going to be hard to reason about cases where it does happen after all (that is, some expressions have a static type which is private to another library). Such cases might turn out to be a useful design pattern, and we generally don't make anything an error in Dart just because we haven't thought of a useful way to use it yet.

In client code, it is always possible to force a particular solution if the inferred bound isn't the one you want:

```dart
class I {}
class J {}
class C1 implements I, J {}
class C2 implements I, J {}

var b = true;

void main() {
  var x = b ? C1() : C2(); // Must yield `Object`, which we don't want!
  var y = b ? C1() : C2() as I; // Disambiguate explicitly, `y` gets type `I`.
}
```
Sorry about the delayed responses. We do have lots of internal discussions (as well), and things can take time to settle. But do continue to contribute to any and all Dart discussions here if and when you want to do so!
Superinvocations in mixins are useful when you want to create a behavior from parts:

```dart
class A {
  final String name;
  A(this.name);
  String greeting() => name;
}

mixin Hello on A {
  @override
  String greeting() => 'Hello, ${super.greeting()}';
}

mixin Nice on A {
  @override
  String greeting() => '${super.greeting()}, nice to see you!';
}

mixin Join on A {
  @override
  String greeting() => '${super.greeting()}! Come and have cup of coffee!';
}

class B1 = A with Hello, Nice;
class B2 = A with Hello, Join;

void main() {
  print(B1('John').greeting());
  print(B2('John').greeting());
}
```

The mixins don't have to depend directly on each other; they just need to be designed for contributing to a behavior that has multiple parts.
No worries!
I think most users are going to write this:

```dart
class I {}
class J {}
class C1 implements I, J {}
class C2 implements I, J {}

var b = true;

void main() {
  var x = b ? C1() : C2();
  var y = b ? C1() : C2();
  // -----------------------------
  if (y is I) {
    // y.somethingFromI();
  }
  if (y is J) {
    // y.somethingFromJ();
  }
  // -----------------------------
}
```

Which is also good, so no issues here. They may be able to write this too, but maybe in the distant future:

```dart
void main() {
  var x = b ? C1() : C2();
  var y = b ? C1() : C2();
  // y.somethingFromI();
  // y.somethingFromJ();
}
```

One thing: it'd be great if UP errors carried more information, because in many cases they aren't related to user code.
This pattern looks fragile, but if that's what users want, then what can I say. Also, I admit that there's no other way to enable this pattern, since it depends on everything there is to depend on in the current design. I think a lint for detecting mixin applications having conflicting members could help users who are looking for more safety and predictability.
Thank you for sharing this one!
Good point! @bwilkerson, do you think there is a way ahead for telling users how the type of an expression was computed?

```dart
bool b = true;

void main() {
  List<num> xs = [1];
  Iterable<int> ys = [2];
  Iterable<num> zs = b ? xs : ys;
}
```

Perhaps it could be managed as a form of provenance on a type.
Question from the
What if I generalise their conclusion? Now, is this going to run into problems? And if not, then what would be the impact of this generalisation?
Check out #3282. ;-)
I did check it; the proposed version over there is somewhat similar to the generalisation that I made, but yours is definitely better, less restrictive, and isn't missing any details. The algorithm proposed in this thread can be extended with proper handling of type arguments while avoiding termination issues. That is, I can imagine the algorithm accepting parameterized types as well. If I'm not mistaken this time, the only problem that remains is arbitrary choices, and it's not related to the algorithms. It's a decision that other parts have to make.
If the second/third changes are not possible or infeasible, and not everyone likes the first change, then:
This will fix issues stemming solely from inaccessible types. (I'm not seeking immediate action; this is just an overview.)

Also, I can finally see how the depth function is defined (#3282), and it clearly suffers from choosing arbitrarily:

```dart
class A {}
class B {}
class B2 extends B {}
class T1 extends A implements B2 {}
class T2 extends A implements B2 {}

var t = (true as dynamic) ? T1() : T2(); // B2
```
The specification needs to stop shifting the goalposts and be transparent about arbitrary choices. If the specification doesn't want arbitrary choices, then it should say 'Select the type that is common and is more specific than every other common type' or 'Choose none'. An implementation can use the depth function to determine which type is more specific iff all common types are related; else it returns none (or falls back to `Object`).

(OR) If the depth function is allowed to make arbitrary choices elsewhere, then it should be allowed to make an arbitrary choice when there are multiple types at the deepest level.
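The choice in the example above is easy to reproduce in a small model. Below is a Python sketch of the depth computation for those classes (the helper names are mine, not from any Dart tool):

```python
# Each class maps to its direct superinterfaces, following the Dart example.
supers = {
    'Object': [],
    'A': ['Object'],
    'B': ['Object'],
    'B2': ['B'],
    'T1': ['A', 'B2'],
    'T2': ['A', 'B2'],
}

def transitive(t):
    """All superinterfaces of t, including t itself."""
    acc = {t}
    for s in supers[t]:
        acc |= transitive(s)
    return acc

def depth(t):
    return 0 if not supers[t] else 1 + max(depth(s) for s in supers[t])

shared = (transitive('T1') & transitive('T2')) - {'T1', 'T2'}
# A and B2 are unrelated siblings in the shared set, yet B2 wins purely
# because its depth (2) exceeds A's depth (1): depth acts as a tiebreaker
# even between types with no subtype relationship.
deepest = max(map(depth, shared))
candidates = [t for t in shared if depth(t) == deepest]
print(candidates)  # ['B2'] — unique at the deepest level, so UP picks it
```

`A` and `B2` are both common supertypes, and neither is more specific than the other, so selecting `B2` is exactly the kind of depth-driven arbitrary choice being criticized.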
@hamsbrar, thanks for the kind words!
Point well taken! On the other hand, the property of being relevant to the context doesn't appear to be algorithmic (unless it is expressed directly as a context type schema), so we probably can't even hope to eliminate the reliance on some property which is "arbitrary" in that sense. I think this would be yet another reason to keep this issue in mind: #1618.
"technically" :) even random numbers aren't random. |
At this time, a fix is on its way. In particular, this example is accepted with no errors:

```dart
enum E { a, b }

mixin M on Enum {}

abstract interface class I {}

enum E1 with M implements I { a1 }

enum E2 with M implements I { a2 }

List<Enum> f(E e) => switch (e) {
      E.a => E1.values,
      E.b => E2.values,
    };

void main() => f(E.a).forEach(print);
```

I'll close this issue because the fix is in the pipeline and will be available soon. Note that the example used to be related to #3665 as well, but this is no longer true (because the context type overrules the upper bound computation). Anyway, that issue is being fixed as well.
[Edit by eernstg: This issue revealed that the analyzer and the CFE treat the superinterface graph of an enum declaration differently. It should be decided which treatment is correct, if any, and implementations should then use that approach.]
There are no warnings from the analyzer. The compiler throws an error.

- Analyzer reports an issue if I remove the type constraint from the mixin `M`, i.e. if I use `mixin M {}`, not `mixin M on Enum {}`.
- Analyzer reports the same issue if either `E1` or `E2` stops mixing in `M`.
- Example works as expected if `f` is inlined.
- Example works as expected if either `E1` or `E2` stops implementing `I`.
- Example works as expected if there's only one match in the switch, i.e. it works if I use `switch (e) { _ => E1.values }`.
- Example works as expected if I use a switch statement, i.e. it works if I use `switch { case E.a: return ... }`.

The actual example (where I'm having this issue) is a bit large, but do let me know if it's required. Also, it could be true that I'm not using some of the language features as they are intended, and in that case it'd be great to see some relevant warnings.