Proposal: Tuple deconstruction rules should be equivalent to method overload resolution rules #1998
Comments
What's your cite for this? I'd argue that not only should it not be allowed, but it also doesn't make any sense. Deconstruction of a tuple (or tuple-like type) should be strictly by arity, not by some combination of the types of elements.
The arity of a deconstructor is just a random number--it's an accident of how the type is factored and has nothing to do with its semantics. Consider, for example, a type 'Resource' that can be deconstructed into a (name, path) or a (name, guid) pair. The fact that the second pair was (name, guid) rather than some other random combination of implementation data structures is an accident, and I don't see any good reason to forbid this kind of behavior.
It's not a random number. Deconstruction is strictly positional and is based entirely on arity, not structure. Part of the design of deconstruction is recursive pattern matching, where each position can be further matched against a type, and overloading is not something that makes sense there. The idea that a type can be deconstructed into two separate tuple forms of the same arity but with different structures just doesn't make sense.
Here's an example where it might make sense - what if an instance of
As a determined fan of semantic types, I'd argue that the first form is superior - but I know (and respect!) developers who would be equally certain that the second form is the best. Currently, only one could be supported because of the arity rule. TBH, I'm not convinced that either form should be available - it seems to be bordering on misuse of deconstruction. Then again, I'm not wholly against it either ... #conflicted 😕
What makes you think that? I was gonna say nay on this feature because I thought that order doesn't matter when we deconstruct, but since it still does, I think it would make more sense to be consistent with methods. At the same time though, it could be confusing using someone else's code, as deconstructors aren't as easy to find out about as methods are, where you can just hit
Definitely worth reading the thread that @ufcpp links to (#1092). As @HaloFour says, by allowing multiple deconstructs with the same number of parameters, we'd cause all sorts of issues with recursive pattern matching. The latter is made far simpler by effectively allowing only one deconstruct per arity. It's a trade-off. Overloading of deconstructors would occasionally be useful. Recursive pattern matching will be incredibly useful. So it makes sense to favour language rules that benefit the latter over the former.
The current rules are necessary to support pattern-matching. The rules are intentional and a result of deliberate decisions in C# 7. |
I don't understand how this conflicts with pattern matching in the case where types are explicitly stated on the left-hand side. There is no nontrivial pattern matching in that case, and that rule does not conflict with more elaborate pattern-matching schemes. In those cases, the types would be absent from the left-hand side. |
Pattern-matching does not have a left-hand side. The more elaborate pattern-matching scheme is to have a parenthesized list of patterns to match, and we use the number of patterns to decide which Deconstruct to use.
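To illustrate the point about arity-driven selection, here is a minimal sketch (the `Resource` type and its members are hypothetical, assumed for illustration; they are not from the thread's original snippets). In a positional pattern, the compiler counts the sub-patterns and looks up the matching-arity `Deconstruct`; the sub-pattern types are matched only afterwards, so a second same-arity overload would leave no tiebreaker:

```csharp
using System;

public class Resource
{
    public string Name { get; } = "res";
    public Guid Id { get; } = Guid.NewGuid();

    // The single arity-2 deconstructor; the positional pattern below
    // binds to it by counting sub-patterns, not by their types.
    public void Deconstruct(out string name, out Guid id) => (name, id) = (Name, Id);
}

public static class Demo
{
    public static string Describe(Resource r) => r switch
    {
        // Two sub-patterns => resolve to the (unique) arity-2 Deconstruct,
        // then match each position against its type pattern.
        (string name, Guid id) => $"{name}:{id}",
        _ => "unknown",
    };
}
```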
Any chance this will change in the future? It limits the "neatness" of deconstruction pretty severely, imo. |
The ‘Deconstruct’ declarations:
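(The original code snippets were not captured in this excerpt; the following is a sketch of the kind of declarations being described, using a hypothetical `Resource` type whose names are assumed, not taken from the original issue.)

```csharp
using System;

public class Resource
{
    public string Name { get; set; } = "";
    public string Path { get; set; } = "";
    public Guid Id { get; set; }

    // Two deconstructors with the same arity (2) but different element types.
    public void Deconstruct(out string name, out string path) => (name, path) = (Name, Path);
    public void Deconstruct(out string name, out Guid id) => (name, id) = (Name, Id);
}
```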
and usages:
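(Again reconstructed, assuming a hypothetical `Resource` type with two arity-2 `Deconstruct` overloads, `(out string, out string)` and `(out string, out Guid)`.) Because deconstruction considers only arity, both assignments are rejected even though the element types are spelled out explicitly:

```csharp
var r = new Resource();

// Both fail with an ambiguous-invocation error: deconstruction sees two
// arity-2 Deconstructs and never consults the explicit element types.
(string name, string path) = r;
(string name2, Guid id) = r;
```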
Lead to ambiguous invocation errors.
Calling these as regular methods, however, works fine:
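(Same hypothetical `Resource` type as assumed above, with `Deconstruct(out string, out string)` and `Deconstruct(out string, out Guid)` overloads.) Ordinary method overload resolution does use the `out` argument types, so the direct calls compile:

```csharp
var r = new Resource();

// Overload resolution picks the correct Deconstruct from the out parameter types.
r.Deconstruct(out string name, out string path);
r.Deconstruct(out string name2, out Guid id);
```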
Of course, using ‘var’ would be ambiguous in this case, and the behavior of deconstruction seems to indicate that it is equivalent to the above but with ‘var’.
But it’s not uncommon for a type to be deconstructable into two different tuples that just happen to have the same arity, so I think the deconstruction behavior should be equivalent to explicitly calling these methods.