Auto-generated sum types #2414
Thanks for creating a separate issue for this. To be honest, I'm not sure we need explicit syntax for this. It's more of an (important) implementation detail than a user-facing feature, no? But if one does want to make the syntax explicit, then I suggest putting a marker in the return type.
@alexreg one reason to make it explicit is that it does have performance implications: each time you call a method, there has to be a branch to dispatch into the current variant (unless these were implemented as some form of stack box with a vtable instead; either way, that still changes the performance). My first thought was also to make it a modifier on the return type.
I might be wrong, but wouldn't the `|value|` syntax be ambiguous in certain cases?
@Pauan Oh indeed.

@Nemo157, @alexreg The issue with putting this as a modifier in the type signature is that it wouldn't work well inside a function:

```rust
fn bar() -> Option<LinkedList<char>> { /* ... */ }

// This is allowed
fn foo() -> impl enum Iterator<Item = char> {
    match bar() {
        Some(x) => x.iter(),
        None => "".iter(),
    }
}

// Either this is not allowed, or the loss in performance is not explicit
fn foo() -> impl enum Iterator<Item = char> {
    let mut tmp = match bar() {
        Some(x) => x.iter(),
        None => "".iter(),
    };
    let n = tmp.next();
    match n {
        Some(_) => tmp,
        None => "foo bar".iter(),
    }
}
```
Haven't you just invented dynamic dispatch??
Yeah, fair point about the performance hit. It’s a small one, but it wouldn’t be in the spirit of Rust to hide it from the user syntactically.
@Ekleog you can use the exact same syntax inside a function; you need to mention the trait you're generating a sum type for somewhere anyway:

```rust
fn foo() -> impl enum Iterator<Item = char> {
    let mut tmp: impl enum Iterator<Item = char> = match bar() {
        Some(x) => x.iter(),
        None => "".iter(),
    };
    let n = tmp.next();
    match n {
        Some(_) => tmp,
        None => "foo bar".iter(),
    }
}
```

@est31 it's a constrained form of dynamic dispatch that could potentially get statically optimized if the compiler can prove only one or the other case is hit. Or, as I briefly mentioned above, it could be implemented via a union for storage plus a vtable for implementation, giving the benefits of dynamic dispatch without having to use the heap. (Although, if the variants have wildly different sizes, then you pay the cost of always using the size of the largest.)

One thing that I think might be important is to benchmark this against just boxing, and potentially have a lint recommending switching to a box if you have a large number of variants (I'm almost certain that a 200-variant switch would be a lot slower than dynamically dispatching to one of 200 implementors of a trait, but I couldn't begin to guess where the two cross over in call overhead, and there's the overhead of allocating the box in the first place).
@Nemo157 Thanks for explaining things better than I could! I'd just have a small remark about your statement: I don't think a 200-variant switch would be a lot slower than dynamic dispatch: the switch should be codegen'd as a jump table (last time I wrote assembler is getting a bit long ago, so I'm not sure about the exact syntax). So the mere number of implementors shouldn't matter much in evaluating the performance of this dispatch vs. a box. The way they are used does have an impact, but this will likely be hard to evaluate from the compiler's perspective.

However, what may raise a performance issue is nesting of such sum types: if you have a sum type of a sum type of etc., then you're going to lose quite a bit of time going through all these jump tables. But the compiler may detect that one member of the sum type is another sum type and just flatten the result, so I guess that's more a matter of implementation than specification? :)
LLVM is capable of doing devirtualisation.
That's a fair point, admittedly. Dynamically sized stack objects are a possibility, but they have certain performance disadvantages.
Is there anything wrong with that bike shed color? I think procedural macros could generate this right now. If coercions develop further, then maybe doing so would even become quite simple.
So if we are going to start painting the bike shed (there seems to be little opposition right now, even though it has only been like a day since the initial posting), I think there is a first question to answer: should the indication that the return is an anonymous sum type lie in the return type or at the return sites? i.e.

```rust
fn foo(x: T) -> MARKER Trait {
    match x {
        Bar => bar(),
        Baz => baz(),
        Quux => quux(),
    }
}

// vs.

fn foo(x: T) -> impl Trait {
    match x {
        Bar => MARKER bar(),
        Baz => MARKER baz(),
        Quux => MARKER quux(),
    }
}
```

Once this question has been answered, we'll be able to think about the specifics of what the MARKER syntax should be.
So, now, my opinion: I think the indication that the return is an anonymous sum type should lie at the return site.
On the other hand, the only argument I could think of in favor of putting the marker in the return type is that it makes for less boilerplate, but I'm not really convinced, so maybe I'm not pushing forward the best arguments.
@Ekleog Yeah, I'm actually with you on the return site, even though I proposed the return-type syntax above. As you say, it reflects the fact that it's more of an implementation detail that consumers of the function don't need to (and shouldn't) care about.
For a closed set of traits, sure, but to allow this to be used for any trait requires compiler support for getting the methods of the traits. Delegation plus some of its extensions might enable this to be fully implemented as a procedural macro. I'm tempted to try and write a library or procedural-macro version of this, since I am currently doing it manually.
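The kind of manually implemented delegating enum being discussed can be sketched like this, for the single trait `Iterator` (an `Either` type similar to the one in the `either` crate; all names here are illustrative, not from any proposal):

```rust
// A hand-rolled two-variant sum type that delegates Iterator to whichever
// variant is active -- the pattern this feature would auto-generate.
enum Either<A, B> {
    Left(A),
    Right(B),
}

impl<A, B> Iterator for Either<A, B>
where
    A: Iterator,
    B: Iterator<Item = A::Item>,
{
    type Item = A::Item;

    fn next(&mut self) -> Option<Self::Item> {
        match self {
            Either::Left(a) => a.next(),
            Either::Right(b) => b.next(),
        }
    }
}

// Runtime choice between two different iterator types, with no heap allocation.
fn numbers(take_range: bool) -> impl Iterator<Item = u32> {
    if take_range {
        Either::Left(1..4)
    } else {
        Either::Right(vec![7, 8].into_iter())
    }
}
```

This compiles on stable Rust today; the boilerplate is exactly what grows with each extra variant and each extra trait to delegate.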
I believe the main thing preventing these discussions from going anywhere is that making it even easier to use `impl Trait` further accentuates the only serious problem with `impl Trait`: that it encourages making types unnameable.

@Nemo157 Hmm, what delegation extensions do you think we'd need? Though I think we should probably not block this on delegation, since it could just be compiler magic.
From the current delegation RFC, the extensions required are "delegating for an enum where every variant's data type implements the same trait" (or "Getter Methods" + "Delegate block") and "Delegating 'multiple Self arguments' for traits like PartialOrd" (although this could be implemented without it, and this feature would then be in the same state as normal delegation until it's supported).

One thing I just realised is that delegation won't help with unbound associated types, required to support use cases like:

```rust
#[enumified]
fn foo() -> impl IntoIterator<Item = u32> {
    if true {
        vec![1, 2]
    } else {
        static values: &[u32] = &[3, 4];
        values.iter().cloned()
    }
}
```

which would need to generate something like

```rust
enum Enumified_foo_IntoIterator {
    A(Vec<u32>),
    B(iter::Cloned<slice::Iter<'static, u32>>),
}

enum Enumified_foo_IntoIterator_IntoIter_Iterator {
    A(vec::IntoIter<u32>),
    B(iter::Cloned<slice::Iter<'static, u32>>),
}

impl IntoIterator for Enumified_foo_IntoIterator {
    type Item = u32;
    type IntoIter = Enumified_foo_IntoIterator_IntoIter_Iterator;
    fn into_iter(self) -> Self::IntoIter {
        match self {
            Enumified_foo_IntoIterator::A(a)
                => Enumified_foo_IntoIterator_IntoIter_Iterator::A(a.into_iter()),
            Enumified_foo_IntoIterator::B(b)
                => Enumified_foo_IntoIterator_IntoIter_Iterator::B(b.into_iter()),
        }
    }
}

impl Iterator for Enumified_foo_IntoIterator_IntoIter_Iterator {
    // ...
}
```
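For concreteness, here is a self-contained, compilable version of that kind of generated code, with the elided `Iterator` impl filled in (names simplified; a sketch of what a macro might emit, not the actual output of any tool):

```rust
// Illustrative stand-in for macro-generated code.
enum EnumifiedFoo {
    A(Vec<u32>),
    B(std::iter::Cloned<std::slice::Iter<'static, u32>>),
}

enum EnumifiedFooIter {
    A(std::vec::IntoIter<u32>),
    B(std::iter::Cloned<std::slice::Iter<'static, u32>>),
}

impl IntoIterator for EnumifiedFoo {
    type Item = u32;
    type IntoIter = EnumifiedFooIter;
    fn into_iter(self) -> Self::IntoIter {
        match self {
            EnumifiedFoo::A(a) => EnumifiedFooIter::A(a.into_iter()),
            // Cloned<Iter> is already an iterator, so it moves over as-is.
            EnumifiedFoo::B(b) => EnumifiedFooIter::B(b),
        }
    }
}

impl Iterator for EnumifiedFooIter {
    type Item = u32;
    fn next(&mut self) -> Option<u32> {
        match self {
            EnumifiedFooIter::A(a) => a.next(),
            EnumifiedFooIter::B(b) => b.next(),
        }
    }
}

fn foo(first: bool) -> impl IntoIterator<Item = u32> {
    if first {
        EnumifiedFoo::A(vec![1, 2])
    } else {
        static VALUES: &[u32] = &[3, 4];
        EnumifiedFoo::B(VALUES.iter().cloned())
    }
}
```

Note how the associated type `IntoIter` forces a second generated enum: this is exactly the "unbound associated types" problem described above.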
@Ixrec Oh indeed, I hadn't thought of the unnameable-types problem. However, this could be "fixed" by having `?` pipe its error through the marker, and it wouldn't change the behaviour of existing code (as adding the marker is opt-in). So I still think the ease of use favours marking at the expression level.

However, as a downside of that approach, type inference has to run forward, accumulating candidate types:

```rust
fn bar() -> bool { /* ... */ }
struct Baz {} fn baz() -> Baz { /* ... */ }
struct Quux {} fn quux() -> Quux { /* ... */ }
struct More {} fn more() -> More { /* ... */ }
trait Trait1 {} impl Trait1 for Baz {} impl Trait1 for Quux {} impl Trait1 for More {}
trait Trait2 {} impl Trait2 for Baz {} impl Trait2 for Quux {}

fn foo() -> impl Trait1 {
    let x = match bar() {
        true => MARKER baz(),   //: TypeInProgress(Baz, Baz)
        false => MARKER quux(), //: TypeInProgress(Quux, Quux)
    }; //: TypeInProgress(Baz, Baz) & TypeInProgress(Quux, Quux)
       // = TypeInProgress(Trait1 + Trait2, enum { Baz, Quux })
       // (all the traits implemented by both)
    if bar() {
        MARKER x //: TypeInProgress(Trait1 + Trait2, enum { Baz, Quux })
    } else {
        MARKER more() //: TypeInProgress(More, More)
    } // TypeInProgress(Trait1 + Trait2, enum { Baz, Quux }) & TypeInProgress(More, More)
      // = TypeInProgress(Trait1, enum { Baz, Quux, More })
      // (all the traits implemented by both)
}
```

And, once this forward-running phase has been performed, the actual type can be observed (i.e. here, `enum { Baz, Quux, More }`, implementing `Trait1`).

On the other hand, for a type-level MARKER, inference has to run backward from the annotation:

```rust
// (skipping the same boilerplate)
fn foo() -> MARKER Trait1 {
    let x: MARKER Trait1 = match bar() {
        true => baz(),
        false => quux(),
    }; // Here we need to infer MARKER Trait1.
       // Observing all the values that come in, we see it must be enum { Baz, Quux }
    if bar() {
        x
    } else {
        more()
    } // Here we need to infer MARKER Trait1.
      // Observing all the values that come in, it must be enum { enum { Baz, Quux }, More }
}
```

I personally find the syntax of the second example less convenient (it forces writing down exactly which trait(s) we want to have, not letting type inference do its job) and the end result less clean (two nested `enum`s instead of a single flat one).

What do you think about the idea of having `?` return `MARKER x.unwrap_err()`?
For me, the primary use case is returning an anonymous `impl Trait` from a function, where the traits are already spelled out in the signature.

I'm not sure I understand the sentiment behind "not letting type inference do its job". Both variations of this idea involve us writing an explicit MARKER to say we want an autogenerated anonymous enum type. In both cases, type inference is only gathering up all the variants for us, and never inferring the need for an anon enum type in the first place.

I'm not sure I buy either of these claims. To the extent that "detecting common subtrees" is important, I would expect the existing enum layout optimizations to effectively take care of that for free. We probably need an actual compiler dev to comment here, but my expectation would be that the actual optimization-inhibiting difficulties would come from having "all traits" implemented by the anon enums, instead of just the traits you need. And to me, the autogenerated anonymous enum type implementing more traits than I need/want it to is "less clean". I guess that's one of those loaded terms that's not super helpful.

I think "the idea of having `?` return `MARKER x.unwrap_err()`" is also strictly an implementation detail that's not really relevant to the surface syntax debate, especially since `?` is already more than just sugar over a macro.

To clarify, I believe the real, interesting issue here is whether we want these anonymous enum types to implement only the traits we explicitly ask for, or all the traits they possibly could implement. Now that this question has been raised, I believe it's the only outstanding issue that really needs to be debated to decide whether MARKER goes at every return site or only once in the signature/binding. My preference is of course for the traits to be listed explicitly, since I believe the primary use case to be function signatures where you have to list them explicitly anyway, and I also suspect that auto-implementing every possible trait could lead to unexpected type inference nuisances or runtime behavior, though I haven't thought about that much.

Let's make the type inference nuisance concrete. Say `Trait1` and `Trait2` both have a `foo` method, and types `A` and `B` both implement both traits. Then you want to write a function that, as in your last two examples, returns an anonymous enum of `A` and `B`: which trait's `foo` does a call on the result resolve to?
Well, I added it to answer your concern that it would be painful to have to add the marker at each `?`.

That's true. However, the same could be said of regular types: if I return a single value (so no anonymous enum is involved), inference already has to work out the concrete type.

Well, apart from the end result being cleaner, I'd guess that's how type inference already works today:

```rust
fn foo() -> Vec<u8> {
    let res = Vec::new(); //: TypeInProgress(Vec<_>)
    bar();
    res // Here we know it must be Vec<u8>, so the _ from above is turned into u8
}
```

Completely agree with you on this point :)
Would it be practical to use a procedural macro to derive a specialized iterator for each word? (It seems possible, but a little verbose.)

```rust
#[derive(IntoLetterIter)]
#[IntoLetterIterString = "foo"]
struct Foo;

#[derive(IntoLetterIter)]
#[IntoLetterIterString = "hello"]
struct Hello;

fn foo(x: bool) -> impl IntoIterator<Item = u8> {
    if x {
        Foo
    } else {
        Hello
    }
}
```
I'm concerned with the degree to which this combines the implementation details of this specific optimization with the code wanting to use that optimization.

I also wonder to what degree we could detect the cases where this makes sense (e.g. cases where we can know statically which impl gets returned) and handle those without needing the hint. If the compiler is already considering inlining a function, and it can see that the call to the function will always result in the same type implementing the trait, then what prevents it from devirtualizing already?

I'd suggest, if we want to go this route, that we need 1) an implementation of this that doesn't require compiler changes, such as via a macro, 2) benchmarks, and 3) some clear indication that we can't already do this with automatic optimization. And even if we do end up deciding to do this, I'd expect it to look less like a marker on the return type or on the return expressions, and more like an attribute.
Just to add my thoughts to this without clutter, here is my version of the optimization: https://internals.rust-lang.org/t/allowing-multiple-disparate-return-types-in-impl-trait-using-unions/7439

I think that automatic sum type generation should be left to procedural macros.
@joshtriplett I don’t believe the only reason to want this is as an optimisation. One of the major reasons I want this is to support returning different implementations of an interface based on runtime decisions without requiring heap allocation, for use on embedded devices. I have been able to avoid needing this by sticking to compile time decisions (via generics) and having a few manually implemented delegating enums, but if this were supported via the language/a macro somehow that would really expand the possible design space. I do agree that experimenting with a macro (limited to a supported set of traits, since it’s impossible for the macro to get the trait method list) would be the way to start. I’ve been meaning to try and throw something together myself, but haven’t found the time yet.
@joshtriplett to address part of your comment, i.e. benchmarks, I created a repository that uses my method and benchmarks it against Box. Although I only have one test case and it is somewhat naive, it seems that my method is about twice as fast as Box. Repo here: https://github.com/DataAnalysisCosby/impl-trait-opt
@Nemo157 I don't think you need heap allocation to use trait objects. But in any case, I would hope that if it's available as an optimization hint, it would remain an opt-in hint.
@joshtriplett Let's look at this example (here showing what we want to do):

```rust
trait Trait {}
struct Foo {} impl Trait for Foo {}
struct Bar {} impl Trait for Bar {}

fn foo(x: bool) -> impl Trait {
    if x {
        Foo {}
    } else {
        Bar {}
    }
}
```

This doesn't build. In order to make it build, I have a choice: either make it a heap-allocated trait object:

```rust
fn foo(x: bool) -> Box<Trait> {
    if x {
        Box::new(Foo {})
    } else {
        Box::new(Bar {})
    }
}
```

Or I do it with an enum:

```rust
enum FooBar { F(Foo), B(Bar) }
impl Trait for FooBar {}

fn foo(x: bool) -> impl Trait {
    if x {
        FooBar::F(Foo {})
    } else {
        FooBar::B(Bar {})
    }
}
```

The aim of this idea is to make the enum solution actually usable without a lot of boilerplate. Is there another way to do this without heap allocation that I may have missed?

As for the idea of making it an optimization, do you mean "just return a `Box` and have the compiler optimize the allocation away when it can"?
@Ekleog Ah, thank you for the clarification; I see what you're getting at now.
Regarding the third playground example, you can use `derive_more` to derive the `From` impls. AFAICS a procedural macro of the following form could potentially solve the complete problem:

```rust
#[derive(IntoLetterIter)]
enum FooBar {
    #[format = "foo"]
    Foo,
    #[format = "hello"]
    Hello,
}
```
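A hand-written equivalent of what such a derive might plausibly expand to (the derive name, the `#[format]` semantics, and the choice of iterating over the string's bytes are all assumptions for illustration):

```rust
// Hypothetical expansion of the derive: each variant iterates over the
// bytes of the string given in its #[format] attribute.
enum FooBar {
    Foo,
    Hello,
}

impl IntoIterator for FooBar {
    type Item = u8;
    type IntoIter = std::str::Bytes<'static>;
    fn into_iter(self) -> Self::IntoIter {
        match self {
            FooBar::Foo => "foo".bytes(),
            FooBar::Hello => "hello".bytes(),
        }
    }
}
```

Both arms happen to share the iterator type `str::Bytes<'static>` here, which is what lets a single associated type cover every variant.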
For (historical?) context: way, way back when I first got into this topic, I suggested we call these "anonymous enum types". So that's part of the reason the recent discussion of nesting caught my attention.
@Ixrec (aside: this is the first time I've noticed that's a capital I, not a lowercase l) Error types have some of the exact same issues with wanting the conversion to happen inside closures. For example, you might want to write something like

```rust
use std::error::Error;

fn foo() -> Result<u32, Error1> where Error1: Error { ... }
fn bar(baz: u32) -> Result<u64, Error2> where Error2: Error { ... }

fn qux() -> Result<u64, marker impl Error> {
    let baz = foo().map_err(|e| marker e)?;
    let result = bar(baz).map_err(|e| marker e)?;
    Ok(result)
}
```

and to make the short way of writing it work, somehow `?` would have to apply the marker itself:

```rust
fn qux() -> Result<u64, marker impl Error> {
    Ok(bar(foo()?)?)
}
```
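On stable Rust today, the short form already works if you write the sum type and its `From` impls by hand, since `?` inserts a `From::from` call on the error. That boilerplate is what the marker would generate; a sketch with made-up stand-in types (`Error1`, `Error2`, `QuxError` are illustrative, not from the proposal):

```rust
#[derive(Debug)]
struct Error1;
#[derive(Debug)]
struct Error2;

// The hand-written anonymous-sum-type equivalent.
#[derive(Debug)]
enum QuxError {
    E1(Error1),
    E2(Error2),
}

impl From<Error1> for QuxError {
    fn from(e: Error1) -> Self {
        QuxError::E1(e)
    }
}

impl From<Error2> for QuxError {
    fn from(e: Error2) -> Self {
        QuxError::E2(e)
    }
}

fn foo() -> Result<u32, Error1> {
    Ok(41)
}

fn bar(baz: u32) -> Result<u64, Error2> {
    Ok(u64::from(baz) + 1)
}

// `?` applies From::from to each error, so the short form compiles.
fn qux() -> Result<u64, QuxError> {
    Ok(bar(foo()?)?)
}
```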
Yeah, in older discussions I think it was always assumed that if this feature happened, `?` would make use of it.
@Ekleog @Pauan Yes, there was some confusion. I'm not in favor of special-casing unwrapping/rewrapping for iterators or any other type; that's why I prefer "marker at expression", so that we'd write:

```rust
foo(if cond { marker a } else { marker b });

fn foo<T: Bar>(x: T) { ... } // or written as: fn foo(x: impl Bar) { ... }
```

Here it allows the compiler to infer that the trait to use for unification is `Bar`. Even in cases where it can't be inferred, it should work with an explicit type annotation. But the strongest argument for "marker at expression", IMO, is `?`.
I like roughly this direction. Also, there are seemingly many parallels to trait objects here, which we could leverage via improvements in DST handling. We might need procedural macros to interact with type inference for that exact syntax to work, as the different invocations of the macro are expanded independently. You could still simplify this syntax with a convenience macro, though.
@Boscop (disclaimer: I'm in favor of marker-at-expression too) IMO, there is still an argument in favor of marker-at-type, and it is the question of which traits the generated type should implement. The answer to this question is highly non-obvious to me.
We cannot have syntactic ambiguity between the first and second cases here, of course, since they create different associated types. I only suggested requiring identical associated types, which supports the first case but completely forbids the second. We might call this second form "associated type erasure safe", especially in the trait object context.

Anyways, an anonymous enum type is seemingly a special case of a trait object, so the two should have related syntax. We should thus explore type erasure for trait objects before worrying about the enum special cases being discussed here. Is there any serious proposal even for generic tooling along these lines?
See also the discussion in #2261, which has been closed in favor of this issue.
I've written a pre-RFC which includes the functionality requested in this issue.
I wrote this feature as a procedural macro: auto_enums.

```rust
#[auto_enum(Iterator)]
fn foo(x: i32) -> impl Iterator<Item = i32> {
    match x {
        0 => 1..10,
        _ => vec![5, 10].into_iter(),
    }
}
```
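Conceptually, such an attribute rewrites the function into a generated enum plus a delegating impl, along these lines (a hand-written sketch of the idea, not `auto_enums`' verbatim output; `FooIter` is an invented name):

```rust
// What #[auto_enum(Iterator)] conceptually expands to.
enum FooIter {
    A(std::ops::Range<i32>),
    B(std::vec::IntoIter<i32>),
}

impl Iterator for FooIter {
    type Item = i32;
    fn next(&mut self) -> Option<i32> {
        match self {
            FooIter::A(a) => a.next(),
            FooIter::B(b) => b.next(),
        }
    }
}

fn foo(x: i32) -> impl Iterator<Item = i32> {
    match x {
        0 => FooIter::A(1..10),
        _ => FooIter::B(vec![5, 10].into_iter()),
    }
}
```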
@taiki-e Good stuff! This looks like a well-engineered approach.
For the record, my personal colour of the bikeshed would be:

```rust
fn foo(do_thing: bool) -> impl Iterator<Item = u32> {
    let a: enum = if do_thing {
        iter::once(1)
    } else {
        iter::empty()
    };
    a
}
```

This syntax can only be used on `let` bindings.

EDIT: I just realised that this would also be a good testbed for "compact" enums, since that optimisation could trivially be applied to these generated types without affecting stability or compatibility (since the types are only ever touched by generated code). #311
@Vurich I would be excited to see this syntax move forward. Is now a good time to make an RFC?
Can we consider whether @taiki-e's crate is now sufficiently good for this issue to be closed? It's worth the debate at least.
I think we could do that if it supported arbitrary amounts (more than one, if we even support that now), positions (inside a `Vec` or by itself), and traits (any arbitrary trait) of `impl Trait`, in addition to the `?` syntax that started this whole discussion to get clean and easy error enums. If it's possible to do all of this with a macro, then I think that is probably a better way to go about it than adding a feature to the language. One counter-argument I see would potentially be compilation time. Does anyone else have any objections to this being in a proc macro crate? It seems like it could be possible, but I'm not an expert.

@taiki-e Do you think, if someone took the time, that it would be possible to implement all the above features? The main ones are things like `Vec`, where the `impl Trait` exists in some arbitrary place and some spot must exist where the leaf is converted to the underlying enum or an error is raised. It seems like that might require doing some of the work the compiler is already doing to unify the types. Additionally, is the `?` syntax with errors doable with proc macros?

Edit: Upon further thought, I don't think it's possible for a macro to do it for an arbitrary trait, because it can't visit all the things it needs to implement in the trait, since it isn't processing that.
- Number of variants supported: see https://docs.rs/auto_enums/0.7.1/auto_enums/#supported-syntax and https://docs.rs/auto_enums/0.7.1/auto_enums/#positions-where-auto_enum-can-be-used
- A proc macro cannot understand the types, so there are cases where additional annotations are needed.
- Supporting arbitrary traits is impossible, but unsupported traits can be implemented via `proc_macro_derive` (auto_enums passes unsupported traits to `#[derive]`).
Seems like it is all possible. It sounds like the more productive course of action is to extend proc macros in some way (if necessary) to assist in avoiding duplicates for the `?` operator. I would like to see this issue moved to completion. Do we have potential solutions for inspecting the error types and avoiding generating different variants for ones we have already seen? If so, issues should probably be opened for those things before this is closed. It does seem to me like this issue will be solved then, since the original motivation was easier error handling. This ticket has had very little activity recently, but I encourage anyone with any objections to add their part.
@vadixidav Yes, extending the capabilities of proc macros definitely sounds like a better way to go right now.
Are there similar crates with similar issues for doing delegation with proc macros? If so, maybe organizing all the related issues from both would be the first step?
Yes, the ambassador crate via #2393 (comment) does delegation, so there may be some common desires for proc macro features there. I'll also note #2587 (comment) was closed as postponed only one year ago.
I think overloading the term "enum" might make sense to people who are familiar with why it's named that way, but it'd be confusing conceptually.
@teohhanhui It would be read as "return an enum that impls the trait".
The fact that it uses an enum is an implementation detail. That's why I said it's confusing conceptually. Why should the compiler always be tied to returning an enum?
Actually, with all optimizations applied, the resulting type might not be an enum at all, thanks to the unspecified ABI.
First, the idea was laid down by @glaebhoerl here.

The idea is basically to have a way to tell the compiler "please auto-generate me a sum type for my `-> impl Trait` function" that would automatically derive `Trait` based on the implementations of the individual members.

There would, I think, be a need for a syntax specially dedicated to this, so that people are not auto-generating these without being aware of it. Currently, the two syntaxes I can think of are either `|value|` (without a following `{}`), running on the idea of "this is auto-generated, just like closures" (I don't like it much, but it could make sense), or the use of a keyword. I'd have gone with `auto`, but it appears not to be reserved, so `become` or `enum` would likely be the best choices.

(Edit: @Pauan pointed out that the `|…|` syntax would be ambiguous in certain cases, so please disregard it.)

The major advantage of the `||` syntax is that it doesn't raise the issue of parenthesizing, as it's already made of parentheses.

What do you think about this idea, especially now that `impl Trait` is landing and the need is getting stronger?