RFC: Associated type defaults #2532

Open · wants to merge 23 commits into base: master

Conversation

@Centril
Contributor

Centril commented Aug 27, 2018

🖼️ Rendered

📝 Summary

Resolve the design of associated type defaults, first introduced in RFC 192, such that provided methods and other items may not assume type defaults. This applies equally to default with respect to specialization. Finally, dyn Trait will assume provided defaults and allow those to be elided.
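A minimal sketch of the core rule, written in the RFC's proposed and still-unstable syntax (the trait `Len` and its items are invented for illustration):

```rust
#![feature(associated_type_defaults)]

trait Len {
    // A default that impls may override.
    type Unit = usize;

    fn measure(&self) -> Self::Unit;

    // Under this RFC, a provided method may use `Self::Unit` only
    // opaquely; it may NOT assume the default. Something like
    //
    //     fn zero() -> Self::Unit { 0 } // ERROR: assumes `Unit = usize`
    //
    // is rejected, because an impl overriding `Unit` would break it.
}
```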

💖 Thanks

To @aturon for their work on RFC 192 and RFC 1210 upon which this RFC builds.

To @kennytm, @Havvy, @ubsan, @varkor, @alexreg, and @scottmcm for reviewing the draft version of this RFC.

@ubsan


ubsan commented on text/0000-assoc-default-groups.md in a9ab28b Aug 27, 2018

Gotta love mcbride :)

@Centril Centril added the T-lang label Aug 27, 2018

@Centril Centril self-assigned this Aug 27, 2018

@daboross


daboross commented Aug 28, 2018

Awesome!

I had kind of assumed anything fixing associated type defaults would only have a one-way dependence relationship, but I'm pleasantly surprised to see this. The "if you override one, you override all" concept seems easy to understand, teach, and write.

With that said, I am a bit sad to not have one-way dependence. If I have a bunch of default methods all relying on the same associated type default, I would want users to be able to override some default methods without overriding others.

How open would you be to having default inside a default group and nested default groups indicate this?

Something with "more defaults" could be able to depend on something with "fewer defaults", but not the other way around.

What I mean in code
trait Bar {
    default {
        type Foo = &'static str;
        default const MY_FOO: Self::Foo = "Bar's foo";
        default fn get_fooa() -> Self::Foo {
            // can depend on Foo, but not on MY_FOO's value
            "Foo A"
        }
        default fn get_foob() -> Self::Foo {
            "Foo B"
        }
        default {
            fn get_c() -> Self::Foo { "C" }
            fn get_d() -> Self::Foo { "D" }
        }
    }
}

struct Brick;
impl Bar for Brick {
    // Can override get_foob without overriding get_fooa, since they are in different sub-groups
    fn get_foob() -> Self::Foo {
        "Brick's Foo B"
    }
}
struct House;
impl Bar for House {
    // Since get_c and get_d are in the innermost group, they must be overridden together
    fn get_c() -> Self::Foo { "House C" }
    fn get_d() -> Self::Foo { "House D" }
}
struct Cat;
impl Bar for Cat {
    type Foo = String;
    // Cat must now override all other default items, since they are all allowed to depend on Foo.
}

This suggestion probably isn't ideal, since it would mean writing default inside of default blocks for a lot of use cases, but it's the simplest I could think of. I like the full interdependence within groups in example traits, but I fear it would be unwieldy for traits with 10+ default methods all depending on the same associated type. Default groups could easily nest to represent this, too.

@LukasKalbertodt

Contributor

LukasKalbertodt commented Aug 28, 2018

What a great RFC, thanks!

You already list many examples where the default group feature as proposed is super useful, but to give another one: I think with this (and specialization), it would be possible to implement something like C++'s std::remove_reference (related StackOverflow question).


I just wanted to throw another idea out there:

Trait items can never "depend" on a method, i.e. never depend on a specific method implementation, right? So in that case, we could just say "if you override one thing in a default group, you have to override everything -- except methods: you can override these without needing to override anything else in the default group." Of course, that makes the rule more complicated. But I'd rather have this than an unnecessary restriction.

However, I'm not sure if this will be considered legal in the future:

trait Foo {
    const fn size() -> usize { 3 }
    type ARR = [u8; Self::size()];
}

If that's the case, other trait items could depend on the specific implementations of methods, and the modified rule wouldn't work.


Also, I think that "letting the compiler infer all dependencies between trait items" (which you only mentioned briefly) is not that bad of an idea. Sure, the dangers of implicitness strike again, but I think it would have many benefits.

Dependencies between trait items are a directed graph. The default groups as proposed here make it possible to form cliques (fully connected parts of the graph) from some items. Put like that, that sounds rather limiting. To have the most expressive power (let's assume we want that), the compiler has to know about the exact dependency graph (without adding unnecessary edges). The compiler could infer this graph, so that's not the problem.

To solve the "implicitness-problem" one could:

  • go for full explicitness with something like #[depends_on(Bar, BAZ)] fn foo() -> Self::Bar { ... }, or
  • annotate that items depend on something, without explicitly mentioning what they depend on (#[depending] fn foo() { ... }), or
  • don't do anything because it might not be that big of a problem.
@Centril

Contributor Author

Centril commented Aug 28, 2018

@daboross

The "if you override one, you override all" concept seems easy to understand, teach, and write.

Very much agree with this sentiment :)

How open would you be to having default inside a default group and nested default groups indicate this?

I'm open :) It seems natural.

In this scheme, the way we can understand default $item is by a desugaring to default { $item }.

The type checking rule, if I understand it correctly from your example, is that a sub-group acts as an atomic unit itself and can be overridden independently of its parent but if anything in the parent is overridden, then the sub-group must be as well. On the face of it, I think this scheme would be sound; but I would encourage everyone to double check this.

This seems a bit more complex than "if you override one, you override all", but not by much. It also adds flexibility and value to nesting which was previously lacking.


@LukasKalbertodt

You already list many examples where the default group feature as proposed is super useful, but to give another one: I think with this (and specialization), it would be possible to implement something like C++'s std::remove_reference (related StackOverflow question).

Interesting :) If you have the time, could you perhaps encode this in the style of this RFC so that I can include it?


So in that case, we could just say "if you override one thing in a default group, you have to override everything -- except methods: you can override these without needing to override anything else in the default group."

This seems sound; however, I see some drawbacks:

  • It is now impossible to enforce that two methods must be overridden together;
    While that is not necessary for soundness, it is a useful ability to give to users when they have two or more methods that are unusually co-dependent and not implementing them together will usually lead to logic bugs.

  • It is not a particularly uniform treatment of items; instead, methods are treated specially.
    You do note this increase in complexity yourself. :)

  • It forces an interpretation of fn where the body can't be exposed, which you also note.
    In the past, we have considered such schemes for const fn, and I think it would be a shame to give that interpretation up.
    AIUI, the mechanism as proposed in the RFC can also be weakened in the future to be compatible with your model (please double-check this).

All in all, my view here is that needing to see the underlying type of an associated type won't be too common; in particular, I don't think it will be common to need to see the underlying type and have many methods at the same time. Given this conjecture, I think that the special casing of methods is not particularly justified here and that @daboross's amended mechanism of default groups should cover the needs of users.


Also, I think that "letting the compiler infer all dependencies between trait items" (which you only mentioned briefly) is not that bad of an idea. Sure, the dangers of implicitness strike again, but I think it would have many benefits.

So the reason I haven't gone into greater detail here is because the semantics of this inference have not yet been well defined and because it was only mentioned in passing in rust-lang/rust#29661 (comment).

Dependencies between trait items are a directed graph. The default groups as proposed here make it possible to form cliques (fully connected parts of the graph) from some items.

Does not @daboross's amendment change this? It should be possible to define more refined sub-graphs this way?

To have the most expressive power (let's assume we want that), the compiler has to know about the exact dependency graph (without adding unnecessary edges).

This seems backwards compatible with this RFC in the sense that if you don't use default { .. }, then inferring dependencies will accept strictly more programs and will allow you to use the given definitions where before you could not.

The compiler could infer this graph, so that's not the problem.

I think this is true; but I would like to see a more elaborate argument for why this is the case and how complex such inference would be.

To solve the "implicitness-problem" one could:

First, a few words about why implicitness is a problem in the first place. I think the problem here is not so much that readability would suffer from implicitness, but rather that intuiting the dependencies could be non-trivial for a human, and it would therefore be difficult to see what implications a change has for semantic versioning. The fear here is that the user would accidentally make a change that requires downstream users to provide definitions they previously didn't need to.

  • go for full explicitness with something like #[depends_on(Bar, BAZ)] fn foo() -> Self::Bar { ... },

I assume this would also be checked by the compiler?
Ostensibly, one could use default(Bar, BAZ) { .. } as a surface syntax here.
However, I think overly fine-grained graphs would be too complex for many users to deal with.

  • annotate that items depend on something, without explicitly mentioning what they depend on (#[depending] fn foo() { ... }),

This doesn't seem to notably solve the semver problem; the crate author still has to scan the code to reconstruct the graph in their head.

  • don't do anything because it might not be that big of a problem.

I think this is highly likely to be the case. If it turns out to be a big problem (it is not in the Haskell community) then we can solve that problem in the future.

@LukasKalbertodt

Contributor

LukasKalbertodt commented Aug 28, 2018

Does not @daboross's amendment change this? It should be possible to define more refined sub-graphs this way?

Yes, now that I read their comment again, I think it's actually very powerful. Let's use this slightly modified version (I added quux):

trait Bar {
    default {
        type Foo = &'static str;
        fn quux() -> Self::Foo { "quux" }
        default const MY_FOO: Self::Foo = "Bar's foo";
        default fn get_fooa() -> Self::Foo { "Foo A" }
        default fn get_foob() -> Self::Foo { "Foo B" }
        default {
            fn get_c() -> Self::Foo { "C" }
            fn get_d() -> Self::Foo { "D" }
        }
    }
}

The above code would result in the following tree (each node representing one default group):

(image: tree diagram of the nested default groups, one node per group)

As far as I understand (please correct me if I'm wrong): an item can depend on items in the same node and on items in any ancestor node (up the tree). This has the consequence that if an impl block overrides one item $x, it also has to override all items in the same node as $x and all items in all nodes descendant from the node containing $x (down the tree).

The "can depend on" rule sounds exactly like the rule we use to determine if a non-pub item in a module tree is accessible. Which is cool, since then programmers are already familiar with it.
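The visibility analogy can be seen in plain module code (the module and function names here are made up): a private item is reachable from its own module and any descendant module, just as a default-group item would be dependable from its own group and any nested group:

```rust
mod outer {
    // Reachable from `outer` and all of its descendants,
    // like an item in an outer default group.
    fn helper() -> u32 {
        1
    }

    pub mod inner {
        // `inner` reaches "up the tree" to `helper`, the way an
        // inner default group may depend on its parent group.
        pub fn deep() -> u32 {
            super::helper() + 1
        }
    }
}

fn main() {
    assert_eq!(outer::inner::deep(), 2);
}
```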

In terms of the dependency graph I was talking about, this would mean that the programmer can define ... "a tree of cliques" if that makes any sense. This looks really powerful to me. I'm not sure if there are actually useful situations where one would need a more powerful system. So I'm all 👍 for that idea!


This seems backwards compatible with this RFC in the sense that if you don't use default { .. }, then inferring dependencies will accept strictly more programs and will allow you to use the given definitions where before you could not.

True. That's always good.

I think this is true; but I would like to see a more elaborate argument for why this is the case and how complex such inference would be.

I assumed it's possible, because that knowledge is already needed to emit compiler errors, right? To check that the trait is well-formed, the compiler has to detect any item that depends on another item's default.

However; I think overly fine-grained graphs would be too complex for many users to deal with.
[...]
This doesn't seem to notably solve the semver problem; the crate author still has to scan the code to reconstruct the graph in their head.
[...]
This seems sound; however, I see some drawbacks:

Yip, I agree with all of those. In particular, I also dislike special casing methods.


Interesting :) If you have the time, could you perhaps encode this in the style of this RFC so that I can include it?

Mhhh, I'm not quite sure what you mean. But let me just explain some details.

My original motivation was to write something like this:
fn print<T>(x: T)
where
    T: RemoveRef,
    <T as RemoveRef>::WithoutRef: fmt::Display,
{
    println!("{}", x.single_ref());
}

Of course, for Display this boilerplate is not necessary, because impl<T: Display> Display for &T. But I wanted to avoid writing these kinds of blanket impls for references. So in that case RemoveRef is useful.

To have something like C++'s std::remove_reference we need a trait with an associated type. To be able to obtain an instance of the reference-removed type, a method is necessary. So my idea was to write:

trait RemoveRef {
    type WithoutRef;
    fn single_ref(&self) -> &Self::WithoutRef;
}

default impl<T> RemoveRef for T {
    type WithoutRef = T;
    fn single_ref(&self) -> &Self::WithoutRef {
        self
    }
}

impl<'a, T: RemoveRef> RemoveRef for &'a T {
    type WithoutRef = T::WithoutRef;
    fn single_ref(&self) -> &Self::WithoutRef {
        T::single_ref(*self)
    }
}

But this doesn't work since single_ref can't assume a specific type for WithoutRef (Playground). Wrapping the type and method in a default group would solve this problem and make this trait possible.

@Centril

Contributor Author

Centril commented Aug 28, 2018

@LukasKalbertodt

The above code would result in the following tree (each node representing one default group):

Yep; that seems right. (Also, nicely done on including a tree; I should integrate a similar thing in the RFC eventually).

As far as I understand (please correct me if I'm wrong): an item can depend on items in the same node and on items in any ancestor node (up the tree). This has the consequence that if an impl block overrides one item $x, it also has to override all items in the same node as $x and all items in all nodes descendant from the node containing $x (down the tree).

This seems exactly right :)

The "can depend on" rule sounds exactly like the rule we use to determine if a non-pub item in a module tree is accessible. Which is cool, since then programmers are already familiar with it.

That's perfect!

In terms of the dependency graph I was talking about, this would mean that the programmer can define ... "a tree of cliques" if that makes any sense. This looks really powerful to me. I'm not sure if there are actually useful situations where one would need a more powerful system. So I'm all 👍 for that idea!

I think a "tree of cliques" makes perfect sense; and I agree that this should be more than enough power to do anything you need.

I would like to triple-check the soundness implications of @daboross's amendment for a bit.
But I think I'm going to integrate @daboross's idea in the main proposal soon.

I assumed it's possible, because that knowledge is already needed to emit compiler errors, right? To check if the trait is well-formed, the compiler has to check that there aren't any items depending on another item.

Hmm... It might be that to scan a trait definition to infer all dependencies, you must chase some indirections if items don't depend on each other directly. However, I haven't given it much thought.

But let me just explain some details.

Cool :)

Aside: With #2289 you could write:

fn print(x: impl RemoveRef<WithoutRef: fmt::Display>) {
    println!("{}", x.single_ref());
}

Wrapping the type and method in a default group would solve this problem and make this trait possible.

So you would then write:

trait RemoveRef {
    type WithoutRef;
    fn single_ref(&self) -> &Self::WithoutRef;
}

impl<T> RemoveRef for T {
    default {
        type WithoutRef = T;
        fn single_ref(&self) -> &Self::WithoutRef { self }
    }
}

impl<'a, T: RemoveRef> RemoveRef for &'a T {
    default {
        type WithoutRef = T::WithoutRef;
        fn single_ref(&self) -> &Self::WithoutRef { T::single_ref(*self) }
    }
}
@nikomatsakis

Contributor

nikomatsakis commented Aug 28, 2018

Off the top of my head, nested default groups seem like a good idea — however, I also think that we should ask whether functions deserve special treatment or not.

From the point-of-view of "would things compile", I think that (at least with the lang as it is today) overriding a function can't cause any problems.

But it might lead to semantic breakage. e.g., perhaps there is an associated type that is somehow "tied" to the details of what the fn does, and when those details change, a different type would be more appropriate, even though the older type would still compile.

One can easily see this with some sort of associated constant. e.g., you might have a constant like const HAS_SIDE_EFFECTS: bool = false -- and when you override the fn, you might introduce side-effects, without altering the value of this constant.
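That scenario can be sketched in today's Rust (the names `Effectful`, `Logger`, and `run` are invented): overriding only the method type-checks fine, yet it leaves the unchanged constant describing the impl incorrectly:

```rust
trait Effectful {
    /// Documents a property of the provided `run` below.
    const HAS_SIDE_EFFECTS: bool = false;

    /// Provided default: does nothing, so no side effects.
    fn run(&self) {}
}

struct Logger;

impl Effectful for Logger {
    // Only the method is overridden; it now has a side effect,
    // but the stale default constant still claims otherwise.
    fn run(&self) {
        println!("logging a side effect");
    }
}

fn main() {
    // The default constant was not overridden, so it still reads `false`.
    assert!(!<Logger as Effectful>::HAS_SIDE_EFFECTS);
    Logger.run();
}
```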

Using nested groups, we can express the difference, which is good, but of course the notation sort of "defaults" the wrong way -- most of the time we don't need such strict dependencies, right?

(I haven't had time to read the RFC in detail yet, I'm not sure if it discusses any such examples.)

@LukasKalbertodt

Contributor

LukasKalbertodt commented Aug 28, 2018

Hmm... It might be that to scan a trait definition to infer all dependencies, you must chase some indirections if items don't depend on each other directly. However, I haven't given it much thought.

Me neither, and I'm far too unfamiliar with the compiler internals to say how feasible this would be. In any case, I guess the majority of the community would dislike this completely implicit version anyway.

Aside: With #2289 you could write:

I'm already subscribed to the tracking issue and hope that I can play with it on nightly soon ;-)

So you would then write:

Exactly. That's what I meant. I think this should work.


Using nested groups, we can express the difference, which is good, but of course the notation sort of "defaults" the wrong way -- most of the time we don't need such strict dependencies, right?

But if you don't want those strict dependencies, you just don't use a default group, right?

And as I read the RFC, the meaning of default impl ... isn't changed. That is, it still desugars to an impl with default in front of every item and not to impl { default { ... } }.

@Centril

Contributor Author

Centril commented Aug 28, 2018

@nikomatsakis

But it might lead to semantic breakage. e.g., perhaps there is an associated type that is somehow "tied" to the details of what the fn does, and when those details change, a different type would be more appropriate, even though the older type would still compile.

I think that's exactly right; it doesn't even have to be an fn and a type. It could just be two fns which the trait author wants all users to always define together because they are particularly easy to get wrong otherwise. So my preference at this stage is to not give fns special treatment and to treat all items more uniformly.

Using nested groups, we can express the difference, which is good, but of course the notation sort of "defaults" the wrong way -- most of the time we don't need such strict dependencies, right?

I think most of the time, you won't need to assume the underlying definition of an item at all, and just the signature will be sufficient; i.e. I think most cases will be like the Arbitrary trait in the RFC. In other words, I think most cases will have 0-deep nesting.

When the underlying type is needed to be assumed for some set of items, I think 1-deep will be the next most likely thing.

Nesting default groups two or more levels deep (2+-deep) will probably be needed even less often.
Perhaps the most common use case for a 2-deep nesting is @daboross's example with one associated type and a bunch of different methods.

My conjecture here is that usage decreases exponentially with the nesting depth.

(I haven't had time to read the RFC in detail yet, I'm not sure if it discusses any such examples.)

There is one example in the reference, but it is not a real world example.
I think @daboross's note:

If I have a bunch of default methods all relying on the same associated type default, I would want users to be able to override some default methods without overriding others.

would be the most common real world scenario for a 2-deep nesting.

@LukasKalbertodt

Anyway, I guess the majority of the community would dislike this completely implicit version anyway.

Hehe, yes; I think so.

I'm already subscribed to the tracking issue and hope that I can play with it on nightly soon ;-)

Feel free to implement it =P

Exactly. That's what I meant. I think this should work.

Cool; I'll include it in the RFC as an example at some point ^.^

And as I read the RFC, the meaning of default impl ... isn't changed.

Yep; that's correct.

That is, it still desugars to an impl with default in front of every item and not to impl { default { ... } }.

My understanding from #1210 was that default impl ... was a sort of "partial implementation" that is not yet a full impl; that is, this is what provided items in traits would desugar to.

@burdges


burdges commented Sep 17, 2018

Am I correct that everything here could be expressed with partial impls? And thus default { .. } is just sugar? If so, that makes this a fairly minor change, right?

default impl<T> RemoveRef for T {
    type WithoutRef = T;
    fn single_ref(&self) -> &Self::WithoutRef { self }
}
default impl<'a, T: RemoveRef> RemoveRef for &'a T {
    type WithoutRef = T::WithoutRef;
    fn single_ref(&self) -> &Self::WithoutRef { T::single_ref(*self) }
}

@Centril Centril added the A-syntax label Nov 22, 2018

@alexreg


alexreg commented Dec 28, 2018

Has there been any progress on this lately? Are we ready to FCP it?

@Centril

Contributor Author

Centril commented Dec 28, 2018

Has there been any progress on this lately? Are we ready to FCP it?

Haven't had time to process the recent discussion after our first meeting; I think we may want to scale the RFC back to just drop the groups and postpone that for a future discussion, thereby making incremental progress.

@alexreg


alexreg commented Dec 28, 2018

@Centril Yes, possibly. I am happy with the concept of groups as it stands, personally, but if it speeds things up, then let's either mark it as an "unresolved question" or fork it out into a separate RFC (RFC addendum?).

@Centril

Contributor Author

Centril commented Dec 28, 2018

@alexreg

@Centril Yes, possibly. I am happy with the concept of groups as it stands, personally, but if it speeds things up, then let's either mark it as an "unresolved question" or fork it out into a separate RFC (RFC addendum?).

I'd just move it to future possibilities for the time being. ;)

@Centril Centril added I-nominated and removed I-nominated labels Jan 3, 2019

@graydon


graydon commented Jan 12, 2019

As much as I worry about comprehension of defaults and specialization, this is a reasonable and well-designed cleanup of the cognitive ambiguity in defaults already. Approve.

@Centril

Contributor Author

Centril commented Jan 18, 2019

Updates:

  • default { .. } groups have been removed as I suggested in #2532 (comment) so that we can make more incremental progress. I've moved the text to the future possibilities section for posterity. We can consider such proposals or alternatives suggested in the discussion thus far in the future.
  • dyn Trait may assume / infer the associated type defaults of Trait, if there are any. This is important to actually achieve the API evolution goals set out in the motivation. If such inference isn't allowed, it will be impossible to add associated types to object-safe traits.
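A sketch of the second bullet, with invented names (`Device`, `poll`); the elided form is the RFC's proposed behavior, not something current Rust accepts:

```rust
trait Device {
    type Error = String; // associated type default (unstable)
    fn poll(&self) -> Result<(), Self::Error>;
}

// Today an object type must spell out every associated type:
fn watch_explicit(d: &dyn Device<Error = String>) {
    let _ = d.poll();
}

// Under the updated RFC, `dyn Device` alone would be accepted,
// with `Error` inferred from the default:
//
//     fn watch(d: &dyn Device) { let _ = d.poll(); }
```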

@Centril Centril changed the title from "RFC: Associated type defaults and Default groups" to "RFC: Associated type defaults" Jan 18, 2019


```rust
trait Foo {
    type Bar = u8;
```


@nikomatsakis

nikomatsakis Jan 18, 2019

Contributor

One problem we've encountered in the past around defaults that is perhaps worth pondering: when do we prove that the default is "suitable"?

Example:

trait Foo<T> {
    type Bar: Clone = Vec<T>;
}

Is this an error? We don't know that Vec<T>: Clone, because we don't know that T: Clone. So we could require that the trait declare that T: Clone -- on the other hand, it may be that there are useful impls of the trait where T: Clone does not hold, but those impls do not use this default.
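For the second reading (checking in the impls), a hedged sketch of an impl that sidesteps the missing bound; `NotClone` and `MyType` are invented:

```rust
struct NotClone; // deliberately does not implement Clone
struct MyType;

impl Foo<NotClone> for MyType {
    // The default `Vec<NotClone>` is never used here, so the fact
    // that `NotClone: Clone` fails to hold causes no error:
    type Bar = (); // `(): Clone` holds, satisfying `Bar: Clone`
}
```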


@nikomatsakis

nikomatsakis Jan 18, 2019

Contributor

Perhaps this is addressed in the RFC, if so, forgive me, I am skimming quickly.


@Centril

Centril Jan 18, 2019

Author Contributor

Interesting question / problem... I don't think it's currently addressed in the RFC.

Let's give the name A for the set of well-formed programs for which we require that all defaults be well-formed and adhere to the bounds of the trait at the definition site of Foo<T>, assuming only the bounds of the trait (namely Self::Bar: Clone). IOW, this is the set in which Foo<T> as defined above is an error.

Let's give the name B for the set of well-formed programs for which we don't require that and check adherence to the bounds of the trait in the impls instead. In this set Foo<T> as defined above is WF.

Off-hand, it seems to me that A ⊆ B because we get B from A by loosening a restriction. Thus, it should be possible to start with A but extend to B if we need to. However, I don't have a good justification for this other than wanting to be conservative. Given that we don't allow other items in Foo<T> to assume Vec<T> as the default it should be sound.


Possibly related is the following example (playground):

#![feature(associated_type_defaults)]

trait A {
    type B = Self::C;
    type C = Self::B;
}

impl A for () {}

fn _foo() { let _x: <() as A>::B; }

// Removing this function will make the example compile.
fn main() { let _x: <() as A>::B; }

Compiling this today will give you (aside: there seems to be a bug here...):

error[E0275]: overflow evaluating the requirement `<() as A>::B`

thread '<unnamed>' panicked at 'Metadata module not compiled?', src/libcore/option.rs:1038:5
note: Run with `RUST_BACKTRACE=1` environment variable to display a backtrace.
error: aborting due to previous error

However, using the set A, the program:

#![feature(associated_type_defaults)]

trait A {
    type B = Self::C;
    type C = Self::B;
}

should fail to compile since there is a cycle but it should presumably compile under B (as today).

@Centril Centril added I-nominated and removed A-syntax labels Jan 24, 2019

@Centril Centril removed the I-nominated label Jan 31, 2019

@scottmcm

Member

scottmcm commented Jan 31, 2019

I think the latest updates resolved the controversial points, so let's get more eyes on it:

@rfcbot fcp merge

@rfcbot


rfcbot commented Jan 31, 2019

Team member @scottmcm has proposed to merge this. The next step is review by the rest of the tagged team members:

Concerns:

Once a majority of reviewers approve (and none object), this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

@nikomatsakis

Contributor

nikomatsakis commented Jan 31, 2019

Discussing briefly in the @rust-lang/lang meeting. Regarding my question here and the cycle here, all present agreed we can move these to an unresolved question.

FWIW, given my current thinking about how Chalk should work -- and specifically lazy normalization -- I'm actually not sure where this error should be reported. :) I know I thought about exactly this case. I think the answer is most likely that it would fail at the impl, which would have the obligation of showing that the values for its associated types can be fully normalized -- but I can't picture the details just now.

@nikomatsakis

Contributor

nikomatsakis commented Jan 31, 2019

@rfcbot concern unresolved-questions

Per my previous comment, I'd like to see these details added as unresolved questions so they are not forgotten.

@nikomatsakis

Contributor

nikomatsakis commented Jan 31, 2019

@rfcbot resolve unresolved-questions

@rfcbot reviewed

@pnkfelix

Member

pnkfelix commented Feb 15, 2019

@rfcbot reviewed

@rfcbot


rfcbot commented Feb 15, 2019

🔔 This is now entering its final comment period, as per the review above. 🔔
