Revisiting specialization: Complementary traits #1658

Closed
ticki commented Jun 26, 2016

We revisit specialization through ‘complementary traits’. This has been considered
previously, but was postponed mainly due to coherence issues. The basic idea is to add
negative trait bounds, denoted !Trait, which are satisfied by types that do not
implement the given trait.

Rendered.

cc @Manishearth, @withoutboats, @nikomatsakis, @glebm, @aturon.

Stebalien commented Jun 26, 2016

IIRC, a big drawback that came up when this was last discussed was that implementing traits becomes a breaking change. This isn't a problem in specialization (the lattice rule) because you can't use it to "punch holes".

ticki commented Jun 26, 2016

@Stebalien No, that's not correct. In this RFC, !Trait isn't a trait, it's a bound. You cannot do, say impl !Trait for Type, just like you cannot do impl TraitA + TraitB for Type.

ticki commented Jun 26, 2016

Oh, you mean breaking impl<T: !Trait> ...?

Stebalien commented Jun 26, 2016

For example:

// crate_1:

pub struct Bar;
pub trait Foo {}

// crate_2:

extern crate_1;
use crate_1::{Bar, Foo};

trait Baz {}

impl<A: !Foo> Baz for A {}

If crate_1 adds an impl of Foo for Bar, Bar would no longer implement Baz.

-- Edit --
yes.

ticki commented Jun 26, 2016

My take is that it is no different from e.g. removing the implementation, but I get the point.

I think I have a solution, but I am heading for sleep now.

Stebalien commented Jun 26, 2016

My point is that now removing or adding an implementation of a trait is a breaking change. You would have to implement all traits up-front.

One solution is to say "no negative constraints on foreign traits". Another is to say "no hole punching" (if you have a impl with a negative constraint, you have to have a matching impl with the corresponding positive constraint); this is basically specialization.

ticki commented Jun 27, 2016

@Stebalien I added the requirement of explicitly guaranteeing the absence of some bound.

withoutboats commented Jun 27, 2016

My take is that it is no different from e.g. removing the implementation, but I get the point.

It's quite different to add an impl than to remove one. The general principle is that increasing the information available to the typechecker should never be a breaking change in Rust. This is the same general principle behind the orphan rule, and it really defines Rust's coherence system.

Prior to the most recent commit, which adds the requirement of an explicit impl, this RFC was a reiteration of #586, which was closed in part because of this issue. After that commit, it's a reiteration of #1148, which was closed because the language team wanted more experience with specialization before adding more coherence rules.

The same issues remain, then. The big ones I recall:

  • The handling of OIBITs / auto traits becomes more complicated with this proposal. Niko wrote about this in a gist, and the implementation rules he proposed, as I remember, led to a result that I found surprising.
  • Do we want to try to merge the semantics of negative auto traits with the semantics of #[fundamental] somehow? What I recall is that #[fundamental]'s semantics are what people would really want a negative auto trait to mean, but that it's asymmetrical with positive auto traits, which is confusing.
  • This RFC's text codifies the implicit local assumption of mutual exclusion which the type system currently has (that is, if Foo: Bar is not present, and both Foo and Bar are local, Foo: !Bar is inferred). I think this is a bug, and not a feature, of the current system, and am in favor of fixing it instead of codifying it.

All that said, since the previous RFC was postponed only a few months ago, I don't know if now is the time to raise the issue again.

ticki commented Jun 27, 2016

On one and two, I have thought about this a lot, but have not yet hit any issue which isn't fixed by the rules in the current RFC.

On your last point: I believe you are referring to the NegationBound rule, which doesn't state any such thing. You cannot infer A: !B from ¬(A: B), but you can infer the latter from the former.

All that said, since the previous RFC was postponed only a few months ago, I don't know if now is the time to raise the issue again.

It seems to me that these were postponed for multiple reasons:

  1. Lack of coherence, which this RFC fixes by adding well-formedness rules.
  2. The lack of a final design for specialization (which we now have).
  3. Lack of implementation detail, which, I believe, is addressed in this RFC.
withoutboats commented Jun 27, 2016

On one and two, I have thought about this a lot, but have not yet hit any issue which isn't fixed by the rules in the current RFC.

I don't really know what this means; those two points are both design questions, not questions about soundness. There are multiple sound formalisms, the question is about which is the most useful.

Lack of coherence, which this RFC fixes by adding well-formedness rules.

The issue of Trait + !Trait hasn't been raised before this RFC afaik; having a bound Trait + !Trait be a type error does seem like the obvious solution.

On your last point: I believe you are referring to the NegationBound rule, which doesn't state any such thing. You cannot infer A: !B from ¬(A: B), but you can infer the latter from the former.

I'm not referring to any formalism in particular (and don't really like discussing things in terms of formalisms). The RFC text contains this statement:

Outside the crate itself, one cannot assume Type: !Trait, unless it is explicitly stated

My read of this statement (and some other comments, but this statement is most direct), is that inside a crate, one can assume that Type: !Trait if there is no impl Trait for Type and Trait and Type are both local. This is actually the case already, and I think it is a bug, because it leads people to introduce code that relies on an implied negative bound they didn't mean to include, possibly forcing them to make breaking changes in the future.

ticki commented Jun 28, 2016

I don't really know what this means; those two points are both design questions, not questions about soundness. There are multiple sound formalisms, the question is about which is the most useful.

If that's the case, I don't really see how this is messing with those.

The issue of Trait + !Trait hasn't been raised before this RFC afaik; having a bound Trait + !Trait be a type error does seem like the obvious solution.

#586 (comment)

My read of this statement (and some other comments, but this statement is most direct), is that inside a crate, one can assume that Type: !Trait if there is no impl Trait for Type and Trait and Type are both local. This is actually the case already, and I think it is a bug, because it leads people to introduce code that relies on an implied negative bound they didn't mean to include, possibly forcing them to make breaking changes in the future.

Ah, ok. Sorry for misunderstanding.

Well, I view this in a similar manner to the orphan rules: crates "control" their own implementations, and might break them. If you add an overlapping implementation in your own crate, you get an error, so this already exists today.

wycats commented Jun 29, 2016

All that said, since the previous RFC was postponed only a few months ago, I don't know if now is the time to raise the issue again.

@withoutboats I agree. I think it still makes sense to let specialization stabilize, get used in practice, and have idioms emerge before considering the interaction with negative bounds.

Personally, I see the appeal of negative reasoning (and have occasionally wanted the feature), but it's hard for me to predict how specialization will shake out without first getting a chance to use (and write!) libraries that make use of the feature.

@ticki Do you think there's something that we've learned in the meantime that should make it easier to revisit this issue now? Or do you think we made a mistake in holding off until we get more experience with specialization in the first place? (I realize specialization has taken some time to land, and maybe that's a factor?)

withoutboats commented Jun 29, 2016

If you add an overlapping implementation in your own crate, you get an error, so this already exists today.

Right. There was discussion on the last RFC about modifying these rules, especially because introducing negative bounds makes it more likely for users to run aground here.

ticki commented Jun 29, 2016

@wycats

I think it still makes sense to let specialization stabilize, get used in practice, and have idioms emerge before considering the interaction with negative bounds.

The point is exactly to bring up multiple ideas to compete before stabilizing.

I think that stabilizing without investigating orthogonal solutions would be unfortunate.

nikomatsakis commented Jul 22, 2016

I was trying to read some of this RFC and basically found that I don't really understand what it's trying to say. For example, this inference rule:

BoundAntisymmetry:
  A: B
  B: A
  ─────
  A = B

What does this mean? I think of a bound A: B as meaning "the type A implements the trait B", so I don't know what A = B should mean.

nikomatsakis commented Jul 22, 2016

Also, @ticki, have you seen https://gist.github.com/nikomatsakis/d5fe4a4b5b075b3d0ec9? That contains roughly the state of my thinking about negative bounds the last time I thought about them. =) It still roughly describes what I believe to be an implementable and plausible system (and one I might like to see) -- but I'll note that negativity doesn't really replace specialization, due to the limitations imposed by not using classical logic (that is, in particular, !(T: Trait) is not the same as T: !Trait -- put another way, to prove that T: !Trait, one must prove the existence of a negative impl, it's not enough to show the absence of a positive impl).

ticki commented Jul 22, 2016

I was trying to read some of this RFC and basically found that I don't really understand what it's trying to say. For example, this inference rule: [...]

The misunderstanding here is over the meaning of A: here, A is a trait, not a type. This RFC describes inference rules which mainly concern trait bounds on other traits (to avoid inconsistency). This rule in particular states that a cycle in the trait bounds implies that the two traits are the same.

Having A as a type would not make sense, but I will update the RFC to make it very clear that we are addressing trait bounds on other traits here.

but I'll note that negativity doesn't really replace specialization, due to the limitations imposed by not using classical logic (that is, in particular, !(T: Trait) is not the same as T: !Trait -- put another way, to prove that T: !Trait, one must prove the existence of a negative impl, it's not enough to show the absence of a positive impl).

I cannot think of anything where complementary traits aren't sufficient as a replacement for specialization. Do you have a concrete example on hand?

glaebhoerl commented Jul 23, 2016

@ticki Would this be legal?

trait Foo { fn foo(&self) -> bool; }
impl<T:  Bar> Foo for T { ... }
impl<T: !Bar> Foo for T { ... }
fn universal_foo<T>(x: &T) -> bool { x.foo() }
ticki commented Jul 23, 2016

Yes. A or !A covers the entire universe of types.

withoutboats commented Jul 26, 2016

Yes. A or !A covers the entire universe of types.

I'm very confused by this comment. First, this is the exact backcompat hazard that #586 ran aground on. Second, this seems to me to be contradictory to our prior conversation and this comment in the RFC:

Outside the crate itself, one cannot assume Type: !Trait, unless it is explicitly stated

Necessarily, this means that nonlocal types without any impl of A or !A are neither A nor !A.

ticki commented Jul 26, 2016

I was speaking about local items. For non-local ones, it is not true.

withoutboats commented Jul 26, 2016

@ticki I'm even more confused by that comment. Surely the type parameter of universal_foo is not constrained to local types. @glaebhoerl's code cannot be valid.

ticki commented Jul 26, 2016

Yeah, sorry. That example isn't local.

glaebhoerl commented Jul 26, 2016

So does that mean that this RFC would not (in itself) allow violating parametricity? (The example I posted would be a classic example of a violation, and apparently it would not be allowed; but maybe there are other examples?)

As far as I'm aware, this RFC is being pitched as an alternative to the specialization RFC, which has been accepted but not, as of yet, stabilized. If this RFC is in fact a viable replacement for it and preserves parametricity, then I would definitely prefer it; on the other hand, I strongly suspect that if examples like the one I posted are disallowed then @aturon and @nikomatsakis would regard it as not being a viable replacement. (Indeed, I just happened to scroll up and saw that @nikomatsakis wrote roughly the same thing.)

withoutboats commented Jul 26, 2016

@glaebhoerl I very strongly want Rust to gain the ability to express that the types which implement two traits are disjoint sets, and I do not see it as an alternative to specialization for exactly the reason you describe. Specialization and disjointness can sometimes sort of imitate one another, but they are different features with different behaviors.

In particular, I think disjointness is useful for the ability to describe a mutually exclusive 'trait hierarchy' defining abstractly the behavior of different scalar and collection types while explicitly declaring which traits cannot coherently be implemented by the same type (classic examples: Sequence and Dictionary are mutually exclusive behaviors for a collection to implement by most definitions of those behaviors). These traits would allow the standard library to be used more effectively as a ratchet between cousin crates. For example, serde could impl<T: Sequence> Serialize for T and impl<T: Dictionary> Serialize for T, allowing any crate defining sequential or associative data structures to have a serialization impl without depending on serde directly.

glaebhoerl commented Jul 26, 2016

@withoutboats Yeah, I wasn't making any comment either way on that aspect of it.

ticki commented Jul 27, 2016

@withoutboats I think I get your point, but I really cannot think of an example of specialization which strictly requires LEM (the law of the excluded middle).

withoutboats commented Jul 27, 2016

@ticki How would you implement the specialization of ToString in the standard library with negative impls?
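For reference, the blanket side of that question can be sketched on stable Rust today (MyToString and my_to_string are illustrative local names, not std's API, which instead defines ToString). The comment in the code points at the coherence conflict a specialized fast path would create:

```rust
use std::fmt::Display;

// Illustrative stand-in for std's ToString; names here are ours.
trait MyToString {
    fn my_to_string(&self) -> String;
}

// Mirrors std's blanket impl `impl<T: Display + ?Sized> ToString for T`.
impl<T: Display + ?Sized> MyToString for T {
    fn my_to_string(&self) -> String {
        format!("{}", self)
    }
}

// A specialized fast path such as `impl MyToString for String` would
// overlap with the blanket impl (since String: Display) and is rejected
// without specialization; a negative bound cannot carve `String` out of
// the blanket impl, because `Display + !String` is not expressible.

fn main() {
    assert_eq!(42.my_to_string(), "42");
    assert_eq!("hi".my_to_string(), "hi");
    println!("ok");
}
```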

Stebalien commented Aug 15, 2016

You gave two motivations:

  1. Denying built-in traits. I've wanted this but don't like it. That is, it's undeniably useful but means that implementing these traits becomes a breaking change (what if I decide to store something that implements Drop in my struct?).
  2. Inequality. This is a much simpler problem that would, IMO, fix a lot of the blanket impl problems (see this internals thread).

So, IMO, a good first step would be a smaller RFC that introduces negative equality bounds. As an added bonus, that might motivate someone to actually implement equality constraints in where clauses...


By a lot simpler, I mean Rust can already do this (although I don't know how robust it is; copied from my post on internals):

#![feature(optin_builtin_traits)]
pub trait NotSame {}
impl NotSame for .. {}
impl<T> !NotSame for (T, T) {}

// Example usage:
trait Trait<T> {}
impl<T> Trait<T> for T {}
impl<A, B> Trait<A> for B where (A, B): NotSame {}
withoutboats commented Sep 25, 2016

A reddit user has pointed out that even with the intuitionistic form, this feature could make adding impls a breaking change, specifically via blanket impls. Here's the example.

Here's crate A:

// crate A

trait Bar { }
trait Baz { }

Here's crate B, which depends on A:

struct Foo;

impl Bar for Foo { }
impl !Baz for Foo { }

Crate A is updated to include this impl:

impl<T> Baz for T where T: Bar { }

This is a breaking change.
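A closely related hazard already exists with purely positive impls: adding a blanket impl can conflict with a downstream impl of the same trait. A sketch using modules as stand-ins for the two crates (names taken from the example above; the `!Baz` impl is replaced by an ordinary positive impl, since negative impls are not expressible today):

```rust
// Module-based stand-in for the two crates in the example above.
mod crate_a {
    pub trait Bar {}
    pub trait Baz {}

    // The "update" that breaks crate_b. Uncommenting this triggers
    // error E0119 (conflicting implementations) against crate_b's impl:
    // impl<T: Bar> Baz for T {}
}

mod crate_b {
    use super::crate_a::{Bar, Baz};

    pub struct Foo;

    impl Bar for Foo {}
    impl Baz for Foo {} // stands in for `impl !Baz for Foo` in the RFC's syntax
}

fn main() {
    let _foo = crate_b::Foo;
    println!("ok");
}
```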

nikomatsakis commented Sep 26, 2016

@withoutboats

A reddit user has pointed out that even with the intuitionistic form, this feature could make adding impls a breaking change - specifically, blanket impls.

Interesting. I hadn't thought of this before, but seems obvious. In particular, I had been thinking about how the "intuitionistic" form can be modeled as two distinct traits (Bar, NotBar), with the only difference being that Bar and NotBar cannot be simultaneously implemented. But of course that's a big difference in examples like these! =)

I'll have to think on that. Not sure how to bring this together in my mind. It suggests that we may want a more limited form of negative reasoning, more targeted at specific use cases.

withoutboats commented Sep 26, 2016

In particular, I had been thinking about how the "intuitionistic" form can be modeled as two distinct traits (Bar, NotBar), with the only difference being that Bar and NotBar cannot be simultaneously implemented. But of course that's a big difference in examples like these! =)

Is it? Wouldn't this update give you the same kind of coherence issues if your code were this?

struct Foo;

impl Bar for Foo { }
impl NotBaz for Foo { }
withoutboats commented Sep 26, 2016

So I believe the basic statement, "adding an impl is not a breaking change", is not compatible with auto traits, no matter what. For example:

struct Foo;
trait Bar { }
auto trait Baz { }

impl<T: Baz> Bar for T { }

Adding impl !Baz for Foo { } is a breaking change.

So we've got to accept that the statement is actually "implementing a non-auto trait is not a breaking change." It might be a worthwhile trade off to modify it further, so that we have an enumerable list of cases in which implementing a trait is a breaking change:

  • When that trait is an auto trait
  • When that impl is a blanket impl involving only existing traits (where "blanket impl" includes impls for fundamental types like references to T), with any non-default members.

This adds to the complexity of the rules, but fortunately, at least, it seems like blanket impls like this would be a very rare addition.

nikomatsakis commented Sep 27, 2016

@withoutboats

Is it? wouldn't this update give you the same kind of coherence issues if your code was this?

struct Foo;
impl Bar for Foo { }
impl NotBaz for Foo { }

I don't understand what you are showing here. =(

Similarly this auto trait example feels incomplete?

struct Foo;
trait Bar { }
auto trait Baz { }
impl<T: Baz> Bar for T { }

What is the breaking change exactly?

withoutboats commented Sep 27, 2016

@nikomatsakis The blanket impl presents the same problem if you just use a mutually exclusive marker trait vs having !Trait impls, for example:

// crate A

trait Bar { }

#[exclusive(NotBaz)]
trait Baz { }

#[exclusive(Baz)]
trait NotBaz { }
// crate B

struct Foo;

impl Bar for Foo { }
impl NotBaz for Foo { }

Now adding this impl to crate A is a breaking change:

impl<T> Baz for T where T: Bar { }

I don't know what you mean when you say there's a difference between this and having literal negative impls; they seem the same to me.


The breaking change in the other case is this: If you have an auto trait, negatively implementing it for a type is a breaking change, because people could have been using that type as implementing the auto trait. Negative impls of auto traits are inherently breaking changes.

I think you could construct a case where doing a positive impl of Send for a type that contained a !Send field before could lead to overlapping impls, even under today's rules, but I'm not certain.

nikomatsakis commented Sep 28, 2016

@withoutboats

I don't know what you mean when you say there's a difference between this and having literal negative impls; they seem the same to me.

Are you referring to this comment that I made?

In particular, I had been thinking about how the "intuitionistic" form can be modeled as two distinct traits (Bar, NotBar), with the only difference being that Bar and NotBar cannot be simultaneously implemented. But of course that's a big difference in examples like these! =)

If so, what I was trying to say is that they are the same thing, not that they are different. What I meant is that, if you did not have any #[exclusive] attribute, then negative impls of traits don't really add any more complications than Foo and NotFoo -- but of course they also don't mean what you expect, since you implement both Foo and NotFoo. In other words, I think we're in violent agreement here.

The breaking change in the other case is this: If you have an auto trait, negatively implementing it for a type is a breaking change, because people could have been using that type as implementing the auto trait. Negative impls of auto traits are inherently breaking changes.

OK. I agree with this, but it doesn't worry me. That is, it seems obvious. =)

Auto traits are basically the place that we traded explicit opt-in for implicit opt-in. Part of that is that opting out is a breaking change. A more worrisome -- but also known in advance -- case would be that adding a private field can cause you to (e.g.) no longer implement Send, which is a breaking change, despite it being invisible to your callers. This implies that adding a field to a struct that has a trait object in its type is potentially a breaking change, since somebody somewhere may have an auto trait which is now foiled. This is unfortunate, but I am still happy with the trade overall. I may yet live to eat those words. =)
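The Send point above can be illustrated on stable Rust. In this sketch (Tagged and is_tagged are hypothetical names), a blanket impl is keyed off the auto trait Send, so whether a type satisfies Tagged silently depends on its auto-trait status, and changing that status downstream (e.g. by adding a non-Send private field) breaks callers:

```rust
use std::rc::Rc;

// A hypothetical local trait whose blanket impl keys off the auto trait
// Send. Which types are Tagged is determined implicitly by Send-ness.
trait Tagged {}
impl<T: Send> Tagged for T {}

fn is_tagged<T: Tagged>(_: &T) -> bool {
    true
}

fn main() {
    // u32 is Send, so it is Tagged today.
    assert!(is_tagged(&3u32));

    // Rc<u32> is !Send, so the following line would fail to compile:
    // is_tagged(&Rc::new(3));
    let _not_send = Rc::new(3);

    println!("ok");
}
```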

ubsan commented Dec 21, 2016

ping @nikomatsakis @ticki

status?

withoutboats commented Dec 27, 2016

I had a conversation about specialization with @aturon a few weeks ago in which he convinced me of something related to this (well I don't think he even said anything to this effect, but he still convinced me).

So the big issue with the coherence rules and making them more flexible is this - we want very badly to be able to say:

It is not a breaking change to add a new (valid) impl to your library.

This is, unfortunately, quite impossible to say. So we want the next best thing - a small, easy to grok, easy to remember list of exceptions.

With the changes to specialization that Niko has talked about on his blog, I believe we will be able to say:

It is not a breaking change to add a new (valid) impl to your library, with these exceptions:

  • If the receiver is a pre-existing type & the trait is an auto trait, it is a breaking change.
  • If the receiver is a type parameter & only pre-existing traits are involved, every item must be default or it is a breaking change.

The second of these gets wrecked by the introduction of arbitrary negative impls, because of the example in this comment. My position used to be that losing the ability to add blanket impls of pre-existing types was worth this feature, but now my position has changed. :-)

What I realized during the conversation with @aturon was that every use case I've had where I want negative bounds, I've wanted them as the supertrait for a trait. That is, I want this:

trait Rectangle: !Circle { }
trait Circle: !Rectangle { }

I do not ever want this:

struct Square;
impl !Circle for Square { }

In other words, I'm increasingly in favor of taking an approach where we only allow negative bounds as the bounds on the self type of a trait, and nowhere else.

burdges commented Jan 10, 2017

I like this approach where a trait might entail a negative bound, but impls cannot satisfy negative bounds.

In particular, we could recognize the existing disjointness of Copy and Drop to write zero-on-drop wrappers like

#[derive(Debug, Default)]
pub struct Secret<T>(pub T);

impl<T> Drop for Secret<T> where T: Copy {
    fn drop(&mut self) {
        unsafe { ::std::intrinsics::volatile_set_memory::<Secret<T>>(self, 0, 1); }
    }
}

impl<T> Drop for Secret<T> where T: Drop+Default {
    fn drop(&mut self) {
        unsafe {
            ::std::intrinsics::drop_in_place(&mut self.0);
            ::std::ptr::write_volatile::<Secret<T>>(self, Secret(Default::default()));
        }
    }
}

I have not yet attempted to make this work with specialization, but if that fails then one must currently write two separate types like:

#[derive(Debug, Default)]
pub struct Secret<T>(pub T) where T: Copy;

impl ..

#[derive(Debug, Default)]
pub struct DropSecret<T>(pub T) where T: Drop+Default;

impl ..
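The Copy-only half of this can be written on stable Rust today; a minimal sketch of the idea (Secret is the name from the comment above; this uses `T: Default` and `ptr::write_volatile` rather than raw intrinsics to stay on stable and well-defined, and real code should prefer an audited zeroizing crate):

```rust
use std::ptr;

// Wrapper that volatilely overwrites its contents on drop.
// Restricted to Copy + Default so the overwrite is trivially well-defined.
pub struct Secret<T: Copy + Default>(pub T);

impl<T: Copy + Default> Drop for Secret<T> {
    fn drop(&mut self) {
        // write_volatile keeps the compiler from eliding the overwrite
        // as a dead store.
        unsafe {
            ptr::write_volatile(&mut self.0, T::default());
        }
    }
}

fn main() {
    {
        let s = Secret(0xDEAD_BEEF_u32);
        assert_eq!(s.0, 0xDEAD_BEEF);
    } // s.0 is overwritten with 0 here, on drop
    println!("ok");
}
```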
withoutboats commented Jan 10, 2017

So if we only allow traits to prove disjointness, we need to provide an orphan rule restriction on that. That is, we can't allow this to compile:

trait Foo: !ToString { }

impl Foo for Vec<char> { }

Because that would make it a breaking change to add impl ToString for Vec<char> to std.

One way we could make this more self-justifying is by just making it incoherent to have Foo: !Bar unless you also have Bar: !Foo, rather than inferring Bar: !Foo from Foo: !Bar. This also makes your code more self-documenting.

withoutboats commented Jan 10, 2017

Basically, the right way to say this is that we can't infer Bar: !Foo from Foo: !Bar for non-local traits, for the same reason we cannot infer non-local non-implementation of traits for types. I kind of prefer presenting it to the user as requiring both bounds, because it has the same practical impact.

withoutboats commented Jan 27, 2017

Talked to @nikomatsakis a bit about this today. We talked about how, if we only support these supertrait negative bounds, it might be more sensible to conceptualize this in terms of a named "mutually exclusive group" of traits, where you declare that your trait is part of that group.

A problem is that it's hard to come up with a natural syntax. Some pretty wild syntaxes follow:

// negative bound syntax (equivalent to the subsequent examples)
trait Circle: !Rectangle { }
trait Rectangle: FooBar + !Circle { }

// alternative 1: a forward-declared "trait enum"
trait enum Shape;

trait Circle: Shape { }
trait Rectangle: Shape + FooBar { }

// alternative 2: a "trait enum" block
trait enum Shape {
    trait Circle { }
    trait Rectangle: FooBar { }
}

// alternative 3: a named "group"
group Shape;

trait Circle in Shape { }
trait Rectangle in Shape: FooBar { }
withoutboats commented Feb 23, 2017

@rfcbot fcp postpone

I'm proposing we postpone this and #1672 for a later date, when chalk is up and running, and we can put forward a unified proposal for how to expand the negative reasoning performed by our coherence system.

I still want something like this RFC someday, but keeping this RFC open is not tracking progress toward that goal at all, and I think these two RFCs will want to be majorly revised into a single proposal.

rfcbot commented Feb 23, 2017

Team member @withoutboats has proposed to postpone this. The next step is review by the rest of the tagged teams:

No concerns currently listed.

Once these reviewers reach consensus, this will enter its final comment period. If you spot a major issue that hasn't been raised at any point in this process, please speak up!

See this document for info about what commands tagged team members can give me.

rfcbot commented Mar 15, 2017

🔔 This is now entering its final comment period, as per the review above. 🔔

rfcbot commented Mar 25, 2017

The final comment period is now complete.

withoutboats commented Mar 26, 2017

Closing as postponed. Thanks for the RFC @ticki, I'm really optimistic that we'll see a feature in this space someday.
