RFC: impl specialization #1210
Conversation
aturon changed the title from "RFC: Impl specialization" to "RFC: impl specialization" on Jul 13, 2015
aturon added the T-lang label on Jul 13, 2015
aturon self-assigned this on Jul 13, 2015
sfackler reviewed on Jul 13, 2015: text/0000-impl-specialization.md

```rust
partial impl<T: Clone, Rhs> Add<Rhs> for T {
    fn add_assign(&mut self, rhs: R) {
```
alexcrichton reviewed on Jul 13, 2015: text/0000-impl-specialization.md

```
The solution proposed in this RFC is instead to treat specialization of items in
a trait as a per-item *opt in*, described in the next section.

## The `default` keyword
```

alexcrichton (Member), Jul 13, 2015
This mentions that `default` will be a keyword, but we currently have modules like `std::default` and methods like `Default::default`, so I think adding this as a keyword may be a breaking change (same with `partial` below). Could this RFC perhaps clarify that they'll be contextual keywords? I believe that should be backwards compatible, right?
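A stable-Rust sketch (mine, not from the thread) of why `default` has to be contextual rather than a full keyword: it is already a legal identifier today, so code like the following must keep compiling. `Config` and the free `default` function are made-up illustrations.

```rust
// `default` is an ordinary identifier in Rust today, both as a method
// name and as a free function name. A contextual keyword only takes on
// special meaning in positions like `default fn ...` inside an impl,
// leaving all of these uses valid.

#[derive(Debug)]
struct Config {
    retries: u32,
}

impl Default for Config {
    // A method literally named `default` -- must stay legal.
    fn default() -> Self {
        Config { retries: 3 }
    }
}

// A free function named `default` -- also legal today.
fn default() -> u32 {
    7
}

fn main() {
    let c = Config::default();
    let n = default();
    println!("{} {}", c.retries, n);
}
```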
bstrie (Contributor), Aug 1, 2015
Making them contextual keywords would be backwards compatible, but contextual keywords themselves are unprecedented in the language. Unfortunate, but probably inevitable.
Diggsey (Contributor), Jul 13, 2015
How does the compiler handle code such as this?

```rust
struct Foo<T>(T);

impl<T> Foo<T> {
    default fn test(self) { /* ... */ }
}

fn bar(p: Foo<u32>) {
    p.test();
}
```

It has no way of knowing whether a downstream crate will provide a more specialized implementation of `test` for `Foo<u32>`. At the moment problems like this are avoided because this can only happen in generic code, which can be monomorphised by the downstream crate, whereas in this case it can happen in code with no type parameters. It's as though `bar()` has a hidden type parameter.

Edit: maybe this specific example would be prevented by the coherence rules? However, it's not at all obvious that the coherence rules will prevent all such problems.
sfackler (Member), Jul 13, 2015
I'd imagine downstream crates can't specialize inherent methods of a type, just as they can't add inherent methods to a type today.
|
I'd imagine downstream crates can't specialize inherent methods of a type, just as they can't add inherent methods to a type today. |
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
jroesch (Member), Jul 14, 2015
I'm probably gonna come back to this a few times over the next couple days because I feel like there is a lot to chew on here, and even now, reading it for the third time, I have some things I want to think about more deeply.
Overall I like this approach and think that the specified algorithm is a good point in the design space.
As I mentioned on IRC, I think we should also follow up with a proposal for a detailed implementation strategy that we (the compiler team, core team, me, any other relevant parties) talk about for a period of time. From my perspective (and I hope others' too) it is important that we evaluate our implementation strategy and ensure that it isn't going to cause problems down the road in terms of stability, ICEs, or our future compiler design (incremental, parallel, etc.).
Stebalien (Contributor), Jul 14, 2015
This isn't relevant to the current RFC, but an alternative explicit-ordering mechanism would be a match-like syntax:

```rust
impl<'a, T: ?Sized, U: ?Sized> AsRef<U> for {
    &'a T where T: AsRef<U> => {
        fn as_ref(&self) -> &U {
            <T as AsRef<U>>::as_ref(*self)
        }
    },
    T where T == U => {
        fn as_ref(&self) -> &U {
            self
        }
    },
}
```
stevenblenkinsop, Jul 14, 2015
Another alternative [edit] to explicit ordering [/edit] would be to break the overlap using negative bounds. Obviously this is contingent on negative bounds being accepted. This seems like a better approach, since it preserves the intuitive rule used in this proposal along with its various properties. Also, allowing I < J when each of apply(I) and apply(J) contains elements the other does not makes adding `super` down the road more awkward, since it'll reference something different depending on whether a particular type ∈ apply(I) ∪ apply(J) also belongs to the intersection or not.
Edit: clarified that I'm talking about an alternative to explicit ordering, not to specialization.
mdinger reviewed on Jul 14, 2015: text/0000-impl-specialization.md

```
This partial impl does *not* mean that `Add` is implemented for all `Clone`
data, but jut that when you do impl `Add` and `Self: Clone`, you can leave off
```
withoutboats (Contributor), Jul 14, 2015
I'm not very comfortable with the idea of introducing a mechanism which can create very tall inheritance trees; if the number of possible `default` impls is unbounded, it could be a major pain to determine which implementation is actually being run. Inheritance-oriented code structuring is pretty widely thought to be bad design, and an advantage of Rust is that so far it has strongly discouraged that kind of code.
Would it be possible therefore to limit the number of `default` impls to one? That is, any two default implementations would conflict. This would make it very easy to determine which impl is being executed in all cases. Is this a restriction that would be very limiting in practice? Can someone familiar with Servo's inheritance use case say whether it needs n-deep inheritance? Would enforcing this rule require additional orphan rules making special determinations about where default impls could be declared (I'm fairly sure the answer is "no")?
I don't think negative bounds and specialization are alternatives to one another at all. This RFC doesn't mention PR #1148, which enables negative bounds without the backcompat hazard that troubled PR #586. I think that these would be complementary changes to the coherence rules which address one another's limitations. Not trying to trumpet my own RFC -- just pointing out that it is a related proposal.
Haskell has extensions which implement some form of specialization (e.g. OverlappingInstances). It would probably be a good idea to ask in the Haskell community about the pitfalls that implementations of type class specialization have run into. I think this is a situation in which Rust's more OO-influenced heritage makes specialization more useful for us than it was for Haskell, though.
We'll probably need a new name for `impl Trait for .. { }` impls if this RFC is accepted.
Regardless of these, this is an awesome and impressively exhaustive RFC!
llogiq reviewed on Jul 14, 2015: text/0000-impl-specialization.md

```rust
impl<T> Debug for T where T: Display {
    fn fmt(&self, f: &mut fmt::Formatter) -> fmt::Result {
```
stevenblenkinsop, Jul 14, 2015
The text hasn't motivated `default` at this point. The example is of what the proposal is trying to allow "at the simplest level"—i.e. before the complexity of needing a `default` keyword is added—not of what the proposed syntax will ultimately look like once this complexity is taken into account. I thought this was clear, but maybe it could be clarified.
It might be a good idea to limit the appearance of motivating examples which don't follow the proposed syntax. One way would just be to add text saying "ignore the `default` keyword for now, it'll be motivated later". This would be unfortunate though, since I liked the style of exposition used here, and adding these caveats would diminish it somewhat. Perhaps a better option is just to add a comment in the example itself saying:

```rust
// Note: This example will not work as written under this proposal.
```
pnkfelix (Member), Jul 16, 2015
At the very least, any example that does not actually follow the expected end-syntax could have an explicit annotation saying so, perhaps with a pointer to an end-appendix that shows each such example in the final expected form.
(Also, the "Motivation" section did at least say `default` in one example -- so there is at least precedent for using it here as well, if one does not want to go the route of adding an appendix with all of the examples according to their final formulation.)
llogiq reviewed on Jul 14, 2015: text/0000-impl-specialization.md

```
- You have to lift out trait parameters to enable specialization, as in the
  `Extend` example above. The RFC mentions a few ways of dealing with this
  limitation -- either by employing inherent item specialization, or by
  eventually generalizing HRTBs.
```
llogiq (Contributor), Jul 14, 2015
Overall, I'm quite happy with the proposal; I'm a bit worried about corner cases (especially regarding dropck), but I think we should be able to sort them out.
Of course, once this RFC is accepted, Rust will cease to even be a language of medium complexity. People will misuse this feature to create a maze of twisty little fns, all alike, then complain when they can no longer reason about who calls whom. (Case in point: I have a very evil example using 3 very small Java classes fitting on half a page in 10-point that I used in an exam once. Of more than 600 CS students, only 1 got it right.) (Edit: This blog post shows a reduced, less evil example.)
Therefore the only thing I don't like is the use of the `default` keyword (despite having it in Java interfaces). I want the keyword to be long, outlandish and hard to remember, so folks will have to think twice before writing it. Something like: `iknowwhatidosoletmeoverridelater` (only partially tongue-in-cheek).
bill-myers, Jul 14, 2015
Requiring that apply(I) and apply(J) are either disjoint or one contained in the other seems excessively restrictive.
A less restrictive rule could be this set of equivalent rules:
- The set of impls I_j that apply to any given type T and trait R has a minimum element relative to the I <= J ordering in the RFC (which is the impl that would be chosen).
- The set of apply(I_j) that contain any given type-tuple T has a minimum element relative to the set-inclusion ordering.
- The intersection of the sets of apply(I_j) that contain any given type-tuple T is equal to apply(I) for some I.
- For any two impls I and J, the intersection of apply(I) and apply(J) is equal to the union of apply(I_k) for all apply(I_k) that are subsets of both apply(I) and apply(J).

This allows sets of impls like this:

```rust
impl<T, U> Foo for (T, U)  // A
impl<T> Foo for (T, int)   // B
impl<U> Foo for (int, U)   // C
impl Foo for (int, int)    // D
```

This would be allowed since the intersection of apply(B) and apply(C) is equal to apply(D), the only apply-set contained in both, and all other apply-set pairs are contained in each other.
Not sure about the algorithmic complexity of checking for this though. It appears to be equivalent to SAT solving, but this also applies to checking the exhaustiveness of match patterns, so it's not necessarily an issue.
A possible middle ground is to require that the intersection of apply(I) and apply(J) is equal to apply(K) for just one K, which should eliminate the SAT equivalency, and might still be expressive enough.
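The relaxed rule above can be checked concretely on a toy model. The sketch below (my own; `apply` and `check_lattice` are made-up names, and the universe is just the pairs over {int, str}) encodes each impl's apply-set as a set of type pairs and verifies that apply(B) ∩ apply(C) is exactly apply(D):

```rust
use std::collections::BTreeSet;

type Pair = (&'static str, &'static str);

// The apply-set of an impl, modeled as the subset of a finite universe
// of type pairs matching a predicate.
fn apply(universe: &BTreeSet<Pair>, pred: impl Fn(Pair) -> bool) -> BTreeSet<Pair> {
    universe.iter().cloned().filter(|&p| pred(p)).collect()
}

fn check_lattice() -> bool {
    let tys = ["int", "str"];
    let universe: BTreeSet<Pair> =
        tys.iter().flat_map(|&a| tys.iter().map(move |&b| (a, b))).collect();

    let a = apply(&universe, |_| true);                // impl<T, U> Foo for (T, U)
    let b = apply(&universe, |(_, u)| u == "int");     // impl<T>    Foo for (T, int)
    let c = apply(&universe, |(t, _)| t == "int");     // impl<U>    Foo for (int, U)
    let d = apply(&universe, |p| p == ("int", "int")); // impl       Foo for (int, int)

    // B and C overlap without either containing the other...
    let incomparable = !b.is_subset(&c) && !c.is_subset(&b);
    // ...but their intersection is exactly apply(D), the only apply-set
    // contained in both, so the relaxed rule accepts this impl set.
    let bc: BTreeSet<Pair> = b.intersection(&c).cloned().collect();
    incomparable && bc == d && d.is_subset(&b) && d.is_subset(&c) && b.is_subset(&a)
}

fn main() {
    println!("relaxed rule accepts {{A, B, C, D}}: {}", check_lattice());
}
```

Under the RFC's stricter subset rule the pair (B, C) alone would be rejected, since neither apply-set contains the other.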
tbu- (Contributor), Jul 14, 2015
Another motivating example is the `ToString` trait, which should really be specialized for `&str`.
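For context, a small stable-Rust demonstration (mine, not from the thread) of the two paths being contrasted: at the time of this discussion `to_string` on a `&str` went through the `Display` formatting machinery via the blanket `ToString` impl, while `String::from` was a direct copy. A specialized impl would make the former as cheap as the latter.

```rust
fn main() {
    let s: &str = "hello";
    let via_to_string = s.to_string(); // routed through the Display-based blanket impl
    let via_from = String::from(s);    // a direct byte copy

    // Both produce the same String; specialization is about making the
    // first path cost the same as the second.
    assert_eq!(via_to_string, via_from);
    println!("{}", via_to_string);
}
```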
Kimundi (Member), Jul 14, 2015
I like it! My only worry is that the language is getting more complex, but arguably that's not avoidable in this case.
Am I right in thinking that this RFC would enable backwards-compatibly solving the `Copy => Clone` situation like this?

```rust
partial impl<T> Clone for T where T: Copy {
    default fn clone(&self) -> Self {
        *self
    }
}
```
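A stable-Rust sketch of what such a blanket impl would automate: today every `Copy` type still supplies (or derives) its own `Clone`, and for a `Copy` type the body is always the same copy. `Point` is a made-up example type.

```rust
// The hand-written impl that a blanket `partial impl<T: Copy> Clone for T`
// would render unnecessary: for a Copy type, clone is just a copy.
#[derive(Copy)]
struct Point {
    x: i32,
    y: i32,
}

impl Clone for Point {
    fn clone(&self) -> Self {
        *self // a Copy type clones by copying
    }
}

fn main() {
    let p = Point { x: 1, y: 2 };
    let q = p.clone();
    println!("{} {}", q.x, q.y);
}
```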
arielb1 (Contributor), Jul 14, 2015
Wow, this basically destroys dropck - we may have to go back to a (maybe compiler-guaranteed) form of `#[unsafe_destructor]`.

> Am I right in thinking that this RFC would enable backwards-compatibly solving the `Copy => Clone` situation like this?

No. That requires "lattice impls", which is explicitly not part of this RFC.
@arielb1 please elaborate how this 'destroys dropck'.
arielb1 (Contributor), Jul 14, 2015
It completely destroys parametricity, and therefore dropck. I think it may even be a better option to make specialization unsafe for that reason.
To see how it destroys parametricity, suppose you have a completely innocent blanket impl for a `#[fundamental]` type:

```rust
impl<'a, T> Clone for &'a T { fn clone(&self) -> Self { *self } }
```

It can be trivially called by a destructor:

```rust
struct Zook<T>(T);
fn innocent<T>(t: &T) { <&T as Clone>::clone(&t) /* completely innocent, isn't it? */; }
impl<T> Drop for Zook<T> { fn drop(&mut self) { innocent(&self.0); } }
```

However, this can be abused:

```rust
struct Evil<'b>(&'b OwnsResources /* anything that is invalid after being dropped */);
impl<'a, 'b> Clone for &'a Evil<'b> {
    fn clone(&self) -> Self {
        println!("I can access {} even after it was freed! muhahahaha", self.0);
        loop {}
    }
}
fn main() {
    let (zook, owns_resources);
    owns_resources = OwnsResources::allocate();
    zook = Zook(Evil(&owns_resources));
}
```
Diggsey (Contributor), Jul 14, 2015
> It completely destroys parametricity

Isn't this problem what the `Reflect` trait was intended to solve? I.e. you're guaranteed parametricity so long as you don't have a `Reflect` bound. Default items in an impl block could be allowed only if the trait inherits from `Reflect`?
aturon (Member), Jul 14, 2015
> Wow, this basically destroys dropck

@nikomatsakis, @pnkfelix and I had discussed the dropck issue a while back (note the brief discussion in Unresolved Questions). There are a few avenues for preventing the interaction you're describing -- note, for example, that in the RFC proposal you need a `default` qualifier to allow overriding on such a blanket impl. We have been considering rules that would disallow use of `default` (and therefore specialization) in circumstances where it's possible to apply the relevant impl starting with a type T with no bounds -- basically, the other side of the parametricity requirement for dropck. I left it as an Unresolved Question in the RFC mainly because I'd like to prototype first before proposing a firm rule. But this question must be resolved before we could move forward with this RFC.

> Am I right in thinking that this RFC would enable backwards-compatibly solving the `Copy => Clone` situation like this? `partial impl<T> Clone for T where T: Copy { default fn clone(&self) -> Self { *self } }` (@Kimundi)

> No. That requires "lattice impls", which is explicitly not part of this RFC. (@arielb1)

The RFC talks a bit about how we could handle this case -- in particular, the overlap/specialization requirements for `partial` impls need not be as stringent as for full impls. I'm not sure whether it's worth pushing a full design through as part of this RFC, or loosening the rules later after we've gained some experience.
aturon (Member), Jul 14, 2015
Note that ruling out bad dropck interaction is nontrivial because of examples like the following:

```rust
trait Marker {}
trait Bad {
    fn foo(&self);
}
impl<T: Marker> Bad for T {
    default fn foo(&self) {}
}
impl<'a, T> Marker for &'a T {}
```

Here, given an arbitrary `T` you cannot deduce that `T: Bad`, but if you have an `&T` you can do so. Thus, the "parametricity check" for use of `default` would have to consider ways you could build a type around a type parameter that results in an applicable blanket impl. (Basically, we want to say that for there to be a specializable impl, you need to have a "nontrivial" bound on T, which then means that the parametricity dropck relies on still holds good.)
UPDATE: the main questions here are: can we convince ourselves that such a restriction retains the needed parametricity for dropck? And does such a restriction still support the main use cases for specialization? This is why I wanted to experiment a bit more before laying out a detailed proposal for the restriction.
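The shape of this example can be reproduced on stable Rust by dropping `default` (so nothing here actually specializes): with only the marker impl for references, `&T: Bad` holds even though `T` itself carries no bounds at all. `call_on_unbounded` is a made-up name for illustration.

```rust
trait Marker {}
trait Bad {
    fn foo(&self) -> &'static str;
}

// Blanket impl gated on the marker trait.
impl<T: Marker> Bad for T {
    fn foo(&self) -> &'static str {
        "blanket"
    }
}

// Any reference type gets the marker, regardless of what it points to.
impl<'a, T> Marker for &'a T {}

// T has no bounds at all, yet `&T: Bad` holds via the marker impl --
// exactly the "build a type around the parameter" escape hatch described
// above.
fn call_on_unbounded<T>(t: &T) -> &'static str {
    t.foo()
}

fn main() {
    println!("{}", call_on_unbounded(&42));
}
```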
arielb1 (Contributor), Jul 14, 2015
Wouldn't `partial impl<T: Copy> Clone for T` be useless? It would also still conflict with the likes of `impl<U: Clone, V: Clone> Clone for (U, V)`.
Also, "nontrivial bound": the absence of `#[fundamental]` basically saves us if we restrict to the single-constructor case, but I wouldn't want to have a rule that relies on that.
arielb1 (Contributor), Jul 14, 2015
Even more fun from Unsound Labs: this useful-ish and rather innocent code:

```rust
use std::fmt;

pub struct Zook<T>(T);
impl<T> Drop for Zook<T> { fn drop(&mut self) { log(&self.0); } }

fn log<T>(t: &T) {
    let obj = Object(Box::new(t));
    println!("dropped object is {:?}", obj);
}

struct Object<T>(Box<T>);
impl<T> fmt::Debug for Object<T> {
    default fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
        write!(f, "<instance at {:?}>", (&*self.0) as *const T)
    }
}
impl<T: fmt::Debug> fmt::Debug for Object<T> {
    fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
        self.0.fmt(f)
    }
}
```

can be exploited by this wolf in sheep's clothing (not even a single evil structure in sight!):

```rust
fn main() {
    let (zook, data);
    data = vec![1, 2, 3];
    zook = Zook(Object(Box::new(&data)));
}
```

Enjoy!
bluss, Jul 14, 2015
@arielb1 Do you have any idea for how we can provide simple specialization? An example is `PartialEq<&[U]> for &[T]`. For the special case of `PartialEq<&[u8]> for &[u8]`, we'd like to call memcmp. It's infeasible to use `T: Reflect` for this.
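A stand-in sketch (mine) of the fast path being asked for here: the generic slice `PartialEq` compares element by element, while a specialized `PartialEq<&[u8]>` impl could hand the whole buffer to memcmp at once. `bytes_eq` only mimics the shape of that path; a real specialized impl would call the memcmp intrinsic instead of this loop.

```rust
// The signature a byte-slice fast path would have: length check first,
// then a single bulk comparison (modeled here with an elementwise loop).
fn bytes_eq(a: &[u8], b: &[u8]) -> bool {
    a.len() == b.len() && a.iter().zip(b).all(|(x, y)| x == y)
}

fn main() {
    assert!(bytes_eq(b"abc", b"abc"));
    assert!(!bytes_eq(b"abc", b"ab"));
    println!("ok");
}
```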
tbu- (Contributor), Jul 14, 2015
@bluss That sounds like one of the things the optimizer should be able to do reliably.
bluss, Jul 14, 2015
@tbu-: it does not (LLVM has no loop-idiom recognition for memcmp in this case), and it's one of the example motivations for specialization.
That should probably also be filed for LLVM, then.
aturon (Member), Jul 14, 2015
@arielb1 The second example would also be ruled out by the draft restriction (in particular, the blanket impl would not be allowed to use `default` due to the lack of any constraint on `T`). This is obviously a loss in expressiveness, since such overrideable blanket impls can be useful, but doesn't appear to affect the primary use cases for specialization.
arielb1 (Contributor), Jul 14, 2015
"Any constraint"? You could have `Zook` create a `MyWrapper<T>` and lift debug + blanket-impl all relevant traits to it, from a crate foreign to the one that declares `Object`.
aturon (Member), Jul 14, 2015
@arielb1 I don't know what you have in mind with `MyWrapper`, but to be clear, "any constraint" wasn't a paraphrasing of the rule, it was a fact about the blanket impl as written.
aturon (Member), Jul 14, 2015
@bill-myers That's an interesting suggestion; it'd be helpful to see whether it helps with the examples given in the "Limitations" section, or whether there are other compelling examples it helps with.
Personally, my inclination is still to start with the relatively simple subset rule proposed here, which should be forwards-compatible with extensions based on more subtle apply rules, like yours.
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
arielb1
Jul 14, 2015
Contributor
You could of course have
fn log<T>(t: &T) {
    struct MyWrapper<T>(T);
    impl<T: fmt::Debug> fmt::Debug for MyWrapper<T> {
        fn fmt(&self, f: &mut fmt::Formatter) -> Result<(), fmt::Error> {
            self.0.fmt(f)
        }
    }
    impl<T> Marker for MyWrapper<T> { /* for every marker */
        fn witness_fn(/*..*/) { loop {} }
    }
    let obj = Object(Box::new(MyWrapper(t)));
    println!("dropped object is {:?}", obj);
}

We specifically require the set of impls a trait-ref matches to be a chain, not a lattice, for the moment.
aturon
Jul 14, 2015
Member
@arielb1 I'm sorry, I'm not sure what the most recent comment is getting at. What I was saying before is that the blanket impl of Debug for Object<T>, using default, would by itself be disallowed.
Regarding wrappers and markers, my earlier comment was trying to get at some of the subtlety. In particular, what I have in mind is a check for whether an impl with a default method can ever apply to a type containing a skolemized type variable, basically. Such a restriction should prevent you from calling specializable methods on "fully parametric" type parameters, i.e., should restore parametricity in the cases dropck requires it. (On the dropck side, of course the requirement is that type parameters have no bounds involving traits with any methods.)
So, in your earlier example, Object<T> where T is skolemized is covered by the blanket impl (since there is no additional bound given by a where clause).
Similarly, in your first example, the impl
impl<'a, T> Clone for &'a T { default fn clone(&self) -> Self { *self } }

is not allowed since it can apply to &'a T where T is a skolemized variable.
And in my example,
trait Marker {}
trait Bad {
fn foo(&self);
}
impl<T: Marker> Bad for T {
default fn foo(&self) {}
}
impl<'a, T> Marker for &'a T {}

The blanket impl of Bad is not allowed since it applies to &'a T where T is skolemized.
Hopefully that helps clarify the sort of rule I have in mind. As I said before, I'd like to play around a bit more before trying to formalize the rule; I hope to do so soon.
huonw
Jul 14, 2015
Member
> Do you have any idea for how we can provide simple specialization? An example is PartialEq<&[U]> for &[T]. For the special case of PartialEq<&[u8]> for &[u8], we'd like to call memcmp. It's infeasible to use T: Reflect for this.
I believe this is OK, because the PartialEq<&[U]> for &[T] impl has a where clause (T: PartialEq<U>).
bluss
Jul 14, 2015
I don't think this feature is worth introducing two new keywords (regardless if they are scoped or not).
I do think it's a very welcome feature, and that it solves real problems. My main focus is performance, and I think we can use it for cases where we really can't convince llvm to compile the generic code well.
We've got some core performance issues with our most basic types (slices and vectors, often with bytes or integers as elements). To not have closer to “optimal” performance is almost embarrassing. Specialization lets us tweak these cases by themselves so that we get there.
- == for byte slices could be up to an order of magnitude faster by using the platform's memcmp or similar.
- The Zip iterator adaptor compiles to suboptimal code for slice iterators, so a common operation like iterating two pieces of data in lockstep has more overhead than it should.
- Extend / FromIterator with slices and vectors -- writing bytes to a vector has much more overhead than it should.
Gankro
Jul 14, 2015
Contributor
@bluss I believe those cases would be better handled by rust-lang/rust#26902, as this would in principle benefit all code uniformly; not just code written in the standard library on a case-by-case basis.
bluss
Jul 14, 2015
@Gankro As an interesting counterposition, specialization benefits all users of Rust uniformly, because they can specialize to resolve the specific optimizations they need. Improving llvm / rustc is an option only the rustc vendor has. In general Rust as a language is already great at giving power to the users.
target-san
Feb 23, 2016
Sorry for intervening. But could you please put in some more details on why we need the default keyword at all, and cannot make everything default by default? There's some explanation for final-by-default in the RFC, but I couldn't grasp it.
If it's about explicitness, then it makes things too verbose.
If it's about dynamicity, then comparing to C++ etc. isn't quite correct. Because Rust has dynamic polymorphism outside types (trait objects). So, basically, what we have:
- Traits' default methods are default by default (yikes).
- Impl methods are final by default.
I'd personally prefer openness and opt-out, instead of closedness and opt-in. Though, it seems everything is already decided.
stevenblenkinsop
Feb 23, 2016
It's about backwards compatibility. Allowing people to specialize existing impls could cause Bad Things to happen. Currently, unsafe code is allowed to trust other methods in the same impl body to maintain necessary invariants, since it can know for sure that that's the code which will actually execute. If this feature allowed downstream code to specialize those impls, they could swap in a method that doesn't maintain those invariants without needing to write "unsafe", so this could allow for safety violations in safe code, which is a huge no no. Default methods in trait bodies can already be specialized by any impl, so they both can and have to remain specializable without any additional annotation.
nikomatsakis
Feb 23, 2016
Contributor
There is also a separate, stronger backwards compatibility concern. In particular, if an impl declares an item as default, that triggers more conservative type-checking rules to account for the fact that it may be specialized. So if we made everything defaultable by default, then existing impls which currently type check would not.
target-san
Feb 23, 2016
@nikomatsakis @stevenblenkinsop Uh, that's a strong argument. Will re-read RFC.
aturon
Feb 23, 2016
Member
@target-san (There's also a slightly more detailed answer to the same question earlier in the thread.)
nikomatsakis
Feb 23, 2016
Contributor
Huzzah! The @rust-lang/lang team has decided to accept this RFC, with the following unresolved questions to be firmly settled before stabilization:
- Should associated types be specializable at all?
  - Current answer: yes
- When should projection reveal a default type? Never during typeck? Or when monomorphic?
  - Current answer: never during typeck
- Should default trait items be considered default (i.e. specializable)?
  - Current answer: yes
- Should we have default impl (where all items are default) or partial impl (where default is opt-in)?
  - Current answer: default impl
- How should we deal with lifetime dispatchability?
  - Current answer: detect when it's happening with an error-by-default lint; select the most specific impl whose application is not lifetime-dependent.
Since entering FCP, conversation has primarily focused on parametricity. Parametricity is a property that, intuitively, means that "if a function foo has a generic type parameter T: Bar, then calling foo can only invoke methods from Bar and nothing else". The Rust language today roughly preserves parametricity, with some special-casing around sizeof, zero-sized types, and a few other nitty-gritty details. Adopting specialization will mean that we no longer preserve parametricity: one can define specialized impls (of Bar or possibly other traits) that can then "reveal" the underlying type T and which then call methods outside of Bar.
The conclusion of the lang team was that, on balance, we should adopt the RFC as is, even though it means we no longer have parametricity. The overall summary is this:
- achieving true zero-cost abstractions ultimately requires some sort of special-casing -- for example, writing a customized variant on extend that uses memcpy for slices of copy types -- since you want to bypass the "generic" implementation when you can write a more efficient one that is tailored to specific types or capabilities
  - this goal is fundamentally at odds with parametricity
- we haven't found a compelling need for parametricity in reasoning about Rust code, whereas having effective, zero-cost abstractions is crucial
  - the main concrete use case of parametricity (dropck) encountered thus far is, in fact, ill served by parametricity and perhaps better suited to a different analysis
- when it comes to unsafe code, any guarantees that one might have wanted from parametricity can also be achieved -- and can be achieved more robustly -- with privacy
  - although the ergonomics here could admittedly be improved
- gross violations of parametricity can be surprising, but this largely reflects a failure of API design and conventions rather than something which must be forbidden outright; establishing strong conventions can help avoid such mistakes
- if we find we need parametricity for some specific purpose, we can add an "opt-in" form as a backwards compatible extension
  - but see some thoughts below
The primary alternative proposal that retains parametricity while still allowing for zero-cost abstractions is to make specialization unsafe. The idea would be that if you specialize an impl unsafely, you are required to behave in an "equivalent" fashion. There are several concerns with this approach:
- defining precisely what kinds of specialized behavior is "equivalent" is subtle at best and will often depend on context
- confusing requirements for when to use unsafe may be worse than no unsafe at all
- finally, there are good use cases for violating parametricity
  - this can be accommodated by adding optional T: Reflect bounds into the system to enable non-parametric use-cases, but that carries downsides as well (covered in the next paragraph)
Finally, it is possible to use the Reflect bound to make parametricity explicit. This could be either an "opt-in" proposal, as I outlined, or an "opt-out" variant. The difference is basically one of defaults. If I write fn foo<T>(), am I allowed to use specialized impls for T or not? Whichever default one chooses, there are some general concerns with this idea:
- the opt-out approach is really only practical if you build on the unsafe impls described in the previous paragraph, since otherwise even performance-oriented specialization could not be added backwards compatibly
- there is the risk of splitting the ecosystem into "parametric" and "non-parametric" functions
  - parametric functions cannot call into non-parametric ones unless the concrete types are known
- the only way to permit a parametric fn to call into a non-parametric one without some form of newtype is to accept incoherence -- that is, that the same trait applied to the same types may resolve to different impls depending on context
In conclusion, thanks everyone for an edifying RFC discussion. This thread has consistently been of very high quality, with many interesting twists and turns. At this point, the primary focus turns to the implementation (which @aturon has been working on). Hopefully experience will help us feel more secure in our answers to the unresolved questions listed initially, as well as the decision on parametricity. Please do leave your thoughts and comments on the tracking issue.
aturon
Feb 23, 2016
Member
I want to reiterate @nikomatsakis's thanks for the comments on this RFC. I'm really proud of the discussion we've had here -- the opposite of the bikeshed parable :)
I also wanted to mention that we hope to continue exploring extensions beyond the simple "chain" rule of specialization in this RFC. @nikomatsakis has a blog post in preparation on the topic, so stay tuned!
nikomatsakis
referenced this pull request
Feb 23, 2016
Open
Tracking issue for specialization (RFC 1210) #31844
Tracking issue: rust-lang/rust#31844
golddranks
Feb 23, 2016
About the strong conventions (#1210 (comment)), I'd love to hear ideas on how we are going to encourage the culture of documenting these things clearly.
Many things in Rust are being automatically enforced by the strong type system, lints, and, on the build-engineering level, easy built-in support for testing, CI & testing bots, etc.
These things are great because they are largely automatic – they are opt-out rather than opt-in. How are we going to make clear communication and documentation about the expected "variety" of particular specializations a similarly "opt-out" kind of thing?
nikomatsakis
merged commit d7441bf
into
rust-lang:master
Feb 23, 2016
burdges
Feb 23, 2016
I think default trait items being considered default sounds kinda confusing. Aren't these just completely orthogonal matters? I'd imagine "default trait items" could be renamed in the documentation since they require no keywords, maybe "trait proposed" or similar.
I do not fully understand the issue with parametric functions calling into non-parametric functions. Yes, incoherence arises, but you'll retain more parametricity than otherwise. It's not enough for your dropck issues, but it still aids humans reading code. I think more so than privacy:
"Any [mental tools] that let you offload [the informal proofs involved in reading code] is a huge win" - Yaron Minsky
I do not fully understand the problem with splitting the ecosystem into parametric and non-parametric functions. Is it simply that some existing APIs must not change but should become specializable? If not, why would splitting the ecosystem be bad? It worked great splitting it into safe and unsafe. :)
Diggsey
Feb 23, 2016
Contributor
> it still aids humans reading code.
In that regard I'd choose coherency over parametricity any day! Having different implementations used for the same type in different situations would be terrible. (Also it would make using specialization for low-level, unsafe optimizations impossible, because at any time the more general implementation might be called instead, which might not even be valid for the specific case)
burdges
Feb 23, 2016
@golddranks I had the Cargo IANAL issue #1396 in mind when I wrote that "strong conventions" comment, so maybe cargo giving warnings when using crates with undocumented specializations or something.
dylanede
Feb 27, 2016
@aturon @nikomatsakis
Here's something to think about. This is using the current implementation. This shows a compile-time dependency on the hidden type of a defaulted associated type. Whether this is relying on behaviour not valid according to the RFC is another matter.
fn main() {
    enum Undefined {}
    trait Opaque {
        type Out;
    }
    impl<T> Opaque for T {
        default type Out = Undefined;
    }
    impl Opaque for () {
        type Out = bool;
    }
    trait Same<A, B> {
        type T;
        fn cast(a: A) -> B;
    }
    impl<A> Same<A, A> for () {
        type T = A;
        fn cast(a: A) -> A {
            a
        }
    }
    fn cast<X, Y>(x: X) -> Y where (): Same<X, Y> {
        <() as Same<X, Y>>::cast(x)
    }
    // Bool is an "opaque" type that is secretly bool
    // This will not work: let x: Bool = true;
    type Bool = <() as Opaque>::Out;
    // But the following does work:
    // compilation dependency:
    {
        fn foo<X, Y>(y: Y) where (): Same<X, Y> {
        }
        let x: bool = true;
        foo::<Bool, _>(x); // only compiles when Bool is bool
        // also, if Bool is not bool, the compilation error message will report the normalised type
    }
    // getting a bool from a Bool works
    {
        let x: Bool = unimplemented!(); // a Bool
        let x: bool = cast(x); // x is now a bool; compiling requires Bool == bool
    }
    // getting a Bool from a bool won't compile though
    {
        let x: bool = true;
        let x: Bool = cast(x); // doesn't compile
    }
}
pczarn
Mar 2, 2016
Does specialization allow us to impl Fn<((A, B),)> for T: Fn<(A, B)> etc.? With these impls, redundant parentheses around closure arguments could be omitted, e.g. when iterating over zipped iterators.
llogiq
Mar 2, 2016
Contributor
I'd rather add a .pairmap(self, F) -> T where F: Fn(A, B) -> T. Less magic needed, and it'd be backwards compatible, too.
tikue
Mar 2, 2016
Sounds like a separate RFC.
aturon commented Jul 13, 2015 • edited by mbrubeck (most recently Jun 2, 2016)
This RFC proposes a design for specialization, which permits multiple impl blocks to apply to the same type/trait, so long as one of the blocks is clearly "more specific" than the other. The more specific impl block is used in a case of overlap. The design proposed here also supports refining default trait implementations based on specifics about the types involved.
Altogether, this relatively small extension to the trait system yields benefits
for performance and code reuse, and it lays the groundwork for an "efficient
inheritance" scheme that is largely based on the trait system (described in a
forthcoming companion RFC).
Rendered
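A minimal sketch of the proposed surface syntax (this requires the unstable #![feature(specialization)] gate on nightly, so it is shown as an illustrative sketch rather than stable, runnable code; the Greet trait is hypothetical):

```rust
#![feature(specialization)]

trait Greet {
    fn greet(&self) -> String;
}

// Blanket impl: `default` opts this item in to being specialized.
impl<T> Greet for T {
    default fn greet(&self) -> String {
        "hello".to_string()
    }
}

// More specific impl: it applies to a strict subset of the types the
// blanket impl covers, so under the RFC's rules it may override `greet`.
impl Greet for String {
    fn greet(&self) -> String {
        format!("hello, {}", self)
    }
}
```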