Tracking Issue for RFC 213: Default Type Parameter Fallback #27336
Comments
jroesch self-assigned this Jul 27, 2015
steveklabnik added the B-RFC-approved label Jul 29, 2015
alexcrichton added the T-lang label Aug 11, 2015
nikomatsakis added the B-unstable label Aug 14, 2015
What is the status of this?
pnkfelix added the I-nominated label Dec 1, 2015
mahkoh (Contributor) commented Dec 11, 2015
This doesn't seem to be working properly.
#![crate_type = "lib"]
#![feature(default_type_parameter_fallback)]
trait A<T = Self> {
fn a(t: &T) -> Self;
}
trait B<T = Self> {
fn b(&self) -> T;
}
impl<U, T = U> B<T> for U
where T: A<U>
{
fn b(&self) -> T {
T::a(self)
}
}
struct X(u8);
impl A for X {
fn a(x: &X) -> X {
X(x.0)
}
}
fn f(x: &X) {
x.b(); // ok
}
fn g(x: &X) {
let x = x.b();
x.0; // error: the type of this value must be known in this context
}
jroesch (Member) commented Dec 15, 2015
@mahkoh there is a necessary patch that hasn't been rebased since I stopped my summer internship. I've unfortunately been busy with real-life stuff; it looks like @nikomatsakis has plans for landing a slightly different version, according to a recent post of his on the corresponding documentation issue for this feature.
nagisa referenced this issue Jan 8, 2016 (Merged): Revamp the support for "future incompatible" lints #30787
gkoz referenced this issue Jan 13, 2016 (Open): Type parameters better have defaults in the presence of Option #143
bluss referenced this issue Jan 18, 2016 (Merged): Feature gate defaulted type parameters appearing outside of types #30724
aturon removed the I-nominated label Jan 28, 2016
aturon assigned nikomatsakis and unassigned jroesch Jan 28, 2016
This was referenced Jan 28, 2016
bluss (Contributor) commented Feb 22, 2016
@nikomatsakis I know the lang team didn't see any future in this feature; will you put that on record in the issue?
One example where this feature seems to be the only way out is the following concrete example of API evolution in libstd.
Option<T> implements PartialEq today, but we would like to extend it to PartialEq<Option<U>> where T: PartialEq<U>. It appears this feature can solve the type inference regressions that would otherwise occur (and might block us from doing this oft-requested improvement of Option).
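A runnable sketch of the inference hazard in question under today's rules; the generalized impl is hypothetical and would live in libstd:
fn main() {
    let x: Option<i32> = Some(1);
    // Today this compiles: both sides of != must have the same type,
    // so the bare `None` is pinned to Option<i32>.
    assert!(x != None);
    // Under a hypothetical impl<T, U> PartialEq<Option<U>> for Option<T>
    // where T: PartialEq<U>, the `None` above would carry an unconstrained
    // type parameter U and would need either an annotation (None::<i32>)
    // or a type parameter default (e.g. U = T) to keep compiling unchanged.
}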
nikomatsakis (Contributor) commented Feb 23, 2016
@bluss I HAVE been dubious of this feature, but I've been slowly reconsidering. @aturon is supposed to be doing some exploration of this whole space and writing up some detailed thoughts. I actually started rebasing @jroesch's dead branch to implement the desired semantics and making some progress there too, but I've been distracted.
One advantage of finishing up the impl is that it would let us experiment with extensions like the one you describe to see how backwards compatible they truly are -- one problem with fallback is that it is not ACTUALLY backwards compatible, because of the possibility of competing incompatible fallbacks.
That said I still have my doubts :)
nikomatsakis (Contributor) commented Feb 23, 2016
Another example where this could be useful -- basically the same example as petgraph -- is adding allocators to collections in some smooth way.
durka (Contributor) commented Feb 24, 2016
What are the drawbacks to turning this on? It seems to mainly make things compile that otherwise cannot infer enough type information.
abonander (Contributor) commented Feb 25, 2016
I have a pretty good use for this too. It's basically what @bluss mentioned, adding new types to an impl while avoiding breaking inference on existing usage.
withoutboats (Contributor) commented Apr 5, 2016
Is the only issue with this the interaction with numeric fallback? I like default type parameters a lot. I often use them when I parameterize a type which has only one production instantiation, for mocking and to enforce boundaries. It's inconsistent and, for me, unpleasant that defaults don't work for the type parameters of functions.
joshtriplett (Member) commented Apr 6, 2016
I have a use case for this feature as well. Consider the following code:
#![feature(default_type_parameter_fallback)]
use std::path::Path;
fn func<P: AsRef<Path> = String>(p: Option<P>) {
match p {
None => { println!("None"); }
Some(path) => { println!("{:?}", path.as_ref()); }
}
}
fn main() {
func(None);
}
Without default_type_parameter_fallback, the call in main would require a type annotation: func(None::<String>);.
Along similar lines, consider a function accepting an IntoIterator<Item=P>, where P: AsRef<Path>. If you want to pass iter::empty(), you have to give it an explicit type. With default_type_parameter_fallback, you can just pass iter::empty().
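A sketch of the iterator variant under the same feature gate; the function name is hypothetical, and the propagation of the default through IntoIterator is an assumption about the gated implementation, not guaranteed behavior:
#![feature(default_type_parameter_fallback)]
use std::path::Path;

// Hypothetical function: accepts any iterator of path-like items.
fn consume_paths<I, P = String>(paths: I)
where
    I: IntoIterator<Item = P>,
    P: AsRef<Path>,
{
    for p in paths {
        println!("{:?}", p.as_ref());
    }
}

fn main() {
    // Without the fallback, this would need iter::empty::<String>().
    consume_paths(std::iter::empty());
}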
tikue (Contributor) commented May 29, 2016
I've found this feature immensely helpful and would love to see it stabilized.
Ericson2314 (Contributor) commented Jun 30, 2016
I think this feature is very useful for an ergonomic Read/Write with an associated error type. See QuiltOS/core-io@4296d87 for how this was formerly done.
withoutboats (Contributor) commented Jun 30, 2016
If we had this feature, possibly we could have slice::sort take a type parameter to provide the sorting algorithm.
nikomatsakis (Contributor) commented Jul 1, 2016
So @eddyb floated an interesting idea for how to have this feature without the forwards compatibility hazards. I'm honestly not sure how much of it has been written up, and where -- but in general there is an obvious problem when you introduce "fallback": it could happen that a given type variable has multiple fallbacks which apply. This means that introducing a new type parameter with a fallback can easily still be a breaking change, no matter what else we do. This is (one of) the reasons that my enthusiasm for this feature has dulled a bit.
You can see this problem immediately when you consider the interaction with the i32 fallback we have for integers. Imagine you have foo(22) where the function foo is defined as fn foo<T>(t: T). Currently T will be i32. But if you change foo to fn foo<T=u32>(t: T), then what should T be? There are now two potentially applicable defaults: i32 and u32.
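A minimal, runnable illustration of the competing-fallback problem; the u32 variant is hypothetical and appears only in comments:
fn foo<T>(t: T) -> T { t }

fn main() {
    // With no other constraint on T, the integer literal's fallback makes T = i32.
    let x = foo(22);
    let _: i32 = x;
    // If foo were changed to `fn foo<T = u32>(t: T) -> T`, the literal's i32
    // fallback and the parameter's u32 default would both apply to the same
    // inference variable, and it is unclear which one should win.
}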
The idea that @eddyb had was basically to be more conservative around defaults. In particular, the idea was that we would limit defaults to type declarations (iirc) and not to fns. I'm having trouble recalling the precise plan he put forward -- it was sketched quickly, over IRC -- but iirc the idea was that it would be an error to have a type variable that had no default mix with one that had some other default. All the type variables would have to have the same default.
So e.g. you could add an allocator parameter A to various data-structures like Vec and HashMap:
struct Vec<T, A=GlobalAllocator> { ... }
struct HashMap<K, V, A=GlobalAllocator> { ... }
so long as you are consistent about using the same default for allocators in every other place that you add them, since otherwise you risk having defaults that disagree. Don't have time to do a detailed write-up, and I'm probably getting something a bit wrong. Perhaps @eddyb can explain it better.
eddyb (Member) commented Jul 1, 2016
The gist of the idea is that before applying defaults, we ensure that everything which could have defaults added in the future was already inferred (i.e. as we error now for unbound inference variables).
So if you had struct Foo<T, A>(Box<T, A>); (and Box had a default for A), let _ = Foo(box x); would always be an error, as the result of inference would change if a default for A was added to Foo.
Your options would be struct Foo<T>(Box<T>); or struct Foo<T, A=GlobalAllocator>(Box<T, A>);: any other default would be useless during inference because it would conflict with Box's default.
This scheme works with both allocators and hashers AFAICT, since you can just use the "global default" everywhere you want to make it configurable, and there are likely more usecases out there like that.
The catch is that you have to limit from the start the possible locations of defaults you'll take into consideration to "complete" type inference, and allowing them on more than type definitions would require a lot of duplication of the same default everywhere, but there may be a reason to do that.
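A small analogue of the example above, using a homemade container instead of Box; all names are hypothetical, and the "rejected" case describes what the proposed rule would do, not what any current compiler does:
use std::marker::PhantomData;

struct GlobalAllocator;

// Stand-in for a Box that grew an allocator parameter with a default.
struct MyBox<T, A = GlobalAllocator>(T, PhantomData<A>);

// Fine under the rule: Foo repeats the same default, so the only default
// that can ever apply to A is GlobalAllocator.
struct Foo<T, A = GlobalAllocator>(MyBox<T, A>);

// Hazardous under the rule: Bar has no default for A, so a use such as
// `let _ = Bar(MyBox(1u8, PhantomData));` would be rejected, because giving
// Bar a default later could silently change what A is inferred to be.
struct Bar<T, A>(MyBox<T, A>);

fn main() {
    // With the allocator type pinned explicitly, this compiles today as well.
    let _ok = Foo(MyBox(1u8, PhantomData::<GlobalAllocator>));
}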
eddyb (Member) commented Jul 1, 2016
@nikomatsakis We could "inherit" defaults from Self's type parameters in inherent impls and allow defaults everywhere else; what do you think?
It seems rare that you have fully parametric free fns working with a container type that wants defaults.
We could also "inherit" defaults everywhere, from user type definitions and forbid having your own defaults for type parameters that end up being used in types which could have defaults in the future.
Such an adjustment would make this viable even in @withoutboats' <[T]>::sort situation.
withoutboats (Contributor) commented Jul 1, 2016
You can see this problem immediately when you consider the interaction with the i32 fallback we have for integers. Imagine you have foo(22) where the function foo is defined as fn foo<T>(t: T). Currently T will be i32. But if you change foo to fn foo<T=u32>(t: T), then what should T be? There are now two potentially applicable defaults: i32 and u32.
I don't know, it seems to me like adding a default to an existing type parameter just ought to be a breaking change (and it also seems to me that T should be u32)?
nrc (Member) commented Aug 8, 2016
@eddyb to clarify some points about your suggestion:
- it must be enforced at usage sites, not in definitions? In your example, it is let _ = Foo(box x) that is the problem, specifically if the type of x cannot be fully inferred?
- do you intend to forbid defaults for functions entirely? Could you give an example of functions with defaults which are more problematic than types?
- could you explain why using Self's types makes the sort example work?
- adding a default to an existing type parameter could still be a breaking change?
|
@eddyb to clarify some points about your suggestion:
|
This comment has been minimized.
Show comment
Hide comment
This comment has been minimized.
eddyb (Member) commented Aug 8, 2016
- it must be enforced at usage sites, not in definitions? In your example, it is let _ = Foo(box x) that is the problem, specifically if the type of x cannot be fully inferred?
Yes, however, the enforcement on uses is more permissive than today's type-checking, i.e. it would only allow more code to compile. And it's not the type of x, but the fact that box x could have any A, and the lack of a default on Foo represents a hazard.
- do you intend to forbid defaults for functions entirely? Could you give an example of functions with defaults which are more problematic than types?
It's not functions with defaults, it's functions with any type parameters that would cause problems in such a scheme. Without their own defaults, matching everything else (or automatically deduced), they would prevent any inference variable they come into contact with from getting its defaults from anywhere.
- could you explain why using Self's types makes the sort example work?
Not just Self, but everything. The sort example would be helped by taking into account defaults of type parameters of types used in the signature and the fact that the type parameter would not be used in the signature, e.g.:
impl<T> [T] {
pub fn sort<A: Algorithm = MergeSort>(&mut self) {...}
}
In such a definition, A can either be explicitly provided or defaulted to MergeSort.
It's very important that A can't be inferred from anywhere else, which results in 0 hazards.
- adding a default to an existing type parameter could still be a breaking change?
I don't see how. All the code that had compiled without the default succeeded in inference without applying any defaults, so it couldn't ever see the new default.
nrc (Member) commented Aug 8, 2016
Thanks for the explanations!
It's not functions with defaults, it's functions with any type parameters that would cause problems in such a scheme. Without their own defaults, matching everything else (or automatically deduced), they would prevent any inference variable they come into contact with from getting its defaults from anywhere.
So, we apply the same rules to type parameters on both functions and types?
And to summarise your rule, is it accurate to say "wherever we perform inference, if an inference variable has a default, then it is an error if that variable unifies with any other inference variables, unless they have the same default or the variable also unifies with a concrete type"?
eddyb (Member) commented Aug 8, 2016
@nrc Well, we could apply the same rules, we could deny/ignore defaults on functions, or we could make functions go somewhere in between, using defaults that we can gather from their signature, combined with their own defaults, in cases where they would be unambiguous.
I tried to list the options, and I prefer the hybrid rule for functions, but it's just one option of several.
As for your formulation, if you apply that rule after inference stops at a fix-point, it would amount to "for all remaining unassigned inference variables, first error for any without a default" (because they'd never unify with anything that's not caused by applying a default), "and then apply all defaults at the same time" (with conflicting defaults causing type mismatch errors).
So yes, I believe that is correct, it seems to be equivalent to the algorithm I had in mind.
However, if you try to apply defaults before inference has given up, or your "unifies with a concrete type" can use types that were a side-effect of applying defaults, the compatibility hazards remain.
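A rough sketch (not actual compiler code) of the fix-point rule as summarized above, with hypothetical data structures standing in for the inference context:
#[derive(Clone, Copy)]
enum Var {
    // An inference variable left unbound once inference reaches its fix-point,
    // possibly carrying a default gathered from a type parameter declaration.
    Unbound { default: Option<&'static str> },
    Bound(&'static str),
}

fn apply_defaults(vars: &mut [Var]) -> Result<(), String> {
    // Step 1: any variable that is still unbound and has no default is an error.
    for v in vars.iter().copied() {
        if let Var::Unbound { default: None } = v {
            return Err("unconstrained type variable with no applicable default".into());
        }
    }
    // Step 2: apply all remaining defaults "at the same time"; variables that
    // were unified with each other but carry conflicting defaults would surface
    // as ordinary type mismatch errors in the next unification round.
    for v in vars.iter_mut() {
        let current = *v;
        if let Var::Unbound { default: Some(d) } = current {
            *v = Var::Bound(d);
        }
    }
    Ok(())
}

fn main() {
    let mut vars = [Var::Unbound { default: Some("i32") }, Var::Bound("u8")];
    assert!(apply_defaults(&mut vars).is_ok());
}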
nrc added the B-RFC-implemented label Aug 29, 2016
jsen- (Contributor) commented Sep 3, 2016
I have this convenient command executor which deals with all the things that might go wrong including checking the exit code.
Obviously, in 99% of cases a non-zero exit code means an error, but I'd like to cover the remaining 1% without always requiring some exit-code validator.
I'm reading that default_type_parameter_fallback is not likely to be stabilized, is there some other way to model those 99/1 use cases without resorting to something like fn execute_command_with_custom_exit_code_validator(...)?
pub trait ExitCodeValidator {
fn validate(exit_code: i32) -> bool;
}
pub struct DefaultExitCodeValidator;
impl ExitCodeValidator for DefaultExitCodeValidator {
fn validate(exit_code: i32) -> bool {
return exit_code == 0;
}
}
pub fn execute_command<T, V=DefaultExitCodeValidator>(cmd: &mut Command) -> Result<T>
where V: ExitCodeValidator,
{
let output = ...;
let exit_code = ...;
if V::validate(exit_code) {
Ok(output)
} else {
Err(InvalidExitCode(exit_code))
}
}
cramertj (Member) commented Sep 13, 2016
Correct me if I'm wrong, but the behavior being discussed here seems to be available on both stable and nightly (without a warning). Even though defaulted type parameters for functions are feature-flagged, it's possible to emulate this behavior by making a function impl on a struct with a default type parameter.
This behavior can be seen here. Am I mistaken in thinking that this is exactly the behavior this conversation is intending to prevent? I've changed i8 from a concrete type to a default, and it resulted in a change of the type of the resulting value.
Edit: I was in fact mistaken! I managed to confuse myself-- what's actually happening here is that none of the default type parameters are being applied. The only reason the line with Withi8Default::new compiles is that integers default to i32. In this case, both Withi8Default::new and Withi64Default::new are just the identity function. The default type parameters on the struct only apply when using struct literal syntax, not when calling associated functions (as they should).
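A runnable sketch of the situation being described; Withi8Default is named after the comment, but the struct body and constructor are assumptions:
struct Withi8Default<T = i8>(T);

impl<T> Withi8Default<T> {
    // An identity-like constructor: T is inferred purely from the argument.
    fn new(t: T) -> T {
        t
    }
}

fn main() {
    // The struct default T = i8 is NOT applied here: Withi8Default appears in
    // an expression path, so T is an ordinary inference variable and the
    // integer literal falls back to i32.
    let x = Withi8Default::new(1);
    let _: i32 = x;
    // In type position the default does apply.
    let _y: Withi8Default = Withi8Default(5i8);
}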
jsen- (Contributor) commented Sep 13, 2016
I see. I guess this does not clutter the API that much and still monomorphizes the call, so you don't have to pay for indirection as with the builder-pattern alternative. Cool, thanks for pointing this out.
cramertj (Member) commented Sep 13, 2016
@jsen- I think you misunderstood my point. I had thought I had discovered a bug-- default type parameters shouldn't be usable on functions (currently). However, I was just mistaking numeric defaults for something more sinister.
nikomatsakis (Contributor) commented Sep 13, 2016
@jsen- it doesn't directly help you, but it occurs to me that if we had default fn parameters, one could (maybe?) write:
pub trait ExitCodeValidator {
fn validate(&self, exit_code: i32) -> bool;
}
pub struct DefaultExitCodeValidator;
impl ExitCodeValidator for DefaultExitCodeValidator {
fn validate(&self, exit_code: i32) -> bool {
return exit_code == 0;
}
}
pub fn execute_command<T, V>(
cmd: &mut Command,
validator: V = DefaultExitCodeValidator)
-> Result<T>
where V: ExitCodeValidator,
{
let output = ...;
let exit_code = ...;
if validator.validate(exit_code) {
Ok(output)
} else {
Err(InvalidExitCode(exit_code))
}
}
This seems better than your existing code because it allows an exit code validator to carry some state. But of course it relies on a feature that's not implemented, and in particular on the ability to instantiate a default parameter of generic type.
jsen- (Contributor) commented Sep 13, 2016
@nikomatsakis right, I just shortened my current implementation, which relies on default_type_parameter_fallback.
I'd sure love to see default function arguments, but I'm not even aware of any plans to include them in the language (I'll sure look around in the RFC section).
I posted the question because we need to move to stable some day.
Edit: looking again at your example, this time with both eyes open (it's after midnight here already) ... Validator at all in the usual case. But I'm probably overthinking it, because I agree that explicit is better than implicit.
@cramertj yeah, I was confused. Based on your comment I got the idea that I could emulate it with struct defaulted type params.
Something like this:
// defaulted:
Exec::execute_command(cmd);
// explicit:
Exec::<MyExitCodeValidator>::execute_command(cmd);
...but I'm finding this is not possible either. I'm not yet fully shifted away from C++'s meta-programming. Definitely still need to learn a lot about Rust.
eddyb (Member) commented Sep 13, 2016
@jsen- Try <Exec>::execute_command(cmd) (that way the path is in a type context, not expression context, and the defaults are forcefully applied).
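A compact sketch of that trick with hypothetical types; the point is only that Exec sits in type position inside <...>, so the struct's default fills in the parameter:
use std::marker::PhantomData;

struct DefaultValidator;
struct Exec<V = DefaultValidator>(PhantomData<V>);

impl<V> Exec<V> {
    fn execute_command(cmd: &str) {
        println!("running {:?}", cmd);
    }
}

fn main() {
    // Expression-context path: V would be an unconstrained inference variable,
    // so this line does not compile without further context.
    // Exec::execute_command("ls");

    // Type-context path: the default V = DefaultValidator is applied.
    <Exec>::execute_command("ls");

    // An explicit override still works.
    Exec::<DefaultValidator>::execute_command("ls");
}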
bluss referenced this issue Nov 22, 2016 (Merged): Use Borrow for binary_search and contains methods in the standard library #37761
nrc referenced this issue Nov 28, 2016 (Open): Tracking issue for `invalid_type_param_default` compatibility lint #36887
nikomatsakis (Contributor) commented Nov 28, 2016
adding a link to this (old) internals thread for posterity:
it discusses the interaction of RFC 213 with integral fallback, though similar issues can also arise without using integers, if you just have multiple functions with competing fallback.
nikomatsakis referenced this issue Jan 4, 2017 (Closed): Type parameter default not respected with enums #24857
Gankro referenced this issue Apr 10, 2017 (Merged): Use shorter synonym for Hashmap::with_hasher(Default::default()) #1098
added a commit to servo/webrender that referenced this issue Apr 10, 2017
Gankro (Contributor) commented May 11, 2017
It's really unfortunate that this is blocking #32838. Is there anything I can do to help push this forward, outside of implementation work?
Ericson2314 (Contributor) commented May 11, 2017
I don't think it's blocking that. We can always just newtype all of the collections in std, and move HashMap into collections while we are at it.
It's an ugly solution, but only a temporary one. The allocator traits are far enough behind schedule that we should seriously consider it.
nikomatsakis (Contributor) commented May 11, 2017
@Gankro I think we're still sort of in need of a good survey of the space of ideas and a nice summary. That would be helpful.
Mark-Simulacrum added the C-tracking-issue label and removed the C-enhancement and C-feature-request labels Jul 22, 2017
petrochenkov referenced this issue Aug 17, 2017 (Open): Is automatic insertion of type inference placeholders possible? #43942
mitsuhiko (Contributor) commented Sep 23, 2017
I have a few more cases where this is really useful. In particular, I have a situation where I have a generic function that requires type inference for the return value. However, if there is no use of the return value, it requires annotations even though I have perfectly sensible defaults that would then get optimized away.
Think of it like this:
// currently required
let _: () = my_generic_function()?;
// with a unit default
my_generic_function()?;
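A fuller sketch of that shape under the feature gate; the function is hypothetical, and whether the gated implementation accepts exactly this is an assumption:
#![feature(default_type_parameter_fallback)]

// Hypothetical function: callers usually ignore the returned payload,
// so the return type parameter defaults to ().
fn my_generic_function<R: Default = ()>() -> Result<R, String> {
    Ok(R::default())
}

fn main() -> Result<(), String> {
    // Return value used: R is pinned by the annotation.
    let n: i32 = my_generic_function()?;
    assert_eq!(n, 0);
    // Return value ignored: R would fall back to the default ().
    my_generic_function()?;
    Ok(())
}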
leodasvacas (Contributor) commented Nov 13, 2017
I'm no expert but I'll try to summarize and contribute.
We want to use type defaults to inform inference. However we may get multiple defaults that could apply to an inference variable, resulting in a conflict. Also new defaults may be added in the future, causing conflicts where there was none.
@eddyb proposes the conservative approach of erroring on conflict and erroring when trying to apply a default to a type with no default, for future-proofing. The consequence is that all defaults must be present and must agree to be applied.
Let's take the reasonable and useful example by @joshtriplett. It would not work because it's trying to unify T in struct Option<T> with P in fn func<P: AsRef<Path> = String>(p: Option<P>). Even if Option<T> were to gain a default for T, intuitively func should not care, because its default is more local and should take precedence.
So I propose that local defaults trump type defaults: fn and impl defaults would take precedence over the default on the type, meaning that any future conflict between T and P above would always be resolved by using the default on P, making it future-proof to apply P's default, which is String.
Note that this is limited to literals; if the value came from another fn, as in:
fn noner<T>() -> Option<T> { None }
fn main() {
// func is same as in original example.
func(noner());
}
Then we are back to an error.
This is a small yet useful extension; together with the slice::sort example this makes a case for allowing type defaults in impls and fns. Hopefully we are coming to a minimal but useful version of this feature that we could stabilize.
Edit: If we give fns and impls preference on the defaults, then we should not inherit the default from the type as that would defeat the point of adding the default being non-breaking.
Edit 2:
no universal priority ordering can really exist
Given a set of type variables in context that could be unified by a default, can't we order those variables by the order they are introduced? And then consider them in order, trying their defaults and failing at the first variable with no default.
This was referenced Nov 23, 2017
leodasvacas referenced this issue Jan 13, 2018 (Closed): Generalise slice::contains over PartialEq #46934
leodasvacas referenced this issue Feb 3, 2018 (Closed): Default type parameter fallback revisited #2321
varkor referenced this issue Apr 10, 2018 (Merged): Move Range*::contains to a single default impl on RangeBounds #49130
eddyb referenced this issue Jul 13, 2018 (Open): Type inference fails in light of generic type aliases with default parameter. #50822
DoumanAsh commented Jul 17, 2018
Found this issue when I was looking into RFC.
I noticed a problem recently when I have a default type parameter, but my impl is generic.
So currently I have two options:
- impl for a struct with a default type parameter
- ask users to use the slightly ugly <Struct> syntax to guarantee the default type parameter
I wonder if it would be sensible to allow impl blocks to have default type parameters while retaining the ability to specify a trait bound.
e.g.
pub trait MyTrait {}
struct MyStruct;
impl MyTrait for MyStruct {}
struct API<M: MyTrait> {
    // ...
}
impl<M: MyTrait = MyStruct> API<M> {
    pub fn new() -> Self {
        // ...
    }
}
I can understand the reasoning for impl blocks to overshadow the default parameter when invoking API::new(), but it is a bit sad that you cannot clearly set a default type parameter for an impl block alongside your struct.
jroesch commented Jul 27, 2015
This is a tracking issue for RFC 213.
The initial implementation of this feature has landed.
cc @nikomatsakis