
Finalize defaulted type parameters #213

Merged (2 commits, Feb 4, 2015)

Conversation

9 participants
@nikomatsakis (Contributor) commented Aug 26, 2014

This RFC proposes finalizing the design of defaulted type parameters with two changes:

  • Using _ to explicitly use a default
  • Integrating defaults and inference

Rendered view.
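As a rough illustration of the defaults-on-type-definitions part of this design (the part that later shipped in stable Rust; note that function type-parameter defaults and defaults driving inference did not), here is a minimal sketch. `Grid`, `Metric`, and `Euclidean` are hypothetical names:

```rust
use std::marker::PhantomData;

struct Euclidean;

// The second parameter has a default; callers happy with it omit it.
struct Grid<T, Metric = Euclidean> {
    cells: Vec<T>,
    _metric: PhantomData<Metric>,
}

impl<T, Metric> Grid<T, Metric> {
    fn new(cells: Vec<T>) -> Self {
        Grid { cells, _metric: PhantomData }
    }
}

fn main() {
    // Metric is filled in as Euclidean from the default.
    let g: Grid<i32> = Grid::new(vec![1, 2, 3]);
    println!("{}", g.cells.len());
}
```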

}

Using this definition, a call like `range(0, 10)` is perfectly legal.
If it turns out that the type argument is not other constraint, `uint`

@lilyball (Contributor) commented Aug 26, 2014

s/is not other constraint/is not otherwise constrained/

@glaebhoerl (Contributor) commented Sep 13, 2014

My only concern here is whether this feature carries its weight. Having defaults drive inference is a nice touch! But overall, it's a not-insignificant amount of complexity, likely to conflict (or at the very least be in tension) with bigger and more important type system features in the future, and the benefit is not that large.

The goal of backwards compatibly extending types can be accomplished with just modules and typedefs, if I'm not missing something. As merely a strawman example:

Before:

pub struct HashMap<K, V> { ... }

After:

pub mod custom_hasher {
    pub struct HashMap<H, K, V> { ... }
}
pub type HashMap<K, V> = custom_hasher::HashMap<RandomSipHasher, K, V>;

Existing code should keep working. Code which wants to specify a different hasher has to be modified either way, and in the course of that modification, the only additional change that has to be made would be to import custom_hasher::HashMap rather than HashMap.
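A runnable sketch of this strawman, with `DefaultHasher` standing in for the era's `RandomSipHasher` and a plain `Vec` of entries standing in for a real table (all assumptions for illustration):

```rust
use std::collections::hash_map::DefaultHasher;

pub mod custom_hasher {
    use std::marker::PhantomData;

    // Hasher parameter first, element parameters last.
    pub struct HashMap<H, K, V> {
        pub entries: Vec<(K, V)>,
        pub hasher: PhantomData<H>,
    }

    impl<H, K, V> HashMap<H, K, V> {
        pub fn new() -> Self {
            HashMap { entries: Vec::new(), hasher: PhantomData }
        }
    }
}

// Existing two-parameter code keeps compiling against the alias.
pub type HashMap<K, V> = custom_hasher::HashMap<DefaultHasher, K, V>;

fn main() {
    let mut m: HashMap<&str, i32> = custom_hasher::HashMap::new();
    m.entries.push(("a", 1));
    println!("{}", m.entries.len());
}
```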

The conflict with HKT we've discussed before, but there was never any satisfactory resolution proposed that I can remember. To be able to implement most of the useful HKT traits for a generic collection type, the type parameter representing the contained type needs to be the last one. For example, one would write:

impl Functor for Option { ... }
impl Functor for Vec { ... }
impl<K> Functor for HashMap<K> { ... }

If we were to add HKT later on, and then discover that we can implement Functor (or Mappable, Map, whatever we choose to call it) for essentially none of the existing types in the standard library, because they all have hasher and allocator type parameters at the end rather than the beginning for the sake of the small convenience of default type arguments, then I wouldn't blame people for having some mirth at our expense.

(As a side note, for forwards compatibility with HKT, we should also change the order of the type parameters of Result.)

@eddyb (Member) commented Sep 13, 2014

@glaebhoerl We can implement HKT without relying on currying, there is no need to give anyone the wrong impression about this. Defaulted type params/VG and HKT are orthogonal (and I would like to see Haskell handling ZipTypes<..T><..U> or MapTypes<F<*>><..T> which basic VG, AI and HKT in Rust allow, when not restricted by currying, of course).

@nikomatsakis (Contributor, Author) commented Sep 15, 2014

On Sat, Sep 13, 2014 at 03:54:22PM -0700, Gábor Lehel wrote:

The conflict with HKT we've discussed before, but there was never any satisfactory resolution that I can remember.

I don't see a conflict with HKT, though clearly there is a conflict
with Haskell-style implicit currying. I think that's not a very Rust-y
paradigm though, so I am not especially concerned.

@nikomatsakis (Contributor, Author) commented Sep 15, 2014

On Sat, Sep 13, 2014 at 03:54:22PM -0700, Gábor Lehel wrote:

If we were to add HKT later on, and then discover that we can implement Functor (or Mappable, Map, whatever we choose to call it) for essentially none of the existing types in the standard library, because they all have hasher and allocator type parameters at the end rather than the beginning for the sake of the small convenience of default type arguments, then I wouldn't blame people for laughing at us.

Sorry, I didn't address this point specifically. First, let me point
out you are right to be concerned and wise to look ahead. That said, I
think we can address this in multiple ways. In the limit, of course,
newtyped wrappers are an option, though I'd personally like to see
something closer to "type lambdas". Still, I know that these are
something Haskell intentionally avoided and I am not 100% clear on why
(perhaps you can enlighten me). It could be that there are large
problems that arise.

@carllerche (Member) commented Sep 26, 2014

👍 I've been wanting defaults to work with inference for a while. A number of APIs that I have been working on are currently required to expose generics to the user that they often shouldn't care about. Being able to set defaults and having the inferencer use that would make these APIs much nicer.

@mitsuhiko (Contributor) commented Sep 27, 2014

Just discovered this RFC from a hint on IRC. I have an API with a function like this:

fn execute<T: FromSomething>(&self) -> Result<FromSomething, Error> {
    FromSomething::value_from_something(...)
}

The problem comes up in cases where unwrap() is used and the value is unconstrained. Primarily this happens in cases where I only care about the error and I don't care about the result. In that case I could implement FromSomething for (), but you need to write this right now:

let _: () = execute(...).unwrap()

Would it be feasible to default unwrap of results to ()?

@mitsuhiko (Contributor) commented Sep 27, 2014

Actually, I presume I could just define the function like this to achieve what I want:

fn execute<T: FromSomething=()>(&self) -> Result<FromSomething, Error> {
    FromSomething::value_from_something(...)
}

@nikomatsakis (Contributor, Author) commented Sep 29, 2014

On Sat, Sep 27, 2014 at 06:34:24AM -0700, Armin Ronacher wrote:

Actually, I presume I could just define the function like this to achieve what I want:

fn execute<T: FromSomething=()>(&self) -> Result<FromSomething, Error> {
    FromSomething::value_from_something(...)
}

Yes, this. Though I think the result type would be Result<T,Error>
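For illustration, here is an analogous situation in stable Rust, using `FromStr` in place of the hypothetical `FromSomething` (function type-parameter defaults remained unstable, so the caller must still pin `T` when it is otherwise unconstrained):

```rust
use std::str::FromStr;

// Analogue of `execute`: the caller chooses the result type T.
fn execute<T: FromStr>(input: &str) -> Result<T, T::Err> {
    input.parse()
}

fn main() {
    // Annotation pins the otherwise-unconstrained T:
    let n: i32 = execute("42").unwrap();
    // Or with a turbofish:
    let m = execute::<i32>("7").unwrap();
    println!("{}", n + m);
}
```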

@glaebhoerl (Contributor) commented Sep 30, 2014

@nikomatsakis Sorry for the late response.

I don't see a conflict with HKT, though clearly there is a conflict with Haskell-style implicit currying. I think that's not a very Rust-y paradigm though, so I am not especially concerned.

The direct conflict is indeed only syntactical as far as I can tell. As I've written before I wouldn't mind requiring an explicit syntax such as HashMap<int, ..> for partial type application; that might be preferable even on its own merits. But the indirect conflict is huge. Basically default type arguments play well with HKT and PTA just as long as you don't use them anywhere you would want to use them.

(As an aside, I don't really have any conception of what is "Rust-y" in this space, other than whatever works the best. We're not constrained the way we sometimes are at the value level.)

In the limit, of course, newtyped wrappers are an option, though I'd personally like to see something closer to "type lambdas". Still, I know that these are something Haskell intentionally avoided and I am not 100% clear on why (perhaps you can enlighten me). It could be that there are large problems that arise.

Newtype wrappers would of course work, but then we'd be paying back the convenience we got from default type arguments, most likely with interest.

The problem with type lambdas is that they mess up type inference: see this comment by rwbarton. They also figure prominently in Edward Kmett's criticism of Scala.1 In short, they don't play well with other features and do not appear to have much of a countervailing benefit. I would rather have fewer features which work well together, than more features which are always stepping on each others' toes and negating each others' benefits. In particular, if we are to have HKTs, we should have HKTs with useful inference which makes them a practical tool that's pleasant to use, rather than HKTs as window dressing.

(And even if we were to decide that we want type lambdas, that's a big decision which should be made on its own merits; getting backed into it by default type arguments would be the tail wagging the elephant.)

(@eddyb: Sorry. I don't like being in the position of pushing back on someone else's pet feature, I've been on the other end and know that it sucks. But that's life. People have opinions, and they're often different ones.)

1 In case anyone's not familiar with Edward Kmett: Among many other things, he's the mastermind behind Haskell's lens library as well as one of the principal contributors to scalaz, co-wrote a state-of-the-art compiler for an advanced Haskell-like language first in Scala and then in Haskell, and is quite possibly the most brilliant and prolific Haskeller on the planet. (Arguably for either taken alone; without a doubt for the combination.) I am inclined to take his opinions very seriously.

@zwarich (Contributor) commented Oct 5, 2014

@glaebhoerl I have no opinion on type lambdas as a Rust feature, but those arguments leave me unconvinced. Rust bans overlapping impls of a trait in the first place, so the same problem doesn't come up. The inability to define overlapping impls in Rust might make type lambdas less useful; I'm not sure. And even with overlapping impls, there is a good reply from winterkoninkje that went unanswered.

@glaebhoerl (Contributor) commented Oct 5, 2014

Haskell doesn't allow overlap either, without some GHC extensions which are highly discouraged. I don't believe rwbarton was assuming them in his comment.

As far as the basic issue is concerned, type classes don't even enter into it. You have a type (going with more rustic notation) M<String> where M is a type variable and want to unify it with (String, String). What is M? In the absence of type lambdas, there is a unique solution: M = (String, _). If type lambdas exist, there are other possibilities: M = |T| (T, String), M = |T| (T, T), M = |T| (String, String). This issue comes up everywhere you'd want to infer a higher-kinded type, whether or not type classes are also involved. It just also happens to mean that if you can't infer the type, you can't go on to select an instance either, which makes type lambdas as a way to enable greater flexibility for instances self-defeating (which was the suggestion that had set off the discussion there).

And in any case... (from winterkoninkje's comment)

Yes, it would greatly complicate the inferencer. And, sure, there may be uglier cases which can refute this argument. But the presence of the argument suggests that the problem is not, in fact, known to be impossible. I agree that it's hardly a "not implemented yet" ---there's a lot of theory to be done yet to prove confluence of type-class resolution--- but I'm not convinced that it's impossible.

"Not known to be impossible" is not exactly a vote of confidence as far as adding things to Rust goes.

@zwarich (Contributor) commented Oct 6, 2014

@glaebhoerl The concerns raised by rwbarton are specifically framed in the context of choosing between multiple type class instances for a single type, which requires OverlappingInstances to be an issue in the first place. The related issue for Rust would be that impls are only allowed in modules that define either the type or the trait, so as it stands you would only be able to make impls for type lambdas in modules that define the trait, rather than those that define the type lambda.

As for the more basic issue, what would be lost by just never inferring a (type-)polymorphic type? Rust already doesn't infer polymorphism when it could, e.g. this program fails to type check:

fn main() {
    let f = |a| a;
    let _x = f(0u);
    let _y = f(0i);
}

Why would we expect polymorphism to be introduced at the type level when we don't introduce it at the value level? Languages that use Hindley-Milner inference and perform polymorphic generalization have this problem, but Rust doesn't.
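For comparison, the polymorphism Rust does support comes from explicit generic items rather than generalized let bindings; a minimal sketch:

```rust
// Rust never infers a polymorphic type for a let-bound closure, but an
// explicitly generic function can be instantiated at several types.
fn identity<T>(a: T) -> T {
    a
}

fn main() {
    // Each call site instantiates T separately; no generalization needed.
    let x = identity(0u32);
    let y = identity(0i64);
    println!("{} {}", x, y);
}
```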

@glaebhoerl (Contributor) commented Oct 6, 2014

The concerns raised by rwbarton are specifically framed in the context of choosing between multiple type class instances for a single type, which requires OverlappingInstances to be an issue in the first place.

No, I don't believe that to be the case. The issue is confused by the fact that the prospect of type class instances for type lambdas was the suggestion being responded to, and so that's what the response also concerned itself with. But the two are logically separate. (Type inference in Haskell also precedes instance resolution.)

Let's consider another example:

struct Pair<A, B>(A, B) // just to avoid arity confusion w/ tuples

struct WrapInt<type<type> W> { wrapped: W<int> }

let foo = WrapInt { wrapped: Pair(1, 2)  };

What's W there? If there aren't type lambdas, it can only be Pair<int, ..>. If there are, it can also be |T| Pair<T, int>, |T| Pair<int, int>, or |T| Pair<T, T>. Exactly the same as the previous example. No type classes anywhere in sight.

Why would we expect polymorphism to be introduced at the type level when we don't introduce it at the value level? Languages that use Hindley-Milner inference and perform polymorphic generalization have this problem, but Rust doesn't.

I don't see why it makes any sense to tangle these things together. You can have a language with higher-kinded type variables but no first-class polymorphism, and you could also have the reverse. First-class polymorphism is difficult to reconcile with Rust's compilation model, but even C++ has higher-kinded types! (Haskell 98 is also closer to that extreme, apart from a couple of things like let generalization and polymorphic recursion.)

@zwarich (Contributor) commented Oct 6, 2014

What's W there? If there aren't type lambdas, it can only be Pair<int, ..>. If there are, it can also be |T| Pair<T, int>, |T| Pair<int, int>, |T| Pair<T, T>. Exactly the same as the previous example. No type classes anywhere in sight.

I was suggesting that the type checker never introduce polymorphism on the user's behalf. What's wrong with that? It would exclude all but the first choice.

I don't see why it makes any sense to tangle these things together. You can have a language with higher-kinded type variables but no first-class polymorphism, and you could also have the reverse. First-class polymorphism is difficult to reconcile with Rust's compilation model, but even C++ has higher-kinded types! (Haskell 98 is also closer to that extreme, apart from a couple of things like let generalization and polymorphic recursion.)

The example I gave isn't first-class polymorphism; it is plain old boring prenex polymorphism. First-class polymorphism (albeit only rank-2) would be an example like this:

fn main() {
    let f = |g| {
        g(0u);
        g(0i);
    };
    let g = |a| a;
    f(g);
}

which also doesn't work in Rust.

@glaebhoerl (Contributor) commented Oct 6, 2014

I was suggesting that the type checker never introduce polymorphism on the user's behalf. What's wrong with that? It would exclude all but the first choice.

Ah, okay. I've thought of that as well. It seems like a reasonable idea, I don't know if anyone's done it before. But as far as using type lambdas to bridge the gap between default type parameters and HKTs is concerned, it doesn't help at all. If the presence of default type parameters on common types forces impls of HKT traits for those types to be written for type lambdas, and type lambdas are never inferred, then you're back in the same place, which is that inference doesn't work when you want it to.

The example I gave isn't first-class polymorphism; it is plain old boring prenex polymorphism.

Yes, I was imprecise, sorry. They are connected in that whether, when, and how to infer polymorphism is also the big issue in systems with first-class polymorphism, and let generalization is an instance of inferring polymorphism (even if first-rank), so I feel that these things are along a spectrum.

@zwarich (Contributor) commented Oct 7, 2014

Ah, okay. I've thought of that as well. It seems like a reasonable idea, I don't know if anyone's done it before.

From what I read on the Internet, it appears to be what Scala does.

But as far as using type lambdas to bridge the gap between default type parameters and HKTs is concerned, it doesn't help at all. If the presence of default type parameters on common types forces impls of HKT traits for those types to be written for type lambdas, and type lamdbas are never inferred, then you're back in the same place, which is that inference doesn't work when you want it to.

The problem you posed above with WrapInt is a special case of higher-order unification. Higher-order unification of terms in the simply typed lambda calculus is unification up to alpha/beta(/eta)-equivalence. In general, higher-order unification is undecidable. The particular instance you give is an example of higher-order matching, where one of the terms has no free variables. This problem is (surprisingly?) decidable, but there is no guarantee of a most general unifier. Your example has a most general unifier, so I'll modify it a bit:

struct Pair<A>(A, A)
struct WrapInt<W> { wrapped: W<int> }

trait Confused { fn bar(&self) }
impl<type<type> W> Confused for WrapInt<W<Pair>> { ... }

let foo = WrapInt { wrapped: Pair(Pair(0i, 0i), Pair(0i, 0i)) };
foo.bar();

What should W be at the call to bar? There are two separate solutions that are type lambdas, one being (excuse the abuse of notation) |T| T<Pair<int>> and another being |T| Pair<T<int>>. Neither of these solutions is more general than the other.

This combined with the fact that checking for instance overlap seemingly requires higher-order unification is probably a sign that type lambdas are not compatible with language features that attempt to infer terms from types with any sort of coherence.

@nikomatsakis (Contributor, Author) commented Dec 5, 2014

@aturon and I had a pretty detailed discussion about default type
parameters and higher-ranked trait bounds. Our conclusion is that we
believe we will be able to combine the two without great difficulty,
and that the problems Haskell is encountering -- while real -- will
not be a significant impediment in Rust. We believe that the right HKT design for Rust will be a restricted form of type lambdas -- partial type application, but not "curried", as in Haskell.

On inference

The monad example

Let me begin by translating the example that @glaebhoerl pointed out into theoretical Rust syntax:

trait Monad { // Self :: type -> type
    ...
    fn ret<T>(t: T) -> Self<T>;
}

struct Pair<A,B>(A,B);

fn foo() -> Pair<uint,uint> {
    Monad::ret(22u)
}

Here we wind up with a (higher-kinded) type variable $0 and a constraint $0<uint> <: Pair<uint,uint>. As [the comment] pointed out, if you assume "curried partial type application", this is enough to deduce that $0 = Pair<uint,_>.

There is also a trait obligation that $0 : Monad. In principle, if we combined type inference and trait resolution, and took advantage of the set of impls when making deductions, we might be able to decide that $0 = Pair<uint,_> because there is no other impl that fits, but I am not sure. It's not worth thinking about because I have no intention of complicating the compiler's type inference by tethering
it to trait resolution anyhow.

This implies that to write an example like that in Rust, presuming we don't have curried partial application, would require a type annotation. For example, using UFCS, one could write Pair::ret(22u) or, more explicitly, <Pair as Monad>::ret(22u). This particular example doesn't seem so bad to me.

A struct example

However, let's poke a bit further into other examples. Here the idea is to drill into things we actually expect to use HKT for in Rust. One such thing would be the ability to reason independently about pointer types. For example, I might want to write:

struct Foo<PTR> { // PTR :: type -> type
    something: PTR<int>,
    something_else: PTR<uint>,
}

Now I would like to write:

let x = Foo { something: Rc::new(22), something_else: Rc::new(23) };

and I would like Rust to infer that the type of x is Foo<Rc>. If we're not careful, though, this could be tricky: after all, we would replace PTR with a variable $0 :: type -> type, and then have the constraint that Rc<int> <: $0<int>. From this we need to deduce that $0 == Rc, but as before, if we are faced with fully general type lambdas, we cannot do so.

There is however an interesting compromise. We might restrict type lambdas to always be a partially applied type, though not necessarily a curried one. In other words, a value of kind type -> type might be Rc<_> or Pair<_, int> or Pair<int, _>, but not |T| (some type that references T), and not |_| Rc<int>. In that case, we could deduce that $0 is Rc because there would be no other way to satisfy that constraint.

Note that we still cannot infer the Monad case, since if we know that $0<int> <: Pair<int,int>, $0 could be either Pair<int,_> or Pair<_,int>.

In a more complex case, we might therefore require annotations:

let x = Foo::<Pair<uint,_>> { something: Pair::new(22,22), ... }
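For what it's worth, the `Foo<PTR>` pattern can be emulated in today's Rust (after generic associated types stabilized) by defunctionalizing the type constructor into a "family" trait. This is a sketch under that assumption, not the syntax discussed above, and all names are hypothetical:

```rust
use std::rc::Rc;

// A defunctionalized type lambda: a marker type selects a constructor.
trait PtrFamily {
    type Ptr<T>;
}

struct RcFamily;
impl PtrFamily for RcFamily {
    type Ptr<T> = Rc<T>;
}

// Analogue of `struct Foo<PTR>` from the comment above.
struct Foo<F: PtrFamily> {
    something: F::Ptr<i32>,
    something_else: F::Ptr<u32>,
}

fn main() {
    // The family must be named; it is not inferred from the field values.
    let x: Foo<RcFamily> = Foo {
        something: Rc::new(22),
        something_else: Rc::new(23),
    };
    println!("{} {}", x.something, x.something_else);
}
```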

Problems with curried partial type application

At some point I made a statement that curried partial type application a la Haskell is not very "Rusty". What I meant mostly is that we don't do currying in general (i.e., not for ordinary functions), so it seems
strange to write something like Pair<int> and not Pair<int,_> or |x| Pair<int,x> etc. However, there is a "deeper" problem as well -- currying is a bad fit for the & type constructor.

Based on the syntax alone, & has the kind lifetime -> type -> type. And we will frequently want to use it as a kind of "smart pointer" type, in which case we want &'a _ (which has kind type -> type). However, another very common use will be wanting a reference whose lifetime is not specified: &'_ T (with kind lifetime -> type). A currying approach does not permit us to select between these two alternatives, and would perhaps require some sort of newtype.

Other anticipated uses for HKT

The other major use for HKT that we anticipate is on associated types. For example, an Iterable trait might look like:

trait Iterable {
    type Elem;
    type Iterator<'a>;

    fn iter<'a>(&'a self) -> Iterator<'a>;
}

In cases like these, the limitations on inference don't apply at all, because we are propagating forward rather than backward (that is, we don't have to deduce the function from its output, as in the other examples).
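This sketch is close to what later stabilized as generic associated types; a hedged, runnable approximation in current Rust (`Numbers` is a hypothetical name):

```rust
// An Iterable trait whose iterator type is generic over a lifetime.
trait Iterable {
    type Elem;
    type Iter<'a>: Iterator<Item = &'a Self::Elem>
    where
        Self: 'a;
    fn iter<'a>(&'a self) -> Self::Iter<'a>;
}

struct Numbers(Vec<i32>);

impl Iterable for Numbers {
    type Elem = i32;
    type Iter<'a> = std::slice::Iter<'a, i32> where Self: 'a;
    fn iter<'a>(&'a self) -> Self::Iter<'a> {
        self.0.iter()
    }
}

fn main() {
    let ns = Numbers(vec![1, 2, 3]);
    // Propagates forward from `ns`, as the comment above describes.
    let total: i32 = ns.iter().sum();
    println!("{}", total);
}
```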

Conclusion

In conclusion, it seems like default type parameters have a lot to offer in terms of convenience and have proven very useful. They do interact poorly with curried partial type application a la Haskell. However, curried partial type application is a poor fit for Rust due to the kind of &. The alternative of pure type lambdas does seem like it would require undue annotation and prevent the compiler from deducing a lot of types; however, there are less flexible alternatives that ameliorate this problem significantly. Moreover, in many important scenarios, this is a non-issue.

@mitsuhiko (Contributor) commented Dec 24, 2014

Is this something that will make it into 1.0?

@aturon (Member) commented Dec 27, 2014

@mitsuhiko I would expect this to happen before 1.0 final, but not necessarily for the alpha in two weeks.

bors added a commit to rust-lang/rust that referenced this pull request Feb 1, 2015

Auto merge of #21805 - nikomatsakis:closure-inference-refactor-1, r=eddyb

Currently, we only infer the kind of a closure based on the expected type or explicit annotation. If neither applies, we currently report an error. This pull request changes that case to defer the decision until we are able to analyze the actions of the closure: closures which mutate their environment require `FnMut`, closures which move out of their environment require `FnOnce`.

This PR is not the end of the story:

- It does not remove the explicit annotations nor disregard them. The latter is the logical next step to removing them (we'll need a snapshot before we can do anything anyhow). Disregarding explicit annotations might expose more bugs since right now all closures in libstd/rustc use explicit annotations or the expected type, so this inference never kicks in.
- The interaction with instantiating type parameter fallbacks leaves something to be desired. This is mostly just saying that the algorithm from rust-lang/rfcs#213 needs to be implemented, which is a separate bug. There are some semi-subtle interactions though because not knowing whether a closure is `Fn` vs `FnMut` prevents us from resolving obligations like `F : FnMut(...)`, which can in turn prevent unification of some type parameters, which might (in turn) lead to undesired fallback. We can improve this situation however -- even if we don't know whether (or just how) `F : FnMut(..)` holds or not for some closure type `F`, we can still perform unification since we *do* know the argument and return types. Once kind inference is done, we can complete the `F : FnMut(..)` analysis -- which might yield an error if (e.g.) the `F` moves out of its environment. 

r? @nick29581
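The closure-kind inference the commit message describes can be observed in stable Rust today; a minimal sketch:

```rust
fn main() {
    let mut count = 0;
    // No annotation and no expected type: the closure mutates its
    // environment, so the compiler infers the FnMut kind from its body.
    let mut bump = || count += 1;
    bump();
    bump();
    println!("{}", count);
}
```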

@aturon merged commit 1662214 into rust-lang:master on Feb 4, 2015

@aturon (Member) commented Feb 4, 2015

After some fairly extensive discussion on this RFC, the core team is convinced that this feature will pose no serious problems for type inference around a future HKT extension. See this comment for details. Other than those concerns, this feature is a frequently-requested one, and a natural extension given integer fallback.

I have merged the RFC; the tracking issue is here.

@glaebhoerl glaebhoerl referenced this pull request Jul 1, 2015

Closed

Higher-kinded types #1185

@eddyb eddyb referenced this pull request Aug 4, 2015

Closed

RFC: remove weak pointers #1232

wycats added a commit to wycats/rust-rfcs that referenced this pull request Mar 5, 2019
