`impl_trait_in_bindings` fails with `Option<impl Trait>` #54600

Open
nikomatsakis opened this Issue Sep 26, 2018 · 8 comments

nikomatsakis commented Sep 26, 2018

This example ICEs:

#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn main() {
    let x: Option<impl Debug> = Some(44_u32);
    println!("{:?}", x);
}

The reason is that the code which chooses when to "reveal" opaque types looks for opaque types only at the top level of the type (and, oddly, it does so only if ordinary unification fails; code that branches on whether unification succeeded is in general a bad idea, so we should fix that too):

if let Err(terr) = self.sub_types(sub, sup, locations, category) {
    if let TyKind::Opaque(..) = sup.sty {
        return self.eq_opaque_type_and_type(sub, sup, locations, category);
    } else {
        return Err(terr);
    }
}
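
For illustration, here is a rough sketch of the kind of check that would also catch nested opaque types; it assumes the same 2018-era internals as the snippet above (TyKind, the sty field, and Ty::walk()), and the helper name contains_opaque_type is made up, not something in the compiler:

use rustc::ty::{Ty, TyKind};

// Walk the whole type (including type arguments, e.g. the impl Debug
// inside Option<impl Debug>) rather than inspecting only the top-level sty.
fn contains_opaque_type<'tcx>(ty: Ty<'tcx>) -> bool {
    ty.walk().any(|t| match t.sty {
        TyKind::Opaque(..) => true,
        _ => false,
    })
}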

More deeply, though, this code is taking a somewhat flawed approach. In particular, it is looking at the results of inference, but -- at least presently -- opaque types can end up in the inferred type in one of two ways (as shown by #54593).

For example, one might have a mixture of opaque types that came from a recursive call (and which ought not to be revealed) and opaque types that the user wrote:

#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn foo() -> impl Copy {
    if false {
        let x: (_, impl Debug) = (foo(), 22);
    }
    ()
}

fn main() { }

The correct way to do this, I suspect, is to leverage the user-type annotations that we are tracking for NLL.

cc @alexreg @cramertj @eddyb

nikomatsakis commented Sep 26, 2018

Another relevant example:

#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn foo() -> String {
    let x: impl Debug = 22;
    format!("{}", x)
}

fn main() { }

This fails to compile, which seems correct, though I'm not sure precisely what semantics we want.

Alternatively, we may want to encode the impl Debug sort of thing as a kind of cast that "obscures" the underlying type? Have to think about it.

alexreg commented Sep 26, 2018

@nikomatsakis I think the second example works exactly as intended (and as the RFC specifies). It's intuitive to me, at least.

You're right about the first example; that's the wrong way to go about it. I didn't see a better approach at the time, though, and I forgot about it while solving other issues. That said, the lowering and type checking are basically okay; it's the NLL borrow checking that's the problem, which I think is something that requires your expertise, to be honest.

Centril added the T-lang label Sep 27, 2018

Centril commented Sep 27, 2018

Tagging T-lang due to Niko's notes in #54600 (comment).

mikeyhew commented Oct 5, 2018

@nikomatsakis

another relevant example:

Shouldn't format!("{}", x) be format!("{:?}", x)?

alexreg commented Oct 5, 2018

@mikeyhew Nope, I think he meant that... since he follows it by saying "This fails to compile, which seems correct". With {:?} it actually compiles, as it should. :-)
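
For reference, here is the {:?} variant spelled out as a complete program; this assumes a nightly with the same feature gates as the earlier examples:

#![feature(nll)]
#![feature(impl_trait_in_bindings)]

use std::fmt::Debug;

fn foo() -> String {
    let x: impl Debug = 22;
    // Debug formatting is fine; Display ("{}") is not implied by impl Debug,
    // which is why the original example was rejected.
    format!("{:?}", x)
}

fn main() {
    assert_eq!(foo(), "22");
}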

nikomatsakis commented Oct 25, 2018

So @alexreg has been asking me to leave some notes on how to do this properly. The high-level strategy that I think we should be pursuing is changing impl Foo in types into a kind of coercion (at least in let position). So this:

let x: impl Foo = y

would sort of compile "as if" the user had written let x = y as impl Foo -- in fact, the latter syntax feels like something we could support. Basically, we're adding an explicit operation that "forgets" the true type of y and coerces it into the "opaque" type. This is sort of analogous to what happens if you do let x: &dyn Foo = y -- there is a coercion that takes place.

In fact, it might be illustrative to see what happens there. Given this example program (playground):

fn main() {
    let x: &dyn std::fmt::Debug = &3;
}

we get the following MIR (excerpted and simplified):

fn main() -> (){
    ...
    let _1: &dyn std::fmt::Debug;    // "x" in scope 2 at src/main.rs:2:9: 2:10
    let mut _2: &i32;
    let mut _3: &i32;

    bb0: {                              
        ...
        _3 = &(promoted[0]: i32);
        _2 = _3;
        _1 = move _2 as &dyn std::fmt::Debug (Unsize);
        ...
    }
}

The key bit is that Unsize coercion. I imagine we'll wind up with something similar, in which we have something like

_1 = hide(move _2, opaque-type)

I'm not sure yet if this should be an "ordinary" cast -- it seems potentially different, since the "opaque type" is treated in rather a particular way. But maybe it's ok to frame it as a cast.

OK, I'll stop there for this comment. I'm not yet sure of the best way to implement this, so I'll probably leave a few comments exploring a few options.

nikomatsakis commented Oct 25, 2018

Hmm, before going any further I also want to raise a few interesting questions. The possibility of ref patterns on the left-hand side with impl Trait in the ascribed type is sort of "curious" -- I suspect we should just disallow it, so that we can treat the initializer as a "value" (and not a place).

Right now, the initializers can kind of act in two ways; e.g., here, y winds up acting as a place expression and not a value expression:

let y = 22;
let ref x: u32 = y; // equivalent to `let x = &y;`

In principle you could have:

let ref x: impl Debug = y;

and wind up with x having the type &<opaque Debug>. This may or may not add some complexity — I guess we'll see. If so, we could always disallow ref bindings in the patterns when an impl Trait type is used as a type ascription.
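
For comparison, the ref binding with an ordinary ascribed type is accepted today; a complete version of the snippet above:

fn main() {
    let y = 22;
    let ref x: u32 = y; // x has type &u32, borrowing the place y
    assert_eq!(*x, 22);
}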

nikomatsakis commented Oct 25, 2018

So what happens when you have a &dyn Trait binding today? The answer is that you wind up in the "coerce types" code:

fn coerce(&self, a: Ty<'tcx>, b: Ty<'tcx>) -> CoerceResult<'tcx> {

Here a (the "source") would be the type of the initializer and b is some type that includes a dyn Trait. This will wind up considering an "unsize coercion":

let unsize = self.commit_if_ok(|_| self.coerce_unsized(a, b));

This routine is pretty complex and I'll skip over the details (it reminds me, though, that #50753 would be really nice to fix). It will, however, ultimately produce an Adjustment, which will include an AdjustKind::Unsize step.

This adjustment is stored alongside the source expression (here, y) and is taken into account when producing MIR:

fn apply_adjustment<'a, 'gcx, 'tcx>(cx: &mut Cx<'a, 'gcx, 'tcx>,
                                    hir_expr: &'tcx hir::Expr,
                                    mut expr: Expr<'tcx>,
                                    adjustment: &Adjustment<'tcx>)
                                    -> Expr<'tcx> {
