
rework the queries for the MIR pipeline #41625

Merged: 35 commits into rust-lang:master on May 3, 2017

Conversation

@nikomatsakis (Contributor) commented Apr 29, 2017

This PR refashions the MIR pipeline. There are a number of changes:

  • We no longer have "MIR passes" and the pass manager is completely reworked. Unless we are doing the interprocedural optimization (meaning, right now, the inline pass), we will process a single MIR from beginning to finish in a completely on-demand fashion; i.e., when you request `optimized_mir(D)`, that will trigger the MIR for `D` to actually be built and optimized, but no other functions are built or touched.
  • We no longer use `&'tcx RefCell<Mir<'tcx>>` as the result of queries, since that spoils the view of queries as "pure functions". However, to avoid copying the MIR, we use a `&'tcx Steal<Mir<'tcx>>` -- this is something like a ref-cell, in that you can use `borrow()` to read it, but it has no `borrow_mut()`. Instead, it has `steal()`, which will take the contents and then panic if any further read attempt occurs. (A minimal sketch of this idea follows the list.)
  • We now support `[multi]` queries, which can optionally yield not just one result but a sequence of (K, V) pairs. This is used for the inlining pass. If inlining is enabled, then when it is invoked on **any** def-id D, it will go and read the results for **all** def-ids and transform them, and then return the results for all of them at once. This isn't ideal, and we'll probably want to rework this further, but it seems ok for now (note that MIR inlining is not enabled by default).
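
Here is a minimal, self-contained sketch of the `Steal` semantics described in the second bullet -- an illustration of the concept only, not the exact type that landed in rustc:

```rust
use std::cell::{Ref, RefCell};

/// Illustration only: a cell whose contents may be read repeatedly
/// until someone steals them, after which any access panics.
pub struct Steal<T> {
    value: RefCell<Option<T>>,
}

impl<T> Steal<T> {
    pub fn new(value: T) -> Self {
        Steal { value: RefCell::new(Some(value)) }
    }

    /// Read the value; panics if it has already been stolen.
    pub fn borrow(&self) -> Ref<T> {
        Ref::map(self.value.borrow(), |v| {
            v.as_ref().expect("attempted to read stolen value")
        })
    }

    /// Take the value out; any later `borrow()` or `steal()` panics.
    pub fn steal(&self) -> T {
        self.value.borrow_mut().take().expect("value was already stolen")
    }
}
```

Note that this sketch deliberately allows `borrow()` before the value is stolen, which is exactly the property discussed further down the thread.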

Tips for the reviewer: The commits here are meant to build individually, but the path is a bit meandering. In some cases, for example, I introduce a trait in one commit, and then tweak it in a later commit as I actually try to put it to use. You may want to read the README in the final commit to get a sense of where the overall design is headed.

@eddyb I did not wind up adding support for queries that produce more than one *kind* of result. Instead, I decided to just insert judicious use of the `force()` command. In other words, we had talked about e.g. having a query that produced not only the MIR but also the `const_qualif` result for the MIR in one sweep. I realized you can also have the same effect by having a kind of meta-query that forces the const-qualif pass and then reads the result. See the README for a description. (We can still do these "multi-query results" later if we want, I'm not sure though if it is necessary.)

r? @eddyb

cc @michaelwoerister @matthewhammer @arielb1, who participated in the IRC discussion.

@eddyb (Member) commented Apr 29, 2017

> by having a kind of meta-query that forces the const-qualif pass and then reads the result

You can't really do this, not if you have linear queries, which is what I wanted to do.
Right now we don't have a way to borrow the result of a query, it's either clone or steal, so you probably still have a similar problem, but instead you're doing a useless Mir clone.

@nikomatsakis (Contributor, Author) commented Apr 29, 2017

@eddyb

> You can't really do this, not if you have linear queries, which is what I wanted to do.

That's true: I don't have linear queries, I have stealable queries, so you can read the result until it is stolen.

@nikomatsakis (Contributor, Author)

> but instead you're doing a useless Mir clone.

There are no clones, though, or at least not deep ones.

@eddyb (Member) commented Apr 29, 2017

Yeah, I meant, without introducing a potentially fragile abstraction like Steal.

@nikomatsakis (Contributor, Author)

@eddyb and I had a long conversation on IRC. Just to leave some record of some of what we said:

  • the way I did steal here isn't quite what he had in mind, primarily because I allow you to borrow() the steal thing before it is stolen (and hence the query isn't "execute once"). He wasn't thrilled with the use of force to ensure that dependent computations occur before a result is stolen, because it seemed fragile. I am not sure that I agree -- have to ponder it -- but I feel roughly that this PR is worth landing and we can iterate.
    • the reason I'm not sure I agree: the use of force() is a "local abstraction", in the sense that nobody outside of "const qualification" has to be aware of it. In particular, the pattern is this: if you will steal a result, and there are queries that might use the result, you have to force them. In contrast, in @eddyb's original proposal, you would have to combine those queries into one meta-query that produced all the results (and then knit them together when building the Providers struct). These two don't seem very far apart, so I think we can evolve if we want to, but also neither feels more fragile than the other to me. Either way you have to know to bundle together the "interconnected" computations, but nothing else has to know about it outside of those.
  • he was not happy with the query setup I devised; in short, he found it over-abstracted. I am inclined to agree. The main reason for the current setup was to satisfy the following constraints:
    • make it trivial to write a MIR pass, and to ensure that it gets full support for dumping intermediate MIR etc.
    • support "interprocedural" passes that want to read from everything -- in particular, give inlining the ability to read everything that had happened before it, without requiring that to be a distinct suite.

What we agreed was that I should remove support for inlining, which would let me simplify to one query per suite (each yielding a stealable result, except the last one). Within a suite, we can run the passes (and trigger the MIR dumping etc.) in a simple loop. This was the design I was heading for before I decided to try to support inlining. I am happy, though, to do something simple and then think about how best to support inlining and other such things in follow-up work.

@eddyb (Member) commented Apr 29, 2017

To add to that: I'm not even sure we need a complex IPO manager for inlining: querying optimize_mir for callees of a MIR being optimized might just be enough.

@bors (Contributor) commented Apr 30, 2017

☔ The latest upstream changes (presumably #41593) made this pull request unmergeable. Please resolve the merge conflicts.

@carols10cents added the S-waiting-on-author label (Status: awaiting some action, such as code changes or more information, from the author) on May 1, 2017
@nikomatsakis (Contributor, Author)

OK, I spent some more time thinking about this over the weekend. I just want to lay out the various designs under discussion in a bit more detail, along with their pros and cons. @eddyb I'd really like it if you could double-check that my description of your proposed plan around linear maps is accurate.

On the topic of the MIR optimization manager

> To add to that: I'm not even sure we need a complex IPO manager for inlining: querying optimize_mir for callees of a MIR being optimized might just be enough.

This is true only if we do not plan to do any further optimization after inlining (which we certainly do -- part of the point of inlining is to unlock other optimizations). That is, optimized_mir(), in this PR anyway, produces an immutable bit of MIR, as it is a final stage. I think what you are saying, with which I largely agree, is that we could move the "IPO portions" of this PR out of the pass manager and force them to occur between suites. I considered this but I wanted the flexibility to add passes "anywhere". Perhaps it's better to just break those IPO things out into a distinct suite; I'm not sure yet.

If we adopted the approach of having "IPO" being a distinct suite of optimizations, then I envision it working like this. Every "suite" of optimizations would have a query named after it. There aren't many so I'd probably just write the queries by hand. There would probably be 4 suites, then:

  • mir_const(D) (MIR_CONST, in the PR)
  • mir_validated(D) (MIR_VALIDATED, in the PR)
  • mir_opt1(D) (this is new, and represents the opt we do with "mir-opt level 1")
  • mir_opt2(D) (MIR_OPTIMIZED, in the PR)

The idea would be that if the MIR optimization level is below 2, then mir_opt2(D) just steals from mir_opt1(D) and returns the result. Otherwise, mir_opt2(D) could work roughly as inlining does in this PR: it would steal all of the results from mir_opt1, storing them in a temporary hashmap, do inlining, and then walk over them all and apply any further passes we wish. It would then return the entire set of mir_opt2 results at once (i.e., this requires some support for [multi] queries). A rough sketch of this chain follows.
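
The sketch below uses the query names from the list above. It is not code from the PR: the pass loop, the `suite_passes` helper, and the exact `mir_opt_level` check are illustrative assumptions, and the snippet is meant to convey shape rather than compile against any particular rustc revision.

```rust
// Hypothetical provider for the level-1 optimization suite: steal the
// validated MIR, run this suite's passes in order (firing the dump-MIR
// hooks as we go), and hand back a new stealable result for the next suite.
fn mir_opt1<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId)
                      -> &'tcx Steal<Mir<'tcx>> {
    let mut mir = tcx.mir_validated(def_id).steal();
    for pass in suite_passes(MIR_OPT1) {        // `suite_passes` is made up
        pass.run_pass(tcx, def_id, &mut mir);
    }
    tcx.alloc_steal_mir(mir)
}

// Hypothetical provider for the final suite: below mir-opt-level 2 it just
// takes ownership of the level-1 result; at level 2 it would instead steal
// *all* level-1 results, run inlining plus follow-on passes over the whole
// set, and return them at once via the [multi] mechanism described above.
fn mir_opt2<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId)
                      -> &'tcx Mir<'tcx> {
    if tcx.sess.opts.debugging_opts.mir_opt_level < 2 {
        return tcx.alloc_mir(tcx.mir_opt1(def_id).steal());
    }
    unimplemented!("interprocedural path described in the text above")
}
```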

If we wish to get better incremental re-use with opt-level 2, there are two known approaches: we could apply the partitionings we do for LLVM earlier or we could refactor the optimization passes to produce intermediate steps to which we can apply the red-green algorithm (e.g., one might use this to check if the SCCs have changed, and if not we could perhaps reuse the post-inlining MIR for an entire SCC). There may yet be better approaches here, although I suspect that a lot of that work will be factoring things well so that red-green applies. Seems like an area we will need to explore over time.

On the topic of "multi" queries

Over the weekend, @eddyb and I were chatting about the multi-queries that I added here. He is worried (iiuc) that the current system does not guarantee "coherence". In particular, the way I have it set up, if you query foo(A), you can return results for a bunch of other keys (e.g., foo(B) and foo(C)) at the same time. But his concern is that nothing ensures that querying foo(B) first would generate the same value for foo(B) as would have been generated by querying foo(A).

He would prefer some kind of system where you don't just return multiple values, but instead express (somehow) the "dependencies" between keys, so that the system can ensure that, regardless of which query you ask for first (A, B, or C), it will generate the results in the same way.

This seems like a nice thing to guarantee, but I am not sure of its importance. It'd be good (I think) to drill into more specific use cases. I know of a few uses for multi-queries:

  • Variance. We generate variance for an entire crate at a time. In principle we could break things into a DAG, but it doesn't seem worthwhile, as computing variance is very cheap -- but if we had some system that made it easy, that'd be great. I have a branch where variance is reworked into two queries: crate_variance returns a map {D -> V} of the variance V for each def-id D, and item_variance(D) = V reads from that map and returns a specific result (a toy sketch of this two-query split appears after the pattern list below). This is a red-green-friendly algorithm, but it works poorly in the current system, because the crate_variance() query depends on the whole crate.
  • Inherent impls. I am reminded that inherent impls currently work the same way as variance: there is a "process entire crate" query and then "per item" queries that read from the resulting map. We are using some ignore hacks to handle the deps that result.
  • Associated items. To read out associated_item(D), we have to traverse the enclosing impl. Right now we just re-do that traversal for every item in an impl, but we could just as well generate multiple results at once.
  • Inlining and potentially other future IPOs. In this PR, whenever we ask for the "post-inlining result" for any def-id, we produce the result for all def-ids. This is simple in concept, but it's certainly plausible that there could be a bug causing us to fail to produce a result for all def-ids.
    • Still, this is somewhat unlikely: we're using the mir_keys() query I added to fully enumerate the keys that have MIR; if that query is wrong, many things in the compiler will break too.

Looking at these, I see two overall patterns:

  • To find the result for X, we have to search from some parent item, which means we will also find results for other items. This covers inherent impls (where the parent item is the whole crate) and associated items (where the parent item is the impl). It's hard for me to see how we could fail to get this right, except by being wrong about the premise.
  • There is a complex (potentially cyclic) set of interdependencies; ideally we would convert this graph into a DAG of SCCs and walk it in topological order. This covers variance and inlining. It does seem like some infrastructure here would be useful. It's not clear to me that this infrastructure should live in the query engine, but it might make sense.
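
To make the "whole-crate map plus per-item reader" split concrete (this is the crate_variance / item_variance shape from the use-case list above), here is a toy, self-contained sketch. In the real compiler both functions would be memoized queries returning interned references; the names and signatures here are illustrative only:

```rust
use std::collections::HashMap;

type DefId = u32;         // stand-in for the real DefId
type Variance = Vec<i8>;  // stand-in for the real variance representation

/// Whole-crate query: one traversal computes variance for every item.
fn crate_variance() -> HashMap<DefId, Variance> {
    let mut map = HashMap::new();
    // ... walk every item in the crate and solve the variance constraints ...
    map.insert(0, vec![1, -1]); // placeholder entry for a single item
    map
}

/// Per-item query: just reads one entry out of the crate-wide map, so that
/// consumers can ask about a single def-id.
fn item_variance(def_id: DefId) -> Variance {
    crate_variance().remove(&def_id).unwrap_or_default()
}
```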

On the topic of "linear" queries

I modeled linear queries as ordinary queries that return a &'tcx Steal<D>, which supports two operations ("borrow" and "steal"). Once a value is stolen, any further operation is a bug!(). This means that all parts of the compiler which invoke a query with a steal result must be aware of any other parts that might use that same query and coordinate with them. (This is a performance optimization: that is, we could make "stolen" queries simply regenerate the resulting value, and everything would work, but we don't want to be regenerating the results of these queries multiple times, so instead we make this error a bug!.)

In all the MIR cases, at least, there really isn't much to intercoordinate. For the most part, each MIR optimization is just used by one other query: the next optimization in the sequence. (In the current PR, each optimization pass has its own query, but if we convert to just having a query-per-suite, then this would apply at the level of suites.) Other parts of the compiler should use one of the queries (e.g., optimized_mir()) that do not return a Steal result.

However, there are a few exceptions. One example is const qualification. This is a lightweight scan of the contents of a const item, and it needs to take place relatively early in the pipeline (before optimizations are applied and so forth). The way I handled this is (a) to have it borrow() from the steal and then (b) to use force before we steal the relevant MIR, so that we know that it has executed. If we forgot to add the force() call, then the result would be dependent on the order in which queries were issued; that is, if one requested const-qualif(D) first, it would execute successfully, but if one requested optimized_mir(D) first, then const-qualif(D) afterwards, you would get a bug! because const-qualif would be trying to read stolen data. (However, if compilation succeeds, we are always assured of a consistent result.)

To make the example a bit more abstract, what we have here is a "stealable" query A that needs to be read by query B but stolen by query C. Under my system, if you steal a query, you must be aware of all possible readers and ensure that they are forced before you do so.
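
Reusing the `Steal` sketch from the top of the thread, the abstract A/B/C pattern can be illustrated like this; plain functions stand in for memoized queries here, and "force" just means "make sure B has run before A is stolen":

```rust
// A: the stealable intermediate result (e.g., the MIR at some stage).
fn query_a() -> Steal<String> {
    Steal::new(String::from("mir for D"))
}

// B: a lightweight reader (e.g., const qualification); it only borrows A.
fn query_b(a: &Steal<String>) -> usize {
    a.borrow().len()
}

// C: the consumer (e.g., the next optimization suite). It must force B
// before stealing A; if the `query_b` call below were forgotten, asking
// for B after C had run would hit the "already stolen" panic.
fn query_c(a: &Steal<String>) -> String {
    let _b = query_b(a); // the force() step
    a.steal()
}
```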

I believe @eddyb had in mind a different system, which I will call linear queries. In that scheme, once you have executed a query, any further attempt to request the same query is a bug. So we can't have a query A that is read by B and consumed by C, as I had, since both B and C would have to consume A, and that would violate linearity. To support this scenario, eddyb wanted to introduce "conjoined" providers (the name is mine). Basically, when setting up the Providers struct, I can use a bit of magic to knit together the functions that will process the result from A. In this case, we want to specify that, to produce B, produce_b needs only a borrow of A (there can be any number of such queries). Then some final query gets to actually consume the A. That might look something like this:

fn produce_b(tcx: TyCtxt, def_id: DefId, a: &A) -> B { }
fn produce_c(tcx: TyCtxt, def_id: DefId, a: A) -> C { }

// this method will initialize `providers.b` and `providers.c`
// with generated glue functions:
providers.conjoin().take_a().read(produce_b).consume(produce_c);

Under the hood, the conjoin() function will initialize providers.b and providers.c with some glue functions, such that whenever either one is requested, they (a) execute the A query and then (b) invoke produce_b and produce_c in turn. These glue functions will also store the results into the appropriate maps.
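
A very rough sketch of what such generated glue could amount to, written as a plain generic function (the conjoin() API was only a proposal, so everything here is hypothetical):

```rust
// Hypothetical glue: run A exactly once, let each reader borrow it, then let
// the final consumer take it by value, and return both outputs so the query
// system could cache them under their respective keys.
fn conjoined<A, B, C>(
    produce_a: impl FnOnce() -> A,
    produce_b: impl FnOnce(&A) -> B,
    produce_c: impl FnOnce(A) -> C,
) -> (B, C) {
    let a = produce_a();
    let b = produce_b(&a); // readers get `&A`
    let c = produce_c(a);  // the consumer gets `A` by value, preserving linearity
    (b, c)
}
```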

The major advantage of the linear scheme that I see is that it will more robustly fail if you screw it up. That is, to compare:

  • Stealable scheme:
    • How you mess it up: forget to call force() when stealing
    • What happens when you do: compilation succeeds with some orders, get a bug! for others
  • Linear scheme:
    • How you mess it up: forget to use conjoin() when setting up the providers
    • What happens when you do: if both queries are used, you get a bug!

In particular, in the stealable scheme, if you screw it up, things may still work when both queries are used, as long as they are used in the right order. In the linear case, if you fail to use conjoin and both queries are used, it will fail. This is definitely good. However, it's also true that the linear scheme requires more machinery, and that the bug! we do get in the stealable scheme fairly clearly pinpoints the cause (you know that a stealable query was used after being stolen, which can only happen if there is a missing force).

Therefore, I at least feel pretty good about going forward with the stealable scheme. I'd be happy to see the linear scheme come into being as well at some point.

TL;DR

OK, sorry for the detailed notes. Just wanted to jot down my current thoughts in detail. To my mind, the question of linear-vs-stealable can be deferred (as I noted), but it still remains to decide how to handle inlining. I feel... not great about having inlining in the codebase but not executable. I feel better about adopting the "suite-based" approach that I described in the first section. I can tinker with that today; I don't think it would be all that hard, and it should let me pare down the amount of "framework" that the MIR optimization stuff contains, while still keeping all the benefits -- in fact, it improves on some of them, in that there would never be a need to deal directly with the concept of "stealable" MIR; all the tricky cases would be bound up in the suites.

However, that would require keeping "multi-queries". Hmm, or perhaps I can use a different hack, i.e., having optimized_mir check the opt-level for local MIR. If it is >= 2, then we could invoke mir_opt2(), which can yield a map of optimized MIR, and we can fetch the MIR from the map (i.e., do the usual indirection we've been doing elsewhere). This feels pretty reasonable to me for the time being.

@nikomatsakis (Contributor, Author)

> This is true only if we do not plan to do any further optimization after inlining (which we certainly do -- part of the point of inlining is to unlock other optimizations).

So I wrote this, but I forgot about the query cycle detection. After some discussion on IRC, we were thinking that a good way to handle inlining (maybe in this PR, maybe not...) would be to have it request the fully optimized form of callees using try_get. This may yield a cycle error, in which case we can just ignore that callee, basically. This gets us bottom-up inlining of the fully optimized form of callees for free, which is actually pretty cool. Clever trick, @eddyb.
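
The shape of that bottom-up strategy might look roughly like the following. This is a hedged sketch: the exact module path and signature of `try_get` here are assumptions rather than the real query API, and only the idea -- request the optimized MIR of the callee and skip it on a cycle error -- comes from the discussion above.

```rust
// Hypothetical helper inside the inlining pass: ask for the fully optimized
// MIR of a callee, and skip the call site if that request would form a cycle
// (i.e., the callee is itself somewhere in the middle of being optimized).
fn callee_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
                        callee: DefId,
                        span: Span) -> Option<&'tcx Mir<'tcx>> {
    match ty::queries::optimized_mir::try_get(tcx, span, callee) {
        Ok(mir) => Some(mir),        // fully optimized callee: candidate for inlining
        Err(_cycle_error) => None,   // recursive/cyclic: just don't inline this one
    }
}
```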

We will probably want to be using stacker or something else to permit deeper stacks here.

@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch from 09aded8 to 7df756c on May 1, 2017 20:03
@nikomatsakis (Contributor, Author)

@eddyb take a look at the latest edits and see what you think; not fully tested etc but all I have time for now.

@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch from 7df756c to 74a93da on May 1, 2017 20:53
@bors (Contributor) commented May 1, 2017

☔ The latest upstream changes (presumably #41611) made this pull request unmergeable. Please resolve the merge conflicts.

@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch from 6ec92d9 to be32805 on May 1, 2017 23:16
@eddyb (Member) left a review


I've left some comments, but this is already leaner and inlining might just work with no further changes.

-    pub fn item_mir(self, did: DefId) -> Ref<'gcx, Mir<'gcx>> {
-        self.mir(did).borrow()
+    /// Given the did of an item, returns its (optimized) MIR, borrowed immutably.
+    pub fn item_mir(self, did: DefId) -> &'gcx Mir<'gcx> {
@eddyb (Member)

Can we remove this now in favor of just calling the query directly?

@@ -685,6 +697,7 @@ impl<'a, 'gcx, 'tcx> TyCtxt<'a, 'gcx, 'tcx> {
     pub fn create_and_enter<F, R>(s: &'tcx Session,
                                   local_providers: ty::maps::Providers<'tcx>,
                                   extern_providers: ty::maps::Providers<'tcx>,
+                                  mir_passes: Rc<Passes>,
@eddyb (Member)

Why are they an input and not hardcoded in librustc_mir? My intuition is that they will have to be interacting with some sort of scheduler within a function, interleaving "passes" with inlining.

@nikomatsakis (Contributor, Author)

Well, for one thing, only the driver knows all the pass names (e.g., elaborate-drops lives in borrowck). We could make librustc_mir depend on borrowck, but I thought it was nice to have the set of MIR passes that will execute be defined in the same place where we are generally defining the overall flow of the compiler. (I guess that, more and more, that overall flow will be diffused through the system.)

@eddyb (Member)

Oh, that is annoying; I did say something to @pnkfelix along the lines of moving the MIR code in borrowck into rustc_mir. Unlike you, I don't think we can sustain a more diffuse flow -- at least not for transformations; maybe only for analysis checks.

@nikomatsakis (Contributor, Author)

My point about 'diffuse flow' was that as we move to queries, you can't really expect the driver source to tell you much about the order in which things happen. So in that sense we might as well define the passes in librustc_mir.

As far as passes coming from "outside", I think eventually we will want to support this, but I am happy to put that day off for the time being. Mostly I like having the MIR passes execute in a common framework so we get uniform dumping of IR between passes and so forth (and a common numbering, etc).

/// - ready for constant evaluation
/// - unopt
/// - optimized
pub const MIR_SUITES: usize = 3;
@eddyb (Member)

I really hope we don't really need these.

@nikomatsakis (Contributor, Author)

The constants? We don't need them, we could use distinct variables instead of a Vec<Vec<>>. But it just seems a bit silly to do.

@eddyb (Member)

I mean, having them configurable at all.

mod hair;
mod shim;
pub mod mir_map;
mod queries;
@eddyb (Member)

I would really avoid calling a module that.

tcx.alloc_steal_mir(mir)
}

fn optimized_mir<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>, def_id: DefId) -> &'tcx Mir<'tcx> {
@eddyb (Member)

These 3 queries should go in transform IMO.

@@ -119,8 +117,6 @@ fn make_shim<'a, 'tcx>(tcx: ty::TyCtxt<'a, 'tcx, 'tcx>,
     debug!("make_shim({:?}) = {:?}", instance, result);

     let result = tcx.alloc_mir(result);
-    // Perma-borrow MIR from shims to prevent mutation.
-    mem::forget(result.borrow());
     result
@eddyb (Member)

Can remove the temporary variable.

// inline.
//
// We use a queue so that we inline "broadly" before we inline
// in depth. It is unclear if this is the current heuristic.
@eddyb (Member)

s/current/previous, perhaps?

@nikomatsakis (Contributor, Author)

I'm not sure what I meant by that sentence; I think I meant "best heuristic". It certainly is the current (and previous) heuristic, whether or not it was intended that way.

};
}

pub(crate) fn run_suite<'a, 'tcx>(tcx: TyCtxt<'a, 'tcx, 'tcx>,
@eddyb (Member)

If you moved those 3 entry points here, you'd definitely not need to make this public.

@nikomatsakis (Contributor, Author)

👍 seems good.

@nikomatsakis (Contributor, Author)

@eddyb Should I interpret your "approval" as r=me?

I was considering whether to remove the pass manager from the Session, so that the passes are not externally configurable, but otherwise leave things as they are. (If you like, I could also open a FIXME to discuss whether we should move everything into crates visible to MIR.)

I was planning on also opening an issue to discuss the best design for linear queries (and probably adding a FIXME referencing it to Steal). I have some further thoughts but I think I'd rather discuss them "elsewhere", not on this PR.

@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch from b48f547 to 1a20104 on May 2, 2017 15:39
@eddyb (Member) commented May 2, 2017

@nikomatsakis I would prefer it if Session didn't have access to the passes -- ideally not even TyCtxt, but if that's not easily doable for now, I can live with it.
Other than that, once it stops failing build/tests, you can just r=me.

@nikomatsakis (Contributor, Author)

@eddyb ok, I'll remove access from session for now and open up a FIXME. I just opened up #41710 to discuss the linear story. I'm curious as to your take on "mapping providers" (the third proposal).

@bors (Contributor) commented May 2, 2017

☔ The latest upstream changes (presumably #41702) made this pull request unmergeable. Please resolve the merge conflicts.

Each MIR key is a DefId that has MIR associated with it
Overall goal: reduce the amount of context a mir pass needs so that it
resembles a query.

- The hooks are no longer "threaded down" to the pass, but rather run
  automatically from the top-level (we also thread down the current pass
  number, so that the files are sorted better).
  - The hook now receives a *single* callback, rather than a callback per-MIR.
- The traits no longer have lifetime parameters; those moved to the
  methods -- given that we required `for<'tcx>` objects, there wasn't
  much point to them.
- Several passes now store a `String` instead of a `&'l str` (again, no
  point).
Also, store the completed set of passes in the tcx.
this temporarily disables `inline`
@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch 2 times, most recently from 68f7821 to af2c59d on May 2, 2017 20:11
@nikomatsakis force-pushed the incr-comp-dep-tracking-cell-mir branch from af2c59d to 488b2a3 on May 2, 2017 20:22
@nikomatsakis (Contributor, Author)

@bors r=eddyb

@bors (Contributor) commented May 2, 2017

📌 Commit 488b2a3 has been approved by eddyb

frewsxcv added a commit to frewsxcv/rust that referenced this pull request May 3, 2017
…-cell-mir, r=eddyb

bors added a commit that referenced this pull request May 3, 2017
Rollup of 7 pull requests

- Successful merges: #41217, #41625, #41640, #41653, #41656, #41657, #41705
- Failed merges:
@bors merged commit 488b2a3 into rust-lang:master on May 3, 2017