support transient inference contexts in the SLG solver #97
+408 −303
Conversation

nikomatsakis added some commits Mar 14, 2018
nikomatsakis requested a review from scalexm Mar 16, 2018
See also this internals thread for more discussion.
nikomatsakis merged commit 7cf55a6 into rust-lang:master Mar 19, 2018

1 check passed: continuous-integration/travis-ci/pr (The Travis CI build passed)
After some discussion with @scalexm, gonna land this
nikomatsakis commented Mar 16, 2018
This PR shifts the way that the SLG solver trait works to support "transient" inference contexts. In the older code, each strand in the SLG solver had an associated inference context that it would carry with it. When we created new strands, we would fork this inference context (using `clone`). (My intention was to eventually move to a persistent data structure for this, although it's not obvious that this would be faster.)

However, the approach of storing an inference context per strand was pretty incompatible with how rustc's inference contexts work. They are always bound to some closure, so you can't store them in the heap, return, and then come back and use them later. The reason for this is that rustc creates an arena per inference context to store temporary types, and then throws this arena away when you return from the closure.
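As a hedged sketch of that closure-bound pattern (the `Arena`, `InferCtxt`, and `with_infer_ctxt` names here are illustrative stand-ins, not rustc's real types), the lifetime tied to the arena is what prevents the context from outliving the call:

```rust
// A stand-in for rustc's per-inference-context arena of temporary types.
struct Arena {
    types: Vec<String>,
}

// The inference context borrows the arena, so it cannot outlive it.
struct InferCtxt<'a> {
    arena: &'a mut Arena,
}

impl<'a> InferCtxt<'a> {
    // Intern a type into the arena, returning its index.
    fn intern(&mut self, ty: &str) -> usize {
        self.arena.types.push(ty.to_string());
        self.arena.types.len() - 1
    }
}

// The context is only ever lent to a closure; when the closure returns,
// the arena (and every type interned into it) is dropped. Callers cannot
// stash the context on the heap and come back to it later.
fn with_infer_ctxt<R>(f: impl FnOnce(&mut InferCtxt) -> R) -> R {
    let mut arena = Arena { types: Vec::new() };
    let mut icx = InferCtxt { arena: &mut arena };
    f(&mut icx)
    // arena freed here
}
```

The borrow checker rejects any attempt to return the `&mut InferCtxt` (or anything borrowing the arena) out of the closure, which is exactly the property that makes "one long-lived context per strand" impossible.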
Originally, I hoped to address this by changing rustc's API to use a "Rental-like" interface. But this proved more challenging than I anticipated. I made some progress, but I kept hitting annoying edge cases in the trait system: notably, the lack of generic associated types and the shortcomings of normalization under binders. Ironically, these are two of the bugs I most hope to fix via this move to Chalk! It seemed to me that this route was not going anywhere.
So this PR takes a different tack. The SLG solver trait now never gives ownership of an inference context away; instead, it invokes a closure with a `&mut dyn InferenceTable` dyn-trait. This means that the callee can only use that inference table during the closure and cannot allow it to "escape" onto the heap (the anonymous lifetime ensures that).

The solver in turn then distinguishes between strands that are "at rest", waiting in some table, and the current strand. A strand at rest is called a `CanonicalStrand`, and it is stored in canonical form. When we pick up a strand to run with it, we create a fresh inference context, instantiate its variables, and then start using it.

As implemented, this is probably fairly inefficient: we have to do a lot of substitution on a pretty regular basis. But I'm mostly interested in pushing through until we have something that works; then I think we can come back and revisit some of these integration questions. See for example the last commit, which suggests that it might be worthwhile to push hard on making inference contexts really lightweight (and also on optimizing canonicalization).
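Putting the pieces together, here is a minimal sketch of the scheme under discussion (all names are illustrative; this `InferenceTable`/`CanonicalStrand` pair is modeled loosely on the description above, not on Chalk's actual API): a strand at rest stores its goal with variables renumbered into canonical form, and picking it up means building a transient table, substituting fresh variables, and lending the table out as `&mut dyn InferenceTable`.

```rust
// Illustrative interface for a transient inference table.
trait InferenceTable {
    fn new_variable(&mut self) -> usize;
}

struct SimpleTable {
    next_var: usize,
}

impl InferenceTable for SimpleTable {
    fn new_variable(&mut self) -> usize {
        let v = self.next_var;
        self.next_var += 1;
        v
    }
}

// A strand "at rest": its inference variables renumbered 0..num_vars.
struct CanonicalStrand {
    num_vars: usize,
    goal: Vec<usize>, // goal terms referring to canonical variables
}

// Pick up a strand: build a fresh table, substitute fresh inference
// variables for the canonical ones, and hand the table to a callback.
// The anonymous lifetime behind `&mut dyn InferenceTable` keeps the
// table from escaping onto the heap.
fn with_strand<R>(
    strand: &CanonicalStrand,
    f: impl FnOnce(&mut dyn InferenceTable, Vec<usize>) -> R,
) -> R {
    let mut table = SimpleTable { next_var: 0 };
    let fresh: Vec<usize> = (0..strand.num_vars)
        .map(|_| table.new_variable())
        .collect();
    let goal: Vec<usize> = strand.goal.iter().map(|&v| fresh[v]).collect();
    f(&mut table, goal)
    // table dropped here; a strand that needs to wait again would be
    // re-canonicalized before this point
}
```

The substitution in `with_strand` is the per-pickup cost the PR description flags as "probably fairly inefficient": every time a strand is resumed, its canonical variables are re-instantiated against a brand-new table.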