
cache the results of type projection and normalization #20304

Open
nikomatsakis opened this Issue Dec 29, 2014 · 10 comments

@nikomatsakis (Contributor) commented Dec 29, 2014

Both in trans and in typeck. Must be somewhat careful around type parameters and so forth. Probably we want to introduce a cache onto the fulfillment context to use for normalization as well.

@freebroccolo (Contributor) commented Feb 6, 2015

cc me

@arielb1 (Contributor) commented May 19, 2015

cc me (this seems to take 10% of no-opt time).

@DemiMarie (Contributor) commented Oct 31, 2015

Which functions in the compiler need to be memoized?

@Marwes (Contributor) commented Mar 7, 2016

I was thinking about implementing this, but I have run into some trouble. From what I can gather, type inference may try multiple different alternatives before it finds the correct typing. This means that the types returned after normalizing and selecting associated types may actually be wrong, and thus cannot be cached.

Is there some point at which the result of normalization is known to be correct, so that caching can be done safely? The other approach I thought of is to rely on the snapshotting in the InferCtxt so that bad types can be rolled back, but my implementation still seems to cache bad types.
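The snapshot-based approach described above can be sketched with a cache that keeps an undo log alongside the map, so entries made during a tentative inference snapshot are discarded on rollback. This is a minimal illustrative sketch, not rustc's actual API; all names (`SnapshotCache`, `Snapshot`, etc.) are hypothetical.

```rust
use std::collections::HashMap;

// Hypothetical sketch: cache entries made during a tentative snapshot
// are recorded in an undo log so they can be dropped on rollback, and
// tentative (possibly wrong) normalizations never leak out.
// Simplification: an overwrite of a pre-existing key during a snapshot
// is not restored on rollback; a real implementation would log the old value.
struct SnapshotCache<K, V> {
    map: HashMap<K, V>,
    undo_log: Vec<K>, // keys inserted since the outermost snapshot
}

struct Snapshot {
    undo_len: usize,
}

impl<K: std::hash::Hash + Eq + Clone, V> SnapshotCache<K, V> {
    fn new() -> Self {
        SnapshotCache { map: HashMap::new(), undo_log: Vec::new() }
    }

    fn start_snapshot(&mut self) -> Snapshot {
        Snapshot { undo_len: self.undo_log.len() }
    }

    fn insert(&mut self, key: K, value: V) {
        self.undo_log.push(key.clone());
        self.map.insert(key, value);
    }

    fn get(&self, key: &K) -> Option<&V> {
        self.map.get(key)
    }

    // Rolling back removes every entry cached during the snapshot.
    fn rollback_to(&mut self, snapshot: Snapshot) {
        for key in self.undo_log.drain(snapshot.undo_len..) {
            self.map.remove(&key);
        }
    }

    // Committing keeps the entries and discards the undo records
    // (single-level sketch; nested snapshots would need more care).
    fn commit(&mut self, snapshot: Snapshot) {
        self.undo_log.truncate(snapshot.undo_len);
    }
}
```

The point of the undo log is exactly the concern raised above: results cached inside a speculative probe are only kept if the probe commits.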

@nikomatsakis (Contributor, Author) commented Mar 7, 2016

It's worth noting that over the last week or so I've been drawing up plans for overhauling this part of the compiler. Under the new design I am currently considering, there isn't even a notion of normalization per se; rather, the compiler tracks "congruent" types using a congruence closure algorithm. I intend to try and write this up over the next few days.


@Marwes (Contributor) commented Mar 7, 2016

I will hold off working on this then. Looking forward to seeing this issue resolved.

@nikomatsakis (Contributor, Author) commented Mar 16, 2016

So, actually, I've been rethinking my "rethinking". That is, I now think the work on congruence closure is of secondary importance and can be deferred. I've implemented a simple cache for projection (I have plans for a more elaborate one), and it seems to be effective, e.g. for the example in #31849.
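The shape of such a simple projection cache is ordinary memoization: key the cache on the projection being normalized, and reuse the stored result instead of re-running the expensive trait-selection work. A minimal sketch, with all names (`ProjectionCache`, `normalize_with`) hypothetical rather than rustc's actual types:

```rust
use std::collections::HashMap;

// Hypothetical sketch of a projection cache: once a projection such as
// `<T as Iterator>::Item` has been normalized, reuse the result.
struct ProjectionCache {
    map: HashMap<String, String>, // projection key -> normalized type
    hits: usize,
    misses: usize,
}

impl ProjectionCache {
    fn new() -> Self {
        ProjectionCache { map: HashMap::new(), hits: 0, misses: 0 }
    }

    // `normalize` stands in for the expensive trait-selection work;
    // it only runs on a cache miss.
    fn normalize_with<F>(&mut self, key: &str, normalize: F) -> String
    where
        F: FnOnce() -> String,
    {
        if let Some(cached) = self.map.get(key) {
            self.hits += 1;
            return cached.clone();
        }
        self.misses += 1;
        let result = normalize();
        self.map.insert(key.to_string(), result.clone());
        result
    }
}
```

The win on examples like the one referenced above comes from the hit path: deeply nested types can force the same projection to be normalized many times, and every repeat becomes a map lookup.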

@Mark-Simulacrum (Member) commented May 2, 2017

@nikomatsakis Is this still a problem today?

@ishitatsuyuki (Member) commented Feb 18, 2018

This is still an issue today, although I've improved the situation in #48296 by reducing the complexity of the recursion itself.
