incr.comp.: Load cached diagnostics lazily and allow more things in the cache. #46338

Merged
7 commits merged into rust-lang:master from michaelwoerister:lazy-diagnostics on Dec 1, 2017

Conversation

michaelwoerister (Contributor) commented Nov 28, 2017

This PR makes two changes:

  1. Diagnostics are now loaded lazily from the incr. comp. cache. This turned out to be necessary for correctness because diagnostics contain `Span` values, and deserializing those requires that the source file they point to is still around in the current compilation session, which isn't always the case. Loading diagnostics lazily means we never touch entries that are no longer valid.
  2. The compiler can now handle the absence of a cache entry for a given query invocation. Before, every result of a cacheable query was expected to be present in the cache; now the compiler falls back to re-computing the result if no cache entry is found. This lets us cache things that we cannot force from a dep-node (like the `symbol_name` query). In such a case we just get a "best effort" caching strategy (see the sketch below the list).
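
A minimal sketch of the fallback described in point 2, using placeholder names (`try_load_from_disk`, `compute`) rather than rustc's actual query-system API:

```rust
// Hypothetical illustration of "best effort" caching: use the on-disk cache
// entry if one exists, otherwise fall back to re-computing the query result.
fn get_query_result<K, V>(
    key: K,
    try_load_from_disk: impl Fn(&K) -> Option<V>, // may find no cache entry
    compute: impl Fn(&K) -> V,                    // can always produce the result
) -> V {
    match try_load_from_disk(&key) {
        Some(cached) => cached, // cache hit: reuse the stored result
        None => compute(&key),  // cache miss: recompute instead of panicking
    }
}
```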

This PR is based on #46301 (=first 2 commits), so please don't merge until that has landed. The rest of the commits are ready for review though.

r? @nikomatsakis


bors (Contributor) commented Nov 30, 2017

☔️ The latest upstream changes (presumably #46299) made this pull request unmergeable. Please resolve the merge conflicts.


nikomatsakis (Contributor) commented Dec 1, 2017

r=me

michaelwoerister changed the title from [DO NOT MERGE YET] incr.comp.: Load cached diagnostics lazily and allow more things in the cache. to incr.comp.: Load cached diagnostics lazily and allow more things in the cache. on Dec 1, 2017


michaelwoerister (Contributor) commented Dec 1, 2017

OK, travis passed.

@bors r=nikomatsakis p=1


bors (Contributor) commented Dec 1, 2017

📌 Commit 966eead has been approved by nikomatsakis


bors (Contributor) commented Dec 1, 2017

⌛️ Testing commit 966eead with merge 6805b01...

bors added a commit that referenced this pull request Dec 1, 2017

Auto merge of #46338 - michaelwoerister:lazy-diagnostics, r=nikomatsakis
incr.comp.: Load cached diagnostics lazily and allow more things in the cache.

This PR makes two changes:
1. Diagnostics are loaded lazily from the incr. comp. cache now. This turned out to be necessary for correctness because diagnostics contain `Span` values and deserializing those requires that the source file they point to is still around in the current compilation session. Obviously this isn't always the case. Loading them lazily allows for never touching diagnostics that are not valid anymore.
2. The compiler can now deal with there being no cache entry for a given query invocation. Before, all query results of a cacheable query were always expected to be present in the cache. Now, the compiler can fall back to re-computing the result if there is no cache entry found. This allows for caching things that we cannot force from dep-node (like the `symbol_name` query). In such a case we'll just have a "best effort" caching strategy.

~~This PR is based on #46301 (=first 2 commits), so please don't merge until that has landed. The rest of the commits are ready for review though.~~

r? @nikomatsakis

bors (Contributor) commented Dec 1, 2017

☀️ Test successful - status-appveyor, status-travis
Approved by: nikomatsakis
Pushing 6805b01 to master...

bors merged commit 966eead into rust-lang:master on Dec 1, 2017

2 checks passed

continuous-integration/travis-ci/pr: The Travis CI build passed
homu: Test successful
Mark-Simulacrum (Member) commented Dec 3, 2017


michaelwoerister (Contributor) commented Dec 4, 2017

Bummer, I would have hoped that the slightly more efficient span hashing implementation would make things faster. But it seems that encoding and decoding expansion contexts also adds noticeable overhead for some crates. Fortunately it doesn't look too bad, except for crates.io again; that crate seems to be rather sensitive to span-related changes.
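
For context, the follow-up in #46562 (referenced below) addresses this by caching expansion context hashes. A rough sketch of that memoization idea, using illustrative stand-ins (a plain `u32` expansion id and a dummy `hash_expn_context`) rather than rustc's actual internals:

```rust
use std::collections::HashMap;

// Sketch only: remember the hash of each expansion context so that spans
// sharing the same expansion context pay the hashing cost just once.
struct SpanHasher {
    expn_hash_cache: HashMap<u32, u64>, // expansion id -> cached hash
}

impl SpanHasher {
    fn expn_context_hash(&mut self, expn_id: u32) -> u64 {
        *self
            .expn_hash_cache
            .entry(expn_id)
            .or_insert_with(|| hash_expn_context(expn_id))
    }
}

// Stand-in for the real (comparatively expensive) expansion-context hashing.
fn hash_expn_context(expn_id: u32) -> u64 {
    u64::from(expn_id)
}
```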

bors added a commit that referenced this pull request Dec 14, 2017

Auto merge of #46562 - michaelwoerister:faster-span-hashing, r=eddyb
incr.comp.: Speed up span hashing by caching expansion context hashes.

This PR fixes the performance regressions from #46338.

r? @nikomatsakis