Invalidate cache when previously missing stubs are added #5465

Merged

ilevkivskyi merged 1 commit into python:master from ilevkivskyi:invalidate-cache on Aug 13, 2018

Conversation

ilevkivskyi (Collaborator) commented Aug 13, 2018

Fixes #1910
Fixes #5101

ilevkivskyi requested a review from msullivan on Aug 13, 2018

msullivan (Collaborator) reviewed Aug 13, 2018

LGTM, though I have one question:

 for dep in st.ancestors + dependencies + st.suppressed:
     # We don't want to recheck imports marked with '# type: ignore'
     # so we ignore any suppressed module not explicitly re-included
     # from the command line.
     ignored = dep in st.suppressed and dep not in entry_points
-    if ignored:
+    if ignored and dep not in added:

I thought there was already logic to pick up added modules? Does it not work for these purposes?

ilevkivskyi (Collaborator) replied Aug 13, 2018

My understanding is that the problem is that the freshness of a file is determined by whether its suppressed dependencies are present in the graph. But they are never added to the graph in the first place because of this code (unless they are given as entry points). So I only add them (i.e., previously suppressed files that are now found) to the graph here in load_graph, and the invalidation is done by the existing code in process_graph.
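
A minimal sketch of that idea (the helper names previously_missing, module_now_findable, and add_to_graph are illustrative stand-ins, not mypy's actual API):

def add_previously_missing(previously_missing, module_now_findable, add_to_graph):
    # Previously suppressed modules that can now be found (e.g. after
    # installing a stubs package) are treated as newly added.
    added = {mod for mod in previously_missing if module_now_findable(mod)}
    # Pull them into the build graph; once they are present, the
    # freshness check in process_graph notices that a formerly missing
    # dependency now exists, marks its importers stale, and rechecks them.
    for mod in added:
        add_to_graph(mod)
    return added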

ilevkivskyi merged commit e402c13 into python:master on Aug 13, 2018

2 checks passed

continuous-integration/appveyor/pr: AppVeyor build succeeded
continuous-integration/travis-ci/pr: The Travis CI build passed

ilevkivskyi deleted the ilevkivskyi:invalidate-cache branch on Aug 13, 2018

ilevkivskyi added a commit that referenced this pull request Sep 3, 2018

Fix performance regression caused by #5465: daemon part (#5556)
This is the part of the original PR #5544 that takes care of the daemon performance regression caused by #5465.

This change should not affect semantics, only performance: it avoids cloning the options for lots of modules when `--follow-imports=skip` is used.
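
As a rough sketch of the kind of saving involved (per_module_overrides and follow_imports_for are hypothetical names, not mypy's actual API):

def follow_imports_for(module, global_options, per_module_overrides):
    # Cloning a full Options object per module copies every field; a
    # targeted lookup of the single relevant setting avoids that cost.
    override = per_module_overrides.get(module, {})
    return override.get('follow_imports', global_options.follow_imports)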

ilevkivskyi added a commit that referenced this pull request Sep 3, 2018

Fix performance regression caused by #5465: non-daemon part (#5544)
The problem with the initial fix in #5465 is that it assumes that all modules in `st.suppressed` that now exist in the file system are newly added, while in fact they may not be new if one uses `follow_imports = skip`.

My initial guess was that this caused a lot of files to become stale. It turns out that staleness is actually determined correctly (all the tests I added pass on master), but we may still parse a lot of modules unnecessarily.

This PR fixes the performance regression by not treating modules as newly added if they were put into `st.suppressed` because of `follow_imports = skip`.

Some overhead still remains: we need to clone the options for a module to understand why it got into the suppressed dependencies. This could be improved by keeping two separate suppressed lists, but I think the win would be quite minor and not worth the complexity.
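
In rough pseudocode, the refined check looks like this (module_now_findable and follow_imports_for are hypothetical helpers, not mypy's actual API):

def is_newly_added(dep, st, entry_points, module_now_findable, follow_imports_for):
    # Only suppressed dependencies can be "newly added".
    if dep not in st.suppressed or dep in entry_points:
        return False
    # A module suppressed because of follow_imports = skip was skipped
    # on purpose; it is not new, so it should not invalidate anything.
    if follow_imports_for(dep) == 'skip':
        return False
    # Otherwise it was suppressed because it was missing; if it can be
    # found now, it really is new and its importers must be rechecked.
    return module_now_findable(dep)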