
prevent expanded paths from getting into the history cache #141

Closed · wants to merge 1 commit

Conversation

@wlipa (Contributor) commented Sep 23, 2015

This patch addresses #127 by avoiding in-place expansion of the history entries.

I verified that this change fixed the problem with my deploys.
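
For context, here's a minimal sketch of the idea behind the patch, in plain Ruby with a hypothetical expand helper (this is not the literal Sprockets code): build a new array of expanded paths instead of expanding the cached history entry in place.

def expand(dep, root)
  File.expand_path(dep, root)   # stand-in for Sprockets' URI expansion
end

history = [["app/assets/javascripts/foo.js"]]   # entry as stored in the cache

# In-place expansion (the bug): the cached entry itself now holds absolute paths.
#   history.first.map! { |dep| expand(dep, "/deploys/release-1") }

# Non-mutating expansion (the fix): the cached entry keeps its relative paths.
expanded = history.first.map { |dep| expand(dep, "/deploys/release-1") }

p expanded  # => ["/deploys/release-1/app/assets/javascripts/foo.js"]
p history   # => [["app/assets/javascripts/foo.js"]]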


@schneems (Member)

Awesome, thanks for your help. I think you hit the nail on the head. I was trying to save an array allocation, but it totally pollutes the history cache by accident. Sorry for going quiet on the original issue; I'm not getting emails from GitHub for some reason right now. @jeremy, do you think this is what you were seeing?

I think we also need to add a test so that this doesn't regress by accident. Once we're good to go I can also backport to 3.x.

@jeremy (Member) commented Sep 23, 2015

This affects the in-memory cache, but not the cache across deploys.

That's "polluted" by the dependency cache. It stores file digests with absolute paths, but has a cache key that doesn't change across deploys anymore (due to 436e2d6#diff-879998e90345f35e805b8b9707ab1ff2R189). So a dep cache lookup will return a dep list that references assets from old deploys, then we look ask for an asset matching that dep list from the cache, which faithfully returns the old asset.

Making a test case that exposes this is proving challenging. We can see the case we want to tickle, but we don't have any test coverage for "portable" asset scenarios like this.

Note: just reverting to register_dependency_resolver 'environment-paths', &:paths fixes the problem, since the dep cache will be expired whenever the Sprockets asset paths change.

The downside is that this makes the dep cache useless, since it then expires on every deploy. To be useful, the dep cache also needs relative paths.
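
To illustrate the cross-deploy failure with a toy example (a bare hash stands in for the dep cache, and the key is made up):

dep_cache = {}
key = "application.js|environment-version"   # this key no longer changes per deploy

# Deploy 1 stores a dep list containing absolute paths:
dep_cache[key] ||= ["/deploys/release-1/app/assets/javascripts/foo.js"]

# Deploy 2 runs from a new root but looks up the same key:
dep_cache[key] ||= ["/deploys/release-2/app/assets/javascripts/foo.js"]

p dep_cache[key]
# => ["/deploys/release-1/app/assets/javascripts/foo.js"]
# The lookup hands back release-1's deps, so the matching (old) asset is returned.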

@schneems (Member)

It will leak into the cache when the modified array gets re-written:

cache.set(key, history.rotate!(index)) if index > 0

I'm not sure under what conditions an older digest is used. When index is 0, it means the last cached build is being used. Maybe if a change is made and then reverted, it uses an older set? I'm trying to get a failure case for this locally to see if I can help figure out how to test things.
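
To make the mutation hazard concrete, here's a self-contained sketch (the MemoryCache class below is a stand-in, not the actual Sprockets cache) showing how mutating an array fetched from an in-memory cache rewrites the cached entry without any further set call:

class MemoryCache
  def initialize
    @store = {}
  end

  def get(key)
    @store[key]
  end

  def set(key, value)
    @store[key] = value
  end
end

cache = MemoryCache.new
cache.set(:history, [["app/assets/javascripts/foo.js"]])

history = cache.get(:history)
history.first.map! { |dep| File.expand_path(dep, "/deploys/release-1") }

p cache.get(:history)
# => [["/deploys/release-1/app/assets/javascripts/foo.js"]]
# Absolute paths are now in the cache even though set was never called again.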

@wlipa (Contributor, author) commented Sep 23, 2015

In my error scenario, the line where it actually leaked was:

cache.set(key, history.unshift(deps).take(limit))

It occurred when a JS asset had a changed JS dependency between deploys.
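
A sketch of that write path with a bare hash as the cache (the key and limit are made up): once deps has already been expanded to absolute paths, unshift stores them verbatim in the history entry:

cache = {}
key, limit = "application.js", 3

history = cache.fetch(key, [])
deps = ["/deploys/release-2/app/assets/javascripts/foo.js"]   # already expanded

cache[key] = history.unshift(deps).take(limit)

p cache[key]
# => [["/deploys/release-2/app/assets/javascripts/foo.js"]]
# The history cache now carries a deploy-specific absolute path.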

@jeremy (Member) commented Sep 23, 2015

Bam! Absolutely.

jeremy added a commit to basecamp/sprockets that referenced this pull request Sep 24, 2015
prevent expanded paths from getting into the history cache
@schneems (Member)

Thanks for the clarification; I was looking in the wrong place. With the new info it was really easy to reproduce. Set up a file that requires another:

$ echo "//= require foo.js" >> app/assets/javascripts/application.js
$ echo "var foo = 2;" >> app/assets/javascripts/foo.js

Clean and compile once, then modify the dependency, and 💥 you get absolute paths in your cache.

$ bundle exec rake assets:clobber assets:precompile
$ echo "var bar = 2;" >> app/assets/javascripts/foo.js
$ bundle exec rake assets:precompile

I'm going to work on a test.
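
One crude way to confirm the pollution after those steps, assuming a stock Rails app whose Sprockets file cache lives under tmp/cache/assets and stores marshaled entries (so a raw byte scan is rough but telling):

# Run from the app root after the precompiles above.
root = Dir.pwd
Dir.glob("tmp/cache/assets/**/*").each do |file|
  next unless File.file?(file)
  puts file if File.binread(file).include?(root)   # absolute path leaked here
end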

schneems added a commit that referenced this pull request Sep 25, 2015
When mutating objects from the cache, sometimes those objects are placed back into the cache, accidentally storing absolute paths.

This commit verifies that no absolute paths are accidentally stored in the cache.
schneems added a commit that referenced this pull request Sep 25, 2015
This isn't ideal, we're directly inspecting the cache, but I don't know how to cause a second asset to actually use the information in the cache to incorrectly load an asset. If someone could show me how, that would be great and I'll update the test. To understand the failure mode see: #141 (comment)
@schneems (Member)

I wrote a really hacky test for this in #142; it directly scans the cache for absolute paths. It fails without your patch and passes with it.

While I can observe the bad behavior with my script above, I don't know how to actually load an asset that uses the bad paths. If you could give me more info on how to force a failure scenario (i.e. make a second app that points at the first and loads the wrong asset, etc.), I would really appreciate it. As is, this test is valid but pretty brittle.
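
For the record, a hedged Minitest sketch in the spirit of that test (not the literal #142 code; peeking at the store's @cache ivar is an assumption about MemoryStore's internals, and the scan may need narrowing if Sprockets legitimately stores other absolute URIs):

require "minitest/autorun"
require "fileutils"
require "sprockets"

class HistoryCachePollutionTest < Minitest::Test
  def test_cache_contains_no_absolute_paths
    root = File.expand_path("tmp_history_cache_app", __dir__)
    FileUtils.mkdir_p File.join(root, "javascripts")
    File.write File.join(root, "javascripts/application.js"), "//= require foo.js\n"
    File.write File.join(root, "javascripts/foo.js"), "var foo = 1;\n"

    store = Sprockets::Cache::MemoryStore.new
    env = Sprockets::Environment.new(root)
    env.append_path File.join(root, "javascripts")
    env.cache = store

    env["application.js"]                                   # first build
    File.write File.join(root, "javascripts/foo.js"), "var foo = 2;\n"
    env["application.js"]                                   # rebuild after a dep change

    # Hacky by design: scan every raw cache entry for the absolute root.
    store.instance_variable_get(:@cache).each do |key, value|
      refute_includes value.inspect, root, "absolute path leaked via cache key #{key}"
    end
  ensure
    FileUtils.rm_rf(root) if root
  end
end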

@schneems (Member)

This issue was closed via d80c07a. I'm going to backport to 3.x and cut 3.3.5. Thanks for your help!

Even though the PR is closed, if you can help me repro the problem further so I can write a better test, it would be hugely helpful.

schneems closed this on Sep 25, 2015
@wlipa (Contributor, author) commented Sep 25, 2015

Try doing things in three steps, not just two. It's actually the third step that reveals the problem at the application level.

Step 1 - clobber, deploy / precompile. Everything is good so far.
Step 2 - deploy / precompile with a dependency change. The cache gets polluted, but the assets are still OK as far as the app is concerned.
Step 3 - deploy / precompile with another dependency change. The app gets busted because the asset returned is from Step 2, not Step 3, due to the polluted cache. So you could look for some difference that the app needs, e.g. a new function call.

In each of these steps, the Rails root should change to something step-specific, to mirror the way Capistrano puts a release date in the folder path for each new deploy. That way the assets will have different absolute paths but the same relative ones.
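
Following that recipe, here's a sketch of the three steps as a single Ruby script: a shared file cache across three "deploys" whose roots differ, the way Capistrano's timestamped release folders do (the deploy helper and directory layout are illustrative, not from the thread):

require "fileutils"
require "sprockets"

CACHE = Sprockets::Cache::FileStore.new("./shared_sprockets_cache")

def deploy(release, foo_body)
  root = File.expand_path("releases/#{release}")
  FileUtils.mkdir_p File.join(root, "javascripts")
  File.write File.join(root, "javascripts/application.js"), "//= require foo.js\n"
  File.write File.join(root, "javascripts/foo.js"), foo_body

  env = Sprockets::Environment.new(root)
  env.append_path File.join(root, "javascripts")
  env.cache = CACHE                     # shared across deploys, like a real cache dir
  env["application.js"].to_s
end

deploy("release-1", "var step = 1;\n")        # Step 1: clean build
deploy("release-2", "var step = 2;\n")        # Step 2: dep change; cache gets polluted
js = deploy("release-3", "var step = 3;\n")   # Step 3: the polluted history can win

puts js.include?("var step = 3;") ? "fresh asset" : "STALE asset served"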
