
If service worker cache update fails halfway through, app is bricked #1316

Closed
rjcorwin opened this issue Feb 15, 2018 · 8 comments

@rjcorwin

Library Affected:
workbox-sw, workbox-build, etc.

Browser & Platform:
E.g. Google Chrome v51.0.1 for Android, or "all browsers".

Issue or Feature Request Description:
When updating an app and the network cuts out halfway through, you end up with a bricked app where some files have been updated but others have not. It would be preferable to fall back to the previous version.

@gauntface commented Feb 15, 2018

Few things to note:

  1. The user has to be online for the new SW to be picked up.
  2. The app is only "bricked" until the user next gets online: the new service worker won't install, so the next time the site is online, the new SW will be re-installed.

That being said, it would be good to safeguard against this. Ideally, we'd cache requests into a temp cache and rename it in activate, but I don't believe we can rename a cache. Maybe we can read out and re-write the cached requests instead, though that feels slightly over the top.

@jeffposnick any ideas?
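
For illustration, a minimal sketch of the temp-cache idea described above (not Workbox's actual implementation; the cache names and URL list are placeholders): since caches can't be renamed, the activate handler copies entries from the temp cache into the final cache and then deletes the temp cache.

```js
// Sketch only: not Workbox internals. Cache names and PRECACHE_URLS are
// placeholder values for illustration.
const TEMP_CACHE = 'precache-temp';
const FINAL_CACHE = 'precache';
const PRECACHE_URLS = ['/index.html', '/app.js', '/styles.css'];

self.addEventListener('install', (event) => {
  // Download everything into a temporary cache first. If the network cuts
  // out, install fails and the previous SW (with its intact cache) stays live.
  event.waitUntil(
    caches.open(TEMP_CACHE).then((cache) => cache.addAll(PRECACHE_URLS))
  );
});

self.addEventListener('activate', (event) => {
  // Caches can't be renamed, so read the entries out of the temp cache,
  // re-write them into the final cache, and drop the temp cache.
  event.waitUntil((async () => {
    const tempCache = await caches.open(TEMP_CACHE);
    const finalCache = await caches.open(FINAL_CACHE);
    for (const request of await tempCache.keys()) {
      const response = await tempCache.match(request);
      await finalCache.put(request, response);
    }
    await caches.delete(TEMP_CACHE);
  })());
});
```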

@jeffposnick (Contributor)

@rjsteinert, just to clarify, which type of cache update are you talking about? Updates to precached resources, or updates to resources that are cached via runtimeCaching?

If you're talking about updates to precached resources, I'm assuming that your resources don't have hash fingerprints as part of their URLs, and you end up overwriting previously cached entries by virtue of sharing the same URL?

@gauntface

@jeffposnick This can occur if the URLs are the same, but have different revisions.
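
For reference, here is roughly how the two cases look in a Workbox precache manifest (assuming workbox-sw is already loaded; the URLs and revision value are placeholders):

```js
workbox.precaching.precacheAndRoute([
  // Same URL on every release, only the revision string changes: an
  // interrupted update can overwrite this entry while other entries stay stale.
  {url: '/index.html', revision: 'abc123'},
  // Hash fingerprint baked into the URL: a new release writes a new cache
  // entry rather than overwriting the old one.
  '/app.3f7a9c.js',
]);
```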

@ghost commented Feb 23, 2018

A temp cache is maybe the best solution because it also allows you to pre-download the update for clients before your new service worker is up.
This gets around the old chicken-and-egg issue.

@gauntface gauntface added this to the V3 Stable milestone Feb 28, 2018
@jeffposnick (Contributor)

Some historical precedent:

  • sw-toolbox had "$$$inactive$$$" and "$$$active$$$" caches that it swapped out in the activate handler. (Using precache() with sw-toolbox was problematic for other reasons.)

  • sw-precache had a single cache, but it included versioning information in the URL used as the cache key, so overwriting entries wasn't a concern.

We're not going to go back to sw-precache's model (I'd assume?). A disadvantage of sw-toolbox's model is that you end up with two copies of each precached entry for a short period of time, and in an environment where we're particularly concerned about quota issues, that would seem to exacerbate things.

Here are some other options:

  • We switch to cache.addAll() and fire off all of the network requests + cache adds in a single, atomic operation. This would mean refactoring the code that currently populates the cache via independent requests, and we wouldn't be able to run the requests through our fetchWrapper. This would also mean decoupling the IDB versioning updates from the cache storage updates, which could potentially lead to that relationship getting out of sync.

  • We handle precaching in multiple stages, where we first make all the network requests, and only if they are all successful, we use the Response objects to do the cache updates + IDB updates. This operates on the assumption that the most likely cause of failures with our current model is a network issue (as opposed to storage quota issues).
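
A rough sketch of that second option (all fetches first, then the cache writes), under the assumption that a network failure mid-update is the common failure mode; the function and cache names here are illustrative, not Workbox internals:

```js
// Sketch of the "fetch everything first, then write" option. Names are
// placeholders, not Workbox internals.
async function precacheAtomically(cacheName, urls) {
  // Stage 1: make all the network requests up front. Any failure rejects
  // the whole operation before the existing cache is touched.
  const responses = await Promise.all(urls.map(async (url) => {
    const response = await fetch(url);
    if (!response.ok) {
      throw new Error(`Request for ${url} failed with ${response.status}`);
    }
    return response;
  }));

  // Stage 2: only after every request succeeded, write the Response objects
  // into the cache (and update any revision bookkeeping, e.g. in IndexedDB).
  const cache = await caches.open(cacheName);
  await Promise.all(urls.map((url, i) => cache.put(url, responses[i])));
}

self.addEventListener('install', (event) => {
  event.waitUntil(precacheAtomically('precache', ['/index.html', '/app.js']));
});
```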

@gauntface gauntface self-assigned this Mar 2, 2018
@rjcorwin (Author) commented Mar 5, 2018

you end up overwriting previously cached entries by virtue of sharing the same URL?

That's correct. If an update fails, some of the cache has been updated but the rest has not. This is particularly problematic when you have packed your application into chunks, the network cuts out, and you end up with an outdated chunk.

We ended up going with a solution where updates are never at the same URL: the app moves around according to a UUID generated on every release. If the update successfully downloads, we remove everything in the cache that doesn't have that new release UUID.

This is a higher level of complexity than we were hoping to have to support in order to get atomic updates. It would be great if we could pull it off with workbox!
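
For reference, roughly how such a release-UUID workaround might look in a hand-written service worker (RELEASE_UUID would be injected at build time; all names here are hypothetical):

```js
// Hypothetical sketch of the release-UUID workaround; RELEASE_UUID would be
// injected at build time, and the URL list is a placeholder.
const RELEASE_UUID = '0f8fad5b-d9cb-469f-a165-70867728950e';
const CACHE_NAME = `app-${RELEASE_UUID}`;
const APP_URLS = ['/index.html', '/app.js'];

self.addEventListener('install', (event) => {
  // The new release downloads into its own cache, keyed by its UUID, so it
  // never overwrites entries belonging to the currently active release.
  event.waitUntil(
    caches.open(CACHE_NAME).then((cache) => cache.addAll(APP_URLS))
  );
});

self.addEventListener('activate', (event) => {
  // Only once the new release is fully cached, delete the caches of every
  // other release (anything not carrying the new UUID).
  event.waitUntil(
    caches.keys().then((names) => Promise.all(
      names
        .filter((name) => name !== CACHE_NAME)
        .map((name) => caches.delete(name))
    ))
  );
});
```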

@gauntface

Just to close the loop here: v3.0.0-beta.2 now uses a temp cache.

@rjcorwin (Author) commented Mar 7, 2018

Sounds promising @gauntface and @jeffposnick! Thanks so much.
