"Pausing" precaching to avoid bandwidth contention #570
Couple of thoughts (all random):
Gut feeling is that without some solid research and reproducible cases, I'd be loath to add anything to `workbox` at all in this space, as it feels speculative.
Drive-by comment: the production PWAs I've seen run into this issue have generally defined either a custom event or a timeout after which they kick off their SW registration and precaching process, to avoid as much opportunity for network contention as possible. This of course doesn't help if fetches occur midway through that process. I'll admit that pausing precaching could be a desirable option, but I agree with Matt that this area seems speculative right now. I'm going to label this as 'chillin' so we can take some time to research where this is concretely an issue, to avoid shipping any premature solutions. Does that sound reasonable?
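For illustration, a minimal sketch of that delayed-registration pattern; the `/sw.js` path and the three-second delay are placeholder values, not recommendations:

```js
if ('serviceWorker' in navigator) {
  window.addEventListener('load', () => {
    // Wait for the load event plus a grace period before registering,
    // so the SW's precaching doesn't compete with the page's own
    // network requests during startup.
    setTimeout(() => {
      navigator.serviceWorker.register('/sw.js');
    }, 3000);
  });
}
```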
Going to close this and see if anyone else requests this feature. If you do, please add a comment with your use case.
I would also like this feature, or some general way to prioritize assets.
@milky2028, can you please provide details of your use case?
I actually just ended up using this to delay SW registration instead. Thanks!
It would be great to have an option to delay precaching without having to delay the whole SW registration. 😄
Hi @jeffposnick. I know this is closed, but may I open the discussion again? I am also seeing this issue when testing a PWA with a poor mobile data connection. The PWA in question contains a load of media data (audio files), so if precaching is occurring (which includes the audio), everything grinds to a halt. I am experimenting with delayed service worker activation to get around this, and the audio is all OK.

PWAs are supposed to be 'progressive': to add functionality that enhances the user experience, particularly when offline. But precaching, most noticeably under these circumstances (slow mobile connection and large files), actually worsens the user experience.

Being able to pause precaching might work OK, but it's technically tricky. Sending a pause message to the service worker is easy, but when exactly do I say 'unpause'? The bottom line is: I really don't want to have to manage this myself. It seems to me that the best solution would be to somehow make precaching requests low priority, so that precaching only runs in the gaps, when the network connection is not otherwise being used (a bit like garbage collection in a JVM, only fetching stuff rather than removing it). But I don't see how you could do this from JavaScript; it would seem to require some kind of native feature implemented by the browser.
I believe you're describing Priority Hints, which were experimented with in a previous version of Chrome, but which never progressed to the stage where they were implemented by default. We would be happy to use them in `workbox`.
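For illustration, a sketch of what low-priority precache fetches could look like under the Priority Hints proposal's `priority` option on `fetch()`; the function name here is made up, and browsers without support simply ignore the option:

```js
// Sketch only: assumes a browser that implements the Priority Hints
// `priority` option on fetch().
async function precacheWithLowPriority(url, cache) {
  // Mark the request as low priority so it yields to the page's own
  // network traffic.
  const response = await fetch(url, {priority: 'low'});
  if (response.ok) {
    await cache.put(url, response);
  }
}
```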
Hi there @jeffposnick / @gauntface, I just came across this issue in my travels, as I'm trying to do just what you mentioned.

It seems that workbox uses `Promise.all()` to fire off all of the precache requests at once. Specifically, I'm getting sporadic `net::ERR_INSUFFICIENT_RESOURCES` errors. After looking up this error, it seems that it represents some sort of resource exhaustion within Chrome. A few other people have come across this here, and here, and it seems that the answer is to simply make fewer concurrent requests.

I noticed that @nachoab came up against this here and solved it by batching requests in chunks of 20, effectively limiting concurrency to 20 in-flight requests at most... but that's against the old codebase. The easiest and most flexible solution, as I see it, would be to make the precaching concurrency limit configurable.

Thoughts? Should I make a new issue?
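For reference, a minimal sketch of that chunked approach, written against the plain `fetch()` and Cache Storage APIs rather than Workbox's internals; `BATCH_SIZE` and the function name are made up for illustration:

```js
// Fetch and cache URLs in fixed-size batches so that at most
// BATCH_SIZE requests are in flight at once.
const BATCH_SIZE = 20;

async function precacheInBatches(urls, cacheName) {
  const cache = await caches.open(cacheName);
  for (let i = 0; i < urls.length; i += BATCH_SIZE) {
    const batch = urls.slice(i, i + BATCH_SIZE);
    // Wait for the whole batch to settle before starting the next one.
    await Promise.all(batch.map(async (url) => {
      const response = await fetch(url);
      if (response.ok) {
        await cache.put(url, response);
      }
    }));
  }
}
```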
**Library Affected:** `workbox-precaching`
I was chatting with a developer whose users are typically on mobile devices with very limited bandwidth. He was concerned about bandwidth contention with precaching requests for, e.g., a large `vendor.js` bundle (that wasn't already in the browser's cache, because the initial page used SSR). Specifically, he mentioned an auto-complete text box that relied on communicating with a server to populate the suggestions, and those responses were fighting for bandwidth with the precaching requests that were also going on.

I checked, and Chrome currently uses a "High" priority for `fetch()` requests originating from the SW, and I don't think you can control that. (I'm going to follow up on that separately, as there are several dormant spec threads talking about exposing this setting.)

One thing that we could do at the tooling level to help is to allow developers to "pause" precaching requests when a client page detects that there's something more important going on that would need the bandwidth.
In terms of implementation, this could be pretty clean via a `requestWillFetch` plugin. The plugin would wait on a `Promise` that would either resolve immediately (the default state) or could be configured to stay in the pending state until some condition was met.

The client page could `postMessage()` to the service worker to let it know when it needs to use a `Promise` in a pending state, and then send another `postMessage()` to resolve the `Promise` so that the precaching request would actually be sent.
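A rough sketch of that idea, assuming Workbox's `requestWillFetch` plugin callback; the message strings are made up for illustration:

```js
// The gate starts open; a client page can close and reopen it via
// postMessage() to the service worker.
let openGate = null;
let gate = Promise.resolve();

self.addEventListener('message', (event) => {
  if (event.data === 'PAUSE_PRECACHING' && !openGate) {
    // Swap in a pending promise; precache requests will wait on it.
    gate = new Promise((resolve) => {
      openGate = resolve;
    });
  } else if (event.data === 'RESUME_PRECACHING' && openGate) {
    openGate(); // Resolve the pending promise, releasing queued requests.
    openGate = null;
  }
});

const pausePrecachingPlugin = {
  requestWillFetch: async ({request}) => {
    await gate; // Resolves immediately unless precaching is paused.
    return request;
  },
};
```

Depending on the Workbox version, the plugin could be registered via something like `precaching.addPlugins([pausePrecachingPlugin])`. Note that during the first install the SW is not yet the page's controller, so the page would send these messages via the registration's `installing` worker rather than `navigator.serviceWorker.controller`.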
We would also probably want to switch from using `Promise.all()` for firing off those precaching requests to a chain of promises that execute sequentially, and we could also order the entries in the precache manifest so that they're sorted by the size of the asset, with the smallest assets listed first.
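A sketch of that sequential approach, under one added assumption: each manifest entry carries a `size` field, which real precache manifests don't include today:

```js
// Precache entries one at a time, smallest first, instead of firing
// everything off concurrently with Promise.all().
async function precacheSequentially(entries, cache) {
  const bySize = [...entries].sort((a, b) => a.size - b.size);
  for (const entry of bySize) {
    const response = await fetch(entry.url);
    if (response.ok) {
      await cache.put(entry.url, response);
    }
  }
}
```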
The one technical hurdle that I'm not sure about is whether there's an upper limit on the amount of time the `install` handler can keep running. I'm assuming that at some point, even with the use of `event.waitUntil()`, the service worker thread would be killed.
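For context, a minimal sketch of how that precaching work sits inside the `install` handler and `event.waitUntil()`; the manifest and cache name below are placeholders:

```js
// The open question above is how long the browser lets the waitUntil()
// promise stay pending before terminating the service worker.
const PRECACHE_URLS = ['/index.html', '/vendor.js'];

self.addEventListener('install', (event) => {
  event.waitUntil((async () => {
    const cache = await caches.open('precache-v1');
    for (const url of PRECACHE_URLS) {
      const response = await fetch(url);
      if (response.ok) {
        await cache.put(url, response);
      }
    }
  })());
});
```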