setTimeout/setInterval are unreliable in ServiceWorkers #838
Comments
This is no different from the risks of using other async methods in a SW: the Cache API, IndexedDB, etc. You can use them safely in combination with waitUntil() or respondWith(). |
It means we need to keep the ServiceWorker alive. If there were a good use-case for it, I'd be open to that. The difference from the other async methods is that those methods are all serving an explicit purpose (e.g. do a thing once you've gotten a result out of the database). Is there a good use-case for keeping a SW alive for 100ms and then doing something? |
It might be perfectly reasonable to fire off a fetch, and in parallel wait for 100ms to return something else if the fetch doesn't return in time. |
For that I think you'd want a timeout on fetch. There's no timeout feature on fetch today, but that's a fetch bug. |
I agree that a timeout on fetch would be helpful too, but not enough.
|
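A fetch timeout can be composed from Promise.race even without a first-class option on fetch itself. The sketch below is illustrative only: withTimeout is a hypothetical helper name, not a platform API.

```javascript
// Rejects if the wrapped promise hasn't settled within ms milliseconds;
// otherwise settles the same way the wrapped promise does.
function withTimeout(promise, ms) {
  let timer;
  const timeout = new Promise((_, reject) => {
    timer = setTimeout(() => reject(new Error('timed out')), ms);
  });
  // Clear the timer either way so it can't fire later.
  return Promise.race([promise, timeout])
    .finally(() => clearTimeout(timer));
}

// e.g. inside a service worker fetch handler:
// event.respondWith(withTimeout(fetch(event.request), 3000));

// Demo: a 50ms promise raced against a 10ms timeout.
withTimeout(new Promise(r => setTimeout(r, 50, 'slow')), 10)
  .catch(err => console.log(err.message)); // logs "timed out"
```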
@ojanvafai What is happening at the moment? I assumed it would be extremely inefficient to terminate a SW as soon as the event queue is empty; are you saying that this is in fact what is happening? Why would a user-agent, or operating system, terminate a SW on a device that is resource rich and far from exhausted?

BTW, I fully accept your original points about setTimeout/setInterval as long as they are still there with worker threads.

NB: While the GeoLocation team is implementing serviceWorkerRegistration.travelManager one could always implement serviceWorkerRegistration.calendarManager. A calendarManager event would be sufficient to re-instantiate a terminated Service Worker. (calendarManager being managed in the browser, as travelManager will be, and not by Google Play or another host process.) |
My point is that it's racy. You don't know when the SW is going to get killed. |
100% agree. Zap 'em. |
Being able to delay execution is a primitive. We offer waitUntil() to avoid races here and with other async functions. I personally don't see a reason to remove setTimeout(). |
I don't agree that removing setTimeout makes sense either. I don't see how it is any different from any other async APIs. You have to be somewhat careful using it, just like other APIs. Also there are actual use cases enabled by it. |
@mkruisselbrink I don't doubt there are use cases enabled by it. What I'm curious about is if those cases would not be better served by a reliable calendar/time manager that, when faced with an absent/terminated SW, would fire one up again to deliver the event. Sorry, just read your particular use case above and it's compelling. Especially with a dicky network. Sounds best of cache and up-to-date worlds. |
Not sure about Ben's definition of "Safely" and I hope this is a half relevant and intelligent question. WRT: - https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#extendable-event-waituntil-method
Would it be possible for @mkruisselbrink to wrap his setTimeout up in a promise like:

function wait(time) {
  return new Promise(resolve => setTimeout(resolve, time));
}

wait(5000).then(function(resp){ alert('Finished waiting'); }, function(err){});

. . . and would such an action (_logically_ in-lining the wait) mark the SW as immune from "idle" termination? Then @mkruisselbrink could do a Promise.race([fromWait, fromFetch]); |
Note that if we decide to do this, we'd need to change HTML, since HTML is what causes these to be exposed in service workers. |
Richard, wrapping the setTimeout() in a promise is good, but you are missing the waitUntil(). Something like:
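The missing snippet might look like the following. This is a reconstruction, not the commenter's actual code; wait() is a helper name and the 'message' handler is an arbitrary example:

```javascript
// wait() wraps setTimeout in a promise (helper name is hypothetical).
function wait(ms) {
  return new Promise(resolve => setTimeout(resolve, ms));
}

// In a service worker, the promise is handed to waitUntil so the
// browser knows not to kill the worker before the timer fires:
//
// self.addEventListener('message', event => {
//   event.waitUntil(wait(5000).then(() => doSomething()));
// });
```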
Implementing any kind of implied keep-alive state is problematic and prone to abuse. That's why we offer the explicit waitUntil(). |
Agree with @wanderview and @mkruisselbrink here. Say I wanted to race the cache & the network, but prefer network content if it can arrive in 500ms:

function wait(ms) {
return new Promise(resolve => {
setTimeout(resolve, ms);
});
}
function promiseAny(promises) {
return new Promise((resolve, reject) => {
promises = promises.map(p => Promise.resolve(p));
promises.forEach(p => p.then(resolve));
promises.reduce((a, b) => a.catch(() => b)).catch(() => reject(Error("All failed")));
});
};
self.addEventListener('fetch', event => {
const fetchPromise = fetch(event.request);
const cachePromise = caches.match(event.request);
event.respondWith(
promiseAny([
fetchPromise.catch(() => cachePromise),
wait(500).then(() => cachePromise).then(r => r || fetchPromise)
])
);
});

Not sure how you'd achieve this without setTimeout. |
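For anyone puzzling over promiseAny above: it resolves with the first fulfilled value and rejects only if everything rejects (the later-standardized Promise.any does the same job). Below is a trimmed, runnable version; the only change from the code above is a no-op rejection handler in the forEach so a rejecting entry doesn't surface as an unhandled rejection.

```javascript
function promiseAny(promises) {
  return new Promise((resolve, reject) => {
    promises = promises.map(p => Promise.resolve(p));
    // First fulfilled promise wins; rejections are swallowed here
    // because the reduce chain below reports the all-failed case.
    promises.forEach(p => p.then(resolve, () => {}));
    promises.reduce((a, b) => a.catch(() => b))
      .catch(() => reject(Error("All failed")));
  });
}

// A rejecting "network" loses to a slower but fulfilling "cache":
promiseAny([
  Promise.reject(Error('network down')),
  new Promise(resolve => setTimeout(resolve, 10, 'from cache'))
]).then(value => console.log(value)); // logs "from cache"
```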
Trying to get this clear in my head. ExtendableEvent.waitUntil() offers viability guarantees that Promise.race() or .all() cannot and does not? Is that correct? Is it because Promise is standard ECMAScript and waitUntil() is SW-aware? I've looked at #738 "Describe waitUntil() behavior in terms of Promise.all()". I've also looked at https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil and the spec. Is there better documentation available somewhere? My guess now is that waitUntil will only work within the EventHandler context of the few (currently 8?) extendable events used by ServiceWorkers, and that is the only environment providing sanctuary from the idle-SW watchdog. But please feel free to wax lyrical on this topic. |
Right, we explicitly want work to be done only in the context of a functional event. We don't want people visiting a site once and later discovering that the site has been mining bitcoin in the background. So arbitrary long-running operations are explicitly not supported outside of functional events. |
Thanks again for explaining, Ben; the logic seems sound. (Otherwise there'd be all these indefinite promises lying around just to keep the SW alive :-) I assume you don't allow something like: var dodgyEvent = new ExtendableEvent('message_OrSomethingElse'); Having said that, I see a strong argument for "install" and "activate" events being protected, but "message", "fetch", "error", "travel", really? Why should these events empower the developer with a termination veto? Anyway, I've said it before and I'll say it again: this Service Worker stuff is absolutely amazing functionality! Well done to all involved. |
The spec allows the browser to kill threads stuck in a loop. |
Just my two cents: we are currently using |
FYI, Firefox works as expected when it comes to adding images to the current page, either via src or the Fetch API. Chrome 48, on the other hand, does NOT fire a 'fetch' event. Curious. |
@RichardMaher this doesn't seem related to this issue. If you have found a Chrome bug, can you post it to http://crbug.com? Feel free to ping me on https://twitter.com/jaffathecake/ and I'll try and recreate, but I've created demos that depend on SW handling image loads, and they work. |
Could be differences in image cache? |
Thanks for the offer of the side-bar Jake, but I use neither Twatter nor FacePlant. I did email you not so long ago but just assumed that you're pretty busy or I ended up in junk-email.

Either way, when it comes to "this issue" it was all over as soon as @mkruisselbrink posted their use case. Don't get me wrong, I've learned/am learning a lot from your code example and think this thread should stay open as a platform for displaying and viewing it, but as an issue this could've been closed 8 days ago.

Anyway, I led everyone astray by saying it was a Chrome issue. My bad. Both Chrome and FF respond (fire fetch events) to every "fetch" in the main thread. Both Chrome and FF only fire a fetch event once for an IMG element that is added/removed from the DOM many times. Probably documented like that somewhere?

But while I'm here and back on topic: when is the response cached? When the fetch event is triggered? preventDefault() to stop it? Do I need cache: no-store in the init parameter to my fetch() call? Does it happen at event.respondWith() time? If I understood @mkruisselbrink's strategy correctly, the goal was to deliver data as fresh as possible and only go to cache if the network is slow, BUT even then leave the network fetch running to update the cache whenever it finishes. Without something like an init of "no-cache", aren't we at risk of the fetch just reading cache anyway? I'm not saying your example was meant to be complete and I can't break it (or really understand it :-) see below), but in my test example I was manually updating cache with a put(). Silly me?

Also, with the Service Worker (acting as a Shared Worker?) handling fetches from all tabs from the same source, I assume a second page registering the same SW is just like the documented skipWaiting() replacement process? But what happened to the fetchEvent.clientId field?

Anyway, here's your code again with some comments to try to help me.
I'm sure every other reader immediately understood your code, so I apologise to them now:

function wait(ms) {
return new Promise(resolve => {
setTimeout(resolve, ms);
});
}
function promiseAny(promises) {
return new Promise((resolve, reject) => {
// Kick off all Promises (thenable) in array
// p is each element of promises[]
promises = promises.map(p => Promise.resolve(p));
// For each promise: when it settles "resolved", call the outer promise's resolve
promises.forEach(p => p.then(resolve)); // Didn't we just force all "resolved"?
                                        // No, we effectively yielded to
                                        // allow Array.map() to iterate?
// This works out who won and then resolves this promise to that.
// a is "previous", b is "current" (p0, p1)
// Call the outer promise's reject if none resolved
// Bit lost here :-( Hardcoding/relying on promises.length == 2?
// Does promises.reduce() return a promise that is .catch(able) here, or null?
promises.reduce((a, b) => a.catch(() => b)).catch(() => reject(Error("All failed")));
});
};
self.addEventListener('fetch', function (event) {
const fetchPromise = wait(2000).then(() => fetch(event.request)); // Slow Network test
// const fetchPromise = fetch(event.request); //
const cachePromise = caches.match(event.request);
console.log(Date(), 'Handling fetch event for', event.request.url);
// Are we not vulnerable here without
// an event.waitUntil() in the mix?
event.respondWith(
promiseAny([
// Plan A = If fetch fails failback to cache request
fetchPromise.catch(() => cachePromise), // I guess catchable
// implies thenable?
// Plan B = Wait 300 msecs then go to cache
// r = value "match"ed from cache if present, else Plan C: fetch from network
wait(300).then(() => cachePromise).then(r => r || fetchPromise) // Returns promise that
// eventually adopts
// thenable state.
])
);
}); |
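On the promises.reduce() questions in the comments above: reduce returns whatever the callback last returned, which here is always a promise, so the trailing .catch is legal, and the chain works for any array length, not just two. A minimal runnable demonstration of that a.catch(() => b) shape (the array contents are arbitrary examples):

```javascript
// Each reduce step attaches .catch to the accumulated promise and
// falls through to the next promise only on rejection, so the final
// promise rejects only when every entry rejected.
const allFail = [Promise.reject(1), Promise.reject(2), Promise.reject(3)];
allFail.reduce((a, b) => a.catch(() => b))
  .catch(() => console.log('all failed')); // logs "all failed"

const lastWins = [Promise.reject(1), Promise.reject(2), Promise.resolve('ok')];
lastWins.reduce((a, b) => a.catch(() => b))
  .then(v => console.log(v)); // logs "ok"
```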
I'm closing this. |
What if I want to run a function (which checks IndexedDB and, if it finds the record in there, makes a fetch request to an API endpoint) every 30 minutes? The logical way to write this is to use setInterval. Any downside or risk you can think of? |
The service worker won't stay alive for 30 minutes, so you'll never get that callback. If it did stay alive that long, it'd be bad for user privacy and device resources such as memory and battery. |
It's not mandatory for a service worker to stay alive for 30 minutes. Let's say the service worker boots up for the first time... Now, we have 3 scenarios.
So what's the risk or downside of this? I don't think there's one. If I have the code in the global scope without setInterval... Makes sense? |
Bad for privacy and battery.
But your setInterval callbacks won't fire once the worker is killed. Consider:

setInterval(() => {
console.log('a');
}, 5000);
setInterval(() => {
console.log('b');
}, 10000);
addEventListener('fetch', (event) => {
setInterval(() => {
console.log('c');
}, 10000);
});

Here are three interval callbacks. So, when we spin up the service worker again, which do we fire? All we have is two callbacks; the third hasn't even been created yet, since no fetch event has fired. How can that work? Also, spinning up the service worker every 30 minutes is bad for privacy and battery.
Privacy and battery. https://web.dev/periodic-background-sync/ is a more controlled way of doing this. |
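The Periodic Background Sync API linked above looks roughly like this. A sketch only: 'content-sync' is a hypothetical tag, checkIndexedDbAndFetch a hypothetical helper, and support is limited to Chromium-based browsers behind a permission.

```javascript
// Page-side: ask the browser to wake the service worker periodically.
// minInterval is a floor, not a schedule; the browser decides when
// (or whether) the event actually fires.
async function registerPeriodicCheck(registration) {
  if ('periodicSync' in registration) {
    await registration.periodicSync.register('content-sync', {
      minInterval: 30 * 60 * 1000 // no more often than every 30 minutes
    });
  }
}

// Worker-side: the browser spins the worker up, fires periodicsync,
// and waitUntil keeps it alive just long enough for the work:
//
// self.addEventListener('periodicsync', event => {
//   if (event.tag === 'content-sync') {
//     event.waitUntil(checkIndexedDbAndFetch());
//   }
// });
```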
ServiceWorkers are specifically designed to be short-lived, so setTimeout/setInterval don't work reliably. Instead of having a feature in the platform that will lead to authors writing buggy code, we should just not support them inside ServiceWorkers. These two APIs should be moved to DedicatedWorkerGlobalScope and SharedWorkerGlobalScope instead, where they run reliably.