
setTimeout/setInterval are unreliable in ServiceWorkers #838

Closed
ojanvafai opened this issue Feb 22, 2016 · 29 comments

Comments

@ojanvafai

ServiceWorkers are specifically designed to be short-lived. So setTimeout/setInterval don't work reliably. Instead of having a feature in the platform that will lead to authors writing buggy code, we should just not support them inside ServiceWorkers. These two APIs should be moved to DedicatedWorkerGlobalScope and SharedWorkerGlobalScope instead where they run reliably.

@wanderview
Member

This is no different than the risks of using other async methods in SW; cache api, idb, etc. You can use them safely in combination with waitUntil() or respondWith().

@ojanvafai
Author

It means we need to keep the ServiceWorker alive. If there were a good use-case for it, I'd be open to that. The difference from the other async methods is that those methods are all serving an explicit purpose (e.g. do a thing once you've gotten a result out of the database). Is there a good use-case for keeping a SW alive for 100ms and then doing something?

@mkruisselbrink
Collaborator

It might be perfectly reasonable to fire off a fetch, and in parallel wait for 100ms to return something else if the fetch doesn't return in time.

@esprehn

esprehn commented Feb 23, 2016

For that I think you want to use the timeout on fetch? I don't see a timeout feature on fetch, but that's a fetch bug.

@mkruisselbrink
Collaborator

I agree that a timeout on fetch would be helpful too, but not enough.
To elaborate on the use case I was trying to describe:

  1. Fetch event comes in, fire off network fetch and start timeout.
  2. After 100ms timeout triggers, resolve promise passed to respondWith with something (but leave a waitUntil promise unresolved)
  3. When fetch finishes, store its result in cache, postMessage to client to inform it of newly available data, resolve waitUntil promise.
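The flow in steps 1–3 can be sketched with plain promises. This is a rough sketch, not anything from the spec: `respondWithTimeout` is an invented helper name, and in a real service worker the race result would go to `respondWith()` while the still-pending network promise goes to `waitUntil()` for step 3.

```javascript
// Sketch of the timeout-fallback flow described in steps 1-3.
// Plain promises only, so it runs outside a service worker too.

function wait(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Resolve with the network result if it arrives within `timeoutMs`,
// otherwise resolve with the fallback. The network promise is NOT
// cancelled: step 3 can still await it to update the cache later.
function respondWithTimeout(networkPromise, timeoutMs, fallback) {
  return Promise.race([
    networkPromise,
    wait(timeoutMs).then(() => fallback),
  ]);
}
```

In a fetch handler this would look roughly like `event.respondWith(respondWithTimeout(netPromise, 100, cachedResponse))` followed by `event.waitUntil(netPromise.then(updateCache))`, where `cachedResponse` and `updateCache` are hypothetical.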

@RichardMaher

@ojanvafai What is happening at the moment? I assumed it would be extremely inefficient to terminate a SW as soon as the event queue is empty; are you saying that this is in fact what is happening?

Why would a user-agent, or operating system, terminate a SW on a device that is resource rich and far from exhausted?

BTW, I fully accept your original points about setTimeout/setInterval as long as they are still there with worker threads.

NB: While the GeoLocation team is implementing serviceWorkerRegistration.travelManager one could always implement serviceWorkerRegistration.calendarManager. A calendarManager event would be sufficient to re-instantiate a terminated Service Worker. (calendarManager being managed in the browser as will be travelManager and not Google Play or other host process.)

@ojanvafai
Author

My point is that it's racy. You don't know when the SW is going to get killed.

@RichardMaher

100% agree. Zap 'em.

@wanderview
Member

Being able to delay execution is a primitive. We offer waitUntil() to avoid races here and with other async functions. I personally don't see a reason to remove setTimeout().

@mkruisselbrink
Collaborator

I don't agree that removing setTimeout makes sense either. I don't see how it is any different from any other async APIs. You have to be somewhat careful using it, just like other APIs. Also there are actual use cases enabled by it.

@RichardMaher

@mkruisselbrink I don't doubt there are use cases enabled by it. What I'm curious about is if those cases would not be better served by a reliable calendar/time manager that, when faced with an absent/terminated SW, would fire one up again to deliver the event.

Sorry, just read your particular use case above and it's compelling. Especially with a dicky network. Sounds like the best of both the cached and up-to-date worlds.

@RichardMaher

Not sure about Ben's definition of "Safely" and I hope this is a half relevant and intelligent question.

WRT: - https://slightlyoff.github.io/ServiceWorker/spec/service_worker/#extendable-event-waituntil-method

Do not terminate the service worker whose responsible event loop is running task until the result of waiting for all of extendLifetimePromises settles.

Is it possible that @mkruisselbrink could wrap his setTimeout up in a promise like: -

function wait(time) {
  return new Promise(function(resolve, reject) {
    setTimeout(function() { resolve(); }, time);
  });
}

wait(5000).then(function(resp) { alert('Finished waiting'); },
                function(err) {});

. . . and such an action (_logically_ in-lining the wait) would mark the SW as immune from "idle" termination?

Then @mkruisselbrink could do a Promise.race([fromWait, fromFetch]);

function myFunction() {
  var p1 = wait(5000);
  var pa = [p1];
  Promise.race(pa).then(function(resp) {
    alert('Finished waiting');
  },
  function(err) {});
}

@annevk
Member

annevk commented Feb 24, 2016

Note that if we decide to do this, we'd need to change HTML, since HTML is what causes these to be exposed in service workers.

@wanderview
Member

Richard, wrapping the setTimeout() in a promise is good, but you are missing the waitUntil(). Something like:

addEventListener('install', function(evt) {
  evt.waitUntil(wait(1000).then(function() {
    // do delayed stuff
  }));
});

Implementing any kind of implied keep-alive state is problematic and prone to abuse. That's why we offer the explicit waitUntil().

@jakearchibald
Contributor

Agree with @wanderview and @mkruisselbrink here, setTimeout is no more dangerous here than other async APIs, and no less useful. waitUntil is the mechanism we have to extend an event, if you don't use that, you get no guarantees.

Say I wanted to race the cache & the network, but prefer network content if it can arrive in 500ms:

function wait(ms) {
  return new Promise(resolve => {
    setTimeout(resolve, ms);
  });
}

function promiseAny(promises) {
  return new Promise((resolve, reject) => {
    promises = promises.map(p => Promise.resolve(p));
    promises.forEach(p => p.then(resolve));
    promises.reduce((a, b) => a.catch(() => b)).catch(() => reject(Error("All failed")));
  });
};

self.addEventListener('fetch', event => {
  const fetchPromise = fetch(event.request);
  const cachePromise = caches.match(event.request);

  event.respondWith(
    promiseAny([
      fetchPromise.catch(() => cachePromise),
      wait(500).then(() => cachePromise).then(r => r || fetchPromise)
    ])
  );
});

Not sure how you'd achieve this without setTimeout.
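As a side note, the promiseAny helper behaves like the Promise.any that later landed in ECMAScript: it settles with the first fulfilment and only rejects once every input has rejected. A quick check of the fulfilment path outside a service worker, reusing the helper as written, with two timed promises standing in for a slow network and a fast cache:

```javascript
// promiseAny from the comment above: fulfils with the first promise to
// fulfil; rejects only if every promise rejects.
function promiseAny(promises) {
  return new Promise((resolve, reject) => {
    promises = promises.map((p) => Promise.resolve(p));
    promises.forEach((p) => p.then(resolve));
    promises
      .reduce((a, b) => a.catch(() => b))
      .catch(() => reject(Error('All failed')));
  });
}

function wait(ms) {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

// Two timed promises standing in for a slow network and a fast cache.
promiseAny([
  wait(50).then(() => 'network'),
  wait(5).then(() => 'cache'),
]).then((winner) => console.log(winner)); // logs "cache"
```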

@RichardMaher

Trying to get this clear in my head.

ExtendableEvent.waitUntil() offers viability guarantees that Promise.race() or .all() cannot and does not? Is that correct? Is it because Promise is standard ECMAScript and waitUntil() is SW-aware?

I've looked at #738 "Describe waitUntil() behavior in terms of Promise.all()". I've also looked at https://developer.mozilla.org/en-US/docs/Web/API/ExtendableEvent/waitUntil and the spec.

Is there better documentation available somewhere?

My guess now is that waitUntil will only work within the EventHandler context of the few (currently 8?) extendable events used by ServiceWorkers, and that is the only environment providing sanctuary from the idle SW watchdog. But please feel free to wax lyrical on this topic.

@wanderview
Member

Right, we explicitly only want work to be done in the context of a functional event. We don't want people visiting a site once and then later discover that site has been mining bitcoin in the background. So arbitrary long running operations are explicitly not supported outside of functional events.

@RichardMaher

Thanks again for explaining Ben and the logic seems sound. (Otherwise there'd be all these indefinite promises lying around just to keep the SW alive :-)

I assume you don't allow something like: -

var dodgyEvent = new ExtendableEvent('message_OrSomethingElse');
self.addEventListener('message', function (e) {
  var neverSettle = new Promise(function(resolve, reject) { });
  e.waitUntil(neverSettle);
});
self.dispatchEvent(dodgyEvent);

Having said that, I see a strong argument for "Install" and "Activate" events being protected, but "Message", "Fetch", "Error", "Travel", really? Why should these events empower the developer with a termination veto?

Anyway, I've said it before and I'll say it again, this Service Worker stuff is absolutely amazing functionality! Well done to all involved.

@wanderview
Member

The spec allows the browser to kill threads stuck in a waitUntil(). I believe both Chrome and Firefox implement this using a timeout on the order of minutes.

@delapuente

Just my two cents: we are currently using setTimeout() in a similar way to what @mkruisselbrink points out. In this case, to respond from the cache only if the network times out.

@RichardMaher

FYI, Firefox works as expected when it comes to adding images to the current page either via SRC or FETCH API. Chrome 48 on the other hand does NOT fire a 'fetch' event. Curious.

var noMic;
noMic = document.createElement('img');
noMic.style.visibility = 'visible';
noMic.style.height = '20px';
noMic.style.width = '20px';
document.body.appendChild(noMic);
noMic.src = '//localhost/pushit/images/icon.png';

/*
fetch('//localhost/pushit/images/icon.png').then(function(response) {
  return response.blob();
}).then(function(response) {
  var objectURL = URL.createObjectURL(response);
  noMic.src = objectURL;
});
*/

@jakearchibald
Contributor

@RichardMaher this doesn't seem related to this issue. If you have found a Chrome bug, can you post it to http://crbug.com? Feel free to ping me on https://twitter.com/jaffathecake/ and I'll try and recreate, but I've created demos that depend on SW handling image loads, and they work.

@wanderview
Member

Could be differences in image cache?

@RichardMaher

Thanks for the offer of the side-bar Jake but I use neither Twatter or FacePlant. I did email you not so long ago but just assumed that you're pretty busy or I ended up in junk-email. Either way, when it comes to "this issue" it was all over as soon as @mkruisselbrink posted their use case.

Don't get me wrong, I've learned/am learning a lot from your code example and think this thread should stay open as a platform for displaying and viewing it, but as an issue this could've been closed 8 days ago.

Anyway, I led everyone astray by saying it was a Chrome issue. My bad. Both Chrome and FF respond (fire fetch events) to every "fetch" in the main thread. Both Chrome and FF only fire a fetch event once for an IMG element that is added/removed from the DOM many times. Probably documented like that somewhere?

But while I'm here and back on topic: -

When is the response cached? When the fetch event is triggered? preventDefault() to stop it? Do I need cache: no-store on my INIT parameter to my FETCH call? Does it happen at event.respondWith() time?

If I understood @mkruisselbrink 's strategy correctly, the goal was to deliver data as fresh as possible and only go to cache if the network is slow. BUT even then leave the network fetch running to update the cache whenever it finishes.

Without something like INIT "no-cache" aren't we at risk of the FETCH just reading cache anyway? I'm not saying your example was meant to be complete and I can't break it (or really understand it :-) see below) but in my test example I was manually updating cache with a PUT(). Silly me?

Also, with the (Service Worker acting as a Shared Worker?) and handling fetches from all tabs from the same source, I assume a second page registering the same SW is just like the documented skipWaiting() replacement process? But what happened to the fetchEvent.clientID field?

Anyway, here's your code again with some comments to try to help me. I'm sure every other reader immediately understood your code so I apologise to them now: -

function wait(ms) {
  return new Promise(resolve => {
    setTimeout(resolve, ms);
  });
}

function promiseAny(promises) {
  return new Promise((resolve, reject) => {

    // Kick off all Promises (thenable) in array
    // p is each element of promises[]
    promises = promises.map(p => Promise.resolve(p));

    // For each promise, queue a callback: if it settles "resolved", call promiseAny's resolve
    promises.forEach(p => p.then(resolve)); // Didn't we just force all "resolved"?
                                            // No, we effectively yielded to
                                            // allow Array.map() to iterate?
                                            // This works out who won and
                                            // then resolves this promise to that.

    // a is "previous" b is "current" (p0,p1)
    // Call Fetch's reject if none resolved 
    // Bit lost here :-( Hardcoding/relying on promises.length == 2 ?
    // Does promises.reduce() method return a promise that is .catch(able) here or null?
    promises.reduce((a, b) => a.catch(() => b)).catch(() => reject(Error("All failed")));

  });
};
self.addEventListener('fetch', function (event) {
  const fetchPromise = wait(2000).then(() => fetch(event.request));        // Slow Network test
//  const fetchPromise = fetch(event.request);                               // 
  const cachePromise = caches.match(event.request); 

  console.log(Date(), 'Handling fetch event for', event.request.url);

// Are we not vulnerable here without
// an event.waitUntil() in the mix?

  event.respondWith(
    promiseAny([

      // Plan A = If fetch fails failback to cache request 
      fetchPromise.catch(() => cachePromise),                          // I guess catchable 
                                                                       // implies thenable?

      // Plan B = Wait 300msecs then go to cache
      // r = value "match"ed from cache if present else Plan C fetch from network
      wait(300).then(() => cachePromise).then(r => r || fetchPromise)  // Returns promise that        
                                                                       // eventually adopts
                                                                       // thenable state.
    ])
  );
});

@jakearchibald
Contributor

I'm closing this. setTimeout and setInterval are no less reliable in SW than any other async methods, and there are legitimate use cases for them.

@novaknole

@jakearchibald

What if I want to run a function (which checks IndexedDB and, if it finds a record in there, makes a fetch request to an API endpoint) every 30 minutes? The logical way to write this is to use setInterval in the global scope of the service worker. Would this be unreliable? Even if the service worker gets killed by the browser and boots up again, setInterval would start again since it's in the global scope.

Any downside or risk you can think of ?

@jakearchibald
Contributor

What if I want to run a function (which checks IndexedDB and, if it finds a record in there, makes a fetch request to an API endpoint) every 30 minutes

The service worker won't stay alive for 30 minutes, so you'll never get that callback.

If it did stay alive that long, it'd be bad for user privacy and device resources such as memory and battery.

@novaknole

novaknole commented May 7, 2020

@jakearchibald

It's not mandatory for a service worker to stay alive for 30 minutes.

Let's say the service worker boots up for the first time... the setInterval code in the global scope gets called.

Now, we have 3 scenarios.

  1. service worker stays alive for 30 minutes. which means my callback gets called.
  2. service worker gets killed and it boots up again by itself which executes my setInterval code again from scratch.
  3. maybe push event gets called and this is how service worker boots up again. Even in this case, booting up means running the code in the global scope. So all good.

So what's the risk or downside of this? i don't think there's one.

If I have the code in the global scope without setInterval , who knows, maybe it doesn't get killed for the next 30 minutes which means I lose the callback ...

Makes sense ?

@jakearchibald
Contributor

  1. service worker stays alive for 30 minutes. which means my callback gets called.

Bad for privacy and battery.

  2. service worker gets killed and it boots up again by itself which executes my setInterval code again from scratch.

But your setInterval code went away once the service worker was killed.

setInterval(() => {
  console.log('a');
}, 5000);

setInterval(() => {
  console.log('b');
}, 10000);

addEventListener('fetch', (event) => {
  setInterval(() => {
    console.log('c');
  }, 10000);
});

Here are three interval callbacks. So, when we spin up the service worker, which do we fire? All we have is two callbacks. The third callback hasn't even been created yet. How can that work?

Also, spinning up the service worker every 30 minutes is bad for privacy and battery.

  3. maybe push event gets called and this is how service worker boots up again. Even in this case, booting up means running the code in the global scope. So all good.

push requires you show a notification so the user knows you're doing stuff in the background. If you're doing this every 30 minutes, it's likely the user will block you.

So what's the risk or downside of this? i don't think there's one.

Privacy and battery.

https://web.dev/periodic-background-sync/ is a more controlled way of doing this.
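For completeness, a minimal sketch of what the Periodic Background Sync route could look like. Assumptions are flagged: 'content-refresh' is an arbitrary tag name, checkDbAndFetch is a hypothetical helper, and the permissions object is injected (pass navigator.permissions in a real page) so the function can be exercised outside a browser. The API is Chromium-only at the time of writing.

```javascript
// Sketch of the Periodic Background Sync alternative linked above.
// Not the only way to wire this up; the tag name and the helper in the
// commented-out worker code are invented for illustration.

// Page side: ask for a periodic sync no more often than every 30 minutes.
// `permissions` is injected so the function can be tested with a mock.
async function registerContentRefresh(registration, permissions) {
  const status = await permissions.query({
    name: 'periodic-background-sync',
  });
  if (status.state !== 'granted') return false;
  await registration.periodicSync.register('content-refresh', {
    minInterval: 30 * 60 * 1000, // milliseconds
  });
  return true;
}

// Service worker side: the browser wakes the worker and fires the event,
// so no timer has to survive the worker being killed.
//
// self.addEventListener('periodicsync', (event) => {
//   if (event.tag === 'content-refresh') {
//     event.waitUntil(checkDbAndFetch()); // hypothetical helper
//   }
// });
```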
