consider allowing multiple worker thread instances for a single registration #756
Are there any apparent "low-hanging fruit" gains expected from allowing this, besides the obvious gain in reduced complexity from not having to ensure there is only one thread of the SW script running?
My initial reaction was that allowing this would never work. There are too many places in the spec that assume there is only one active ServiceWorker, and all kinds of things would break in confusing ways if the ServiceWorker you can postMessage to via controller.postMessage is not necessarily the same worker that is handling fetch events (or there might even be multiple workers handling fetch events for a single client). But after thinking about it a bit more, I don't think any of the problems that would occur by allowing multiple copies of the same service worker to run at the same time are truly insurmountable.

But I'm also not sure there really is much value in allowing this. I imagine it could be useful in situations where a service worker wants to do some relatively heavy CPU-bound task while handling a fetch (or other) event, and we don't want to block other fetch requests on that task. For such situations, I think what we should be supporting is letting the SW spin up a dedicated/shared worker whose lifetime is linked to the lifetime of the SW (or maybe to the event it is currently handling, but just the SW would work fine; after all, it won't get killed until the fetch event the processing is for has finished), just like a regular website has to spin up a separate worker to do heavy CPU-bound tasks in response to an event.

And if we want to allow long-running CPU-bound tasks that last past the point where a response has been sent to a fetch event, I'm not sure those should be allowed without some kind of UI presence (for example, a new kind of Notification with a progress indicator serving as the UI that keeps a shared worker alive).
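As a rough sketch of what this proposal might look like, assuming a UA allowed new Worker() from service-worker scope (today's engines generally don't; the file name and message shape here are made up):

```js
// Sketch only: assumes a UA that permits `new Worker()` inside a service
// worker, which this comment proposes rather than describes.
// 'heavy-task-worker.js' and the message shape are illustrative.
self.addEventListener('fetch', event => {
  event.respondWith((async () => {
    const worker = new Worker('heavy-task-worker.js');
    const body = await new Promise((resolve, reject) => {
      worker.onmessage = e => resolve(e.data);
      worker.onerror = reject;
      worker.postMessage({ url: event.request.url });
    });
    worker.terminate(); // lifetime tied to handling this one event
    return new Response(body);
  })());
});
```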
To be clear, I'm only suggesting we state that the browser may spin up multiple SW instances, not that we make it a major feature. We would still spec explicit solutions for heavy CPU tasks, etc. I just question whether maintaining one-and-only-one instance is worth the code complexity and perf hit in a multi-process browser architecture. Can Chrome produce multiple service worker instances for different content processes today? Or do you already have a solution to guarantee a single SW instance across the entire browser?
Chrome already guarantees a single SW instance across the entire browser. Spinning up multiple SW instances does seem like something worth exploring, though. I do think we'll have to clarify how the various things that return a ServiceWorker instance deal with this, and what happens if you postMessage to them. In particular, this seems like it could be helpful in cases of misbehaving service workers (we could then fire up a new SW and start sending fetch events there, without having to wait for some timeout while blocking all other fetches), or in general wherever there is a performance concern (foreign fetch comes to mind as well; having potentially every website you're browsing try to fetch something from the same SW could result in noticeable slowness).
We've discussed this before a few times. I've always been pretty keen on it, but if memory serves @slightlyoff was less keen. I'm thinking of http2-optimised pages made up of 100s of requests: is there a performance benefit to multiple workers?
IMHO, this is an implementation detail low-level enough that it shouldn't live at the spec level. In the spec we should specify some guarantees, for example saying that logically there is only one service worker per scope, although it could be implemented as several.
It has to be in the spec because it's observable with the right combination of waitUntil(), postMessage(), and worker global variables.
By sending global state via
You set some global state in the SW, then use an evt.waitUntil() to hold the SW alive, and then send a postMessage() or use another event-triggering mechanism that checks the global state.
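A minimal sketch of the observable pattern being described (message values are illustrative):

```js
// sw.js — sketch: detects whether two events ran in the same global scope.
let marker = null;

self.addEventListener('message', event => {
  if (event.data === 'set') {
    marker = 'was-here';
    // Hold this instance alive for a while via waitUntil().
    event.waitUntil(new Promise(resolve => setTimeout(resolve, 10000)));
  } else if (event.data === 'check') {
    // With one-and-only-one instance this replies 'was-here';
    // with multiple instances it may reply null.
    event.source.postMessage(marker);
  }
});
```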
Some thoughts on our options here:
I'd prefer the first, but the others are ways out if it breaks the web in ways we can't work around.
I agree that 1 would be the preferred outcome. Another variant of 2 could be to make concurrency a per-client thing. So all message/fetch events related to the same client would always end up in the same worker, but other events can end up anywhere. That might have the lowest risk of breaking things, would probably address the browser-vendor implementation concerns, but obviously wouldn't allow some of the optimisations that full concurrency could, as it might very well be beneficial to process multiple fetch events from the same client in multiple threads.
From an implementor's point of view (1) is very attractive. It gives us the most flexibility to optimize with the smallest amount of complexity. Options (2) and (3) restrict us more and add more complexity to support unique worker instances across processes.
Agreed, that's what led me to 2. That way you would have concurrency with fetch, but not with message/push. We'll see what users think. We don't need to consider 2/3 unless 1 is unattainable.
If we did multiple SW instances I think we could maybe still support one-to-one messaging with the instances. For example, when a SW receives a fetch event it could construct a new MessageChannel. I think that would work, but maybe I don't fully understand.
I also have some local patches that spin up N thread instances for each service worker. Let me know if there is a particular site that I should test.
https://www.flipkart.com/ might be an interesting one to try. Yeah your
Having the SW send the MessageChannel to the client in its onfetch event works, provided that:
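A sketch of the MessageChannel idea under discussion, with simplified clientId handling and illustrative names:

```js
// sw.js — sketch: hand the client a port that is wired to *this* instance.
self.addEventListener('fetch', event => {
  const { port1, port2 } = new MessageChannel();
  port1.onmessage = e => {
    // Messages arriving here reach the instance that handled this fetch,
    // whatever other instances exist.
  };
  event.waitUntil(
    self.clients.get(event.clientId).then(client => {
      // event.clientId can be empty (e.g. for navigations); simplified here.
      if (client) client.postMessage({ type: 'instance-port' }, [port2]);
    })
  );
  event.respondWith(fetch(event.request));
});
```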
Yeah, that's the plan.
Some more thoughts: Postmessaging to a SW will land in one instance, whereas
In theory,
Yeah, whereas
Just wanted to chime in that I support option 1 as well.
Just leaving some feedback related to: https://jakearchibald.com/2016/service-worker-meeting-notes/
So far we have already been using indexedDB (with localForage for simple key/value); the fact that the SW can be killed was reason enough not to keep state in memory, and to persist it instead.
Re: the PR above. We were using persistent state stored in a variable, but have been able to switch to indexedDB. In time it'd be nice to have a higher level API for syncing shared state between SW instances, as polling an indexedDB store is a bit hacky, and accessing indexedDB on every fetch event wouldn't be particularly performant. Similarly, I'd love to see a cookies API land in the near future - we're considering implementing something similar to what we've done for feature flags for cookies in the meantime.
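A sketch of the IDB-on-every-event pattern being described (database, store, and flag names are made up) — this is exactly the part that "wouldn't be particularly performant":

```js
// Hypothetical helper: read a feature flag from IndexedDB on every event
// instead of caching it in a global that other instances won't see.
function getFlag(name) {
  return new Promise((resolve, reject) => {
    const open = indexedDB.open('flags-db', 1);
    open.onupgradeneeded = () => open.result.createObjectStore('flags');
    open.onerror = () => reject(open.error);
    open.onsuccess = () => {
      const tx = open.result.transaction('flags', 'readonly');
      const get = tx.objectStore('flags').get(name);
      get.onsuccess = () => resolve(get.result);
      get.onerror = () => reject(get.error);
    };
  });
}

self.addEventListener('fetch', event => {
  event.respondWith(
    getFlag('serve-from-cache').then(enabled =>
      enabled
        ? caches.match(event.request).then(hit => hit || fetch(event.request))
        : fetch(event.request)
    )
  );
});
```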
I use Clients.claim a lot just to tame the complexity of the SW life cycle (which for me has always been the most complicated aspect of dealing with Service Workers). How would that work with a litany of SWs claiming the client in rapid succession?
Sorry if I am out of context and late, but just leaving some feedback related to: https://jakearchibald.com/2016/service-worker-meeting-notes/
We have an app where we show some user generated HTML content (including JS/CSS, etc) in an iframe, and the resource references inside that HTML are not present on any server but are available via a web worker running on a different origin.
For what it's worth, one of the more prominent guides to the Push API depends on global state being maintained in a service worker (i.e. a global variable). But that's probably bad already even without concurrency, right, since user agents can already shut a SW down at any time it doesn't have events keeping it alive?
F2F: Interested to hear if Apple's take on this has changed at all.
@wanderview I ended up making my app rely on global scope, so I'm hoping if multi-threading happens it'll be enabled via an option somehow.
I could, but my app also auto-updates, and I want to avoid the situation where it does a big update twice, simply because two windows are open. Having SW manage all updates seems to work better in this case.
I tried doing this but I couldn't get it to work, and it doesn't seem like IDB locks stay locked for very long. "Transactions are expected to be short-lived, so the browser can terminate a transaction that takes too long, in order to free up storage resources that the long-running transaction has locked." (Mozilla) I ended up using promises to queue tasks, which requires global scope. Each new call to
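A sketch of the promise-queue approach described here; it works only because the global queue variable persists between events in a single instance (doUpdateStep is an illustrative app-specific function):

```js
// Sketch: serialise tasks through a global promise chain. This only works
// if every event lands in the same instance, sharing the same global scope.
let queue = Promise.resolve();

function enqueue(task) {
  queue = queue.catch(() => {}).then(task); // one failure doesn't stall the queue
  return queue;
}

self.addEventListener('message', event => {
  // doUpdateStep() stands in for an app-specific unit of work.
  event.waitUntil(enqueue(() => doUpdateStep(event.data)));
});
```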
Wouldn't a SharedWorker be ideal for "only one instance can ever be doing this" use cases?
Copying this from another thread. F2F notes about multi-service worker requirements:
- Background SW (push, notification, sync, bg fetch) vs foreground SW (fetch, postMessage). MS & Apple want this so service workers can exist when the browser is closed. Should we spec this?
- Fetches for notification icons go through the background SW in Edge, since there isn't a foreground SW.
- What happens to the clients API? How do we represent multiple instances?
- Facebook global state case:
How does the client message the correct service worker? Edge can work around this by still ensuring there's only one SW at a time, either using the bg SW or the fg SW. We need to think more about use cases that require speaking to a particular SW, or that otherwise depend on global state.
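One answer suggested later in this thread is a broadcast channel, which sidesteps "which instance?" by reaching all of them; a sketch with illustrative names:

```js
// page.js — broadcast instead of targeting one instance.
const pageChannel = new BroadcastChannel('app-state');
pageChannel.onmessage = e => {
  // Collect replies from whichever instances answer.
};
pageChannel.postMessage({ type: 'query-upload-state' });

// sw.js — every running instance of this worker receives the broadcast.
const swChannel = new BroadcastChannel('app-state');
swChannel.onmessage = e => {
  if (e.data.type === 'query-upload-state') {
    swChannel.postMessage({ type: 'upload-state', uploading: false }); // illustrative payload
  }
};
```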
Random thought: if we made multiple instances an opt-in (at registration time), browsers that require it for particular features like push could reject push subscriptions when multiple instances aren't enabled. That doesn't solve Facebook's case though.
Types of multi-instance use:
Option 1: No multiple instances.
No change to the spec. Safari/Edge would have to find a way to work around this by switching from one worker to another, potentially serialising state to cater for use-cases like Facebook's.

Option 2: Edge and Safari go ahead and use multiple instances, and we see what breaks.
If only a few sites break, we could reach out to them. Facebook's case for instance could be fixed by passing message ports or using broadcast channel (#1185 (comment)). If everything turns out fine, we only have to make a minor spec change to recognise there may be a pool of instances that can receive top-level events. If lots of sites break, but

Option 3: Multiple instances becomes an opt-in feature, but Edge/Safari reject push subscriptions if it isn't enabled.

```js
navigator.serviceWorker.register(url, { multiInstance: true });
```

This is kinda messy, as we'd have to think of what happens if the registration is updated to opt-out after push subscriptions have been created. Also, it'll be tough for developers if they have to enable it for Edge/Safari, but disable it for Chrome/Firefox (because they don't support it).
I guess I had assumed something like this would be saying "I'm ok with multi-instances, but I understand there is no guarantee on how many instances I will see."
As long as they're actively testing in a browser that does the multi instance thing in a big way. If a new browser comes along and does multi instance in a different way, we could be back to square one. I pondered earlier whether opt-in-multi-instance-mode could do something to enforce multi-instance-like behaviour. Eg create a new realm for every top level event. That sounds super expensive, but maybe UAs have tricks here.
I think a multi-instance SW is a more scalable approach for user agents. A single-threaded SW may produce more complications as new SW features/specs are added, and changing the behavior later would be much harder.
Is there a 4th option in which multiple instances could be an implementation detail, not exposed to the developer, with global state somehow shared?
I don't see a way to implement this, given SW instances would likely live in different processes and global state provides synchronous access.
For others reading along, here are the notes from the call #1173 (comment). tl;dr: MS are going to implement multiple instances and see what breaks. If all is well, we can spec the idea of a pool of service workers. If developers want to talk to a particular instance,
Just to be very clear, MSFT is trying a very specific version of multi-instance: one instance for receiving push messages and one instance for handling fetches. This isn't parallelism for fetch handling.
So, has Edge shipped this model with a separate SW for push notifications?
Yes, the April 2018 Update runs the push event handler in the background. We're investigating forwarding that event handling to the foreground if Microsoft Edge is open.
F2F:
As @mattto mentioned, the latest version (Windows 10 October 2018 Update) forwards the push event from the background to Microsoft Edge in the foreground if it is open. This means that the push event handler in such a case would be run in the same service worker execution context as the fetch event handler.
FWIW, I think I have run across an actual case where multiple service worker threads could offer a performance boost: https://bugs.chromium.org/p/chromium/issues/detail?id=1293176#c11 Here someone is loading a dev environment with unbundled modules. It's hitting the SW with 1000+ requests on load. From tracing, it appears the SW thread is being maxed out and becoming a bottleneck. Just offering this as a data point as to when it might make sense to do this from a performance angle.
Currently gecko has a bug where multiple service worker threads can be spun up for the same registration. This can happen when windows in separate child processes open documents controlled by the same registration. Per the spec, the FetchEvents should be dispatched to the same worker instance. In practice, we spin up a different worker thread for each child process and dispatch there.
While discussing how to fix this, we started to wonder if it's really a problem. Service workers already discourage shared global state, since the thread can be killed at any time. If the service worker script is solely using Cache, IDB, etc. to store state, then running multiple thread instances is not a problem.
It seems the main issue where this could cause problems is code that wants to use global variables while holding the worker alive using waitUntil() or respondWith(). This is theoretically observable by script, but seems to have limited useful applications.
How would people feel about loosening the current spec to allow a browser to spin up more than one worker thread for a single registration? When to execute multiple threads, and how many, would be up to the browser implementation.