Should EventSource and WebSocket be exposed in service workers? #947

Open
annevk opened this Issue Aug 10, 2016 · 53 comments

@annevk (Member) commented Aug 10, 2016

It seems that, like XMLHttpRequest, we should treat these as "legacy APIs" for which you can use fetch() instead (at least, once we have upload streams).

Also, given the lifetime of a service worker, it's unlikely these APIs are useful there.

@nolanlawson (Member) commented Sep 20, 2016
FWIW just tested in html5workertest, and it seems Firefox 48 doesn't expose either EventSource or WebSocket inside a ServiceWorker, but Chrome 52 exposes both. Interestingly Firefox 50 Dev Edition exposes WebSocket.

Related: nolanlawson/html5workertest#14


@smaug---- commented Sep 23, 2016

Yeah, Gecko doesn't have EventSource in any workers.
https://bugzilla.mozilla.org/show_bug.cgi?id=1243942 fixed WebSocket on ServiceWorkers.


@smaug---- commented Sep 24, 2016

FWIW, if we get sub-workers in SW, these APIs, including XHR, would become usable there too, if for nothing else then for consistency.
Unless we then want to create some new type of sub-worker which doesn't do any I/O.


@flaki commented Oct 20, 2016

I was wondering: is there any legitimate use case for having WebSockets in service workers? Considering that a service worker isn't long-running, and that WebSockets are best utilized when the connection established by the (relatively expensive) handshake lasts a long time, I couldn't come up with a good use case.

(FWIW, being able to host "sub-workers" and run socket connections in there does seem to solve this, and sounds like a better use case to me.)


@annevk (Member) commented Oct 21, 2016

Basically, both WebSocket and EventSource will soon be obsolete due to Fetch + Streams. WebSocket has the additional sadness of not working with HTTP/2. That's the main reason to avoid exposing them in new places.

However, as @smaug---- said, if service workers get sub-workers, and we don't make those different from normal workers, all these restrictions are getting rather arbitrary.


@smaug---- commented Oct 23, 2016

> Basically, both WebSocket and EventSource will soon be obsolete due to Fetch + Streams

FWIW, it is very unclear to me how Fetch + Streams lets the UA do memory-handling optimizations for Blob messages similar to what WebSocket (and XHR) allow: (huge) Blobs can be stored in temporary files.


@annevk (Member) commented Oct 23, 2016

`fetch(url).then((res) => res.blob())` is as efficient. Fair point for streaming, though. Do we have data as to whether that matters there?


@smaug---- commented Oct 23, 2016

I'm not aware of data for streaming + blob. But if it is needed for the non-streaming case, why wouldn't it be needed for the streaming case? (I could imagine a web file-sharing service wanting to use a streaming-like API to pass many files, rather than doing one fetch per file.)
Gecko does not currently store WebSocket blobs in temporary files, but I consider that a bug, since it means the browser's memory usage may be far too high in certain cases, and XHR and WebSocket should have similar blob handling.


@nolanlawson (Member) commented Oct 24, 2016

> both WebSocket and EventSource will soon be obsolete due to Fetch + Streams

You could also add `sendBeacon` to that list of obsolete APIs 😉


@annevk (Member) commented Oct 24, 2016

@smaug---- well, with HTTP/2 requests are cheaper since the connection stays alive for longer, so if you already have an open connection for your bi-directional communication channel over a request-response pair, you might as well just send another request to get the actual file. That way it doesn't block the server from sending other messages meanwhile either.


@ricea (Contributor) commented Dec 7, 2016

While it's true that WebSockets do not fit well with the ServiceWorker model, it is not the case that WebSockets are supplanted by the fetch API. Their use cases are different. It is unfortunate that WebSocket over HTTP/2 is not a thing (yet?) but arguably for many things WebSockets are actually used for that is irrelevant.


@zdila commented Dec 7, 2016

We use a service worker to indicate an incoming call. It starts with a push notification. Then the service worker opens a WebSocket, where it receives a message if the ringing is active. If so, it shows a Notification and waits until it receives a "ring_end" message. Afterwards it closes the Notification and the connection. Without WebSocket we would need to use HTTP (long) polling.

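The flow described above can be sketched roughly as follows. This is an illustrative outline, not zdila's actual code: the WebSocket constructor and notification calls are injected so the browser-only pieces stay outside the function, and the URL and message shapes (`ring_start`, `ring_end`, `caller`) are assumptions.

```javascript
// Sketch of the push -> WebSocket ring flow (hypothetical names throughout).
// WebSocketCtor, showNotification and closeNotification are injected so this
// logic can be exercised outside a real service worker.
function watchRing(WebSocketCtor, wsUrl, showNotification, closeNotification) {
  return new Promise((resolve) => {
    const ws = new WebSocketCtor(wsUrl);
    ws.onmessage = (msg) => {
      const data = JSON.parse(msg.data);
      if (data.type === 'ring_start') {
        // Show the incoming-call notification while ringing is active.
        showNotification(data.caller);
      } else if (data.type === 'ring_end') {
        // Ringing stopped: hide the notification and drop the connection.
        closeNotification();
        resolve('ended');
        ws.close();
      }
    };
    // If the service worker is shut down or the socket drops, stop waiting.
    ws.onclose = () => resolve('closed');
  });
}
```

In the service worker itself this would be called from a `push` listener as `event.waitUntil(watchRing(WebSocket, wsUrl, ...))`; note the caveat raised later in the thread that the browser may shut the worker down before `ring_end` arrives.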

@annevk (Member) commented Dec 7, 2016

@ricea how are they not supplanted by the Fetch API? I don't think we should do WebSocket over H/2. Not worth the effort.


@tyoshino (Contributor) commented Dec 8, 2016

@zdila What did you mean by "incoming call" and "If yes"? Is your application a voice chat?

Does the WebSocket receive only one message for ring start and one for ring end? Or does the server keep sending periodic messages to indicate that the ringing is still active? Or are the received WebSocket messages used for updating an existing notification? Why can't push notifications be used for these purposes in addition to the first push notification?


@tyoshino (Contributor) commented Dec 8, 2016

@annevk It's off topic (or too general a topic), but I've recently started sharing an idea named WiSH at the IETF. Here's the I-D: https://tools.ietf.org/html/draft-yoshino-wish-01. There are also some real WS/HTTP2 proposals (ideas that use HTTP2-level machinery, such as mapping WS frames to HTTP2 frames). But WiSH just uses the Fetch API and Streams to provide almost equivalent functionality to WebSocket for now, the HTTP2 era, the QUIC era, and the future, while taking care of migrating existing WebSocket API users off the WebSocket protocol over TCP.

I think that providing a message-oriented API on the web platform makes sense even after we make the HTTP API powerful enough. We already have the WebSocket API, and it would be reasonable to keep providing almost the same interface and evolve its capability (e.g. make it able to benefit from HTTP2, QUIC). Until that evolution happens, the combination of the WebSocket API and the WebSocket protocol would stay there to satisfy the demand.

Possible simplification of the architecture may motivate disallowing WebSockets in SW, but we haven't done that evolution yet. I think we should keep WebSockets available regardless of the context (window, worker, SW) for now if the use case is reasonable enough.

(I'd like to hear your general opinion on the WiSH idea somewhere. Should I file an issue at the whatwg/fetch repo for this?)


@tyoshino (Contributor) commented Dec 8, 2016

> Also with the lifetime of a service worker it's unlikely these APIs are useful.

Yeah. The nature of SW would make most of WS's key features useless. I'd like to understand zdila's use case more.

Regarding streamed, efficient blob receiving, I basically agree with Anne's answer at #947 (comment). We haven't had any real study on whether anyone has been utilizing this capability, though, so it's not backed by data.


@zdila commented Dec 8, 2016

> What did you mean by "incoming call" and "If yes"? Is your application a voice chat?

@tyoshino yes, it is a video chat application.

Simplified: we receive only the two messages via WebSocket that you described, start ringing (with caller details) and stop ringing. It could be reworked to use two push notifications as you described; it would just mean not reusing our existing API, which is now used in several other cases.


@tyoshino (Contributor) commented Dec 8, 2016

Thank you @zdila. Then I recommend that you switch to that method (two pushes) for this use case. As Anne said and flaki summarized at #947 (comment), the WebSocket may get closed when the Service Worker instance is shut down even if the server didn't intend to signal ring_end.

> to reuse existing API which is now used in several other cases

I see.

So, if one has an existing WebSocket-based service, it would be convenient if it also worked in a service worker, though it would require event.waitUntil() to work correctly and shouldn't last for a long time. How about background sync + WebSocket in SW?


@annevk (Member) commented Dec 9, 2016

We could discuss WiSH over at whatwg/fetch, sure. I think the main problem with keeping WebSocket in is that it'll make it harder to remove in the future. Whereas the reverse will be easy to do once we have explored alternatives.


@zdila commented Dec 9, 2016

@tyoshino I realised that the problem with the two-push method is that Chrome now requires showing a notification for every push (userVisibleOnly must be true). This means that the "ring end" push, which is meant to hide the notification, would actually have to show one.

We actually have this problem now as well, because after the first push we may find out that there is no ringing anymore (it is already gone).


@ricea (Contributor) commented Jan 5, 2017

@zdila If you just need a single notification then a hanging GET will be more efficient than a WebSocket in terms of network bytes and browser CPU usage.


@zdila commented Jan 5, 2017

@ricea then I would need to do ugly polling to find out when the ringing ended. Not an option.


@ricea (Contributor) commented Jan 5, 2017

@annevk WebSocket is more efficient for small messages. It provides a drop-in replacement for TCP when interfacing with legacy protocols. It is simple to implement on existing infrastructure.


@annevk (Member) commented Jan 5, 2017

How does a request body/response body channel established through H/2 rather than WebSocket not have the same benefits?


@ricea (Contributor) commented Jan 5, 2017

An H/2 frame has a 9-octet header, compared to 2 octets for a small WebSocket message. H/2 is a complex multiplexed beast that is not very much like a TCP connection, and it is not simple to implement.

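To put numbers on the overhead comparison: the WebSocket frame header (RFC 6455) scales with payload size, while an HTTP/2 frame header (RFC 7540) is a fixed 9 octets. A small helper (illustrative, not from the comment) makes this concrete:

```javascript
// WebSocket frame header size for a server-to-client frame (RFC 6455).
// Client-to-server frames add a further 4 octets for the masking key.
function wsHeaderBytes(payloadLen) {
  if (payloadLen < 126) return 2;      // 7-bit length fits in the base header
  if (payloadLen <= 0xffff) return 4;  // base header + 16-bit extended length
  return 10;                           // base header + 64-bit extended length
}

const H2_FRAME_HEADER_BYTES = 9; // fixed-size HTTP/2 frame header (RFC 7540)
```

So for a 10-byte message, WebSocket spends 2 octets of framing overhead where HTTP/2 spends 9; the gap only closes for large payloads.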

@puhazh commented Jan 18, 2017

We use WebSockets in service workers for syncing IndexedDB data. The site is capable of working in full offline mode. We have multiple input forms in our application; when the user saves a form, the data is written to IndexedDB and synced across clients via the service worker. This is a two-way sync, and all clients are kept updated on any changes to the data. The sync happens in the service worker because syncing on the page would break when the user navigates away, and it would require a WebSocket connection per page load; there is no other way for us to keep a persistent connection open.


@jakearchibald (Collaborator) commented Mar 31, 2017

@annevk is EventSource being deprecated? I thought its auto-reconnection made it a nice high-level API. I agree it isn't useful in a service worker, though.

The reason I'm nervous about removing WebSockets from SW is that it prevents use of a protocol. Removing XHR wasn't so bad, as the intention is for fetch to be able to do everything it does.


@annevk (Member) commented Mar 31, 2017

EventSource is not deprecated, but you can do everything it does with Fetch. The same goes for WebSocket, though WebSocket has some framing advantages, as pointed out above, and maybe some connection advantages as long as HTTP sits on top of TCP.

Anyway, if everyone exposes these, let's close this and hope we don't regret it later.


@kentonv commented May 9, 2017

@annevk I'm not sure I understand the claim that Fetch can replace WebSocket. I Googled around a bit and couldn't find anything explaining how this could be done, so I hope you don't mind if I ask here.

Are you suggesting that people can replace a WebSocket with a single "full-duplex" HTTP request, relying on the ability to start reading the response while still sending the request body?

If that is what you mean, that could work over HTTP/2 but lots of HTTP/1 infrastructure is not designed to support bidirectional streaming (even though the protocol technically doesn't prevent it). Can/should apps be probing somehow to see if they have HTTP/2 all the way back to the origin server? In a world with proxies, CDNs, load balancers, etc., it's pretty common to have requests that bounce between protocol versions as they traverse the network, so this seems error-prone.

Or are you instead suggesting that apps should use old-fashioned long-polling patterns, sending upstream messages as separate requests? This is problematic in use cases where the client and server want to maintain shared state tied to the WebSocket. When polyfilling WebSocket using long polling, you end up needing to configure load balancers for "sticky sessions" (so that messages intended for the same virtual WebSocket land at the same server), which is tricky at best and often not supported at all.

On another note, does fetch streaming guarantee that HTTP chunk boundaries on the wire will be preserved through to Javascript? Or would the application need to provide its own framing? WebSocket's built-in framing is nice to have, though admittedly not that hard to implement manually.


@annevk (Member) commented May 9, 2017

> Can/should apps be probing somehow to see if they have HTTP/2 all the way back to the origin server?

I guess so, just like they'd have to do with WebSocket today. But it should work with HTTP/1 too, in theory.

> On another note, does fetch streaming guarantee that HTTP chunk boundaries on the wire will be preserved through to JavaScript?

Chunked is only applicable to HTTP/1, and if you use it via the mechanism defined in HTTP, it will be decoded by the time the data reaches JavaScript.


@kentonv commented May 9, 2017

> I guess so, just like they'd have to do with WebSocket today. But it should work with HTTP/1 too, in theory.

Hmm, I would be pretty concerned that this is the kind of thing that will regularly and unexpectedly break.

Proxies have a lot of legitimate reasons for breaking full-duplex streaming. For example, nginx likes to buffer request bodies to disk in order to protect backends from slow-loris attacks and in order to avoid excessive memory buffering. But this feature assumes half-duplex (response-strictly-follows-request) operation.

I worry that even with HTTP/2 (and especially with HTTP/1), we're going to see trouble with infrastructure (proxies, CDNs, load-balancers, etc.) that assume half-duplex because "it seemed to work" and the developers didn't know better. These assumptions may even appear unexpectedly in production, when the CDN added a new "feature" or the load balancer admin changed a setting that looked harmless.

Thus it seems pretty risky for applications ever to rely on this.

The nice thing about WebSocket is that it's very explicitly intended to be full-duplex, which makes it much less likely that anyone would break it by accident.

Of course, these are human bugs, not spec bugs.

> Chunked is only applicable to HTTP/1 and if you use it via the mechanism defined in HTTP it will end up decoded by the time the data reaches JavaScript.

Sorry, that's not quite what I was asking. What I was asking was, if the server sends two chunks (or HTTP/2 frames), will the client be guaranteed to receive two callbacks from a ReadableStream, or might it only receive one callback in which the chunks are concatenated? I'm assuming the latter, hence the application must define its own framing on top. (EDIT: Yeah this clearly must be the case because HTTP itself certainly doesn't specify that proxies must preserve chunk boundaries on entities passing through them, so it would be silly for an application to rely on chunk boundaries being preserved...)

kentonv commented May 9, 2017

I guess so, just like they'd have to do with WebSocket today. But it should work with HTTP/1 too, in theory.

Hmm, I would be pretty concerned that this is the kind of thing that will regularly and unexpectedly break.

Proxies have a lot of legitimate reasons for breaking full-duplex streaming. For example, nginx likes to buffer request bodies to disk in order to protect backends from slow-loris attacks and in order to avoid excessive memory buffering. But this feature assumes half-duplex (response-strictly-follows-request) operation.

I worry that even with HTTP/2 (and especially with HTTP/1), we're going to see trouble with infrastructure (proxies, CDNs, load-balancers, etc.) that assume half-duplex because "it seemed to work" and the developers didn't know better. These assumptions may even appear unexpectedly in production, when the CDN added a new "feature" or the load balancer admin changed a setting that looked harmless.

Thus it seems pretty risky for applications ever to rely on this.

The nice thing about WebSocket is that it's very explicitly intended to be full-duplex, which makes it much less likely that anyone would break it by accident.

Of course, these are human bugs, not spec bugs.

Chunked is only applicable to HTTP/1 and if you use it via the mechanism defined in HTTP it will end up decoded by the time the data reaches JavaScript.

Sorry, that's not quite what I was asking. What I was asking was, if the server sends two chunks (or HTTP/2 frames), will the client be guaranteed to receive two callbacks from a ReadableStream, or might it only receive one callback in which the chunks are concatenated? I'm assuming the latter, hence the application must define its own framing on top. (EDIT: Yeah this clearly must be the case because HTTP itself certainly doesn't specify that proxies must preserve chunk boundaries on entities passing through them, so it would be silly for an application to rely on chunk boundaries being preserved...)
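The framing an application has to define for itself can be quite small. A hedged sketch (length-prefixed messages, an illustrative scheme rather than any standard wire format) that reassembles messages no matter how the transport splits or concatenates chunks:

```javascript
// Sketch of application-level framing over a byte stream: each message is
// preceded by a 4-byte big-endian length. The decoder buffers incoming
// chunks and emits complete messages regardless of how the transport
// split or merged them.
class FrameDecoder {
  constructor() {
    this.buffer = new Uint8Array(0);
  }

  // Feed one chunk from e.g. a fetch() ReadableStream reader; returns an
  // array of zero or more complete messages (as Uint8Arrays).
  push(chunk) {
    const merged = new Uint8Array(this.buffer.length + chunk.length);
    merged.set(this.buffer);
    merged.set(chunk, this.buffer.length);
    this.buffer = merged;

    const messages = [];
    while (this.buffer.length >= 4) {
      const view = new DataView(this.buffer.buffer, this.buffer.byteOffset);
      const len = view.getUint32(0); // big-endian length prefix
      if (this.buffer.length < 4 + len) break; // message still incomplete
      messages.push(this.buffer.slice(4, 4 + len));
      this.buffer = this.buffer.slice(4 + len);
    }
    return messages;
  }
}
```

Feeding each stream chunk through `push()` then yields exactly the messages the server framed, even when two server-side writes surface as a single callback (or one write surfaces as several).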

@annevk

Member

annevk commented May 10, 2017

Ah yes, no guarantees about how many bytes your callback contains.

No feedback on the infrastructure issues, I don't know enough about it, but it does sound like it needs to be deployed with care, once we get there.

@uasan

uasan commented May 16, 2017

EventSource is not deprecated, but you can do everything it can do with Fetch

This is not possible until Firefox implements ReadableStream in the Fetch API.

@agnivade

agnivade commented May 30, 2017

Hi all,

So I have a requirement where I need to get intermittent push messages from the server inside a service worker, and it should work under flaky network conditions. I was thinking of creating a WebSocket inside the service worker and listening for updates. But that does not seem to be the recommendation here.

I am wondering what is the ideal way to achieve this now. Using web push notifications is an option that I am considering. However it seems pretty convoluted to set up.

EventSource is not deprecated, but you can do everything it can do with Fetch

@annevk - can you explain how that will work when a server wants to push something to the client? As far as I understand, it's a successor to our beloved XHR, but it's still a simple request-response API.

I am thinking of using EventSource in the main app code outside of service worker and have a message passing mechanism to trigger the service worker when I receive an event.
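That page-to-worker relay can be sketched roughly as follows. The `/updates` endpoint, the `toWorkerMessage` helper, and the message shape are all made up for illustration, and this assumes a controlling service worker is already registered:

```javascript
// Hypothetical page-side relay: receive SSE events on the page and forward
// them to the controlling service worker via postMessage.

// Pure helper: shape the message we forward (the shape is illustrative).
function toWorkerMessage(eventName, data) {
  return { type: 'sse-event', event: eventName, data };
}

// Browser-only wiring, guarded so the snippet also loads outside a browser.
if (typeof EventSource !== 'undefined' &&
    typeof navigator !== 'undefined' && navigator.serviceWorker) {
  const source = new EventSource('/updates'); // placeholder endpoint
  source.onmessage = (e) => {
    const controller = navigator.serviceWorker.controller;
    if (controller) controller.postMessage(toWorkerMessage('message', e.data));
  };
}
```

The service worker side would then handle these in its `message` event listener; the EventSource's built-in reconnection takes care of the flaky-network part on the page side.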


@annevk

Member

annevk commented Jun 28, 2017

The same way EventSource works except at a lower level of abstraction. You keep the connection open and push more bytes into the response body.
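As a rough illustration of that lower level of abstraction, a client can hold the response open and parse SSE-style frames itself. The `/updates` URL is a placeholder, and `parseSSE` is a simplified sketch that handles only `event:` and `data:` lines, far less than the full EventSource spec:

```javascript
// Simplified parser for Server-Sent-Events-style text: a blank line
// terminates an event; only "event:" and "data:" fields are handled.
function parseSSE(text) {
  return text
    .split('\n\n')
    .filter((block) => block.trim() !== '')
    .map((block) => {
      const ev = { event: 'message', data: [] };
      for (const line of block.split('\n')) {
        if (line.startsWith('data:')) ev.data.push(line.slice(5).trimStart());
        else if (line.startsWith('event:')) ev.event = line.slice(6).trim();
      }
      return { event: ev.event, data: ev.data.join('\n') };
    });
}

// Hypothetical usage: the connection stays open and the server pushes more
// bytes into the response body whenever it has something to say.
async function listen(url, onEvent) {
  const res = await fetch(url);
  const reader = res.body.getReader();
  const decoder = new TextDecoder();
  let pending = '';
  for (;;) {
    const { value, done } = await reader.read();
    if (done) break;
    pending += decoder.decode(value, { stream: true });
    const cut = pending.lastIndexOf('\n\n'); // only parse complete events
    if (cut === -1) continue;
    parseSSE(pending.slice(0, cut + 2)).forEach(onEvent);
    pending = pending.slice(cut + 2);
  }
}
```

Note that, unlike EventSource, nothing here reconnects for you; retry and `Last-Event-ID` handling would have to be rebuilt on top.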

@myndzi

myndzi commented Jul 17, 2017

WebSockets provide a reasonably low-latency persistent channel for things like games. I was looking into whether and how Service Workers could be utilized to allow a user to load (offline and/or quickly) an in-progress game from the last-known state, while still allowing low-latency updates when the network is available. It sounds like Service Workers may not be suitable for this use, if they are expected to be short-lived. It does however seem desirable to allow low-latency asynchronous messaging when it's available, and otherwise fall back to last-known state. This seems relevant to the conversation here, then; I'm wondering if there is some combination of proposals that could replace WebSocket's persistence and latency in this context? I'm not sure I am grokking such an option from the comments here...

@annevk

Member

annevk commented Oct 12, 2017

@ricea I'd love to explore WebSocket vs Fetch further and since for now this seems to be the place, I'll keep asking here. Why are H/2 frames that large? If we did WebSocket over H/2, as some have suggested at various times, wouldn't we run into the same problem? Does QUIC have the same problem?

Also, you suggest that WebSocket is easier to set up, referencing TCP, but any realistic setup needs at least all the complexity of TLS. Are H/2 (and QUIC in the future) really that much worse?

@kentonv

kentonv commented Oct 12, 2017

FWIW, we are going to need to support WebSockets in Cloudflare Workers. A lot of existing applications use WebSockets and it would not be reasonable for us to ask them to rewrite to HTTP/2 fetch before they can use Workers.

@annevk

Member

annevk commented Oct 12, 2017

Yeah, I suspect everyone has to support these now given that this issue never got fixed. The issue can be closed I suspect, but I'd still be interested in exploring the trade-offs in depth, mostly on the theoretical side, as I can appreciate that there are still lots of bugs and legacy deployments on the practical side.

@kentonv

kentonv commented Oct 12, 2017

On the theoretical side, I would make two arguments:

  • The built-in message framing provided by WebSocket is pretty useful. While it could be rebuilt as a library on top of streams, I feel like a lot of modern web APIs are intended to avoid the need for using libraries to do basic tasks.
  • My previous argument in this thread, that while bidirectional streaming in an HTTP request is supported by the protocol, it's the kind of thing that middleboxes are highly likely to break without realizing it, possibly in subtle ways that don't fail fast (e.g. infinite buffering). Whereas WebSocket is very clearly intended to operate in a bidirectional streaming mode, and will fail fast when it's not supported.

That said I don't feel super-strongly about this, from the theoretical angle.
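On the first bullet, the send half of such a library can indeed be tiny. A hedged sketch using a 4-byte big-endian length prefix (a stand-in scheme, not WebSocket's actual frame format):

```javascript
// Sketch of the send half of a framing library rebuilt on top of streams:
// prefix each message with a 4-byte big-endian length. The receiver reads
// the prefix back to find message boundaries, however the transport
// happened to chunk the bytes in between.
function frameMessage(payload) {
  const framed = new Uint8Array(4 + payload.length);
  new DataView(framed.buffer).setUint32(0, payload.length); // big-endian
  framed.set(payload, 4);
  return framed;
}
```

A writer for e.g. a streaming request body would call `frameMessage()` once per message; the point stands, though, that every application ends up hand-rolling (or importing) something like this, which WebSocket gives you for free.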

@wenbozhu

wenbozhu commented Oct 12, 2017

Purely on the theoretical side,

https://tools.ietf.org/html/rfc7540#section-8.1

doesn't really spec out full-duplex (simplex bidi is never an issue, i.e. upload followed by a download). Rather, it clarifies early 2xx completion is allowed (as opposed to http/1.1 which states early error responses are to be expected). While "early 2xx completion" may enable full-duplex support, it's not quite the same thing. More specifically, early 2xx completion is about committing an OK response (i.e. generating all the headers) while request body is still in flight. When the server decides to produce an early response, any request body that has not been received is deemed "useless" data. This is not really the case for most full-duplex use cases, where response data is causally generated from the request data, albeit in a streaming fashion (e.g. speech translation).

===

For user-space protocols that use http/2 (framing) purely as a transport, HTTP/2 (being a transport to HTTP/1.1 semantics) can certainly be treated as a full-duplex bidi transport (i.e. multiplex TCP streams), subject to middlebox interpretation (which is a big unknown to me).


@ricea

Contributor

ricea commented Oct 13, 2017

@annevk Part of the extra size of HTTP/2 frame headers is explained by needing stream IDs, which is an inescapable consequence of being multiplexed. I must confess I don't know what the rest is.

There were three things I think I was talking about when I said "HTTP/2 is not simple to implement":

  1. From a simple engineering standpoint, implementing the HTTP/1.1 protocol is easy, as long as you forget about chunked transfer encoding. I've done it several times just for personal projects. I would never contemplate implementing HTTP/2 for a personal project. The WebSocket protocol has lots of seemingly unnecessary fiddly bits, but a small, simple implementation can still be put together quickly.
  2. From a deployment standpoint, small-scale HTTP deployments are ubiquitous, and adding WebSocket support to an existing deployment is easy in many environments. Adding HTTP/2 to a large-scale deployment where you're already using a reverse proxy setup is pretty straightforward, but for a small-scale deployment you suddenly have a whole extra bundle of complexity to deal with. I expect this part to change as even small deployments start from scratch with HTTP/2.
  3. From a conceptual standpoint, TCP:TLS:WebSocket is a 1:1:1 relationship. TCP:TLS:HTTP/2:fetch is a 1:1:1:N relationship. When all you need is a single stream, multiplexing is pure cognitive overhead.
@ricea

Contributor

ricea commented Oct 13, 2017

@kentonv While lots of developers get surprisingly far using bare WebSockets, I couldn't recommend it for general-purpose applications over the open Internet[1]. There are far too many people stuck in environments where only HTTP/1.1 will get through. So, in practice, you need fallbacks, and to make that not be painful you need some kind of library.

[1] Games seem to be an exception. Game developers appear to be quite happy to say "if my game doesn't work with your ISP, get a new ISP".
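Such a fallback library might look roughly like this sketch (the function names, the URL-scheme rewrite, the one-message-per-response polling protocol, and the backoff constants are all illustrative, not any real library's API): try WebSocket first, and drop to plain HTTP/1.1 long-polling if the socket errors.

```javascript
// Pure helper: capped exponential backoff between reconnect attempts.
function backoffDelay(attempt, baseMs = 500, maxMs = 30000) {
  return Math.min(maxMs, baseMs * 2 ** attempt);
}

// Prefer WebSocket; fall back to long-polling when the socket fails,
// e.g. in environments where only HTTP/1.1 gets through.
function connect(url, onMessage) {
  if (typeof WebSocket === 'undefined') return longPoll(url, onMessage);
  const ws = new WebSocket(url.replace(/^http/, 'ws'));
  ws.onmessage = (e) => onMessage(e.data);
  ws.onerror = () => longPoll(url, onMessage); // blocked by a middlebox?
  return ws;
}

async function longPoll(url, onMessage, attempt = 0) {
  try {
    // Assumed protocol: each held-open response delivers one message.
    const res = await fetch(url);
    onMessage(await res.text());
    return longPoll(url, onMessage, 0); // immediately poll again
  } catch {
    await new Promise((r) => setTimeout(r, backoffDelay(attempt)));
    return longPoll(url, onMessage, attempt + 1);
  }
}
```

The fallback path is strictly worse in latency and overhead, which is exactly why it tends to live in a library rather than in application code.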

@annevk

Member

annevk commented Oct 13, 2017

So if we assume the cost for HTTP/2 approaches zero in the future, the main features WebSocket has that Fetch does not would be:

  1. Smaller frames.
  2. Dedicated connection.

It feels like we should explore exposing those "primitives" to Fetch somehow if they are important.

It worries me a little bit that @wenbozhu claims that HTTP is not full-duplex whereas the discussion in whatwg/fetch#229 concluded it pretty clearly is. I hope that's just a misunderstanding.

@ricea

Contributor

ricea commented Oct 13, 2017

My position on the full-duplex issue is that HTTP/1.1 will never be full-duplex except in restricted environments. The interoperability issues are intractable.

I know of no first-order interoperability issues with HTTP/2, but what happens when you're in an environment where HTTP/2 doesn't work? Should the page behave differently depending on the transport protocol in use?

@kentonv

kentonv commented Oct 14, 2017

@annevk

  1. Smaller frames.

I think this argument is confused. HTTP/2 (AFAIK) doesn't have frames in the WebSocket sense. HTTP/2 frames are an internal protocol detail used for multiplexing but not revealed to the application. WebSocket frames are application-visible message segmentation that have nothing to do with multiplexing. So comparing their sizes doesn't really make sense; these are completely unrelated protocol features.

So I would rephrase as:

  1. Built-in message segmentation.
  2. Dedicated connections (no multiplexing overhead).

@ricea FWIW Sandstorm.io (my previous project / startup) had a number of longstanding bugs that would manifest when WebSockets weren't available. In practice I don't remember ever getting a complaint from a single user who turned out to be on a WebSocket-breaking network. We did get a number of reports from users where it turned out Chrome had incorrectly decided that WebSockets weren't available and so was fast-failing them without even trying -- this was always fixed by restarting Chrome.

I would agree that for a big cloud service, having a fallback from WebSocket is necessary. But there are plenty of smaller / private services where you can pretty safely skip the fallback these days.

@ricea

Contributor

ricea commented Oct 16, 2017

So I would rephrase as:

  1. Built-in message segmentation.
  2. Dedicated connections (no multiplexing overhead).

The other factor is lower per-message overhead, when messages are sent individually.

@ricea FWIW Sandstorm.io (my previous project / startup) had a number of longstanding bugs that would manifest when WebSockets weren't available. In practice I don't remember ever getting a complaint from a single user who turned out to be on a WebSocket-breaking network. We did get a number of reports from users where it turned out Chrome had incorrectly decided that WebSockets weren't available and so was fast-failing them without even trying -- this was always fixed by restarting Chrome.

That's great news, thanks. We only have client-side metrics, so we don't have much insight into why things are failing in the wild.

The bad news is that Chrome has no logic to intentionally make WebSockets fail fast when they're not available. So if it's doing that then the cause is unknown.

@annevk

Member

annevk commented Oct 16, 2017

@kentonv it's a bit confused, but as @ricea points out it's likely you end up packaging such messages in H2 frames, meaning you have more overhead per message.

Should the page behave differently depending on the transport protocol in use?

That's already the case (requiring secure contexts; I think various performance APIs might expose the protocol) and would definitely be the case if we ever expose a H2 server push API. It doesn't seem like a huge deal to me. If we always required everything to work over H1 we can't really make progress.

@annevk annevk referenced this issue in mcmanus/draft-h2ws Oct 16, 2017

Closed

Frame overhead #1

@kentonv

kentonv commented Oct 21, 2017

I guess I consider "Dedicated connections (no multiplexing overhead)" and "lower per-message overhead" to be the same issue, since the overhead is specifically there to allow for multiplexing.

@annevk

Member

annevk commented Oct 22, 2017

Well, there's at least one proposal for WebSocket over H2 that addresses one, but not the other: mcmanus/draft-h2ws#1.

@chick-fil-a-21

chick-fil-a-21 commented Mar 5, 2018

@kentonv we got web sockets in the Cloudflare workers yet?


@kentonv

kentonv commented Mar 5, 2018

@chick-fil-a-21 Currently CF Workers supports WebSocket pass-through -- that is, if all you do is rewrite requests, e.g. changing some headers or the URL, it will "just work" with a WebSocket request. We don't yet support examining or creating WebSocket content, only proxying.

@kentonv

kentonv commented Mar 7, 2018

@unicomp21 Thanks! But this issue thread probably isn't the right place to discuss Cloudflare stuff. I suggest posting a thread on the Cloudflare Community; you can @ me there if you like. You can enable the Workers beta in your Cloudflare account settings (it's one of the boxes along the top).
