An issue for discussing and refining what the transport sequence should be in the next major version of ably-js.
One option is just #1327 -- start the lib with websocket as a preferredTransport, and fall back to the existing strategy we use now (comet with fallback hosts, plus a websocket upgrade attempt once connected) if that doesn't connect within some small timeout.
Another is getting rid of transport upgrade and moving to a pure fallback model, where you stick with whatever transport you first succeeded with. One advantage of this is simplicity, both in the SDK's ConnectionManager code and at the conceptual level (upgrade now generally works pretty solidly, but it has undeniably been a past source of bugs).
As previously discussed, we think that for this to mesh with fallback hosts without adding too long a delay before connection (in the event of either an issue with the closest datacenter, or a corporate MITM'd network that interferes with websockets), we should have a websocket version of the 'is the internet up' check, perhaps using Cloudflare Workers or similar, that we can run in parallel with the existing CDN connectivity check.
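Such a websocket connectivity check might look something like the sketch below. This is an illustration only, not ably-js code: the `WsLike` shape and the injected `connect` factory are assumptions made so the check can be exercised with a fake socket; in a browser you would pass something like `() => new WebSocket(checkUrl)`.

```typescript
// Minimal sketch of a websocket 'is the internet up' check (illustrative,
// not the actual ably-js implementation). The socket is created by an
// injected factory so the logic can be tested without a real network.
type WsLike = {
  onopen: (() => void) | null;
  onerror: (() => void) | null;
  close(): void;
};

function checkWsConnectivity(
  connect: () => WsLike,
  timeoutMs: number
): Promise<boolean> {
  return new Promise((resolve) => {
    let settled = false;
    const finish = (ok: boolean) => {
      if (!settled) {
        settled = true;
        resolve(ok);
      }
    };
    // Treat "no open event within timeoutMs" as a failed check
    const timer = setTimeout(() => finish(false), timeoutMs);
    let ws: WsLike;
    try {
      ws = connect();
    } catch {
      clearTimeout(timer);
      finish(false);
      return;
    }
    ws.onopen = () => {
      clearTimeout(timer);
      ws.close();
      finish(true);
    };
    ws.onerror = () => {
      clearTimeout(timer);
      finish(false);
    };
  });
}
```

The check resolves `true` only if the socket opens before the timeout, so a network that silently black-holes websocket handshakes is reported as a failure rather than hanging the connect sequence.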
So a possible sequence (not set in stone, just a starting point for discussion) might be something like:
- connection attempt to the primary host over websocket
- if not yet succeeded after 4s, start parallel XHR and websocket connectivity checks
  - if the websocket check succeeds, continue a pure-websocket host-fallback sequence (if the connection attempt to the primary host hasn't succeeded after 10s, cancel it and try the first fallback host)
  - if the websocket connectivity check has not succeeded after 6s (so 10s after the entire connect sequence started) but the XHR one has, cancel the websocket attempt and start an XHR polling attempt to the primary host. If that fails, continue the host-fallback sequence sticking with XHR polling. Persist XHR polling as the preferred transport for all future connect attempts, but also occasionally retry the websocket connectivity check (e.g. if the network changes); if that ever succeeds, clear the persisted XHR transport preference (but otherwise take no immediate action and continue with XHR).
  - if neither check succeeds, just go into the disconnected state and wait 15s to retry, as now.
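The timings above can be summarised as a pure decision function, sketched below. This is just a way of making the proposed sequence concrete for discussion; the state shape and action names are invented for this sketch and are not ably-js APIs.

```typescript
// Sketch of the proposed decision logic (illustrative only; names are
// assumptions, not ably-js identifiers).
type CheckResult = 'pending' | 'ok' | 'failed';

interface ConnectState {
  wsConnected: boolean;  // primary-host websocket attempt has succeeded
  elapsedMs: number;     // time since the connect sequence started
  wsCheck: CheckResult;  // parallel websocket connectivity check
  xhrCheck: CheckResult; // parallel XHR (CDN) connectivity check
}

type Action =
  | 'continue'                  // keep waiting on the current attempt
  | 'start-connectivity-checks' // kick off the parallel checks at 4s
  | 'ws-host-fallback'          // cancel primary attempt, try next host over websocket
  | 'switch-to-xhr-polling'     // websockets look blocked; fall back to XHR polling
  | 'disconnect-and-retry';     // neither check succeeded; wait 15s as now

function nextAction(s: ConnectState, checksStarted: boolean): Action {
  if (s.wsConnected) return 'continue';
  if (!checksStarted) {
    return s.elapsedMs >= 4000 ? 'start-connectivity-checks' : 'continue';
  }
  if (s.wsCheck === 'ok') {
    // Websocket works in principle: pursue pure-websocket host fallback,
    // giving the primary-host attempt 10s in total before moving on.
    return s.elapsedMs >= 10000 ? 'ws-host-fallback' : 'continue';
  }
  if (s.elapsedMs >= 10000) {
    // 6s after the checks started (10s overall) with no websocket success
    if (s.xhrCheck === 'ok') return 'switch-to-xhr-polling';
    if (s.wsCheck === 'failed' && s.xhrCheck === 'failed') {
      return 'disconnect-and-retry';
    }
  }
  return 'continue';
}
```

Framing it as a pure function of elapsed time and check outcomes also makes the sequence easy to unit-test without real timers or sockets.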
One question is, what role does this leave for xhr_streaming transport? I suggest: none. Any MITM'd corporate network fucky enough to block / not understand websockets is also quite likely to block / not understand xhr streaming, as we have seen in the past.
This has been implemented in ably-js v2 in #1645: the PR removed the XHRStreaming transport and replaced the upgrade mechanism with a websocket-plus-base-fallback transport mechanism, whereby no two transports may be simultaneously connected.
cc @owenpearson @lawrence-forooghian @paddybyers