Suggestion: Less round trips #224

Closed
laino opened this Issue Feb 12, 2014 · 27 comments

Contributor

laino commented Feb 12, 2014

Before I start actually coding this: it would be a good idea to have polling-based transports wait 0.2 seconds after the last write occurred, or until a specified amount of data has been written, before actually returning it all at once. So if a client opens a new session, the first few messages could already arrive with the handshake. And the server could, in addition to completing the handshake, already send the first few messages (i.e. answers etc.) in the first response. This could drastically reduce the amount of time needed to connect to a service using engine.io, because at first everything is polling.

On volafile.io we currently have 2-3 round trips before the client gets past the "Connecting..." screen and the filelist is loaded. The proposed change could reduce that to one round trip, and the time needed to connect and receive the first message from ~1.5 seconds to less than half that.
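
The batching described above could be sketched as a small buffer that flushes either when a byte threshold is reached or shortly after the last write. This is a hypothetical sketch, not engine.io code; `BatchBuffer`, `maxDelay`, and `maxBytes` are invented names:

```javascript
// Hypothetical sketch of the proposed batching: buffer outgoing packets and
// flush when either a byte threshold is hit or a short delay expires.
class BatchBuffer {
  constructor(flushFn, { maxDelay = 200, maxBytes = 1024 } = {}) {
    this.flushFn = flushFn;   // called once with the whole batch
    this.maxDelay = maxDelay; // ms to wait after the last write
    this.maxBytes = maxBytes; // flush immediately past this size
    this.queue = [];
    this.size = 0;
    this.timer = null;
  }
  write(packet) {
    this.queue.push(packet);
    this.size += packet.length;
    if (this.size >= this.maxBytes) return this.flush();
    // restart the delay on every write ("wait after the last write")
    if (this.timer) clearTimeout(this.timer);
    this.timer = setTimeout(() => this.flush(), this.maxDelay);
  }
  flush() {
    if (this.timer) { clearTimeout(this.timer); this.timer = null; }
    if (!this.queue.length) return;
    const batch = this.queue;
    this.queue = [];
    this.size = 0;
    this.flushFn(batch);
  }
}

const batches = [];
const buf = new BatchBuffer(b => batches.push(b), { maxBytes: 10 });
buf.write('hello');  // 5 bytes: queued, delay timer armed
buf.write('world!'); // 11 bytes total: size threshold hit, flushed at once
```

With `maxDelay` set to 0 or less, a real implementation could skip the timer and flush on the next tick.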

Contributor

rauchg commented Feb 12, 2014

You mean so that the client-side socket assumes the handshake is gonna work?

Contributor

laino commented Feb 12, 2014

Yes. And the server shouldn't flush after every message; instead it should wait a little for more messages, writing the reply to the handshake along with more messages in one response.

Contributor

rauchg commented Feb 12, 2014

Interesting… it's very tempting to add this since it can result in a pretty sizable performance improvement. On the other hand it's scary to add this much complexity, and we could try to do it on userland.

Contributor

3rd-Eden commented Feb 12, 2014

So basically you want to save bandwidth at the cost of latency. Creating a "near" realtime experience instead of a realtime experience.

On Feb 12, 2014, at 20:55, binlain notifications@github.com wrote:

Yes. And the server shouldn't flush after every message and instead wait a little for more messages to write the reply to the handshake along with more messages in one request.



Contributor

rauchg commented Feb 12, 2014

No, he definitely wants to reduce latency. If I'm following right this is akin to the ømq "faster than tcp" approach based on buffering http://zeromq.org/area:faq#toc6

Contributor

laino commented Feb 12, 2014

In the end it will save bandwidth AND reduce latency.
Because this is how it conventionally worked:

  1. Send handshake
  2. Wait for reply to handshake
  3. Send message
  4. Wait for some response to that message

Almost all use cases of engine.io will work like that.

New model:

  1. Send handshake and first message
  2. Wait for reply to the handshake and response to the message

To the user everything will look the same, except that engine.io will intelligently combine the messages and make everything magically faster.
The delay to wait before finally sending out the messages could be user-configurable, where a delay of zero or less would mean flushing immediately. From what I've seen, engine.io already uses a write queue, so this could be done in a few lines of code (I have a 'working' prototype on a branch of my engine.io fork, but it's not ready)
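
On the wire, the new model amounts to letting the handshake response carry message packets too. The polling transport already concatenates packets with a length prefix (`<length>:<packet>`); the sketch below is a simplified stand-in, not engine.io's actual encoder, and the `sid` and message contents are dummy values:

```javascript
// Simplified stand-in for engine.io's polling payload encoding:
// each packet is prefixed with its length and a colon.
function encodePayload(packets) {
  return packets.map(p => `${p.length}:${p}`).join('');
}

// Packet type 0 = open (handshake), 4 = message.
const handshake = '0' + JSON.stringify({ sid: 'abc123', upgrades: ['websocket'] });
const first = '4' + JSON.stringify({ type: 'filelist', files: [] });

// One HTTP response now completes the handshake AND delivers the first message,
// instead of the client polling again after the open packet.
const body = encodePayload([handshake, first]);
```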

Contributor

rauchg commented Feb 12, 2014

It also reminds me of the QUIC roundtrip reduction approach

Round-trip times, roughly defined by the speed of light, are essentially fixed; the way to decrease connection latency is to make fewer round-trips. Much of the work on QUIC is concentrated on reducing the round trips required when establishing a new connection, including the handshake step, encryption setup, and initial data requests. QUIC clients would, for example, include the session negotiation information in the initial packet.

http://en.wikipedia.org/wiki/QUIC

Contributor

laino commented Feb 12, 2014

And in my prototype I only use setImmediate(this.flush), which gives the user just enough time to write something after engine.io informs him of the new client. This doesn't introduce any extra latency (well, maybe a few nanoseconds) but saves at least one round-trip.
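
A minimal illustration of that setImmediate-based deferral: sends made during the same turn of the event loop end up in a single flush. This is toy code, not the actual prototype; `makeSocket` is an invented helper:

```javascript
// Toy socket that defers flushing to setImmediate, so all sends made in the
// same turn of the event loop are coalesced into one batch.
function makeSocket(onFlush) {
  const queue = [];
  let scheduled = false;
  return {
    send(msg) {
      queue.push(msg);
      if (!scheduled) {
        scheduled = true;
        setImmediate(() => {
          scheduled = false;
          onFlush(queue.splice(0)); // drain and deliver the whole batch
        });
      }
    }
  };
}

const flushes = [];
const socket = makeSocket(batch => flushes.push(batch));
socket.send('a');
socket.send('b'); // same tick: coalesced with 'a' into one flush
```

One flush, two messages: the event handler gets a chance to respond to the new client before anything goes out, at no added latency.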

Contributor

laino commented Feb 12, 2014

I finished the prototype. One test case had to be modified very slightly by increasing a timeout. All tests are passing. I'll have to play around with it a little. Currently it uses setImmediate, which would have to be replaced by setTimeout(f, 0). But first I'll do some testing in some real-life applications.

Contributor

mokesmokes commented Feb 12, 2014

It relates to this as well: LearnBoost#215
So here's an idea:

  1. Connection request contains the first client message in the query string. This part of the query string will of course not be repeated in later XHRs during this connection. So in the API we can add an initialMessage string to the connection options.
  2. Custom handshake response can return server data in the connection ack.
  3. When a message is received in the connection ack, the message event can be fired on the next tick after the open event.
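
Client-side, step 1 could look roughly like this; `initialMessage` is the proposed (not yet existing) option, and `buildConnectUrl` is an invented helper, not part of engine.io's API:

```javascript
// Hypothetical sketch: attach the first client message to the connection
// request's query string, so the server sees it during the handshake.
function buildConnectUrl(base, opts) {
  const params = new URLSearchParams({
    EIO: '3',
    transport: 'polling',
  });
  if (opts.initialMessage !== undefined) {
    // Step 1: piggyback the first message on the connection request.
    params.set('initialMessage', opts.initialMessage);
  }
  return `${base}/engine.io/?${params.toString()}`;
}

const url = buildConnectUrl('https://example.com', {
  initialMessage: JSON.stringify({ rpc: 'getFilelist' })
});
// Steps 2-3 happen server-side: the handshake ack can carry a reply, and the
// client fires `message` on the tick after `open`.
```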
Contributor

rauchg commented Feb 12, 2014

Query strings have limits in size and also are commonly logged. Having messages in the query string would break the principle of least astonishment. If you start looking at the traffic and you see your first message in a query string you'll be like:

Contributor

laino commented Feb 12, 2014

Yep. POST is probably a better idea.

Contributor

laino commented Feb 12, 2014

Playing around with it, I don't think I understand engine.io well enough yet. Might take me some time.

Contributor

mokesmokes commented Feb 12, 2014

Yup, could change the connection request to POST and have the initial message in the body. Then all you need is an optional handshake interceptor on the server side (similar to socket.io 0.x) and it's done. Am I right?

Contributor

rauchg commented Feb 13, 2014

@binlain please join us in ##socket.io freenode when you have a chance

Contributor

mokesmokes commented Feb 18, 2014

Btw - using the request body in the initial POST request will only work for XHR. A WebSocket connection must be opened with GET according to the spec, so for that transport we can only use the query string, or not support this at all.

Contributor

rauchg commented Feb 18, 2014

Also just noticed the PR title should be Fewer instead of Less

Contributor

rauchg commented Feb 18, 2014

Kidding (mostly)

Contributor

mokesmokes commented Feb 20, 2014

So any thoughts about proceeding on this by changing xhr open to POST and an optional initialMessage in the request body, and adding this message to the query string for websockets? This coupled with a custom handshake function server-side.

Contributor

rauchg commented Feb 20, 2014

What do you mean by a custom handshake function?

Contributor

mokesmokes commented Feb 20, 2014

I mean that the user can supply a function that will accept or reject the connection. Of course, this function can also act on the initial message, if supplied.

Contributor

rauchg commented Feb 21, 2014

Sounds complex. Why not let the user do that upon the open event ?

Contributor

mokesmokes commented Feb 21, 2014

Actually you're right. If the client can send an initial message in the connection request a custom handshake is of less importance, and the client will wait for a server response rather than immediately sending another message packet following the open. So can we agree on the above? XHR open is a POST and for websockets initial message in the query string?

Contributor

laino commented Feb 21, 2014

I wouldn't put it in the query string, just use POST if possible and if it isn't supported... well then we'll have to live with more round-trips. And websockets do have lower latency anyways, because we're skipping the TCP handshake for each message (unless we have keep-alive)

Contributor

mokesmokes commented Feb 21, 2014

A WebSocket connection cannot be opened with POST, the spec requires GET, so there's no option besides the query string unless we want to do crazy stuff with cookies, which is probably a worse idea. So this idea is really to target Android versions prior to KitKat: I don't really see the importance of this feature for non-mobile, and iPhone and KitKat can use websockets with the rememberUpgrade feature, which will almost always work (I'm assuming most apps will run engine.io over SSL, so websockets actually work reliably).

I remember client side strophe.js deferring all sends with setTimeout(fun, 0) (or some configurable amount) and also providing socket.flush() to be able to send immediately if needed. Often this saves a roundtrip when multiple rpc calls would be triggered by some user event or for example when initializing a modularized app:

Currently:

socket.send(); // Would trigger a roundtrip
/* Somewhere later in the same execution flow */
socket.send(); // Would be queued until response
/* Somewhere later in the same execution flow */
socket.send(); // Would be queued until response

With deferring:

socket.send(); // Would be queued until next tick
/* Somewhere later in the same execution flow */
socket.send(); // Would be queued until next tick
// Optionally socket.flush(); would try to flush right away
/* Somewhere later in the same execution flow */
socket.send(); // Would be queued until next tick
// Next tick all would be flushed in batch, saving a round-trip

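The strophe.js-style API sketched above, with deferred sends and an explicit flush() escape hatch, could look like this (toy code; the socket shape and `deferMs` are assumptions, not any library's actual API):

```javascript
// Toy deferred-send socket: sends are queued and flushed together after
// setTimeout(fn, deferMs); flush() forces the batch out immediately.
function deferredSocket(transmit, deferMs = 0) {
  let queue = [];
  let timer = null;
  function flush() {
    if (timer) { clearTimeout(timer); timer = null; }
    if (queue.length) transmit(queue.splice(0));
  }
  function send(msg) {
    queue.push(msg);
    if (!timer) timer = setTimeout(flush, deferMs); // arm the deferral once
  }
  return { send, flush };
}

const sent = [];
const sock = deferredSocket(batch => sent.push(batch));
sock.send('rpc1');
sock.send('rpc2');
sock.flush(); // force the batch out now: one transmission, two calls
```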
Contributor

darrachequesne commented Nov 18, 2016

Closing due to inactivity, please reopen if needed.
