Handshake data flush bug #289

wants to merge 2 commits

4 participants


The steps to reproduce the issue are somewhat complex, but we encountered it when proxying concurrently to many different back-end servers. Somewhere between 30% and 50% of the time the handshake would get corrupted. It appears the assumption was being made that if the second socket.write had been flushed, the first one would have been flushed as well, but apparently that is not always the case; or perhaps, because noDelay is set, the two writes were being split into separate responses instead of the single response they are supposed to be. Either way, the source of the problem was the double socket.write being used to push the string data and the binary data.

I condensed these into a single write by reconstructing the buffer when modifications were necessary, and the handshake failures seem to have stopped.
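
Roughly, the single-write approach looks like this (a minimal sketch; writeHandshake, sdata, and bdata are illustrative names mirroring the diff below, not existing node-http-proxy API):

    // Combine the rewritten handshake headers (a string) and the
    // non-printable remainder (a Buffer) into one Buffer, so a single
    // socket.write flushes both parts together.
    function writeHandshake(socket, sdata, bdata) {
      var headerLength = Buffer.byteLength(sdata);
      var combined = new Buffer(headerLength + bdata.length);
      combined.write(sdata, 0, headerLength, 'utf8');
      bdata.copy(combined, headerLength, 0, bdata.length);
      return socket.write(combined); // true if the data was flushed to the kernel
    }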


@piscisaureus @AvianFlu @mmalecki Thoughts? This is clearly your realm of expertise.


@ashaffer Copying all data into a single buffer makes this particular problem go away, but it doesn't solve the underlying cause. Making two write calls should just work. So I'm curious what kind of "corruption" you're seeing, e.g. are buffers truncated, are bytes changed, do packets arrive in the wrong order?


Unfortunately I never pinned down the exact nature of the corruption that was occurring. My best guess is that the first write was sometimes being flushed completely before the second write happened, and this response splitting was causing issues in some way. Perhaps a chunked-encoding header would fix it as well (since during the handshake it's still standard HTTP, which cares about such things?).
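
To illustrate what I mean by splitting (a standalone toy example, nothing to do with node-http-proxy's code): two writes on one TCP socket may arrive at the peer as one 'data' chunk or as two, although always in order.

    var net = require('net');

    // Log how the sender's writes were chunked on arrival.
    var server = net.createServer(function (conn) {
      conn.on('data', function (chunk) {
        console.log('received chunk of', chunk.length, 'bytes');
      });
    });

    server.listen(7000, function () {
      var client = net.connect(7000, function () {
        client.setNoDelay(true);
        client.write('HTTP/1.1 101 Switching Protocols\r\n\r\n'); // printable part
        client.write(new Buffer([0x00, 0x01, 0x02]));             // binary tail
        client.end();
      });
    });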

If there are any decent debugging tools out there for WebSockets, I'd be happy to run some tests for you, but the best I've found was manually poring through tcpdump logs, which was incredibly tedious and didn't yield much.


@ashaffer Unless your backend service is very poorly written, response splitting should never be a problem. Remember that TCP is a stream-oriented protocol. If this is really the cause, then you probably want to fix your backend. What I would advise you to look into, however, is whether the handshake makes it to the backend in its entirety. If you see a TCP FIN or RST packet (an 'end' or 'error' event, in node terms) before the handshake is done, that could be caused by a node-http-proxy bug.
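
Something along these lines would tell you (a rough sketch; backendSocket is whatever socket the proxy has open to your backend, not an existing node-http-proxy variable):

    // Hypothetical debugging hook: warn if the backend connection ends or
    // errors before we have seen its "101 Switching Protocols" response.
    var sawHandshake = false;

    backendSocket.once('data', function (chunk) {
      sawHandshake = /^HTTP\/1\.1 101/.test(chunk.toString());
    });

    backendSocket.on('end', function () {
      if (!sawHandshake) console.log('backend sent FIN before the handshake completed');
    });

    backendSocket.on('error', function (err) {
      if (!sawHandshake) console.log('backend errored (RST?) before the handshake completed:', err);
    });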


Sorry, I should elaborate on the structure of our setup. We are proxying twice (both times through node-http-proxy), with SockJS (0.3.1) as the final backend. The foremost proxy splits requests between 3 backend node-http-proxy servers, which in turn each proxy to a single final server instance. We have this somewhat odd setup because we have extended the Meteor project to support instanced sub-applications, and so have put our own proxy in front of several Meteor instances, each of which already does one layer of proxying. However, the Meteor proxy instances are a 1:1 mapping between proxy and backend, whereas our subapp proxy is not.

Each client will, at any given time, be connected (via the proxies) to all 3 backend server instances. We have localized the issue to the foremost proxy instance, where the fan-out occurs. It seems to occur around 30% of the time, which means we would often get one socket failing and two working, although plenty of the time we'd get other configurations, like all 3 working or all 3 failing.
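
For concreteness, the fan-out layer is shaped roughly like this (a simplified sketch; the ports and routing function are made up, and our real routing is not random):

    var http = require('http'),
        httpProxy = require('http-proxy');

    // One HttpProxy per backend node-http-proxy instance (illustrative ports).
    var backends = [9001, 9002, 9003].map(function (port) {
      return new httpProxy.HttpProxy({ target: { host: 'localhost', port: port } });
    });

    // Stand-in for our real routing logic.
    function pick(req) {
      return backends[Math.floor(Math.random() * backends.length)];
    }

    var server = http.createServer(function (req, res) {
      pick(req).proxyRequest(req, res);
    });

    // WebSocket upgrades go through proxyWebSocketRequest.
    server.on('upgrade', function (req, socket, head) {
      pick(req).proxyWebSocketRequest(req, socket, head);
    });

    server.listen(8000);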

The response-splitting issue is only my best guess, as I can't think of why else two separate socket.write calls, as opposed to one, would so completely resolve the issue for us. I'm definitely not saying that is the issue (based on my understanding of TCP it shouldn't be), but it, or something like it, fits the evidence more closely than any other explanation I can think of. I'd certainly be interested to hear other ideas; I've been trying to nail this down for a few weeks now, and this was the solution I came up with.


Bjorn Stromberg:

@ashaffer It sounds like your backend isn't handling things well. Looking at SockJS, this isn't very pretty:

didMessage: (payload) ->
    if @ws and @session and payload.length > 0
        message = try
            JSON.parse(payload)
        catch x
            return @didClose(1002, 'Broken framing.')

Yeah, that looks wrong. Maybe file an issue with SockJS?


That looked promising, but I just tested it and neither the else case of that if nor the catch is being hit when the connections fail.


So, that code is not being run at all, or message comes out healthy?


Message seems to be coming out healthy.


Also, this code is proxying data on its way back to the client, not going to the backend, so I think a SockJS bug is fairly unlikely, as SockJS isn't touching the data between when this happens and when the connection breaks.

The only options seem to be a bug in node-http-proxy, a race condition in node's own handling of write flushes, or a bug in Chrome's and Firefox's implementations of WebSockets. Whichever happens to be the culprit, condensing to a single socket.write fixes the problem.


@ashaffer Can you condense this to a reproducible test case? Nodejitsu uses node-http-proxy in production, load-balancing websockets on a very large scale, and we have never seen this problem.

Your infrastructure is very complex, you haven't described the exact nature of the handshake corruption, and there isn't a simple, standalone reproduction of the problem.

We're happy to help, but there isn't really a lot to go on, here.
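
Even something blunt would be a start, e.g. a script that opens many concurrent connections through a single proxy/backend pair and counts failed handshakes (a sketch only; it assumes your proxy listens on port 8000 and uses the `ws` client module):

    var WebSocket = require('ws');

    var total = 50, opened = 0, failed = 0;

    // Open many concurrent connections through the proxy and report how
    // many handshakes succeed versus error out.
    for (var i = 0; i < total; i++) {
      var ws = new WebSocket('ws://localhost:8000/');
      ws.on('open', function () { opened++; this.close(); });
      ws.on('error', function (err) { failed++; console.log('handshake failed:', err.message); });
    }

    setTimeout(function () {
      console.log('opened', opened, 'of', total, '; failed', failed);
    }, 5000);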

@indexzero closed this
Showing 7 additions and 4 deletions.

lib/node-http-proxy/http-proxy.js (+7 −4)

    @@ -607,6 +607,7 @@ HttpProxy.prototype.proxyWebSocketRequest = function (req, socket, head, buffer)
         // connections.
         function proxyError (err) {
    +      console.log('proxy error', err);
           process.nextTick(function () {
    @@ -669,7 +670,7 @@ HttpProxy.prototype.proxyWebSocketRequest = function (req, socket, head, buffer)
         sdata = sdata.substr(0, sdata.search(CRLF + CRLF));
         // Get the Non-Printable data
    -    data = data.slice(Buffer.byteLength(sdata), data.length);
    +    var bdata = data.slice(Buffer.byteLength(sdata), data.length);
         if (self.source.https && !self.target.https) {
    @@ -677,6 +678,9 @@ HttpProxy.prototype.proxyWebSocketRequest = function (req, socket, head, buffer)
         // HTTP then replace `ws` with `wss` in the data sent back to the client.
         sdata = sdata.replace('ws:', 'wss:');
    +    data = new Buffer(Buffer.byteLength(sdata) + bdata.length);
    +    data.write(sdata, 0, Buffer.byteLength(sdata), 'utf8');
    +    bdata.copy(data, Buffer.byteLength(sdata), 0, bdata.length);
         try {
    @@ -684,8 +688,7 @@ HttpProxy.prototype.proxyWebSocketRequest = function (req, socket, head, buffer)
         // Write the printable and non-printable data to the socket
         // from the original incoming request.
    -    self.emit('websocket:handshake', req, socket, head, sdata, data);
    -    socket.write(sdata);
    +    self.emit('websocket:handshake', req, socket, head, data, data);
         var flushed = socket.write(data);
         if (!flushed) {
    @@ -700,7 +703,7 @@ HttpProxy.prototype.proxyWebSocketRequest = function (req, socket, head, buffer)
         setTimeout(function () {
    -      }, 100);
    +      }, 700);
         catch (ex) {