I've encountered problems sending somewhat large messages (more than 4 KB) from Firefox 18: the connection is closed, and the client reports error 1006 (which just means abnormal termination). Nothing is logged on the server.
Using the same client code against a Node.js server I don't get this behavior, which leads me to believe the problem is in Tornado.
I don't have a repeatable case, but I wanted to at least note the issue in case anyone else encounters it, as it was hard to track down. Oddly, this did not happen on localhost, only when connecting to a remote server.
This sounds exactly like what I've been chasing. I'm sending a lot of data from Tornado using WebSockets, using multiple streams. Everything works from a local server; from a remote server, the client gets the connection close before all data is received.
After eliminating the client code (the problem exists with both a Python ws4py client and a C libwebsockets client) and the HAProxy frontend on the remote box, I'm left with the Tornado server. Digging into its code, the issue appears to be that the handler is closed while its stream still has buffered write data (left over when send() in IOStream._handle_write() can't write the whole buffer under network congestion).
I'm attempting to resolve this in my own code by checking ws_connection.stream.writing(), but what should I do if the stream is still writing? time.sleep() called from an IOLoop handler doesn't allow the IOLoop to run. I wonder if the real solution is to define a pending-close state for IOStream and IOLoop, and not actually close until all data is written (with options to close forcibly and to set a timeout).
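A minimal sketch of that pending-close idea, assuming a hypothetical helper (the name `close_when_drained` and the `do_close`/`schedule` parameters are illustrative, not Tornado API): instead of blocking with time.sleep(), reschedule a check on each IOLoop turn and force the close after a timeout.

```python
import time

def close_when_drained(stream, do_close, schedule, timeout=5.0, deadline=None):
    """Hypothetical pending-close: wait for the stream's write buffer to
    drain, then close; force the close once `timeout` seconds have passed."""
    if deadline is None:
        deadline = time.time() + timeout
    if stream.writing() and time.time() < deadline:
        # Still flushing and within the deadline: let the IOLoop run one
        # turn, then check again (never sleep inside an IOLoop handler).
        schedule(lambda: close_when_drained(stream, do_close, schedule,
                                            timeout, deadline))
    else:
        do_close()
```

In real code, `schedule` would be something like `IOLoop.instance().add_callback` and `do_close` the actual stream/handler close; the point is only that the close is deferred rather than performed while data is still buffered.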
What's closing the handler? Do you mean that the client is half-closing its side of the connection and waiting to receive the rest of the data? (That's possible with HTTP, but I didn't think it could happen with WebSockets.) We have a pending-close state in tornado.httpserver.HTTPConnection; maybe the write/finish logic needs to be either copied into the websocket code or moved down into IOStream itself.
The client isn't doing anything apart from listening for messages until it sees a close from the server.
I've resolved things in my code by using an IOLoop callback, scheduled from my sending thread, to close the stream; the callback reschedules itself while the stream is still writing. i.e.:
stream = self._handler.ws_connection.stream
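Filled out, that workaround looks roughly like the following sketch (the function name `close_when_flushed` is illustrative; `schedule` stands in for `IOLoop.instance().add_callback`, which is the IOLoop method that is safe to call from another thread):

```python
def close_when_flushed(handler, schedule):
    """Close the websocket handler once its stream's write buffer has
    drained, rescheduling this check while data is still being written."""
    stream = handler.ws_connection.stream
    if stream.writing():
        # Buffered data remains: run this check again on the next
        # IOLoop iteration instead of closing and dropping the data.
        schedule(lambda: close_when_flushed(handler, schedule))
    else:
        handler.close()

# From the sending thread, hand the whole close over to the IOLoop thread:
# IOLoop.instance().add_callback(
#     lambda: close_when_flushed(self._handler,
#                                IOLoop.instance().add_callback))
```
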