Refactor TCPServer and HTTPServer to support TLS NPN #533

Open
wants to merge 1 commit

2 participants

@alekstorm
  • TLS NPN allows one of several protocols to be selected after a TCP connection is established. A layer of indirection was added to TCPServer to allow it to delegate handling of a TCP connection to whichever protocol handler was negotiated. If the npn_protocols parameter (a list of (name, handler) tuples in order of preference) was passed to the constructor, the connection is over TLS, and NPN succeeds, the handler for the chosen name is called. Otherwise, the protocol constructor parameter is called. For example, SPDYServer is essentially:
  class SPDYServer(TCPServer):
      def __init__(self, request_callback):
          http_protocol = HTTPServerProtocol(request_callback)
          TCPServer.__init__(self, http_protocol,
              npn_protocols=[
                  ('spdy/2', SPDYServerProtocol(request_callback)),
                  ('http/1.1', http_protocol)])
  • TCPServer was moved from netutil to its own module, tcpserver.

  • Since utilizing NPN support in Python 3.3 requires the ssl.SSLContext class, which isn't available in Python 2.x, the wrap_socket() top-level function was added to netutil to abstract away these details. In addition, the SUPPORTS_NPN constant was added as a convenience for determining whether the system supports NPN.

  • Previously, web.RequestHandler formatted the HTTP response itself and wrote it directly to the IOStream. This responsibility has been moved to the HTTPRequest.connection object, which must provide the write_preamble() and write() methods - the former writes the response status line and headers, while the latter writes a chunk of the response body (a short example follows this list).

  • Although IOStream.connect() already takes a callback parameter, in SSLIOStream it isn't called until the SSL handshake (which is where NPN negotiation happens) is completed - and TCPServer, which doesn't call connect(), won't know which protocol handler to execute until that point. To fix this, a set_connect_callback method was added to IOStream.

  • Snippets that conditionally imported BytesIO and ssl were moved into util and netutil, respectively. These symbols are now imported from there.
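
For instance, the docstring example in the updated httpserver.py (taken from the diff below) now writes its response through the connection object:

    from tornado import httpserver, ioloop

    def handle_request(request):
        message = "You requested %s\n" % request.uri
        # write_preamble() sends the status line and headers;
        # write(..., finished=True) sends the body and finishes the request.
        request.connection.write_preamble(
            status_code=200,
            headers=[("Content-Length", str(len(message)))])
        request.connection.write(message, finished=True)

    http_server = httpserver.HTTPServer(handle_request)
    http_server.listen(8888)
    ioloop.IOLoop.instance().start()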

From the SPDY fork

@alekstorm alekstorm Refactor TCPServer and HTTPServer to support TLS NPN
37e5bff
@bdarnell
Owner

I don't think TCPServer is the right layer for NPN stuff to happen. We'll eventually need to refactor the way IOStream and SSLIOStream interact to support protocols that use the STARTTLS pattern; by that point I'd like to deprecate all the ssl-related stuff in TCPServer (i.e. HTTPServer.handle_stream would get a regular IOStream that it would immediately switch to SSL mode if so configured).

In the meantime, my thinking for NPN support is that things that take ssl_options now should also take an ssl_context, but the only npn-specific functionality in any current tornado code would be an accessor to selected_npn_protocol on SSLIOStream. SPDYServer would then pass an SSLContext initialized with the appropriate npn options to TCPServer.__init__, and in handle_stream it would look at selected_npn_protocol to decide whether to go into HTTP mode or SPDY mode.
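
A rough sketch of that approach, for illustration only (the ssl_context constructor argument and the selected_npn_protocol() accessor assumed here don't exist in Tornado at this point, and SPDYConnection is a stand-in for the SPDY handler):

    import ssl

    from tornado.httpserver import HTTPConnection
    from tornado.netutil import TCPServer


    class SPDYServer(TCPServer):
        def __init__(self, request_callback, **kwargs):
            self.request_callback = request_callback
            context = ssl.SSLContext(ssl.PROTOCOL_TLSv1)
            context.load_cert_chain("mydomain.crt", "mydomain.key")
            # NPN protocol names in order of preference (Python 3.3+ / OpenSSL 1.0.1+).
            context.set_npn_protocols(["spdy/2", "http/1.1"])
            # 'ssl_context' is the hypothetical argument proposed above.
            TCPServer.__init__(self, ssl_context=context, **kwargs)

        def handle_stream(self, stream, address):
            # selected_npn_protocol() is the proposed accessor; it would return the
            # protocol chosen during the handshake, or None if NPN wasn't negotiated.
            if stream.selected_npn_protocol() == "spdy/2":
                SPDYConnection(stream, address, self.request_callback)  # illustrative
            else:
                HTTPConnection(stream, address, self.request_callback)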

As for the other changes in this diff, I'll look more closely at them if they either get their own pull request or we resolve things with this one, but here are my initial comments:

  • I'm still -1 on giving TCPServer its own file.
  • I'm not sure IOStream.set_connect_callback is necessary with the npn changes outlined above.
  • A new wrap_socket abstraction seems reasonable for bridging the gap between the old-style ssl_options and SSLContexts, but again I'd let apps that need NPN pass in the whole SSLContext.
  • Moving the HTTP response formatting to the connection makes sense. We use the term headers elsewhere in tornado to refer to the status line+header block, and I think I'd prefer to keep it that way instead of introducing the term preamble.
  • Consolidating the conditional import of BytesIO and ssl is a good change.
@alekstorm
@bdarnell
Owner

STARTTLS for HTTP is a non-starter, but it's necessary for other protocols (notably IMAP and SMTP). To be clear, the change I'm referring to would be to consolidate IOStream and SSLIOStream in one class, with a method to start an SSL handshake (and maybe the reverse, like SSLSocket.unwrap). In this world TCPServer would be ignorant of SSL, but an HTTPServer configured for ssl would unconditionally call start_tls() as soon as it was handed a new IOStream.
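
For illustration, under that merged-stream design an SSL-enabled HTTPServer.handle_stream might look roughly like this (start_tls() and the _start_connection helper are hypothetical; nothing like this exists in the tree yet, and the usual HTTPServer attributes are assumed to be set in __init__):

    from tornado.httpserver import HTTPConnection
    from tornado.netutil import TCPServer


    class HTTPServer(TCPServer):
        def handle_stream(self, stream, address):
            # TCPServer now always hands over a plain, unencrypted IOStream.
            if self.ssl_options is not None:
                # Hypothetical method: wrap the socket and run the SSL handshake,
                # calling back once it completes (NPN results would be available then).
                stream.start_tls(server_side=True,
                                 ssl_options=self.ssl_options,
                                 callback=lambda: self._start_connection(stream, address))
            else:
                self._start_connection(stream, address)

        def _start_connection(self, stream, address):
            HTTPConnection(stream, address, self.request_callback,
                           self.no_keep_alive, self.xheaders)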

I think it should be an error to specify both ssl_options and ssl_context. I'm indifferent as to whether we create accessors on SSLIOStream for things like selected_npn_protocol (I was thinking we already had a precedent for this in get_ssl_certificate, but now that I look, that's on HTTPRequest, and it just reaches around the IOStream and goes straight to the socket).

I'm strongly opposed to TCPServer processing the NPN information - in my mind there is only ever one application-level protocol which may switch between different modes depending on the content of the communication. It's just a historical accident that the switch between SPDY and HTTP is done out-of-band by NPN while the switch between HTTP 1.0 and 1.1 is done by a version number in the first line. (and websockets use a third switching mechanism).

I hadn't grokked that you wanted set_connect_callback to set a "connect" callback for server-side connections. Aside from the slightly-confusing name it seems reasonable, although this concern vanishes if we merge IOStream and SSLIOStream and start the handshake with start_tls(callback) instead of doing it in the constructor.

The headers vs first-line confusion that you're afraid of already exists in both HTTPServer and SimpleAsyncHTTPClient (on_headers callbacks) and RequestHandler (_generate_headers()), and it hasn't been a problem. That said, I don't feel too strongly about this.

@alekstorm

Just tried moving TCPServer back into netutil, but that creates a circular dependency between netutil and iostream. What would you like me to do instead? This is one of the reasons that utility modules are generally kept as lightweight and dependency-free as possible.
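
For reference, the cycle comes from imports on both sides (simplified; see the iostream.py and tcpserver.py changes in the diff):

    # tornado/iostream.py (after this change)
    from tornado.netutil import ssl, wrap_socket        # SSLIOStream needs wrap_socket

    # tornado/netutil.py (if TCPServer were moved back here)
    from tornado.iostream import IOStream, SSLIOStream  # _handle_connection needs these

    # Each module would then import the other at module load time: a circular import.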

@bdarnell bdarnell added the tcpserver label
Commits on Jun 8, 2012
  1. @alekstorm

    Refactor TCPServer and HTTPServer to support TLS NPN

    alekstorm authored
tornado/httpserver.py (217 changed lines)
@@ -27,21 +27,17 @@ class except to start a server at the beginning of the process
from __future__ import absolute_import, division, with_statement
import Cookie
+import functools
+import httplib
import logging
import socket
import time
+from tornado import httputil, iostream, netutil, stack_context
from tornado.escape import utf8, native_str, parse_qs_bytes
-from tornado import httputil
-from tornado import iostream
-from tornado.netutil import TCPServer
-from tornado import stack_context
-from tornado.util import b, bytes_type
-
-try:
- import ssl # Python 2.6+
-except ImportError:
- ssl = None
+from tornado.netutil import ssl
+from tornado.tcpserver import TCPServer
+from tornado.util import b
class HTTPServer(TCPServer):
@@ -59,9 +55,10 @@ class HTTPServer(TCPServer):
def handle_request(request):
message = "You requested %s\n" % request.uri
- request.write("HTTP/1.1 200 OK\r\nContent-Length: %d\r\n\r\n%s" % (
- len(message), message))
- request.finish()
+ request.connection.write_preamble(
+ status_code=200,
+ headers=[("Content-Length", str(len(message)))])
+ request.connection.write(message, finished=True)
http_server = httpserver.HTTPServer(handle_request)
http_server.listen(8888)
@@ -72,76 +69,48 @@ def handle_request(request):
in `HTTPServer` is HTTP/1.1 keep-alive connections. We do not, however,
implement chunked encoding, so the request callback must provide a
``Content-Length`` header or implement chunked encoding for HTTP/1.1
- requests for the server to run correctly for HTTP/1.1 clients. If
- the request handler is unable to do this, you can provide the
- ``no_keep_alive`` argument to the `HTTPServer` constructor, which will
- ensure the connection is closed on every request no matter what HTTP
- version the client is using.
-
- If ``xheaders`` is ``True``, we support the ``X-Real-Ip`` and ``X-Scheme``
- headers, which override the remote IP and HTTP scheme for all requests.
- These headers are useful when running Tornado behind a reverse proxy or
- load balancer.
-
- `HTTPServer` can serve SSL traffic with Python 2.6+ and OpenSSL.
- To make this server serve SSL traffic, send the ssl_options dictionary
- argument with the arguments required for the `ssl.wrap_socket` method,
- including "certfile" and "keyfile"::
-
- HTTPServer(applicaton, ssl_options={
- "certfile": os.path.join(data_dir, "mydomain.crt"),
- "keyfile": os.path.join(data_dir, "mydomain.key"),
- })
-
- `HTTPServer` initialization follows one of three patterns (the
- initialization methods are defined on `tornado.netutil.TCPServer`):
-
- 1. `~tornado.netutil.TCPServer.listen`: simple single-process::
-
- server = HTTPServer(app)
- server.listen(8888)
- IOLoop.instance().start()
-
- In many cases, `tornado.web.Application.listen` can be used to avoid
- the need to explicitly create the `HTTPServer`.
-
- 2. `~tornado.netutil.TCPServer.bind`/`~tornado.netutil.TCPServer.start`:
- simple multi-process::
-
- server = HTTPServer(app)
- server.bind(8888)
- server.start(0) # Forks multiple sub-processes
- IOLoop.instance().start()
-
- When using this interface, an `IOLoop` must *not* be passed
- to the `HTTPServer` constructor. `start` will always start
- the server on the default singleton `IOLoop`.
-
- 3. `~tornado.netutil.TCPServer.add_sockets`: advanced multi-process::
-
- sockets = tornado.netutil.bind_sockets(8888)
- tornado.process.fork_processes(0)
- server = HTTPServer(app)
- server.add_sockets(sockets)
- IOLoop.instance().start()
-
- The `add_sockets` interface is more complicated, but it can be
- used with `tornado.process.fork_processes` to give you more
- flexibility in when the fork happens. `add_sockets` can
- also be used in single-process servers if you want to create
- your listening sockets in some way other than
- `tornado.netutil.bind_sockets`.
+ requests for the server to run correctly for HTTP/1.1 clients.
+
+ For initialization patterns, see `TCPServer`.
+
+ :arg dict ssl_options: Keyword arguments to be passed to `ssl.wrap_socket`.
+
+ All other keyword arguments are passed to `HTTPServerProtocol`.
"""
- def __init__(self, request_callback, no_keep_alive=False, io_loop=None,
- xheaders=False, ssl_options=None, **kwargs):
+ def __init__(self, request_callback, io_loop=None, ssl_options=None,
+ no_keep_alive=False, xheaders=False):
+ protocol = HTTPServerProtocol(request_callback,
+ no_keep_alive=no_keep_alive, xheaders=xheaders)
+ npn_protocols = None
+ if ssl_options and netutil.SUPPORTS_NPN:
+ npn_protocols = [('http/1.1', protocol)]
+ TCPServer.__init__(self,
+ protocol=protocol,
+ npn_protocols=npn_protocols,
+ io_loop=io_loop,
+ ssl_options=ssl_options)
+
+
+class HTTPServerProtocol(object):
+ """A server protocol handler implementing the HTTP specification, meant
+ to be instantiated and passed to the `TCPServer` constructor.
+
+ :arg bool no_keep_alive: If ``True``, ensures the connection is closed on
+ every request no matter what HTTP version the client is using.
+
+ :arg bool xheaders: If ``True``, we support the ``X-Real-Ip`` and
+ ``X-Scheme`` headers, which override the remote IP and HTTP scheme for
+ all requests. These headers are useful when running Tornado behind a
+ reverse proxy or load balancer.
+
+ """
+ def __init__(self, request_callback, no_keep_alive=False, xheaders=False):
self.request_callback = request_callback
self.no_keep_alive = no_keep_alive
self.xheaders = xheaders
- TCPServer.__init__(self, io_loop=io_loop, ssl_options=ssl_options,
- **kwargs)
- def handle_stream(self, stream, address):
+ def __call__(self, stream, address, server):
HTTPConnection(stream, address, self.request_callback,
self.no_keep_alive, self.xheaders)
@@ -151,45 +120,67 @@ class _BadRequestException(Exception):
pass
-class HTTPConnection(object):
+class BaseHTTPConnection(object):
+ def __init__(self, stream, address, request_callback, xheaders=False):
+ self.stream = stream
+ self.address = address
+ self.request_callback = request_callback
+ self.xheaders = xheaders
+
+
+class HTTPConnection(BaseHTTPConnection):
"""Handles a connection to an HTTP client, executing HTTP requests.
- We parse HTTP headers and bodies, and execute the request callback
- until the HTTP conection is closed.
+ Parses HTTP headers and bodies, and executes the request callback,
+ providing itself as an interface for writing the response.
+
"""
def __init__(self, stream, address, request_callback, no_keep_alive=False,
xheaders=False):
- self.stream = stream
- self.address = address
- self.request_callback = request_callback
+ BaseHTTPConnection.__init__(self, stream, address, request_callback,
+ xheaders)
self.no_keep_alive = no_keep_alive
- self.xheaders = xheaders
- self._request = None
- self._request_finished = False
# Save stack context here, outside of any request. This keeps
# contexts from one request from leaking into the next.
self._header_callback = stack_context.wrap(self._on_headers)
self.stream.read_until(b("\r\n\r\n"), self._header_callback)
- self._write_callback = None
- def write(self, chunk, callback=None):
- """Writes a chunk of output to the stream."""
+ self._request = None
+ self._request_finished = False
+
+ def set_close_callback(self, callback):
+ self.stream.set_close_callback(callback)
+
+ def write(self, chunk, finished=False, callback=None):
+ """Writes a chunk of the response body to the stream."""
assert self._request, "Request closed"
if not self.stream.closed():
- self._write_callback = stack_context.wrap(callback)
- self.stream.write(chunk, self._on_write_complete)
-
- def finish(self):
+ self.stream.write(chunk, functools.partial(self._on_write_complete,
+ callback=stack_context.wrap(callback)))
+ if finished:
+ self._finish()
+
+ def write_preamble(self, status_code, reason=None, version="HTTP/1.1",
+ headers=None, finished=False, callback=None):
+ """Writes the response status line and headers to the stream."""
+ lines = [utf8(version + " " +
+ str(status_code) +
+ " " + (reason or
+ httplib.responses.get(status_code, '')))]
+ lines.extend([(utf8(n) + b(": ") + utf8(v)) for n, v in headers or []])
+ self.write(b("\r\n").join(lines) + b("\r\n\r\n"), callback=callback)
+ if finished:
+ self._finish()
+
+ def _finish(self):
"""Finishes the request."""
assert self._request, "Request closed"
self._request_finished = True
if not self.stream.writing():
self._finish_request()
- def _on_write_complete(self):
- if self._write_callback is not None:
- callback = self._write_callback
- self._write_callback = None
+ def _on_write_complete(self, callback=None):
+ if callback is not None:
callback()
# _on_write_complete is enqueued on the IOLoop whenever the
# IOStream's write buffer becomes empty, but it's possible for
@@ -247,7 +238,7 @@ def _on_headers(self, data):
self._request = HTTPRequest(
connection=self, method=method, uri=uri, version=version,
- headers=headers, remote_ip=remote_ip)
+ headers=headers, remote_ip=remote_ip, xheaders=self.xheaders)
content_length = headers.get("Content-Length")
if content_length:
@@ -289,6 +280,7 @@ def _on_request_body(self, data):
break
else:
logging.warning("Invalid multipart/form-data")
+ self._request._finish_time = time.time()
self.request_callback(self._request)
@@ -357,22 +349,31 @@ class HTTPRequest(object):
File uploads are available in the files property, which maps file
names to lists of :class:`HTTPFile`.
+ .. attribute:: xheaders
+
+ If ``True``, we support the ``X-Real-Ip`` and ``X-Scheme`` headers,
+ which override the remote IP and HTTP scheme for all requests. These
+ headers are useful when running Tornado behind a reverse proxy or load
+ balancer.
+
.. attribute:: connection
An HTTP request is attached to a single HTTP connection, which can
be accessed through the "connection" attribute. Since connections
are typically kept open in HTTP/1.1, multiple requests can be handled
sequentially on a single connection.
+
"""
def __init__(self, method, uri, version="HTTP/1.0", headers=None,
body=None, remote_ip=None, protocol=None, host=None,
- files=None, connection=None):
+ files=None, xheaders=False, priority=None, framing='http',
+ connection=None):
self.method = method
self.uri = uri
self.version = version
self.headers = headers or httputil.HTTPHeaders()
self.body = body or ""
- if connection and connection.xheaders:
+ if xheaders:
# Squid uses X-Forwarded-For, others use X-Real-Ip
self.remote_ip = self.headers.get(
"X-Real-Ip", self.headers.get("X-Forwarded-For", remote_ip))
@@ -392,6 +393,7 @@ def __init__(self, method, uri, version="HTTP/1.0", headers=None,
self.protocol = "https"
else:
self.protocol = "http"
+
self.host = host or self.headers.get("Host") or "127.0.0.1"
self.files = files or {}
self.connection = connection
@@ -423,19 +425,12 @@ def cookies(self):
self._cookies = {}
return self._cookies
- def write(self, chunk, callback=None):
- """Writes the given chunk to the response stream."""
- assert isinstance(chunk, bytes_type)
- self.connection.write(chunk, callback=callback)
-
- def finish(self):
- """Finishes this HTTP request on the open connection."""
- self.connection.finish()
- self._finish_time = time.time()
-
def full_url(self):
"""Reconstructs the full URL for this request."""
- return self.protocol + "://" + self.host + self.uri
+ url = self.protocol + "://" + self.host + self.path
+ if self.query:
+ url += '?' + self.query
+ return url
def request_time(self):
"""Returns the amount of time it took for this request to execute."""
tornado/iostream.py (19 changed lines)
@@ -22,19 +22,15 @@
import errno
import logging
import os
+import re
import socket
import sys
-import re
from tornado import ioloop
from tornado import stack_context
+from tornado.netutil import ssl, wrap_socket
from tornado.util import b, bytes_type
-try:
- import ssl # Python 2.6+
-except ImportError:
- ssl = None
-
class IOStream(object):
r"""A utility class to write to and read from a non-blocking socket.
@@ -212,6 +208,10 @@ def write(self, data, callback=None):
self._add_io_state(self.io_loop.WRITE)
self._maybe_add_error_listener()
+ def set_connect_callback(self, callback):
+ """Call the given callback when the stream is established."""
+ self._connect_callback = stack_context.wrap(callback)
+
def set_close_callback(self, callback):
"""Call the given callback when the stream is closed."""
self._close_callback = stack_context.wrap(callback)
@@ -622,10 +622,13 @@ def __init__(self, *args, **kwargs):
it will be used as additional keyword arguments to ssl.wrap_socket.
"""
self._ssl_options = kwargs.pop('ssl_options', {})
+ self._ssl_options['do_handshake_on_connect'] = False
+ self._npn_protocols = kwargs.pop('npn_protocols', None)
super(SSLIOStream, self).__init__(*args, **kwargs)
self._ssl_accepting = True
self._handshake_reading = False
self._handshake_writing = False
+ self._add_io_state(self.io_loop.READ)
def reading(self):
return self._handshake_reading or super(SSLIOStream, self).reading()
@@ -673,9 +676,7 @@ def _handle_write(self):
super(SSLIOStream, self)._handle_write()
def _handle_connect(self):
- self.socket = ssl.wrap_socket(self.socket,
- do_handshake_on_connect=False,
- **self._ssl_options)
+ self.socket = wrap_socket(self.socket, self._ssl_options, self._npn_protocols)
# Don't call the superclass's _handle_connect (which is responsible
# for telling the application that the connection is complete)
# until we've completed the SSL handshake (so certificates are
tornado/netutil.py (205 changed lines)
@@ -19,14 +19,11 @@
from __future__ import absolute_import, division, with_statement
import errno
-import logging
import os
import socket
import stat
-from tornado import process
from tornado.ioloop import IOLoop
-from tornado.iostream import IOStream, SSLIOStream
from tornado.platform.auto import set_close_exec
try:
@@ -35,189 +32,28 @@
ssl = None
-class TCPServer(object):
- r"""A non-blocking, single-threaded TCP server.
+SUPPORTS_NPN = getattr(ssl, 'HAS_NPN', False)
- To use `TCPServer`, define a subclass which overrides the `handle_stream`
- method.
- `TCPServer` can serve SSL traffic with Python 2.6+ and OpenSSL.
- To make this server serve SSL traffic, send the ssl_options dictionary
- argument with the arguments required for the `ssl.wrap_socket` method,
- including "certfile" and "keyfile"::
-
- TCPServer(ssl_options={
- "certfile": os.path.join(data_dir, "mydomain.crt"),
- "keyfile": os.path.join(data_dir, "mydomain.key"),
- })
-
- `TCPServer` initialization follows one of three patterns:
-
- 1. `listen`: simple single-process::
-
- server = TCPServer()
- server.listen(8888)
- IOLoop.instance().start()
-
- 2. `bind`/`start`: simple multi-process::
-
- server = TCPServer()
- server.bind(8888)
- server.start(0) # Forks multiple sub-processes
- IOLoop.instance().start()
-
- When using this interface, an `IOLoop` must *not* be passed
- to the `TCPServer` constructor. `start` will always start
- the server on the default singleton `IOLoop`.
-
- 3. `add_sockets`: advanced multi-process::
-
- sockets = bind_sockets(8888)
- tornado.process.fork_processes(0)
- server = TCPServer()
- server.add_sockets(sockets)
- IOLoop.instance().start()
-
- The `add_sockets` interface is more complicated, but it can be
- used with `tornado.process.fork_processes` to give you more
- flexibility in when the fork happens. `add_sockets` can
- also be used in single-process servers if you want to create
- your listening sockets in some way other than
- `bind_sockets`.
- """
- def __init__(self, io_loop=None, ssl_options=None):
- self.io_loop = io_loop
- self.ssl_options = ssl_options
- self._sockets = {} # fd -> socket object
- self._pending_sockets = []
- self._started = False
-
- def listen(self, port, address=""):
- """Starts accepting connections on the given port.
-
- This method may be called more than once to listen on multiple ports.
- `listen` takes effect immediately; it is not necessary to call
- `TCPServer.start` afterwards. It is, however, necessary to start
- the `IOLoop`.
- """
- sockets = bind_sockets(port, address=address)
- self.add_sockets(sockets)
-
- def add_sockets(self, sockets):
- """Makes this server start accepting connections on the given sockets.
-
- The ``sockets`` parameter is a list of socket objects such as
- those returned by `bind_sockets`.
- `add_sockets` is typically used in combination with that
- method and `tornado.process.fork_processes` to provide greater
- control over the initialization of a multi-process server.
- """
- if self.io_loop is None:
- self.io_loop = IOLoop.instance()
-
- for sock in sockets:
- self._sockets[sock.fileno()] = sock
- add_accept_handler(sock, self._handle_connection,
- io_loop=self.io_loop)
-
- def add_socket(self, socket):
- """Singular version of `add_sockets`. Takes a single socket object."""
- self.add_sockets([socket])
-
- def bind(self, port, address=None, family=socket.AF_UNSPEC, backlog=128):
- """Binds this server to the given port on the given address.
-
- To start the server, call `start`. If you want to run this server
- in a single process, you can call `listen` as a shortcut to the
- sequence of `bind` and `start` calls.
-
- Address may be either an IP address or hostname. If it's a hostname,
- the server will listen on all IP addresses associated with the
- name. Address may be an empty string or None to listen on all
- available interfaces. Family may be set to either ``socket.AF_INET``
- or ``socket.AF_INET6`` to restrict to ipv4 or ipv6 addresses, otherwise
- both will be used if available.
-
- The ``backlog`` argument has the same meaning as for
- `socket.listen`.
-
- This method may be called multiple times prior to `start` to listen
- on multiple ports or interfaces.
- """
- sockets = bind_sockets(port, address=address, family=family,
- backlog=backlog)
- if self._started:
- self.add_sockets(sockets)
- else:
- self._pending_sockets.extend(sockets)
-
- def start(self, num_processes=1):
- """Starts this server in the IOLoop.
-
- By default, we run the server in this process and do not fork any
- additional child process.
-
- If num_processes is ``None`` or <= 0, we detect the number of cores
- available on this machine and fork that number of child
- processes. If num_processes is given and > 1, we fork that
- specific number of sub-processes.
-
- Since we use processes and not threads, there is no shared memory
- between any server code.
-
- Note that multiple processes are not compatible with the autoreload
- module (or the ``debug=True`` option to `tornado.web.Application`).
- When using multiple processes, no IOLoops can be created or
- referenced until after the call to ``TCPServer.start(n)``.
- """
- assert not self._started
- self._started = True
- if num_processes != 1:
- process.fork_processes(num_processes)
- sockets = self._pending_sockets
- self._pending_sockets = []
- self.add_sockets(sockets)
-
- def stop(self):
- """Stops listening for new connections.
-
- Requests currently in progress may still continue after the
- server is stopped.
- """
- for fd, sock in self._sockets.iteritems():
- self.io_loop.remove_handler(fd)
- sock.close()
-
- def handle_stream(self, stream, address):
- """Override to handle a new `IOStream` from an incoming connection."""
- raise NotImplementedError()
-
- def _handle_connection(self, connection, address):
- if self.ssl_options is not None:
- assert ssl, "Python 2.6+ and OpenSSL required for SSL"
- try:
- connection = ssl.wrap_socket(connection,
- server_side=True,
- do_handshake_on_connect=False,
- **self.ssl_options)
- except ssl.SSLError, err:
- if err.args[0] == ssl.SSL_ERROR_EOF:
- return connection.close()
- else:
- raise
- except socket.error, err:
- if err.args[0] == errno.ECONNABORTED:
- return connection.close()
- else:
- raise
- try:
- if self.ssl_options is not None:
- stream = SSLIOStream(connection, io_loop=self.io_loop)
- else:
- stream = IOStream(connection, io_loop=self.io_loop)
- self.handle_stream(stream, address)
- except Exception:
- logging.error("Error in connection callback", exc_info=True)
+def wrap_socket(sock, ssl_options, npn_protocols):
+ if npn_protocols is not None:
+ # NPN requires Python 3.x features like SSLContext
+ assert SUPPORTS_NPN, "Python 3.3+ and OpenSSL 1.0.1+ required for TLS NPN"
+ ssl_options = ssl_options.copy()
+ context = ssl.SSLContext(ssl_options.pop('ssl_version',
+ ssl.PROTOCOL_TLSv1))
+ context.verify_mode = ssl_options.pop('cert_reqs', ssl.CERT_NONE)
+ if 'ca_certs' in ssl_options:
+ context.load_verify_locations(cafile=ssl_options.pop('ca_certs'))
+ if 'ciphers' in ssl_options:
+ context.set_ciphers(ssl_options.pop('ciphers'))
+ if 'certfile' in ssl_options:
+ context.load_cert_chain(certfile=ssl_options.pop('certfile'),
+ keyfile=ssl_options.pop('keyfile', None))
+ context.set_npn_protocols(npn_protocols)
+ return context.wrap_socket(sock, **ssl_options)
+ else:
+ return ssl.wrap_socket(sock, **ssl_options)
def bind_sockets(port, address=None, family=socket.AF_UNSPEC, backlog=128):
@@ -270,6 +106,7 @@ def bind_sockets(port, address=None, family=socket.AF_UNSPEC, backlog=128):
sockets.append(sock)
return sockets
+
if hasattr(socket, 'AF_UNIX'):
def bind_unix_socket(file, mode=0600, backlog=128):
"""Creates a listening unix socket.
tornado/simple_httpclient.py (13 changed lines)
@@ -5,8 +5,9 @@
from tornado.httpclient import HTTPRequest, HTTPResponse, HTTPError, AsyncHTTPClient, main
from tornado.httputil import HTTPHeaders
from tornado.iostream import IOStream, SSLIOStream
+from tornado.netutil import ssl
from tornado import stack_context
-from tornado.util import b
+from tornado.util import b, BytesIO
import base64
import collections
@@ -22,16 +23,6 @@
import urlparse
import zlib
-try:
- from io import BytesIO # python 3
-except ImportError:
- from cStringIO import StringIO as BytesIO # python 2
-
-try:
- import ssl # python 2.6+
-except ImportError:
- ssl = None
-
_DEFAULT_CA_CERTS = os.path.dirname(__file__) + '/ca-certificates.crt'
tornado/tcpserver.py (240 changed lines)
@@ -0,0 +1,240 @@
+#!/usr/bin/env python
+#
+# Copyright 2011 Facebook
+#
+# Licensed under the Apache License, Version 2.0 (the "License"); you may
+# not use this file except in compliance with the License. You may obtain
+# a copy of the License at
+#
+# http://www.apache.org/licenses/LICENSE-2.0
+#
+# Unless required by applicable law or agreed to in writing, software
+# distributed under the License is distributed on an "AS IS" BASIS, WITHOUT
+# WARRANTIES OR CONDITIONS OF ANY KIND, either express or implied. See the
+# License for the specific language governing permissions and limitations
+# under the License.
+
+from __future__ import absolute_import, division, with_statement
+
+import errno
+import logging
+import socket
+
+from tornado import process
+from tornado.ioloop import IOLoop
+from tornado.iostream import IOStream, SSLIOStream
+from tornado.netutil import add_accept_handler, bind_sockets, ssl, wrap_socket
+
+
+class TCPServer(object):
+ r"""A non-blocking, single-threaded TCP server.
+
+ `TCPServer` takes a ``protocol`` argument that is called when an incoming
+ connection is opened.
+
+ `TCPServer` can serve SSL traffic with Python 2.6+ and OpenSSL.
+ To make this server serve SSL traffic, send the ssl_options dictionary
+ argument with the arguments required for the `ssl.wrap_socket` method,
+ including "certfile" and "keyfile"::
+
+ TCPServer(ssl_options={
+ "certfile": os.path.join(data_dir, "mydomain.crt"),
+ "keyfile": os.path.join(data_dir, "mydomain.key"),
+ })
+
+ `TCPServer` supports TLS NPN if Python 3.3+ and OpenSSL 1.0.1+ are
+ installed. The ``npn_protocols`` argument specifies a list of ``(name,
+ handler)`` tuples in order of preference. Once NPN is completed, the
+ handler for the selected name will be called. If the client does not
+ support NPN, the handler passed in the ``protocol`` argument will be used
+ instead. This parameter requires ``ssl_options`` to be passed as well.
+
+ `TCPServer` initialization follows one of three patterns:
+
+ 1. `listen`: simple single-process::
+
+ server = TCPServer()
+ server.listen(8888)
+ IOLoop.instance().start()
+
+ 2. `bind`/`start`: simple multi-process::
+
+ server = TCPServer()
+ server.bind(8888)
+ server.start(0) # Forks multiple sub-processes
+ IOLoop.instance().start()
+
+ When using this interface, an `IOLoop` must *not* be passed
+ to the `TCPServer` constructor. `start` will always start
+ the server on the default singleton `IOLoop`.
+
+ 3. `add_sockets`: advanced multi-process::
+
+ sockets = bind_sockets(8888)
+ tornado.process.fork_processes(0)
+ server = TCPServer()
+ server.add_sockets(sockets)
+ IOLoop.instance().start()
+
+ The `add_sockets` interface is more complicated, but it can be
+ used with `tornado.process.fork_processes` to give you more
+ flexibility in when the fork happens. `add_sockets` can
+ also be used in single-process servers if you want to create
+ your listening sockets in some way other than
+ `bind_sockets`.
+
+ """
+ def __init__(self, protocol, io_loop=None, npn_protocols=None,
+ ssl_options=None):
+ self.protocol = protocol
+ self.io_loop = io_loop
+ assert not ssl_options or ssl, "Python 2.6+ and OpenSSL required for SSL"
+ assert ssl_options or not npn_protocols, "npn_protocols requires ssl_options"
+ self.npn_protocols = npn_protocols
+ self.ssl_options = ssl_options
+ if self.ssl_options:
+ self.ssl_options.update({
+ 'server_side': True,
+ 'do_handshake_on_connect': False
+ })
+ self._sockets = {} # fd -> socket object
+ self._pending_sockets = []
+ self._started = False
+
+ def listen(self, port, address=""):
+ """Starts accepting connections on the given port.
+
+ This method may be called more than once to listen on multiple ports.
+ `listen` takes effect immediately; it is not necessary to call
+ `TCPServer.start` afterwards. It is, however, necessary to start
+ the `IOLoop`.
+ """
+ sockets = bind_sockets(port, address=address)
+ self.add_sockets(sockets)
+
+ def add_sockets(self, sockets):
+ """Makes this server start accepting connections on the given sockets.
+
+ The ``sockets`` parameter is a list of socket objects such as
+ those returned by `bind_sockets`.
+ `add_sockets` is typically used in combination with that
+ method and `tornado.process.fork_processes` to provide greater
+ control over the initialization of a multi-process server.
+ """
+ if self.io_loop is None:
+ self.io_loop = IOLoop.instance()
+
+ for sock in sockets:
+ self._sockets[sock.fileno()] = sock
+ add_accept_handler(sock, self._handle_connection,
+ io_loop=self.io_loop)
+
+ def add_socket(self, socket):
+ """Singular version of `add_sockets`. Takes a single socket object."""
+ self.add_sockets([socket])
+
+ def bind(self, port, address=None, family=socket.AF_UNSPEC, backlog=128):
+ """Binds this server to the given port on the given address.
+
+ To start the server, call `start`. If you want to run this server
+ in a single process, you can call `listen` as a shortcut to the
+ sequence of `bind` and `start` calls.
+
+ Address may be either an IP address or hostname. If it's a hostname,
+ the server will listen on all IP addresses associated with the
+ name. Address may be an empty string or None to listen on all
+ available interfaces. Family may be set to either ``socket.AF_INET``
+ or ``socket.AF_INET6`` to restrict to ipv4 or ipv6 addresses, otherwise
+ both will be used if available.
+
+ The ``backlog`` argument has the same meaning as for
+ `socket.listen`.
+
+ This method may be called multiple times prior to `start` to listen
+ on multiple ports or interfaces.
+ """
+ sockets = bind_sockets(port, address=address, family=family,
+ backlog=backlog)
+ if self._started:
+ self.add_sockets(sockets)
+ else:
+ self._pending_sockets.extend(sockets)
+
+ def start(self, num_processes=1):
+ """Starts this server in the IOLoop.
+
+ By default, we run the server in this process and do not fork any
+ additional child process.
+
+ If num_processes is ``None`` or <= 0, we detect the number of cores
+ available on this machine and fork that number of child
+ processes. If num_processes is given and > 1, we fork that
+ specific number of sub-processes.
+
+ Since we use processes and not threads, there is no shared memory
+ between any server code.
+
+ Note that multiple processes are not compatible with the autoreload
+ module (or the ``debug=True`` option to `tornado.web.Application`).
+ When using multiple processes, no IOLoops can be created or
+ referenced until after the call to ``TCPServer.start(n)``.
+ """
+ assert not self._started
+ self._started = True
+ if num_processes != 1:
+ process.fork_processes(num_processes)
+ sockets = self._pending_sockets
+ self._pending_sockets = []
+ self.add_sockets(sockets)
+
+ def stop(self):
+ """Stops listening for new connections.
+
+ Requests currently in progress may still continue after the
+ server is stopped.
+ """
+ for fd, sock in self._sockets.iteritems():
+ self.io_loop.remove_handler(fd)
+ sock.close()
+
+ def _handle_connection(self, connection, address):
+ if self.ssl_options is not None:
+ try:
+ npn_protocols = None
+ if self.npn_protocols is not None:
+ npn_protocols = [name for name, _ in self.npn_protocols]
+ connection = wrap_socket(connection, self.ssl_options,
+ npn_protocols)
+ except ssl.SSLError, err:
+ if err.args[0] == ssl.SSL_ERROR_EOF:
+ return connection.close()
+ else:
+ raise
+ except socket.error, err:
+ if err.args[0] == errno.ECONNABORTED:
+ return connection.close()
+ else:
+ raise
+
+ stream = SSLIOStream(connection, io_loop=self.io_loop)
+ if self.npn_protocols is not None:
+ def on_connect():
+ handler = self.protocol
+ selected_name = connection.selected_npn_protocol()
+ if selected_name:
+ for name, protocol in self.npn_protocols:
+ if name == selected_name:
+ handler = protocol
+ break
+ self._run_protocol(handler, stream, address)
+ stream.set_connect_callback(on_connect)
+ return
+ else:
+ stream = IOStream(connection, io_loop=self.io_loop)
+ self._run_protocol(self.protocol, stream, address)
+
+ def _run_protocol(self, handler, stream, address):
+ try:
+ handler(stream, address, self)
+ except Exception:
+ logging.error("Error in connection callback", exc_info=True)
tornado/test/httpserver_test.py (71 changed lines)
@@ -2,11 +2,12 @@
from __future__ import absolute_import, division, with_statement
-from tornado import httpclient, simple_httpclient, netutil
-from tornado.escape import json_decode, utf8, _unicode, recursive_unicode, native_str
+from tornado.escape import json_decode, _unicode, recursive_unicode, native_str
+from tornado.httpclient import HTTPRequest
from tornado.httpserver import HTTPServer
from tornado.httputil import HTTPHeaders
from tornado.iostream import IOStream
+from tornado.netutil import bind_unix_socket, ssl
from tornado.simple_httpclient import SimpleAsyncHTTPClient
from tornado.testing import AsyncHTTPTestCase, LogTrapTestCase, AsyncTestCase
from tornado.util import b, bytes_type
@@ -17,11 +18,6 @@
import sys
import tempfile
-try:
- import ssl
-except ImportError:
- ssl = None
-
class HandlerBaseTestCase(AsyncHTTPTestCase, LogTrapTestCase):
def get_app(self):
@@ -169,18 +165,7 @@ def post(self):
})
-class RawRequestHTTPConnection(simple_httpclient._HTTPConnection):
- def set_request(self, request):
- self.__next_request = request
-
- def _on_connect(self, parsed, parsed_hostname):
- self.stream.write(self.__next_request)
- self.__next_request = None
- self.stream.read_until(b("\r\n\r\n"), self._on_headers)
-
# This test is also called from wsgi_test
-
-
class HTTPConnectionTest(AsyncHTTPTestCase, LogTrapTestCase):
def get_handlers(self):
return [("/multipart", MultipartTestHandler),
@@ -189,40 +174,26 @@ def get_handlers(self):
def get_app(self):
return Application(self.get_handlers())
- def raw_fetch(self, headers, body):
- client = SimpleAsyncHTTPClient(self.io_loop)
- conn = RawRequestHTTPConnection(self.io_loop, client,
- httpclient.HTTPRequest(self.get_url("/")),
- None, self.stop,
- 1024 * 1024)
- conn.set_request(
- b("\r\n").join(headers +
- [utf8("Content-Length: %d\r\n" % len(body))]) +
- b("\r\n") + body)
- response = self.wait()
- client.close()
- response.rethrow()
- return response
-
def test_multipart_form(self):
# Encodings here are tricky: Headers are latin1, bodies can be
# anything (we use utf8 by default).
- response = self.raw_fetch([
- b("POST /multipart HTTP/1.0"),
- b("Content-Type: multipart/form-data; boundary=1234567890"),
- b("X-Header-encoding-test: \xe9"),
- ],
- b("\r\n").join([
- b("Content-Disposition: form-data; name=argument"),
- b(""),
- u"\u00e1".encode("utf-8"),
- b("--1234567890"),
- u'Content-Disposition: form-data; name="files"; filename="\u00f3"'.encode("utf8"),
- b(""),
- u"\u00fa".encode("utf-8"),
- b("--1234567890--"),
- b(""),
- ]))
+ self.http_client.fetch(HTTPRequest(
+ url=self.get_url("/multipart"),
+ method="POST",
+ headers={
+ 'Content-Type': 'multipart/form-data; boundary=1234567890',
+ 'X-Header-encoding-test': b('\xe9')},
+ body=b("\r\n").join([
+ b("Content-Disposition: form-data; name=argument"),
+ b(""),
+ u"\u00e1".encode("utf-8"),
+ b("--1234567890"),
+ u'Content-Disposition: form-data; name="files"; filename="\u00f3"'.encode("utf8"),
+ b(""),
+ u"\u00fa".encode("utf-8"),
+ b("--1234567890--"),
+ b("")])), self.stop)
+ response = self.wait(timeout=5)
data = json_decode(response.body)
self.assertEqual(u"\u00e9", data["header"])
self.assertEqual(u"\u00e1", data["argument"])
@@ -387,7 +358,7 @@ def tearDown(self):
def test_unix_socket(self):
sockfile = os.path.join(self.tmpdir, "test.sock")
- sock = netutil.bind_unix_socket(sockfile)
+ sock = bind_unix_socket(sockfile)
app = Application([("/hello", HelloWorldRequestHandler)])
server = HTTPServer(app, io_loop=self.io_loop)
server.add_socket(sock)
tornado/util.py (5 changed lines)
@@ -2,6 +2,11 @@
from __future__ import absolute_import, division, with_statement
+try:
+ from io import BytesIO # python 3
+except ImportError:
+ from cStringIO import StringIO as BytesIO # python 2
+
class ObjectDict(dict):
"""Makes a dictionary behave like an object."""
tornado/web.py (55 changed lines)
@@ -83,12 +83,7 @@ def get(self):
from tornado import stack_context
from tornado import template
from tornado.escape import utf8, _unicode
-from tornado.util import b, bytes_type, import_object, ObjectDict, raise_exc_info
-
-try:
- from io import BytesIO # python 3
-except ImportError:
- from cStringIO import StringIO as BytesIO # python 2
+from tornado.util import b, BytesIO, bytes_type, import_object, ObjectDict, raise_exc_info
class RequestHandler(object):
@@ -124,8 +119,7 @@ def __init__(self, application, request, **kwargs):
self.clear()
# Check since connection is not available in WSGI
if getattr(self.request, "connection", None):
- self.request.connection.stream.set_close_callback(
- self.on_connection_close)
+ self.request.connection.set_close_callback(self.on_connection_close)
self.initialize(**kwargs)
def initialize(self):
@@ -625,7 +619,7 @@ def create_template_loader(self, template_path):
kwargs["autoescape"] = settings["autoescape"]
return template.Loader(template_path, **kwargs)
- def flush(self, include_footers=False, callback=None):
+ def flush(self, include_footers=False, finished=False, callback=None):
"""Flushes the current output buffer to the network.
The ``callback`` argument, if given, can be used for flow control:
@@ -641,23 +635,30 @@ def flush(self, include_footers=False, callback=None):
self._write_buffer = []
if not self._headers_written:
self._headers_written = True
+ if hasattr(self, "_new_cookie"):
+ for cookie in self._new_cookie.values():
+ self.add_header("Set-Cookie", cookie.OutputString(None))
for transform in self._transforms:
self._status_code, self._headers, chunk = \
transform.transform_first_chunk(
- self._status_code, self._headers, chunk, include_footers)
- headers = self._generate_headers()
+ self._status_code, self._headers, chunk, include_footers)
+ self.request.connection.write_preamble(
+ status_code=self._status_code,
+ version=self.request.version,
+ headers=itertools.chain(self._headers.iteritems(),
+ self._list_headers),
+ finished=finished and not chunk,
+ callback=callback if not chunk else None)
else:
for transform in self._transforms:
chunk = transform.transform_chunk(chunk, include_footers)
- headers = b("")
# Ignore the chunk and only write the headers for HEAD requests
- if self.request.method == "HEAD":
- if headers:
- self.request.write(headers, callback=callback)
- return
-
- self.request.write(headers + chunk, callback=callback)
+ if self.request.method != "HEAD" and chunk:
+ self._body_written = True
+ self.request.connection.write(chunk, finished=finished, callback=callback)
+ elif callback:
+ callback()
def finish(self, chunk=None):
"""Finishes this response, ending the HTTP request."""
@@ -694,11 +695,10 @@ def finish(self, chunk=None):
# set on the IOStream (which would otherwise prevent the
# garbage collection of the RequestHandler when there
# are keepalive connections)
- self.request.connection.stream.set_close_callback(None)
+ self.request.connection.set_close_callback(None)
if not self.application._wsgi:
- self.flush(include_footers=True)
- self.request.finish()
+ self.flush(include_footers=True, finished=True)
self._log()
self._finished = True
self.on_finish()
@@ -1024,17 +1024,6 @@ def _execute(self, transforms, *args, **kwargs):
except Exception, e:
self._handle_request_exception(e)
- def _generate_headers(self):
- lines = [utf8(self.request.version + " " +
- str(self._status_code) +
- " " + httplib.responses[self._status_code])]
- lines.extend([(utf8(n) + b(": ") + utf8(v)) for n, v in
- itertools.chain(self._headers.iteritems(), self._list_headers)])
- if hasattr(self, "_new_cookie"):
- for cookie in self._new_cookie.values():
- lines.append(utf8("Set-Cookie: " + cookie.OutputString(None)))
- return b("\r\n").join(lines) + b("\r\n\r\n")
-
def _log(self):
"""Logs the current request.
@@ -1046,7 +1035,7 @@ def _log(self):
def _request_summary(self):
return self.request.method + " " + self.request.uri + \
- " (" + self.request.remote_ip + ")"
+ " (" + str(self.request.remote_ip) + ")"
def _handle_request_exception(self, e):
if isinstance(e, HTTPError):
tornado/websocket.py (74 changed lines)
@@ -88,19 +88,13 @@ def _execute(self, transforms, *args, **kwargs):
# Websocket only supports GET method
if self.request.method != 'GET':
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 405 Method Not Allowed\r\n\r\n"
- ))
- self.stream.close()
+ self.request.connection.write_preamble(status_code=405, finished=True)
return
# Upgrade header should be present and should be equal to WebSocket
if self.request.headers.get("Upgrade", "").lower() != 'websocket':
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 400 Bad Request\r\n\r\n"
- "Can \"Upgrade\" only to \"WebSocket\"."
- ))
- self.stream.close()
+ self.request.connection.write_preamble(status_code=400)
+ self.request.connection.write(b('Can "Upgrade" only to "WebSocket"'), finished=True)
return
# Connection header should be upgrade. Some proxy servers/load balancers
@@ -108,11 +102,8 @@ def _execute(self, transforms, *args, **kwargs):
headers = self.request.headers
connection = map(lambda s: s.strip().lower(), headers.get("Connection", "").split(","))
if 'upgrade' not in connection:
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 400 Bad Request\r\n\r\n"
- "\"Connection\" must be \"Upgrade\"."
- ))
- self.stream.close()
+ self.request.connection.write_preamble(status_code=400)
+ self.request.connection.write(b('"Connection" must be "Upgrade".'), finished=True)
return
# The difference between version 8 and 13 is that in 8 the
@@ -126,10 +117,7 @@ def _execute(self, transforms, *args, **kwargs):
self.ws_connection = WebSocketProtocol76(self)
self.ws_connection.accept_connection()
else:
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 426 Upgrade Required\r\n"
- "Sec-WebSocket-Version: 8\r\n\r\n"))
- self.stream.close()
+ self.request.connection.write_preamble(status_code=426, headers=[('Sec-WebSocket-Version', '8')], finished=True)
def write_message(self, message, binary=False):
"""Sends the given message to the client of this Web Socket.
@@ -294,35 +282,29 @@ def accept_connection(self):
return
scheme = self.handler.get_websocket_scheme()
+ headers = [
+ ('Upgrade', 'WebSocket'),
+ ('Connection', 'Upgrade'),
+ ('Server', 'TornadoServer/%s' % tornado.version),
+ ('Sec-WebSocket-Origin', self.request.headers['Origin']),
+ ('Sec-WebSocket-Location', '%s://%s%s' % (scheme, self.request.host, self.request.uri))]
# draft76 only allows a single subprotocol
- subprotocol_header = ''
subprotocol = self.request.headers.get("Sec-WebSocket-Protocol", None)
if subprotocol:
selected = self.handler.select_subprotocol([subprotocol])
if selected:
assert selected == subprotocol
- subprotocol_header = "Sec-WebSocket-Protocol: %s\r\n" % selected
+ headers.append(("Sec-WebSocket-Protocol", selected))
# Write the initial headers before attempting to read the challenge.
# This is necessary when using proxies (such as HAProxy), which
# need to see the Upgrade headers before passing through the
# non-HTTP traffic that follows.
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 101 WebSocket Protocol Handshake\r\n"
- "Upgrade: WebSocket\r\n"
- "Connection: Upgrade\r\n"
- "Server: TornadoServer/%(version)s\r\n"
- "Sec-WebSocket-Origin: %(origin)s\r\n"
- "Sec-WebSocket-Location: %(scheme)s://%(host)s%(uri)s\r\n"
- "%(subprotocol)s"
- "\r\n" % (dict(
- version=tornado.version,
- origin=self.request.headers["Origin"],
- scheme=scheme,
- host=self.request.host,
- uri=self.request.uri,
- subprotocol=subprotocol_header))))
+ self.request.connection.write_preamble(
+ status_code=101,
+ reason='WebSocket Protocol Handshake',
+ headers=headers)
self.stream.read_bytes(8, self._handle_challenge)
def challenge_response(self, challenge):
@@ -479,22 +461,18 @@ def _challenge_response(self):
return tornado.escape.native_str(base64.b64encode(sha1.digest()))
def _accept_connection(self):
- subprotocol_header = ''
- subprotocols = self.request.headers.get("Sec-WebSocket-Protocol", '')
- subprotocols = [s.strip() for s in subprotocols.split(',')]
+ headers = [
+ ('Upgrade', 'websocket'),
+ ('Connection', 'Upgrade'),
+ ('Sec-WebSocket-Accept', self._challenge_response())]
+
+ subprotocols = [s.strip() for s in self.request.headers.get("Sec-WebSocket-Protocol", '').split(',')]
if subprotocols:
selected = self.handler.select_subprotocol(subprotocols)
if selected:
- assert selected in subprotocols
- subprotocol_header = "Sec-WebSocket-Protocol: %s\r\n" % selected
-
- self.stream.write(tornado.escape.utf8(
- "HTTP/1.1 101 Switching Protocols\r\n"
- "Upgrade: websocket\r\n"
- "Connection: Upgrade\r\n"
- "Sec-WebSocket-Accept: %s\r\n"
- "%s"
- "\r\n" % (self._challenge_response(), subprotocol_header)))
+ headers.append(('Sec-WebSocket-Protocol', selected))
+
+ self.request.connection.write_preamble(status_code=101, headers=headers)
self.async_callback(self.handler.open)(*self.handler.open_args, **self.handler.open_kwargs)
self._receive_frame()
tornado/wsgi.py (16 changed lines)
@@ -43,12 +43,7 @@
from tornado import httputil
from tornado import web
from tornado.escape import native_str, utf8, parse_qs_bytes
-from tornado.util import b
-
-try:
- from io import BytesIO # python 3
-except ImportError:
- from cStringIO import StringIO as BytesIO # python 2
+from tornado.util import b, BytesIO
class WSGIApplication(web.Application):
@@ -243,13 +238,8 @@ def start_response(status, response_headers, exc_info=None):
if "server" not in header_set:
headers.append(("Server", "TornadoServer/%s" % tornado.version))
- parts = [escape.utf8("HTTP/1.1 " + data["status"] + "\r\n")]
- for key, value in headers:
- parts.append(escape.utf8(key) + b(": ") + escape.utf8(value) + b("\r\n"))
- parts.append(b("\r\n"))
- parts.append(body)
- request.write(b("").join(parts))
- request.finish()
+ request.connection.write_preamble(status_code=int(data["status"].split()[0]), headers=headers)
+ request.connection.write(body, finished=True)
self._log(status_code, request)
@staticmethod