View
@@ -1,312 +1,32 @@
.. _index:
========
Waitress
--------
========
Waitress is meant to be a production-quality pure-Python WSGI server with
very acceptable performance. It has no dependencies except ones which live
in the Python standard library. It runs on CPython on Unix and Windows under
Python 2.7+ and Python 3.3+. It is also known to run on PyPy 1.6.0 on UNIX.
Python 2.7+ and Python 3.4+. It is also known to run on PyPy 1.6.0 on UNIX.
It supports HTTP/1.0 and HTTP/1.1.
Usage
-----
Here's normal usage of the server:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='*:8080')
This will run waitress on port 8080 on all available IP addresses, both IPv4
and IPv6.
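The examples here pass a WSGI callable named ``wsgiapp``. As a sketch (the name and response body are illustrative, not part of Waitress), a minimal application might look like this:

```python
def wsgiapp(environ, start_response):
    # A minimal WSGI callable: set the status and headers, then
    # return the response body as an iterable of bytes.
    body = b'Hello from Waitress!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    return [body]
```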
.. code-block:: python
from waitress import serve
serve(wsgiapp, host='0.0.0.0', port=8080)
This will run waitress on port 8080 on all available IPv4 addresses.
If you want to serve your application on all IP addresses, on port 8080, you
can omit the ``host`` and ``port`` arguments and just call ``serve`` with the
WSGI app as a single argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp)
Press Ctrl-C (or Ctrl-Break on Windows) to exit the server.
The default is to bind to any IPv4 address on port 8080:
.. code-block:: python
from waitress import serve
serve(wsgiapp)
If you want to serve your application through a UNIX domain socket (to serve
a downstream HTTP server/proxy, e.g. nginx, lighttpd, etc.), call ``serve``
with the ``unix_socket`` argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp, unix_socket='/path/to/unix.sock')
Needless to say, this configuration won't work on Windows.
Exceptions generated by your application will be shown on the console by
default. See :ref:`logging` to change this.
There's an entry point for :term:`PasteDeploy` (``egg:waitress#main``) that
lets you use Waitress's WSGI gateway from a configuration file, e.g.:
.. code-block:: ini
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Using ``host`` and ``port`` is also supported:
.. code-block:: ini
[server:main]
host = 127.0.0.1
port = 8080
The :term:`PasteDeploy` syntax for UNIX domain sockets is analogous:
.. code-block:: ini
[server:main]
use = egg:waitress#main
unix_socket = /path/to/unix.sock
You can find more settings to tweak (arguments to ``waitress.serve`` or
equivalent settings in PasteDeploy) in :ref:`arguments`.
Additionally, there is a command line runner called ``waitress-serve``, which
can be used in development and in situations where the likes of
:term:`PasteDeploy` is not necessary:
.. code-block:: bash
# Listen on both IPv4 and IPv6 on port 8041
waitress-serve --listen=*:8041 myapp:wsgifunc
# Listen on only IPv4 on port 8041
waitress-serve --port=8041 myapp:wsgifunc
For more information on this, see :ref:`runner`.
.. _logging:
Logging
-------
``waitress.serve`` calls ``logging.basicConfig()`` to set up logging to the
console when the server starts up. Assuming no other logging configuration
has already been done, this sets the logging default level to
``logging.WARNING``. The Waitress logger will inherit the root logger's
level information (it logs at level ``WARNING`` or above).
Waitress sends its logging output (including application exception
renderings) to the Python logger object named ``waitress``. You can
influence the logger level and output stream using the normal Python
``logging`` module API. For example:
.. code-block:: python
import logging
logger = logging.getLogger('waitress')
logger.setLevel(logging.INFO)
Within a PasteDeploy configuration file, you can use the normal Python
``logging`` module ``.ini`` file format to change similar Waitress logging
options. For example:
.. code-block:: ini
[logger_waitress]
level = INFO
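For example, to capture Waitress's output in a file rather than on the console (the filename here is illustrative, not something Waitress mandates), you can attach a handler yourself:

```python
import logging

logger = logging.getLogger('waitress')
logger.setLevel(logging.INFO)
# Send Waitress's messages (including application tracebacks)
# to a file instead of relying on the root logger's console handler.
handler = logging.FileHandler('waitress.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)
```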
Using Behind a Reverse Proxy
----------------------------
Often people will set up "pure Python" web servers behind reverse proxies,
especially if they need SSL support (Waitress does not natively support SSL).
Even if you don't need SSL support, it's not uncommon to see Waitress and
other pure-Python web servers set up to "live" behind a reverse proxy; these
proxies often have lots of useful deployment knobs.
If you're using Waitress behind a reverse proxy, you'll almost always want
your reverse proxy to pass along the ``Host`` header sent by the client to
Waitress, as it will be used by most applications to generate correct URLs.
For example, when using Nginx as a reverse proxy, you might add the following
lines in a ``location`` section::
proxy_set_header Host $host;
The Apache directive named ``ProxyPreserveHost`` does something similar when
used as a reverse proxy.
Unfortunately, even if you pass the ``Host`` header, it alone does not
contain enough information to regenerate the original URL sent by the client.
For example, if your reverse proxy accepts HTTPS requests (and therefore URLs
which start with ``https://``), the URLs generated by your application when
used behind a reverse proxy served by Waitress might inappropriately be
``http://foo`` rather than ``https://foo``. To fix this, you'll want to
change the ``wsgi.url_scheme`` in the WSGI environment before it reaches your
application. You can do this in one of three ways:
1. You can pass a ``url_scheme`` configuration variable to the
``waitress.serve`` function.
2. You can configure the reverse proxy server to pass a header,
``X_FORWARDED_PROTO``, whose value will be set for that request as
the ``wsgi.url_scheme`` environment value. Note that you must also
configure ``waitress.serve`` by passing the IP address of that proxy
as its ``trusted_proxy``.
3. You can use Paste's ``PrefixMiddleware`` in conjunction with
configuration settings on the reverse proxy server.
Using ``url_scheme`` to set ``wsgi.url_scheme``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can have the Waitress server use the ``https`` URL scheme by default:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='0.0.0.0:8080', url_scheme='https')
This works if all URLs generated by your application should use the ``https``
scheme.
Passing the ``X_FORWARDED_PROTO`` header to set ``wsgi.url_scheme``
-------------------------------------------------------------------
If your proxy accepts both HTTP and HTTPS URLs, and you want your application
to generate the appropriate URL based on the incoming scheme, also set up
your proxy to send an ``X-Forwarded-Proto`` header with the original URL scheme
along with each proxied request. For example, when using Nginx::
proxy_set_header X-Forwarded-Proto $scheme;
or via Apache::
RequestHeader set X-Forwarded-Proto https
.. note::
You must also configure the Waitress server's ``trusted_proxy`` to
contain the IP address of the proxy in order for this header to override
the default URL scheme.
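Putting the note above into code, a sketch of a serve call that trusts the proxy at an illustrative address might be (the placeholder application stands in for your real WSGI callable):

```python
from waitress import serve

def wsgiapp(environ, start_response):
    # Placeholder application; substitute your real WSGI callable.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

# Only requests arriving from this (illustrative) proxy address may
# override wsgi.url_scheme via the X-Forwarded-Proto header.
serve(wsgiapp, listen='127.0.0.1:8080', trusted_proxy='127.0.0.1')
```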
Using ``url_prefix`` to influence ``SCRIPT_NAME`` and ``PATH_INFO``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
You can have the Waitress server use a particular URL prefix by default for all
URLs generated by downstream applications that take ``SCRIPT_NAME`` into
account:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='0.0.0.0:8080', url_prefix='/foo')
Setting this to any value except the empty string will cause the WSGI
``SCRIPT_NAME`` value to be that value, minus any trailing slashes you add, and
it will cause the ``PATH_INFO`` of any request which is prefixed with this
value to be stripped of the prefix. This is useful in proxying scenarios where
you wish to forward all traffic to a Waitress server but need URLs generated by
downstream applications to be prefixed with a particular path segment.
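As a sketch of the effect (the app and names here are illustrative, not part of Waitress), a downstream application that echoes the two adjusted values would, behind ``url_prefix='/foo'``, see a request for ``/foo/bar`` with ``SCRIPT_NAME`` set to ``/foo`` and ``PATH_INFO`` set to ``/bar``:

```python
def echo_app(environ, start_response):
    # Report the two environ values that the url_prefix setting adjusts.
    body = ('SCRIPT_NAME=%s PATH_INFO=%s' % (
        environ.get('SCRIPT_NAME', ''),
        environ.get('PATH_INFO', ''),
    )).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]
```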
Using Paste's ``PrefixMiddleware`` to set ``wsgi.url_scheme``
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
If only some of the URLs generated by your application should use the
``https`` scheme (and some should use ``http``), you'll need to use Paste's
``PrefixMiddleware`` as well as change some configuration settings on your
proxy. To use ``PrefixMiddleware``, wrap your application before serving it
using Waitress:
.. code-block:: python
from waitress import serve
from paste.deploy.config import PrefixMiddleware
app = PrefixMiddleware(wsgiapp)
serve(app)
Once you wrap your application in the ``PrefixMiddleware``, the
middleware will notice certain headers sent from your proxy and will change
the ``wsgi.url_scheme`` and possibly other WSGI environment variables
appropriately.
Once your application is wrapped by the prefix middleware, you should
instruct your proxy server to send along the original ``Host`` header from
the client to your Waitress server, as well as sending along an
``X-Forwarded-Proto`` header with the appropriate value for
``wsgi.url_scheme``.
If your proxy accepts both HTTP and HTTPS URLs, and you want your application
to generate the appropriate URL based on the incoming scheme, also set up
your proxy to send an ``X-Forwarded-Proto`` header with the original URL scheme
along with each proxied request. For example, when using Nginx::
proxy_set_header X-Forwarded-Proto $scheme;
It's permitted to set an ``X-Forwarded-For`` header too; the
``PrefixMiddleware`` uses this to adjust other environment variables (consult
the Paste documentation for the full list). For the ``X-Forwarded-For``
header::
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Note that you can wrap your application in the PrefixMiddleware declaratively
in a :term:`PasteDeploy` configuration file too, if your web framework uses
PasteDeploy-style configuration:
.. code-block:: ini
[app:myapp]
use = egg:mypackage#myapp
[filter:paste_prefix]
use = egg:PasteDeploy#prefix
[pipeline:main]
pipeline =
paste_prefix
myapp
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Note that you can also set ``PATH_INFO`` and ``SCRIPT_NAME`` using
PrefixMiddleware too (its original purpose, really) instead of using Waitress'
``url_prefix`` adjustment. See the PasteDeploy docs for more information.
Extended Documentation
----------------------
.. toctree::
:maxdepth: 1
design.rst
differences.rst
api.rst
arguments.rst
filewrapper.rst
runner.rst
glossary.rst
usage
logging
reverse-proxy
design
differences
api
arguments
filewrapper
runner
glossary
Change History
--------------
@@ -317,33 +37,31 @@ Change History
Known Issues
------------
- Does not support SSL natively.
- Does not support TLS natively. See :ref:`using-behind-a-reverse-proxy` for more information.
Support and Development
-----------------------
The `Pylons Project web site <http://pylonsproject.org/>`_ is the main online
The `Pylons Project web site <https://pylonsproject.org/>`_ is the main online
source of Waitress support and development information.
To report bugs, use the `issue tracker
<http://github.com/Pylons/waitress/issues>`_.
<https://github.com/Pylons/waitress/issues>`_.
If you've got questions that aren't answered by this documentation,
contact the `Pylons-devel maillist
<http://groups.google.com/group/pylons-devel>`_ or join the `#pyramid
IRC channel <irc://irc.freenode.net/#pyramid>`_.
contact the `Pylons-discuss maillist
<https://groups.google.com/forum/#!forum/pylons-discuss>`_ or join the `#pyramid
IRC channel <https://webchat.freenode.net/?channels=pyramid>`_.
Browse and check out tagged and trunk versions of Waitress via
the `Waitress GitHub repository <http://github.com/Pylons/waitress/>`_.
the `Waitress GitHub repository <https://github.com/Pylons/waitress/>`_.
To check out the trunk via ``git``, use this command:
.. code-block:: text
git clone git@github.com:Pylons/waitress.git
To find out how to become a contributor to Waitress, please see the
`contributor's section of the documentation
<http://docs.pylonsproject.org/index.html#contributing>`_.
To find out how to become a contributor to Waitress, please see the guidelines in `contributing.md <https://github.com/Pylons/waitress/blob/master/contributing.md>`_ and `How to Contribute Source Code and Documentation <https://pylonsproject.org/community-how-to-contribute.html>`_.
Why?
----
@@ -373,7 +91,7 @@ framework distribution simply for its server component is awkward. The test
suite of the CherryPy server also depends on the CherryPy web framework, so
even if we forked its server component into a separate distribution, we would
have still needed to backfill for all of its tests. The CherryPy team has
started work on `Cheroot <https://bitbucket.org/cherrypy/cheroot>`_, which
started work on `Cheroot <https://bitbucket.org/cherrypy/cheroot/src/default/>`_, which
should solve this problem, however.
Waitress is a fork of the WSGI-related components which existed in
View
@@ -0,0 +1,190 @@
.. _access-logging:
==============
Access Logging
==============
The WSGI design is modular. Waitress logs error conditions, debugging
output, etc., but not web traffic. For web traffic logging, Paste
provides `TransLogger
<https://web.archive.org/web/20160707041338/http://pythonpaste.org/modules/translogger.html>`_
:term:`middleware`. TransLogger produces logs in the `Apache Combined
Log Format <https://httpd.apache.org/docs/current/logs.html#combined>`_.
.. _logging-to-the-console-using-python:
Logging to the Console Using Python
-----------------------------------
``waitress.serve`` calls ``logging.basicConfig()`` to set up logging to the
console when the server starts up. Assuming no other logging configuration
has already been done, this sets the logging default level to
``logging.WARNING``. The Waitress logger will inherit the root logger's
level information (it logs at level ``WARNING`` or above).
Waitress sends its logging output (including application exception
renderings) to the Python logger object named ``waitress``. You can
influence the logger level and output stream using the normal Python
``logging`` module API. For example:
.. code-block:: python
import logging
logger = logging.getLogger('waitress')
logger.setLevel(logging.INFO)
Within a PasteDeploy configuration file, you can use the normal Python
``logging`` module ``.ini`` file format to change similar Waitress logging
options. For example:
.. code-block:: ini
[logger_waitress]
level = INFO
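To send this output to a file instead (the filename is illustrative), you can attach a handler in code:

```python
import logging

logger = logging.getLogger('waitress')
logger.setLevel(logging.INFO)
# Direct Waitress's messages (including application tracebacks)
# to a file instead of the root logger's console handler.
handler = logging.FileHandler('waitress.log')
handler.setFormatter(logging.Formatter('%(asctime)s %(levelname)s %(message)s'))
logger.addHandler(handler)
```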
.. _logging-to-the-console-using-pastedeploy:
Logging to the Console Using PasteDeploy
----------------------------------------
TransLogger will automatically set up a logging handler to the console when called with no arguments.
It "just works" in environments that don't configure logging.
This is by virtue of its default configuration setting of ``setup_console_handler = True``.
.. TODO:
.. .. _logging-to-a-file-using-python:
.. Logging to a File Using Python
.. ------------------------------
.. Show how to configure the WSGI logger via python.
.. _logging-to-a-file-using-pastedeploy:
Logging to a File Using PasteDeploy
------------------------------------
TransLogger does not write to files; the Python logging system must be
configured to do that. The :class:`logging.FileHandler` handler can be used
alongside TransLogger to create an ``access.log`` file similar to Apache's.
Like any standard :term:`middleware` with a Paste entry point,
TransLogger can be configured to wrap your application using ``.ini``
file syntax. First add a
``[filter:translogger]`` section, then use a ``[pipeline:main]``
section file to form a WSGI pipeline with both the translogger and
your application in it. For instance, if you have this:
.. code-block:: ini
[app:wsgiapp]
use = egg:mypackage#wsgiapp
[server:main]
use = egg:waitress#main
host = 127.0.0.1
port = 8080
Add this:
.. code-block:: ini
[filter:translogger]
use = egg:Paste#translogger
setup_console_handler = False
[pipeline:main]
pipeline = translogger
wsgiapp
Using PasteDeploy this way to form and serve a pipeline is equivalent to
wrapping your app in a TransLogger instance at the bottom of the ``main``
function of your project's ``__init__`` file:
.. code-block:: python
from mypackage import wsgiapp
from waitress import serve
from paste.translogger import TransLogger
serve(TransLogger(wsgiapp, setup_console_handler=False))
.. note::
TransLogger will automatically set up a logging handler to the console when
called with no arguments, so it "just works" in environments that don't
configure logging. Since our logging handlers are configured, we disable
the automation via ``setup_console_handler = False``.
With the filter in place, TransLogger's logger (named the ``wsgi`` logger) will
propagate its log messages to the parent logger (the root logger), sending
its output to the console when we request a page:
.. code-block:: text
00:50:53,694 INFO [wsgiapp] Returning: Hello World!
(content-type: text/plain)
00:50:53,695 INFO [wsgi] 192.168.1.111 - - [11/Aug/2011:20:09:33 -0700] "GET /hello
HTTP/1.1" 404 - "-"
"Mozilla/5.0 (Macintosh; U; Intel Mac OS X; en-US; rv:1.8.1.6) Gecko/20070725
Firefox/2.0.0.6"
To direct TransLogger's output to an ``access.log`` file, add a
FileHandler (named ``accesslog``) to the list of handlers, and ensure
that the ``wsgi`` logger is configured to use it:
.. code-block:: ini
# Begin logging configuration
[loggers]
keys = root, wsgiapp, wsgi
[handlers]
keys = console, accesslog
[logger_wsgi]
level = INFO
handlers = accesslog
qualname = wsgi
propagate = 0
[handler_accesslog]
class = FileHandler
args = ('%(here)s/access.log','a')
level = INFO
formatter = generic
As mentioned above, non-root loggers by default propagate their log records
to the root logger's handlers (currently the console handler). Setting
``propagate`` to ``0`` (``False``) here disables this; so the ``wsgi`` logger
directs its records only to the ``accesslog`` handler.
Finally, there's no need to use the ``generic`` formatter with
TransLogger, as TransLogger itself provides all the information we
need. We'll use a formatter that passes the log messages through
as-is. Add a new formatter called ``accesslog`` by including the
following in your configuration file:
.. code-block:: ini
[formatters]
keys = generic, accesslog
[formatter_accesslog]
format = %(message)s
Then alter the existing configuration to wire this new
``accesslog`` formatter into the FileHandler:
.. code-block:: ini
[handler_accesslog]
class = FileHandler
args = ('%(here)s/access.log','a')
level = INFO
formatter = accesslog
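If you configure logging in code rather than through the ``.ini`` machinery, an equivalent setup might look like this (a sketch; the filename is illustrative):

```python
import logging

# Mirror the .ini configuration above: an INFO-level 'wsgi' logger
# writing pass-through messages to access.log, without propagation
# to the root logger's console handler.
wsgi_logger = logging.getLogger('wsgi')
wsgi_logger.setLevel(logging.INFO)
wsgi_logger.propagate = False
handler = logging.FileHandler('access.log')
handler.setFormatter(logging.Formatter('%(message)s'))
wsgi_logger.addHandler(handler)
```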
View
@@ -0,0 +1,174 @@
.. index:: reverse, proxy, TLS, SSL, https
.. _using-behind-a-reverse-proxy:
============================
Using Behind a Reverse Proxy
============================
Often people will set up "pure Python" web servers behind reverse proxies,
especially if they need TLS support (Waitress does not natively support TLS).
Even if you don't need TLS support, it's not uncommon to see Waitress and
other pure-Python web servers set up to "live" behind a reverse proxy; these
proxies often have lots of useful deployment knobs.
If you're using Waitress behind a reverse proxy, you'll almost always want
your reverse proxy to pass along the ``Host`` header sent by the client to
Waitress, as it will be used by most applications to generate correct URLs.
For example, when using nginx as a reverse proxy, you might add the following
lines in a ``location`` section:
.. code-block:: nginx
proxy_set_header Host $host;
The Apache directive named ``ProxyPreserveHost`` does something similar when
used as a reverse proxy.
Unfortunately, even if you pass the ``Host`` header, it alone does not
contain enough information to regenerate the original URL sent by the client.
For example, if your reverse proxy accepts HTTPS requests (and therefore URLs
which start with ``https://``), the URLs generated by your application when
used behind a reverse proxy served by Waitress might inappropriately be
``http://foo`` rather than ``https://foo``. To fix this, you'll want to
change the ``wsgi.url_scheme`` in the WSGI environment before it reaches your
application. You can do this in one of three ways:
1. You can pass a ``url_scheme`` configuration variable to the
``waitress.serve`` function.
2. You can configure the reverse proxy server to pass a header,
``X_FORWARDED_PROTO``, whose value will be set for that request as
the ``wsgi.url_scheme`` environment value. Note that you must also
configure ``waitress.serve`` by passing the IP address of that proxy
as its ``trusted_proxy``.
3. You can use Paste's ``PrefixMiddleware`` in conjunction with
configuration settings on the reverse proxy server.
Using ``url_scheme`` to set ``wsgi.url_scheme``
-----------------------------------------------
You can have the Waitress server use the ``https`` URL scheme by default:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='0.0.0.0:8080', url_scheme='https')
This works if all URLs generated by your application should use the ``https``
scheme.
Passing the ``X_FORWARDED_PROTO`` header to set ``wsgi.url_scheme``
-------------------------------------------------------------------
If your proxy accepts both HTTP and HTTPS URLs, and you want your application
to generate the appropriate URL based on the incoming scheme, also set up
your proxy to send an ``X-Forwarded-Proto`` header with the original URL scheme
along with each proxied request. For example, when using nginx::
proxy_set_header X-Forwarded-Proto $scheme;
or via Apache::
RequestHeader set X-Forwarded-Proto https
.. note::
You must also configure the Waitress server's ``trusted_proxy`` to
contain the IP address of the proxy in order for this header to override
the default URL scheme.
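Putting the note above into code, a sketch of a serve call that trusts the proxy at an illustrative address might be (the placeholder application stands in for your real WSGI callable):

```python
from waitress import serve

def wsgiapp(environ, start_response):
    # Placeholder application; substitute your real WSGI callable.
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [b'hello']

# Only requests arriving from this (illustrative) proxy address may
# override wsgi.url_scheme via the X-Forwarded-Proto header.
serve(wsgiapp, listen='127.0.0.1:8080', trusted_proxy='127.0.0.1')
```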
Using ``url_prefix`` to influence ``SCRIPT_NAME`` and ``PATH_INFO``
-------------------------------------------------------------------
You can have the Waitress server use a particular URL prefix by default for all
URLs generated by downstream applications that take ``SCRIPT_NAME`` into
account:
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='0.0.0.0:8080', url_prefix='/foo')
Setting this to any value except the empty string will cause the WSGI
``SCRIPT_NAME`` value to be that value, minus any trailing slashes you add, and
it will cause the ``PATH_INFO`` of any request which is prefixed with this
value to be stripped of the prefix. This is useful in proxying scenarios where
you wish to forward all traffic to a Waitress server but need URLs generated by
downstream applications to be prefixed with a particular path segment.
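As a sketch of the effect (the app and names here are illustrative, not part of Waitress), a downstream application that echoes the two adjusted values would, behind ``url_prefix='/foo'``, see a request for ``/foo/bar`` with ``SCRIPT_NAME`` set to ``/foo`` and ``PATH_INFO`` set to ``/bar``:

```python
def echo_app(environ, start_response):
    # Report the two environ values that the url_prefix setting adjusts.
    body = ('SCRIPT_NAME=%s PATH_INFO=%s' % (
        environ.get('SCRIPT_NAME', ''),
        environ.get('PATH_INFO', ''),
    )).encode('utf-8')
    start_response('200 OK', [('Content-Type', 'text/plain')])
    return [body]
```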
Using Paste's ``PrefixMiddleware`` to set ``wsgi.url_scheme``
-------------------------------------------------------------
If only some of the URLs generated by your application should use the
``https`` scheme (and some should use ``http``), you'll need to use Paste's
``PrefixMiddleware`` as well as change some configuration settings on your
proxy. To use ``PrefixMiddleware``, wrap your application before serving it
using Waitress:
.. code-block:: python
from waitress import serve
from paste.deploy.config import PrefixMiddleware
app = PrefixMiddleware(wsgiapp)
serve(app)
Once you wrap your application in the ``PrefixMiddleware``, the
middleware will notice certain headers sent from your proxy and will change
the ``wsgi.url_scheme`` and possibly other WSGI environment variables
appropriately.
Once your application is wrapped by the prefix middleware, you should
instruct your proxy server to send along the original ``Host`` header from
the client to your Waitress server, as well as sending along an
``X-Forwarded-Proto`` header with the appropriate value for
``wsgi.url_scheme``.
If your proxy accepts both HTTP and HTTPS URLs, and you want your application
to generate the appropriate URL based on the incoming scheme, also set up
your proxy to send an ``X-Forwarded-Proto`` header with the original URL scheme
along with each proxied request. For example, when using nginx:
.. code-block:: nginx
proxy_set_header X-Forwarded-Proto $scheme;
It's permitted to set an ``X-Forwarded-For`` header too; the
``PrefixMiddleware`` uses this to adjust other environment variables (consult
the Paste documentation for the full list). For the ``X-Forwarded-For``
header:
.. code-block:: nginx
proxy_set_header X-Forwarded-For $proxy_add_x_forwarded_for;
Note that you can wrap your application in the PrefixMiddleware declaratively
in a :term:`PasteDeploy` configuration file too, if your web framework uses
PasteDeploy-style configuration:
.. code-block:: ini
[app:myapp]
use = egg:mypackage#myapp
[filter:paste_prefix]
use = egg:PasteDeploy#prefix
[pipeline:main]
pipeline =
paste_prefix
myapp
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Note that you can also set ``PATH_INFO`` and ``SCRIPT_NAME`` using
PrefixMiddleware too (its original purpose, really) instead of using Waitress'
``url_prefix`` adjustment. See the PasteDeploy docs for more information.
View
@@ -3,14 +3,10 @@
waitress-serve
--------------
Waitress comes bundled with a thin command-line wrapper around the
``waitress.serve`` function called ``waitress-serve``. This is useful for
development, and in production situations where serving of static assets is
delegated to a reverse proxy, such as Nginx or Apache.
.. versionadded:: 0.8.4
.. note::
This feature is new as of Waitress 0.8.4.
Waitress comes bundled with a thin command-line wrapper around the ``waitress.serve`` function called ``waitress-serve``.
This is useful for development, and in production situations where serving of static assets is delegated to a reverse proxy, such as nginx or Apache.
``waitress-serve`` takes the very same :ref:`arguments <arguments>` as the
``waitress.serve`` function, but where the function's arguments have
View
@@ -0,0 +1,83 @@
.. _usage:
=====
Usage
=====
The following code will run waitress on port 8080 on all available IP addresses, both IPv4 and IPv6.
.. code-block:: python
from waitress import serve
serve(wsgiapp, listen='*:8080')
Press :kbd:`Ctrl-C` (or :kbd:`Ctrl-Break` on Windows) to exit the server.
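The examples in this section assume a WSGI callable named ``wsgiapp``. As a sketch (the name and body are illustrative, not part of Waitress), a minimal one might be:

```python
def wsgiapp(environ, start_response):
    # A minimal WSGI callable: set the status and headers, then
    # return the response body as an iterable of bytes.
    body = b'Hello from Waitress!'
    start_response('200 OK', [
        ('Content-Type', 'text/plain'),
        ('Content-Length', str(len(body))),
    ])
    return [body]
```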
The following will run waitress on port 8080 on all available IPv4 addresses, but not IPv6.
.. code-block:: python
from waitress import serve
serve(wsgiapp, host='0.0.0.0', port=8080)
By default, Waitress binds to any IPv4 address on port 8080.
You can omit the ``host`` and ``port`` arguments and just call ``serve`` with the WSGI app as a single argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp)
If you want to serve your application through a UNIX domain socket (to serve a downstream HTTP server/proxy such as nginx, lighttpd, and so on), call ``serve`` with the ``unix_socket`` argument:
.. code-block:: python
from waitress import serve
serve(wsgiapp, unix_socket='/path/to/unix.sock')
Needless to say, this configuration won't work on Windows.
Exceptions generated by your application will be shown on the console by
default. See :ref:`access-logging` to change this.
There's an entry point for :term:`PasteDeploy` (``egg:waitress#main``) that
lets you use Waitress's WSGI gateway from a configuration file, e.g.:
.. code-block:: ini
[server:main]
use = egg:waitress#main
listen = 127.0.0.1:8080
Using ``host`` and ``port`` is also supported:
.. code-block:: ini
[server:main]
host = 127.0.0.1
port = 8080
The :term:`PasteDeploy` syntax for UNIX domain sockets is analogous:
.. code-block:: ini
[server:main]
use = egg:waitress#main
unix_socket = /path/to/unix.sock
You can find more settings to tweak (arguments to ``waitress.serve`` or
equivalent settings in PasteDeploy) in :ref:`arguments`.
Additionally, there is a command line runner called ``waitress-serve``, which
can be used in development and in situations where the likes of
:term:`PasteDeploy` is not necessary:
.. code-block:: bash
# Listen on both IPv4 and IPv6 on port 8041
waitress-serve --listen=*:8041 myapp:wsgifunc
# Listen on only IPv4 on port 8041
waitress-serve --port=8041 myapp:wsgifunc
For more information on this, see :ref:`runner`.
View
@@ -55,12 +55,13 @@
'Programming Language :: Python :: 3.4',
'Programming Language :: Python :: 3.5',
'Programming Language :: Python :: 3.6',
"Programming Language :: Python :: Implementation :: CPython",
"Programming Language :: Python :: Implementation :: PyPy",
'Programming Language :: Python :: 3.7',
'Programming Language :: Python :: Implementation :: CPython',
'Programming Language :: Python :: Implementation :: PyPy',
'Natural Language :: English',
'Operating System :: OS Independent',
'Topic :: Internet :: WWW/HTTP',
"Topic :: Internet :: WWW/HTTP :: WSGI",
'Topic :: Internet :: WWW/HTTP :: WSGI',
],
url='https://github.com/Pylons/waitress',
packages=find_packages(),
View
@@ -1,6 +1,6 @@
[tox]
envlist =
py27,py34,py35,py36,pypy,
py27,py34,py35,py36,py37,pypy,
docs,
{py2,py3}-cover,coverage
@@ -13,6 +13,7 @@ basepython =
py35: python3.5
py36: python3.6
py37: python3.7
py38: python3.8
pypy: pypy
py2: python2.7
py3: python3.5
View
@@ -66,6 +66,9 @@ def slash_fixed_str(s):
s = '/' + s.lstrip('/').rstrip('/')
return s
def str_iftruthy(s):
return str(s) if s else None
class _str_marker(str):
pass
@@ -98,7 +101,7 @@ class Adjustments(object):
('max_request_header_size', int),
('max_request_body_size', int),
('expose_tracebacks', asbool),
('ident', str),
('ident', str_iftruthy),
('asyncore_loop_timeout', int),
('asyncore_use_poll', asbool),
('unix_socket', str),
View
@@ -11,7 +11,6 @@
# FOR A PARTICULAR PURPOSE.
#
##############################################################################
import asyncore
import socket
import threading
import time
@@ -29,12 +28,11 @@
WSGITask,
)
from waitress.utilities import (
logging_dispatcher,
InternalServerError,
)
from waitress.utilities import InternalServerError
from . import wasyncore
class HTTPChannel(logging_dispatcher, object):
class HTTPChannel(wasyncore.dispatcher, object):
"""
Setting self.requests = [somerequest] prevents more requests from being
received until the out buffers have been flushed.
@@ -76,9 +74,9 @@ def __init__(
# outbuf_lock used to access any outbuf
self.outbuf_lock = threading.Lock()
asyncore.dispatcher.__init__(self, sock, map=map)
wasyncore.dispatcher.__init__(self, sock, map=map)
# Don't let asyncore.dispatcher throttle self.addr on us.
# Don't let wasyncore.dispatcher throttle self.addr on us.
self.addr = addr
def any_outbuf_has_data(self):
@@ -281,23 +279,23 @@ def handle_close(self):
self.logger.exception(
'Unknown exception while trying to close outbuf')
self.connected = False
asyncore.dispatcher.close(self)
wasyncore.dispatcher.close(self)
def add_channel(self, map=None):
"""See asyncore.dispatcher
"""See wasyncore.dispatcher
This hook keeps track of opened channels.
"""
asyncore.dispatcher.add_channel(self, map)
wasyncore.dispatcher.add_channel(self, map)
self.server.active_channels[self._fileno] = self
def del_channel(self, map=None):
"""See asyncore.dispatcher
"""See wasyncore.dispatcher
This hook keeps track of closed channels.
"""
fd = self._fileno # next line sets this to None
asyncore.dispatcher.del_channel(self, map)
wasyncore.dispatcher.del_channel(self, map)
ac = self.server.active_channels
if fd in ac:
del ac[fd]
View
@@ -1,3 +1,4 @@
import os
import sys
import types
import platform
@@ -8,6 +9,11 @@
except ImportError: # pragma: no cover
from urllib import parse as urlparse
try:
import fcntl
except ImportError: # pragma: no cover
fcntl = None # windows
# True if we are running on Python 3.
PY2 = sys.version_info[0] == 2
PY3 = sys.version_info[0] == 3
@@ -138,3 +144,30 @@ def exec_(code, globs=None, locs=None):
RuntimeWarning
)
HAS_IPV6 = False
def set_nonblocking(fd): # pragma: no cover
if PY3 and sys.version_info[1] >= 5:
os.set_blocking(fd, False)
elif fcntl is None:
raise RuntimeError('no fcntl module present')
else:
flags = fcntl.fcntl(fd, fcntl.F_GETFL, 0)
flags = flags | os.O_NONBLOCK
fcntl.fcntl(fd, fcntl.F_SETFL, flags)
if PY3:
ResourceWarning = ResourceWarning
else:
ResourceWarning = UserWarning
def qualname(cls):
if PY3:
return cls.__qualname__
return cls.__name__
try:
import thread
except ImportError:
# py3
import _thread as thread
View
@@ -274,7 +274,7 @@ def get_header_lines(header):
for line in lines:
if line.startswith((b' ', b'\t')):
if not r:
# http://corte.si/posts/code/pathod/pythonservers/index.html
# https://corte.si/posts/code/pathod/pythonservers/index.html
raise ParsingError('Malformed header line "%s"' % tostr(line))
r[-1] += line
else:
@@ -12,7 +12,6 @@
#
##############################################################################
import asyncore
import os
import os.path
import socket
@@ -22,14 +21,13 @@
from waitress.adjustments import Adjustments
from waitress.channel import HTTPChannel
from waitress.task import ThreadedTaskDispatcher
from waitress.utilities import (
cleanup_unix_socket,
logging_dispatcher,
)
from waitress.utilities import cleanup_unix_socket
from waitress.compat import (
IPPROTO_IPV6,
IPV6_V6ONLY,
)
from . import wasyncore
def create_server(application,
map=None,
@@ -98,10 +96,10 @@ def create_server(application,
# This class is only ever used if we have multiple listen sockets. It allows
# the serve() API to call .run() which starts the asyncore loop, and catches
# the serve() API to call .run() which starts the wasyncore loop, and catches
# SystemExit/KeyboardInterrupt so that it can attempt to cleanly shut down.
class MultiSocketServer(object):
asyncore = asyncore # test shim
asyncore = wasyncore # test shim
def __init__(self,
map=None,
@@ -131,15 +129,19 @@ def run(self):
use_poll=self.adj.asyncore_use_poll,
)
except (SystemExit, KeyboardInterrupt):
self.task_dispatcher.shutdown()
self.close()
def close(self):
self.task_dispatcher.shutdown()
wasyncore.close_all(self.map)
class BaseWSGIServer(logging_dispatcher, object):
class BaseWSGIServer(wasyncore.dispatcher, object):
channel_class = HTTPChannel
next_channel_cleanup = 0
socketmod = socket # test shim
asyncore = asyncore # test shim
asyncore = wasyncore # test shim
def __init__(self,
application,
@@ -155,7 +157,7 @@ def __init__(self,
adj = Adjustments(**kw)
if map is None:
# use a nonglobal socket map by default to hopefully prevent
# conflicts with apps and libs that use the asyncore global socket
# conflicts with apps and libs that use the wasyncore global socket
# map ala https://github.com/Pylons/waitress/issues/63
map = {}
if sockinfo is None:
@@ -191,21 +193,35 @@ def bind_server_socket(self):
def get_server_name(self, ip):
"""Given an IP or hostname, try to determine the server name."""
if ip:
server_name = str(ip)
else:
server_name = str(self.socketmod.gethostname())
# Convert to a host name if necessary.
for c in server_name:
if c != '.' and not c.isdigit():
return server_name
try:
if server_name == '0.0.0.0' or server_name == '::':
if not ip:
raise ValueError('Requires an IP to get the server name')
server_name = str(ip)
# If we are bound to all IPs, just return the current hostname; only
# fall back to "localhost" if we fail to get the hostname
if server_name == '0.0.0.0' or server_name == '::':
try:
return str(self.socketmod.gethostname())
except (socket.error, UnicodeDecodeError): # pragma: no cover
# We also deal with UnicodeDecodeError in case of Windows with
# non-ASCII hostname
return 'localhost'
# Now let's try and convert the IP address to a proper hostname
try:
server_name = self.socketmod.gethostbyaddr(server_name)[0]
except socket.error: # pragma: no cover
except (socket.error, UnicodeDecodeError): # pragma: no cover
# We also deal with UnicodeDecodeError in case of Windows with
# non-ASCII hostname
pass
# If it contains an IPv6 literal, make sure to surround it with
# brackets
if ':' in server_name and '[' not in server_name:
server_name = '[{}]'.format(server_name)
return server_name
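The bracket-wrapping step at the end of `get_server_name` ensures an IPv6 literal can be used in a URL or Host header, where it must be enclosed in square brackets. A standalone sketch of that rule (a simplified mirror of the logic, not the waitress API itself):

```python
def wrap_ipv6_literal(server_name):
    # An IPv6 literal must be bracketed for use in a URL,
    # e.g. http://[2001:db8::1]:8080/ -- hostnames are left alone,
    # and already-bracketed literals are not double-wrapped.
    if ':' in server_name and '[' not in server_name:
        return '[{}]'.format(server_name)
    return server_name

print(wrap_ipv6_literal('2001:db8::ffff'))  # -> [2001:db8::ffff]
print(wrap_ipv6_literal('example.com'))     # -> example.com
```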
def getsockname(self):
@@ -286,6 +302,10 @@ def maintenance(self, now):
def print_listen(self, format_str): # pragma: nocover
print(format_str.format(self.effective_host, self.effective_port))
def close(self):
self.trigger.close()
return wasyncore.dispatcher.close(self)
class TcpWSGIServer(BaseWSGIServer):
@@ -353,5 +373,8 @@ def getsockname(self):
def fix_addr(self, addr):
return ('localhost', None)
def get_server_name(self, ip):
return 'localhost'
# Compatibility alias.
WSGIServer = TcpWSGIServer
@@ -155,6 +155,7 @@ class Task(object):
content_length = None
content_bytes_written = 0
logged_write_excess = False
logged_write_no_body = False
complete = False
chunked_response = False
logger = logger
@@ -182,6 +183,13 @@ def service(self):
finally:
pass
@property
def has_body(self):
return not (self.status.startswith('1') or
self.status.startswith('204') or
self.status.startswith('304')
)
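The `has_body` property encodes the RFC 7230 rule that 1xx, 204, and 304 responses never carry a message body. The same check can be sketched standalone (illustrative only, not the waitress class):

```python
def has_body(status):
    # status is the WSGI status string, e.g. '200 OK'.
    # 1xx (informational), 204 No Content, and 304 Not Modified
    # responses must not include a message body.
    return not (status.startswith('1') or
                status.startswith('204') or
                status.startswith('304'))

print(has_body('200 OK'))            # True
print(has_body('204 No Content'))    # False
print(has_body('304 Not Modified'))  # False
print(has_body('100 Continue'))      # False
```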
def cancel(self):
self.close_on_finish = True
@@ -192,30 +200,41 @@ def build_response_header(self):
version = self.version
# Figure out whether the connection should be closed.
connection = self.request.headers.get('CONNECTION', '').lower()
response_headers = self.response_headers
response_headers = []
content_length_header = None
date_header = None
server_header = None
connection_close_header = None
for i, (headername, headerval) in enumerate(response_headers):
for (headername, headerval) in self.response_headers:
headername = '-'.join(
[x.capitalize() for x in headername.split('-')]
)
if headername == 'Content-Length':
content_length_header = headerval
if self.has_body:
content_length_header = headerval
else:
continue # pragma: no cover
if headername == 'Date':
date_header = headerval
if headername == 'Server':
server_header = headerval
if headername == 'Connection':
connection_close_header = headerval.lower()
# replace with properly capitalized version
response_headers[i] = (headername, headerval)
response_headers.append((headername, headerval))
if content_length_header is None and self.content_length is not None:
if (
content_length_header is None and
self.content_length is not None and
self.has_body
):
content_length_header = str(self.content_length)
self.response_headers.append(
response_headers.append(
('Content-Length', content_length_header)
)
@@ -238,8 +257,13 @@ def close_on_finish():
close_on_finish()
if not content_length_header:
response_headers.append(('Transfer-Encoding', 'chunked'))
self.chunked_response = True
# RFC 7230: MUST NOT send Transfer-Encoding or Content-Length
# for any response with a status code of 1xx, 204 or 304.
if self.has_body:
response_headers.append(('Transfer-Encoding', 'chunked'))
self.chunked_response = True
if not self.close_on_finish:
close_on_finish()
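When no Content-Length is available and the response may have a body, the code above falls back to chunked transfer coding. The chunk framing itself is simple: each chunk is its payload length in hex, CRLF, the payload, CRLF, and a zero-length chunk terminates the stream. An illustrative encoder (not waitress's internal one):

```python
def encode_chunk(data):
    # hex length, CRLF, payload, CRLF; a zero-length chunk
    # marks the end of the chunked body.
    return b'%x\r\n%s\r\n' % (len(data), data)

body = encode_chunk(b'hello') + encode_chunk(b'')
print(body)  # b'5\r\nhello\r\n0\r\n\r\n'
```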
@@ -250,27 +274,38 @@ def close_on_finish():
# Set the Server and Date field, if not yet specified. This is needed
# if the server is used as a proxy.
ident = self.channel.server.adj.ident
if not server_header:
response_headers.append(('Server', ident))
if ident:
response_headers.append(('Server', ident))
else:
response_headers.append(('Via', ident))
response_headers.append(('Via', ident or 'waitress'))
if not date_header:
response_headers.append(('Date', build_http_date(self.start_time)))
self.response_headers = response_headers
first_line = 'HTTP/%s %s' % (self.version, self.status)
# NB: sorting headers needs to preserve same-named-header order
# as per RFC 2616 section 4.2; thus the key=lambda x: x[0] here;
# rely on stable sort to keep relative position of same-named headers
next_lines = ['%s: %s' % hv for hv in sorted(
self.response_headers, key=lambda x: x[0])]
self.response_headers, key=lambda x: x[0])]
lines = [first_line] + next_lines
res = '%s\r\n\r\n' % '\r\n'.join(lines)
return tobytes(res)
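The sorting comment above leans on the fact that Python's `sorted` is stable: keying on the header name alone reorders headers by name while same-named headers keep their relative order, as the RFC requires. A quick demonstration:

```python
headers = [('Set-Cookie', 'a=1'), ('Date', 'x'), ('Set-Cookie', 'b=2')]
# Sorting by name only; the two Set-Cookie values must stay in order.
ordered = sorted(headers, key=lambda x: x[0])
print(ordered)
# [('Date', 'x'), ('Set-Cookie', 'a=1'), ('Set-Cookie', 'b=2')]
```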
def remove_content_length_header(self):
for i, (header_name, header_value) in enumerate(self.response_headers):
response_headers = []
for header_name, header_value in self.response_headers:
if header_name.lower() == 'content-length':
del self.response_headers[i]
continue # pragma: nocover
response_headers.append((header_name, header_value))
self.response_headers = response_headers
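The rewrite of `remove_content_length_header` above builds a fresh list instead of calling `del` on the list being enumerated, which can skip an element that follows a deleted one. A sketch of the filtering approach with a hypothetical helper name:

```python
def drop_header(headers, name):
    # Build a new list rather than mutating while iterating,
    # so adjacent matches are not skipped.
    return [(k, v) for (k, v) in headers if k.lower() != name.lower()]

headers = [('Content-Length', '70'),
           ('Content-Length', '80'),  # adjacent duplicate: del-in-loop would miss this
           ('Content-Type', 'text/html')]
print(drop_header(headers, 'content-length'))
# [('Content-Type', 'text/html')]
```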
def start(self):
self.start_time = time.time()
@@ -291,7 +326,8 @@ def write(self, data):
rh = self.build_response_header()
channel.write_soon(rh)
self.wrote_header = True
if data:
if data and self.has_body:
towrite = data
cl = self.content_length
if self.chunked_response:
@@ -308,6 +344,18 @@ def write(self, data):
self.logged_write_excess = True
if towrite:
channel.write_soon(towrite)
else:
# Cheat, and tell the application we have written all of the bytes,
# even though the response shouldn't have a body and we are
# ignoring it entirely.
self.content_bytes_written += len(data)
if not self.logged_write_no_body:
self.logger.warning(
'application-written content was ignored due to HTTP '
'response that may not contain a message-body: (%s)' % self.status)
self.logged_write_no_body = True
class ErrorTask(Task):
""" An error task produces an error response
@@ -530,6 +578,7 @@ def get_environment(self):
environ['wsgi.run_once'] = False
environ['wsgi.input'] = request.get_body_stream()
environ['wsgi.file_wrapper'] = ReadOnlyFileBasedBuffer
environ['wsgi.input_terminated'] = True # wsgi.input is EOF terminated
self.environ = environ
return environ
@@ -219,6 +219,17 @@ def test_ipv4_disabled(self):
def test_ipv6_disabled(self):
self.assertRaises(ValueError, self._makeOne, ipv6=False, listen="[::]:8080")
def test_server_header_removable(self):
inst = self._makeOne(ident=None)
self.assertEqual(inst.ident, None)
inst = self._makeOne(ident='')
self.assertEqual(inst.ident, None)
inst = self._makeOne(ident='specific_header')
self.assertEqual(inst.ident, 'specific_header')
class TestCLI(unittest.TestCase):
def parse(self, argv):
View
@@ -155,7 +155,7 @@ def test_date_and_server(self):
self.assertTrue(headers.get('date'))
def test_bad_host_header(self):
# http://corte.si/posts/code/pathod/pythonservers/index.html
# https://corte.si/posts/code/pathod/pythonservers/index.html
to_send = ("GET / HTTP/1.0\n"
" Host: 0\n\n")
to_send = tobytes(to_send)
@@ -14,6 +14,7 @@ def test_it(self):
self.assertEqual(result, None)
self.assertEqual(server.ran, True)
class Test_serve_paste(unittest.TestCase):
def _callFUT(self, app, **kw):
@@ -286,7 +286,7 @@ def test_get_header_lines_tabbed(self):
self.assertEqual(result, [b'slam\tslim'])
def test_get_header_lines_malformed(self):
# http://corte.si/posts/code/pathod/pythonservers/index.html
# https://corte.si/posts/code/pathod/pythonservers/index.html
from waitress.parser import ParsingError
self.assertRaises(ParsingError,
self._callFUT, b' Host: localhost\r\n\r\n')
@@ -10,14 +10,15 @@ def _makeOne(self, application=dummy_app, host='127.0.0.1', port=0,
_dispatcher=None, adj=None, map=None, _start=True,
_sock=None, _server=None):
from waitress.server import create_server
return create_server(
self.inst = create_server(
application,
host=host,
port=port,
map=map,
_dispatcher=_dispatcher,
_start=_start,
_sock=_sock)
return self.inst
def _makeOneWithMap(self, adj=None, _start=True, host='127.0.0.1',
port=0, app=dummy_app):
@@ -40,15 +41,21 @@ def _makeOneWithMulti(self, adj=None, _start=True,
task_dispatcher = DummyTaskDispatcher()
map = {}
from waitress.server import create_server
return create_server(
self.inst = create_server(
app,
listen=listen,
map=map,
_dispatcher=task_dispatcher,
_start=_start,
_sock=sock)
return self.inst
def tearDown(self):
if self.inst is not None:
self.inst.close()
def test_ctor_app_is_None(self):
self.inst = None
self.assertRaises(ValueError, self._makeOneWithMap, app=None)
def test_ctor_start_true(self):
@@ -67,8 +74,7 @@ def test_ctor_start_false(self):
def test_get_server_name_empty(self):
inst = self._makeOneWithMap(_start=False)
result = inst.get_server_name('')
self.assertTrue(result)
self.assertRaises(ValueError, inst.get_server_name, '')
def test_get_server_name_with_ip(self):
inst = self._makeOneWithMap(_start=False)
@@ -83,7 +89,17 @@ def test_get_server_name_with_hostname(self):
def test_get_server_name_0000(self):
inst = self._makeOneWithMap(_start=False)
result = inst.get_server_name('0.0.0.0')
self.assertEqual(result, 'localhost')
self.assertTrue(len(result) != 0)
def test_get_server_name_double_colon(self):
inst = self._makeOneWithMap(_start=False)
result = inst.get_server_name('::')
self.assertTrue(len(result) != 0)
def test_get_server_name_ipv6(self):
inst = self._makeOneWithMap(_start=False)
result = inst.get_server_name('2001:DB8::ffff')
self.assertEqual('[2001:DB8::ffff]', result)
def test_get_server_multi(self):
inst = self._makeOneWithMulti()
@@ -105,6 +121,7 @@ def test_run_base_server(self):
def test_pull_trigger(self):
inst = self._makeOneWithMap(_start=False)
inst.trigger.close()
inst.trigger = DummyTrigger()
inst.pull_trigger()
self.assertEqual(inst.trigger.pulled, True)
@@ -215,10 +232,10 @@ def test_backward_compatibility(self):
from waitress.server import WSGIServer, TcpWSGIServer
from waitress.adjustments import Adjustments
self.assertTrue(WSGIServer is TcpWSGIServer)
inst = WSGIServer(None, _start=False, port=1234)
self.inst = WSGIServer(None, _start=False, port=1234)
# Ensure the adjustment was actually applied.
self.assertNotEqual(Adjustments.port, 1234)
self.assertEqual(inst.adj.port, 1234)
self.assertEqual(self.inst.adj.port, 1234)
if hasattr(socket, 'AF_UNIX'):
@@ -227,7 +244,7 @@ class TestUnixWSGIServer(unittest.TestCase):
def _makeOne(self, _start=True, _sock=None):
from waitress.server import create_server
return create_server(
self.inst = create_server(
dummy_app,
map={},
_start=_start,
@@ -236,6 +253,10 @@ def _makeOne(self, _start=True, _sock=None):
unix_socket=self.unix_socket,
unix_socket_perms='600'
)
return self.inst
def tearDown(self):
self.inst.close()
def _makeDummy(self, *args, **kwargs):
sock = DummySock(*args, **kwargs)
@@ -268,13 +289,13 @@ def test_handle_accept(self):
def test_creates_new_sockinfo(self):
from waitress.server import UnixWSGIServer
inst = UnixWSGIServer(
self.inst = UnixWSGIServer(
dummy_app,
unix_socket=self.unix_socket,
unix_socket_perms='600'
)
self.assertEqual(inst.sockinfo[0], socket.AF_UNIX)
self.assertEqual(self.inst.sockinfo[0], socket.AF_UNIX)
class DummySock(object):
accepted = False
@@ -317,6 +338,9 @@ def listen(self, num):
def getsockname(self):
return self.bound
def close(self):
pass
class DummyTaskDispatcher(object):
def __init__(self):
@@ -358,6 +382,9 @@ class DummyTrigger(object):
def pull_trigger(self):
self.pulled = True
def close(self):
pass
class DummyLogger(object):
def __init__(self):
@@ -211,6 +211,57 @@ def test_build_response_header_v11_200_no_content_length(self):
self.assertEqual(inst.close_on_finish, True)
self.assertTrue(('Connection', 'close') in inst.response_headers)
def test_build_response_header_v11_204_no_content_length_or_transfer_encoding(self):
# RFC 7230: MUST NOT send Transfer-Encoding or Content-Length
# for any response with a status code of 1xx or 204.
inst = self._makeOne()
inst.request = DummyParser()
inst.version = '1.1'
inst.status = '204 No Content'
result = inst.build_response_header()
lines = filter_lines(result)
self.assertEqual(len(lines), 4)
self.assertEqual(lines[0], b'HTTP/1.1 204 No Content')
self.assertEqual(lines[1], b'Connection: close')
self.assertTrue(lines[2].startswith(b'Date:'))
self.assertEqual(lines[3], b'Server: waitress')
self.assertEqual(inst.close_on_finish, True)
self.assertTrue(('Connection', 'close') in inst.response_headers)
def test_build_response_header_v11_1xx_no_content_length_or_transfer_encoding(self):
# RFC 7230: MUST NOT send Transfer-Encoding or Content-Length
# for any response with a status code of 1xx or 204.
inst = self._makeOne()
inst.request = DummyParser()
inst.version = '1.1'
inst.status = '100 Continue'
result = inst.build_response_header()
lines = filter_lines(result)
self.assertEqual(len(lines), 4)
self.assertEqual(lines[0], b'HTTP/1.1 100 Continue')
self.assertEqual(lines[1], b'Connection: close')
self.assertTrue(lines[2].startswith(b'Date:'))
self.assertEqual(lines[3], b'Server: waitress')
self.assertEqual(inst.close_on_finish, True)
self.assertTrue(('Connection', 'close') in inst.response_headers)
def test_build_response_header_v11_304_no_content_length_or_transfer_encoding(self):
# RFC 7230: MUST NOT send Transfer-Encoding or Content-Length
# for any response with a status code of 1xx, 204 or 304.
inst = self._makeOne()
inst.request = DummyParser()
inst.version = '1.1'
inst.status = '304 Not Modified'
result = inst.build_response_header()
lines = filter_lines(result)
self.assertEqual(len(lines), 4)
self.assertEqual(lines[0], b'HTTP/1.1 304 Not Modified')
self.assertEqual(lines[1], b'Connection: close')
self.assertTrue(lines[2].startswith(b'Date:'))
self.assertEqual(lines[3], b'Server: waitress')
self.assertEqual(inst.close_on_finish, True)
self.assertTrue(('Connection', 'close') in inst.response_headers)
def test_build_response_header_via_added(self):
inst = self._makeOne()
inst.request = DummyParser()
@@ -257,6 +308,12 @@ def test_remove_content_length_header(self):
inst.remove_content_length_header()
self.assertEqual(inst.response_headers, [])
def test_remove_content_length_header_with_other(self):
inst = self._makeOne()
inst.response_headers = [('Content-Length', '70'), ('Content-Type', 'text/html')]
inst.remove_content_length_header()
self.assertEqual(inst.response_headers, [('Content-Type', 'text/html')])
def test_start(self):
inst = self._makeOne()
inst.start()
@@ -527,6 +584,34 @@ def app(environ, start_response):
self.assertEqual(inst.close_on_finish, True)
self.assertEqual(len(inst.logger.logged), 0)
def test_execute_app_without_body_204_logged(self):
def app(environ, start_response):
start_response('204 No Content', [('Content-Length', '3')])
return [b'abc']
inst = self._makeOne()
inst.channel.server.application = app
inst.logger = DummyLogger()
inst.execute()
self.assertEqual(inst.close_on_finish, True)
self.assertNotIn(b'abc', inst.channel.written)
self.assertNotIn(b'Content-Length', inst.channel.written)
self.assertNotIn(b'Transfer-Encoding', inst.channel.written)
self.assertEqual(len(inst.logger.logged), 1)
def test_execute_app_without_body_304_logged(self):
def app(environ, start_response):
start_response('304 Not Modified', [('Content-Length', '3')])
return [b'abc']
inst = self._makeOne()
inst.channel.server.application = app
inst.logger = DummyLogger()
inst.execute()
self.assertEqual(inst.close_on_finish, True)
self.assertNotIn(b'abc', inst.channel.written)
self.assertNotIn(b'Content-Length', inst.channel.written)
self.assertNotIn(b'Transfer-Encoding', inst.channel.written)
self.assertEqual(len(inst.logger.logged), 1)
def test_execute_app_returns_closeable(self):
class closeable(list):
def close(self):
@@ -670,8 +755,8 @@ def test_get_environment_values(self):
'PATH_INFO', 'QUERY_STRING', 'REMOTE_ADDR', 'REQUEST_METHOD',
'SCRIPT_NAME', 'SERVER_NAME', 'SERVER_PORT', 'SERVER_PROTOCOL',
'SERVER_SOFTWARE', 'wsgi.errors', 'wsgi.file_wrapper', 'wsgi.input',
'wsgi.multiprocess', 'wsgi.multithread', 'wsgi.run_once',
'wsgi.url_scheme', 'wsgi.version'])
'wsgi.input_terminated', 'wsgi.multiprocess', 'wsgi.multithread',
'wsgi.run_once', 'wsgi.url_scheme', 'wsgi.version'])
self.assertEqual(environ['REQUEST_METHOD'], 'GET')
self.assertEqual(environ['SERVER_PORT'], '80')
@@ -693,6 +778,7 @@ def test_get_environment_values(self):
self.assertEqual(environ['wsgi.multiprocess'], False)
self.assertEqual(environ['wsgi.run_once'], False)
self.assertEqual(environ['wsgi.input'], 'stream')
self.assertEqual(environ['wsgi.input_terminated'], True)
self.assertEqual(inst.environ, environ)
def test_get_environment_values_w_scheme_override_untrusted(self):
@@ -733,8 +819,8 @@ def test_get_environment_values_w_scheme_override_trusted(self):
'PATH_INFO', 'QUERY_STRING', 'REMOTE_ADDR', 'REQUEST_METHOD',
'SCRIPT_NAME', 'SERVER_NAME', 'SERVER_PORT', 'SERVER_PROTOCOL',
'SERVER_SOFTWARE', 'wsgi.errors', 'wsgi.file_wrapper', 'wsgi.input',
'wsgi.multiprocess', 'wsgi.multithread', 'wsgi.run_once',
'wsgi.url_scheme', 'wsgi.version'])
'wsgi.input_terminated', 'wsgi.multiprocess', 'wsgi.multithread',
'wsgi.run_once', 'wsgi.url_scheme', 'wsgi.version'])
self.assertEqual(environ['REQUEST_METHOD'], 'GET')
self.assertEqual(environ['SERVER_PORT'], '80')
@@ -756,6 +842,7 @@ def test_get_environment_values_w_scheme_override_trusted(self):
self.assertEqual(environ['wsgi.multiprocess'], False)
self.assertEqual(environ['wsgi.run_once'], False)
self.assertEqual(environ['wsgi.input'], 'stream')
self.assertEqual(environ['wsgi.input_terminated'], True)
self.assertEqual(inst.environ, environ)
def test_get_environment_values_w_bogus_scheme_override(self):
@@ -8,15 +8,19 @@ class Test_trigger(unittest.TestCase):
def _makeOne(self, map):
from waitress.trigger import trigger
return trigger(map)
self.inst = trigger(map)
return self.inst
def tearDown(self):
self.inst.close() # prevent __del__ warning from file_dispatcher
def test__close(self):
map = {}
inst = self._makeOne(map)
fd = os.open(os.path.abspath(__file__), os.O_RDONLY)
inst._fds = (fd,)
fd1, fd2 = inst._fds
inst.close()
self.assertRaises(OSError, os.read, fd, 1)
self.assertRaises(OSError, os.read, fd1, 1)
self.assertRaises(OSError, os.read, fd2, 1)
def test__physical_pull(self):
map = {}
@@ -89,21 +89,6 @@ def test_double_crfl(self):
def test_mixed(self):
self.assertEqual(self._callFUT(b'\n\n00\r\n\r\n'), 2)
class Test_logging_dispatcher(unittest.TestCase):
def _makeOne(self):
from waitress.utilities import logging_dispatcher
return logging_dispatcher(map={})
def test_log_info(self):
import logging
inst = self._makeOne()
logger = DummyLogger()
inst.logger = logger
inst.log_info('message', 'warning')
self.assertEqual(logger.severity, logging.WARN)
self.assertEqual(logger.message, 'message')
class TestBadRequest(unittest.TestCase):
def _makeOne(self):
@@ -114,8 +99,3 @@ def test_it(self):
inst = self._makeOne()
self.assertEqual(inst.body, 1)
class DummyLogger(object):
def log(self, severity, message):
self.severity = severity
self.message = message
@@ -12,12 +12,13 @@
#
##############################################################################
import asyncore
import os
import socket
import errno
import threading
from . import wasyncore
# Wake up a call to select() running in the main thread.
#
# This is useful in a context where you are using Medusa's I/O
@@ -61,7 +62,7 @@ def __init__(self):
self.lock = threading.Lock()
# List of no-argument callbacks to invoke when the trigger is
# pulled. These run in the thread running the asyncore mainloop,
# pulled. These run in the thread running the wasyncore mainloop,
# regardless of which thread pulls the trigger.
self.thunks = []
@@ -77,7 +78,7 @@ def handle_connect(self):
def handle_close(self):
self.close()
# Override the asyncore close() method, because it doesn't know about
# Override the wasyncore close() method, because it doesn't know about
# (so can't close) all the gimmicks we have open. Subclass must
# supply a _close() method to do platform-specific closing work. _close()
# will be called iff we're not already closed.
@@ -103,26 +104,27 @@ def handle_read(self):
try:
thunk()
except:
nil, t, v, tbinfo = asyncore.compact_traceback()
nil, t, v, tbinfo = wasyncore.compact_traceback()
self.log_info(
'exception in trigger thunk: (%s:%s %s)' %
(t, v, tbinfo))
self.thunks = []
if os.name == 'posix':
class trigger(_triggerbase, asyncore.file_dispatcher):
class trigger(_triggerbase, wasyncore.file_dispatcher):
kind = "pipe"
def __init__(self, map):
_triggerbase.__init__(self)
r, self.trigger = self._fds = os.pipe()
asyncore.file_dispatcher.__init__(self, r, map=map)
wasyncore.file_dispatcher.__init__(self, r, map=map)
def _close(self):
for fd in self._fds:
os.close(fd)
self._fds = []
wasyncore.file_dispatcher.close(self)
def _physical_pull(self):
os.write(self.trigger, b'x')
@@ -131,20 +133,20 @@ def _physical_pull(self):
# Windows version; uses just sockets, because a pipe isn't select'able
# on Windows.
class trigger(_triggerbase, asyncore.dispatcher):
class trigger(_triggerbase, wasyncore.dispatcher):
kind = "loopback"
def __init__(self, map):
_triggerbase.__init__(self)
# Get a pair of connected sockets. The trigger is the 'w'
# end of the pair, which is connected to 'r'. 'r' is put
# in the asyncore socket map. "pulling the trigger" then
# in the wasyncore socket map. "pulling the trigger" then
# means writing something on w, which will wake up r.
w = socket.socket()
# Disable buffering -- pulling the trigger sends 1 byte,
# and we want that sent immediately, to wake up asyncore's
# and we want that sent immediately, to wake up wasyncore's
# select() ASAP.
w.setsockopt(socket.IPPROTO_TCP, socket.TCP_NODELAY, 1)
@@ -184,10 +186,10 @@ def __init__(self, map):
# sleep() here, but it didn't appear to help or hurt.
a.close()
r, addr = a.accept() # r becomes asyncore's (self.)socket
r, addr = a.accept() # r becomes wasyncore's (self.)socket
a.close()
self.trigger = w
asyncore.dispatcher.__init__(self, r, map=map)
wasyncore.dispatcher.__init__(self, r, map=map)
def _close(self):
# self.socket is r, and self.trigger is w, from __init__
@@ -14,7 +14,6 @@
"""Utility functions
"""
import asyncore
import errno
import logging
import os
@@ -170,16 +169,6 @@ def parse_http_date(d):
return 0
return retval
class logging_dispatcher(asyncore.dispatcher):
logger = logger
def log_info(self, message, type='info'):
severity = {
'info': logging.INFO,
'warning': logging.WARN,
'error': logging.ERROR,
}
self.logger.log(severity.get(type, logging.INFO), message)
def cleanup_unix_socket(path):
try: