updates to tornado 3.1, also separates hellotornado.py from app.py

commit dc49b9b8d8b43f0f6accf3877425fd56ef8fb3a4 1 parent dcb11d6
@gfidente authored
Showing with 13,205 additions and 4,416 deletions.
  1. +1 −1  .openshift/action_hooks/stop
  2. +7 −20 README.creole
  3. +18 −0 diy/app.py
  4. +5 −19 diy/hellotornado.py
  5. +3 −0  misc/virtenv/lib/python2.6/site-packages/concurrent/__init__.py
  6. +18 −0 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/__init__.py
  7. +574 −0 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/_base.py
  8. +101 −0 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/_compat.py
  9. +363 −0 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/process.py
  10. +138 −0 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/thread.py
  11. +19 −0 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/PKG-INFO
  12. +16 −0 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/SOURCES.txt
  13. 0  ...on2.6/site-packages/{tornado-2.4.1-py2.6.egg-info → futures-2.1.4-py2.6.egg-info}/dependency_links.txt
  14. +24 −0 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/installed-files.txt
  15. +1 −0  misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/not-zip-safe
  16. +2 −0  misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/top_level.txt
  17. +24 −0 misc/virtenv/lib/python2.6/site-packages/futures/__init__.py
  18. +1 −0  misc/virtenv/lib/python2.6/site-packages/futures/process.py
  19. +1 −0  misc/virtenv/lib/python2.6/site-packages/futures/thread.py
  20. +0 −11 misc/virtenv/lib/python2.6/site-packages/tornado-2.4.1-py2.6.egg-info/PKG-INFO
  21. +135 −0 misc/virtenv/lib/python2.6/site-packages/tornado-3.1-py2.6.egg-info/PKG-INFO
  22. +18 −5 ...nv/lib/python2.6/site-packages/{tornado-2.4.1-py2.6.egg-info → tornado-3.1-py2.6.egg-info}/SOURCES.txt
  23. +1 −0  misc/virtenv/lib/python2.6/site-packages/tornado-3.1-py2.6.egg-info/dependency_links.txt
  24. +30 −6 ...ython2.6/site-packages/{tornado-2.4.1-py2.6.egg-info → tornado-3.1-py2.6.egg-info}/installed-files.txt
  25. 0  .../lib/python2.6/site-packages/{tornado-2.4.1-py2.6.egg-info → tornado-3.1-py2.6.egg-info}/top_level.txt
  26. +5 −5 misc/virtenv/lib/python2.6/site-packages/tornado/__init__.py
  27. +519 −335 misc/virtenv/lib/python2.6/site-packages/tornado/auth.py
  28. +56 −38 misc/virtenv/lib/python2.6/site-packages/tornado/autoreload.py
  29. +265 −0 misc/virtenv/lib/python2.6/site-packages/tornado/concurrent.py
  30. +86 −46 misc/virtenv/lib/python2.6/site-packages/tornado/curl_httpclient.py
  31. +0 −238 misc/virtenv/lib/python2.6/site-packages/tornado/database.py
  32. +97 −68 misc/virtenv/lib/python2.6/site-packages/tornado/escape.py
  33. +210 −59 misc/virtenv/lib/python2.6/site-packages/tornado/gen.py
  34. +218 −163 misc/virtenv/lib/python2.6/site-packages/tornado/httpclient.py
  35. +158 −108 misc/virtenv/lib/python2.6/site-packages/tornado/httpserver.py
  36. +196 −70 misc/virtenv/lib/python2.6/site-packages/tornado/httputil.py
  37. +478 −326 misc/virtenv/lib/python2.6/site-packages/tornado/ioloop.py
  38. +432 −201 misc/virtenv/lib/python2.6/site-packages/tornado/iostream.py
  39. +125 −121 misc/virtenv/lib/python2.6/site-packages/tornado/locale.py
  40. +205 −0 misc/virtenv/lib/python2.6/site-packages/tornado/log.py
  41. +346 −230 misc/virtenv/lib/python2.6/site-packages/tornado/netutil.py
  42. +273 −215 misc/virtenv/lib/python2.6/site-packages/tornado/options.py
  43. +12 −1 misc/virtenv/lib/python2.6/site-packages/tornado/platform/auto.py
  44. +75 −0 misc/virtenv/lib/python2.6/site-packages/tornado/platform/caresresolver.py
  45. +7 −5 misc/virtenv/lib/python2.6/site-packages/tornado/platform/common.py
  46. +26 −0 misc/virtenv/lib/python2.6/site-packages/tornado/platform/epoll.py
  47. +6 −2 misc/virtenv/lib/python2.6/site-packages/tornado/platform/interface.py
  48. +92 −0 misc/virtenv/lib/python2.6/site-packages/tornado/platform/kqueue.py
  49. +5 −3 misc/virtenv/lib/python2.6/site-packages/tornado/platform/posix.py
  50. +76 −0 misc/virtenv/lib/python2.6/site-packages/tornado/platform/select.py
  51. +236 −23 misc/virtenv/lib/python2.6/site-packages/tornado/platform/twisted.py
  52. +1 −1  misc/virtenv/lib/python2.6/site-packages/tornado/platform/windows.py
  53. +151 −17 misc/virtenv/lib/python2.6/site-packages/tornado/process.py
  54. +189 −233 misc/virtenv/lib/python2.6/site-packages/tornado/simple_httpclient.py
  55. +229 −127 misc/virtenv/lib/python2.6/site-packages/tornado/stack_context.py
  56. +244 −0 misc/virtenv/lib/python2.6/site-packages/tornado/tcpserver.py
  57. +60 −57 misc/virtenv/lib/python2.6/site-packages/tornado/template.py
  58. +185 −24 misc/virtenv/lib/python2.6/site-packages/tornado/test/auth_test.py
  59. +330 −0 misc/virtenv/lib/python2.6/site-packages/tornado/test/concurrent_test.py
  60. +82 −8 misc/virtenv/lib/python2.6/site-packages/tornado/test/curl_httpclient_test.py
  61. +81 −63 misc/virtenv/lib/python2.6/site-packages/tornado/test/escape_test.py
  62. +566 −26 misc/virtenv/lib/python2.6/site-packages/tornado/test/gen_test.py
  63. +304 −24 misc/virtenv/lib/python2.6/site-packages/tornado/test/httpclient_test.py
  64. +375 −109 misc/virtenv/lib/python2.6/site-packages/tornado/test/httpserver_test.py
  65. +89 −56 misc/virtenv/lib/python2.6/site-packages/tornado/test/httputil_test.py
  66. +7 −21 misc/virtenv/lib/python2.6/site-packages/tornado/test/import_test.py
  67. +289 −9 misc/virtenv/lib/python2.6/site-packages/tornado/test/ioloop_test.py
  68. +201 −81 misc/virtenv/lib/python2.6/site-packages/tornado/test/iostream_test.py
  69. +23 −4 misc/virtenv/lib/python2.6/site-packages/tornado/test/locale_test.py
  70. +159 −0 misc/virtenv/lib/python2.6/site-packages/tornado/test/log_test.py
  71. +84 −0 misc/virtenv/lib/python2.6/site-packages/tornado/test/netutil_test.py
  72. +2 −0  misc/virtenv/lib/python2.6/site-packages/tornado/test/options_test.cfg
  73. +212 −110 misc/virtenv/lib/python2.6/site-packages/tornado/test/options_test.py
  74. +157 −83 misc/virtenv/lib/python2.6/site-packages/tornado/test/process_test.py
  75. +60 −2 misc/virtenv/lib/python2.6/site-packages/tornado/test/runtests.py
  76. +125 −88 misc/virtenv/lib/python2.6/site-packages/tornado/test/simple_httpclient_test.py
  77. +166 −14 misc/virtenv/lib/python2.6/site-packages/tornado/test/stack_context_test.py
  78. +1 −0  misc/virtenv/lib/python2.6/site-packages/tornado/test/static/dir/index.html
  79. +95 −73 misc/virtenv/lib/python2.6/site-packages/tornado/test/template_test.py
  80. +109 −8 misc/virtenv/lib/python2.6/site-packages/tornado/test/testing_test.py
  81. +98 −38 misc/virtenv/lib/python2.6/site-packages/tornado/test/twisted_test.py
  82. +19 −0 misc/virtenv/lib/python2.6/site-packages/tornado/test/util.py
  83. +143 −5 misc/virtenv/lib/python2.6/site-packages/tornado/test/util_test.py
  84. +862 −164 misc/virtenv/lib/python2.6/site-packages/tornado/test/web_test.py
  85. +87 −0 misc/virtenv/lib/python2.6/site-packages/tornado/test/websocket_test.py
  86. +27 −23 misc/virtenv/lib/python2.6/site-packages/tornado/test/wsgi_test.py
  87. +307 −137 misc/virtenv/lib/python2.6/site-packages/tornado/testing.py
  88. +198 −29 misc/virtenv/lib/python2.6/site-packages/tornado/util.py
  89. +856 −350 misc/virtenv/lib/python2.6/site-packages/tornado/web.py
  90. +277 −69 misc/virtenv/lib/python2.6/site-packages/tornado/websocket.py
  91. +51 −44 misc/virtenv/lib/python2.6/site-packages/tornado/wsgi.py
2  .openshift/action_hooks/stop
@@ -1,4 +1,4 @@
#!/bin/bash
# The logic to stop your application should be put in this script.
kill `ps -ef | grep hellotornado | grep -v grep | awk '{ print $2 }'` > /dev/null 2>&1
-exit 0
+exit 0
27 README.creole
@@ -48,37 +48,24 @@ Activate your virtualenv and install the needed modules:
source misc/virtenv/bin/activate
pip install tornado
+pip install futures
pip install pycurl
}}}
-Now create your diy/hellotornado.py file:
+Now, assuming your app is in app.py, create your start file (diy/hellotornado.py):
{{{
#!/usr/bin/env python
import os
-
-here = os.path.dirname(os.path.abspath(__file__))
-os.environ['PYTHON_EGG_CACHE'] = os.path.join(here, '..', 'misc/virtenv/lib/python2.6/site-packages')
-virtualenv = os.path.join(here, '..', 'misc/virtenv/bin/activate_this.py')
+cwd = os.path.dirname(os.path.abspath(__file__))
+os.environ['PYTHON_EGG_CACHE'] = os.path.join(cwd, '..', 'misc/virtenv/lib/python2.6/site-packages')
+virtualenv = os.path.join(cwd, '..', 'misc/virtenv/bin/activate_this.py')
execfile(virtualenv, dict(__file__=virtualenv))
-import tornado.ioloop
-import tornado.web
-
-class MainHandler(tornado.web.RequestHandler):
- def get(self):
- self.write("Hello, world")
-
-application = tornado.web.Application([
- (r"/", MainHandler),
-])
-
-if __name__ == "__main__":
- address = os.environ['OPENSHIFT_DIY_IP']
- application.listen(8080, address=address)
- tornado.ioloop.IOLoop.instance().start()
+import app
+app.main(os.environ['OPENSHIFT_DIY_IP'])
}}}
18 diy/app.py
@@ -0,0 +1,18 @@
+import tornado.ioloop
+import tornado.web
+
+class MainHandler(tornado.web.RequestHandler):
+ def get(self):
+ self.write("Hello, world")
+
+application = tornado.web.Application([
+ (r"/", MainHandler),
+])
+
+def main(address):
+ application.listen(8080, address)
+ tornado.ioloop.IOLoop.instance().start()
+
+if __name__ == "__main__":
+ address = "127.0.0.1"
+ main(address)
24 diy/hellotornado.py
@@ -1,23 +1,9 @@
#!/usr/bin/env python
import os
-
-here = os.path.dirname(os.path.abspath(__file__))
-os.environ['PYTHON_EGG_CACHE'] = os.path.join(here, '..', 'misc/virtenv/lib/python2.6/site-packages')
-virtualenv = os.path.join(here, '..', 'misc/virtenv/bin/activate_this.py')
+cwd = os.path.dirname(os.path.abspath(__file__))
+os.environ['PYTHON_EGG_CACHE'] = os.path.join(cwd, '..', 'misc/virtenv/lib/python2.6/site-packages')
+virtualenv = os.path.join(cwd, '..', 'misc/virtenv/bin/activate_this.py')
execfile(virtualenv, dict(__file__=virtualenv))
-import tornado.ioloop
-import tornado.web
-
-class MainHandler(tornado.web.RequestHandler):
- def get(self):
- self.write("Hello, world")
-
-application = tornado.web.Application([
- (r"/", MainHandler),
-])
-
-if __name__ == "__main__":
- address = os.environ['OPENSHIFT_DIY_IP']
- application.listen(8080, address=address)
- tornado.ioloop.IOLoop.instance().start()
+import app
+app.main(os.environ['OPENSHIFT_DIY_IP'])
3  misc/virtenv/lib/python2.6/site-packages/concurrent/__init__.py
@@ -0,0 +1,3 @@
+from pkgutil import extend_path
+
+__path__ = extend_path(__path__, __name__)
18 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/__init__.py
@@ -0,0 +1,18 @@
+# Copyright 2009 Brian Quinlan. All Rights Reserved.
+# Licensed to PSF under a Contributor Agreement.
+
+"""Execute computations asynchronously using threads or processes."""
+
+__author__ = 'Brian Quinlan (brian@sweetapp.com)'
+
+from concurrent.futures._base import (FIRST_COMPLETED,
+ FIRST_EXCEPTION,
+ ALL_COMPLETED,
+ CancelledError,
+ TimeoutError,
+ Future,
+ Executor,
+ wait,
+ as_completed)
+from concurrent.futures.process import ProcessPoolExecutor
+from concurrent.futures.thread import ThreadPoolExecutor
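Editor's note, not part of the commit: the exports above match the Python 3 stdlib `concurrent.futures` API, so a minimal sketch of the basic submit/result flow this backport enables looks like:

```python
from concurrent.futures import ThreadPoolExecutor

def square(x):
    # Trivial work function, run on a pool thread.
    return x * x

executor = ThreadPoolExecutor(max_workers=2)
future = executor.submit(square, 7)   # returns a Future immediately
print(future.result())                # blocks until the call finishes -> 49
executor.shutdown()
```

The same code runs unchanged on Python 3, where `concurrent.futures` is in the standard library.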
574 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/_base.py
@@ -0,0 +1,574 @@
+# Copyright 2009 Brian Quinlan. All Rights Reserved.
+# Licensed to PSF under a Contributor Agreement.
+
+from __future__ import with_statement
+import logging
+import threading
+import time
+
+try:
+ from collections import namedtuple
+except ImportError:
+ from concurrent.futures._compat import namedtuple
+
+__author__ = 'Brian Quinlan (brian@sweetapp.com)'
+
+FIRST_COMPLETED = 'FIRST_COMPLETED'
+FIRST_EXCEPTION = 'FIRST_EXCEPTION'
+ALL_COMPLETED = 'ALL_COMPLETED'
+_AS_COMPLETED = '_AS_COMPLETED'
+
+# Possible future states (for internal use by the futures package).
+PENDING = 'PENDING'
+RUNNING = 'RUNNING'
+# The future was cancelled by the user...
+CANCELLED = 'CANCELLED'
+# ...and _Waiter.add_cancelled() was called by a worker.
+CANCELLED_AND_NOTIFIED = 'CANCELLED_AND_NOTIFIED'
+FINISHED = 'FINISHED'
+
+_FUTURE_STATES = [
+ PENDING,
+ RUNNING,
+ CANCELLED,
+ CANCELLED_AND_NOTIFIED,
+ FINISHED
+]
+
+_STATE_TO_DESCRIPTION_MAP = {
+ PENDING: "pending",
+ RUNNING: "running",
+ CANCELLED: "cancelled",
+ CANCELLED_AND_NOTIFIED: "cancelled",
+ FINISHED: "finished"
+}
+
+# Logger for internal use by the futures package.
+LOGGER = logging.getLogger("concurrent.futures")
+
+class Error(Exception):
+ """Base class for all future-related exceptions."""
+ pass
+
+class CancelledError(Error):
+ """The Future was cancelled."""
+ pass
+
+class TimeoutError(Error):
+ """The operation exceeded the given deadline."""
+ pass
+
+class _Waiter(object):
+ """Provides the event that wait() and as_completed() block on."""
+ def __init__(self):
+ self.event = threading.Event()
+ self.finished_futures = []
+
+ def add_result(self, future):
+ self.finished_futures.append(future)
+
+ def add_exception(self, future):
+ self.finished_futures.append(future)
+
+ def add_cancelled(self, future):
+ self.finished_futures.append(future)
+
+class _AsCompletedWaiter(_Waiter):
+ """Used by as_completed()."""
+
+ def __init__(self):
+ super(_AsCompletedWaiter, self).__init__()
+ self.lock = threading.Lock()
+
+ def add_result(self, future):
+ with self.lock:
+ super(_AsCompletedWaiter, self).add_result(future)
+ self.event.set()
+
+ def add_exception(self, future):
+ with self.lock:
+ super(_AsCompletedWaiter, self).add_exception(future)
+ self.event.set()
+
+ def add_cancelled(self, future):
+ with self.lock:
+ super(_AsCompletedWaiter, self).add_cancelled(future)
+ self.event.set()
+
+class _FirstCompletedWaiter(_Waiter):
+ """Used by wait(return_when=FIRST_COMPLETED)."""
+
+ def add_result(self, future):
+ super(_FirstCompletedWaiter, self).add_result(future)
+ self.event.set()
+
+ def add_exception(self, future):
+ super(_FirstCompletedWaiter, self).add_exception(future)
+ self.event.set()
+
+ def add_cancelled(self, future):
+ super(_FirstCompletedWaiter, self).add_cancelled(future)
+ self.event.set()
+
+class _AllCompletedWaiter(_Waiter):
+ """Used by wait(return_when=FIRST_EXCEPTION or ALL_COMPLETED)."""
+
+ def __init__(self, num_pending_calls, stop_on_exception):
+ self.num_pending_calls = num_pending_calls
+ self.stop_on_exception = stop_on_exception
+ self.lock = threading.Lock()
+ super(_AllCompletedWaiter, self).__init__()
+
+ def _decrement_pending_calls(self):
+ with self.lock:
+ self.num_pending_calls -= 1
+ if not self.num_pending_calls:
+ self.event.set()
+
+ def add_result(self, future):
+ super(_AllCompletedWaiter, self).add_result(future)
+ self._decrement_pending_calls()
+
+ def add_exception(self, future):
+ super(_AllCompletedWaiter, self).add_exception(future)
+ if self.stop_on_exception:
+ self.event.set()
+ else:
+ self._decrement_pending_calls()
+
+ def add_cancelled(self, future):
+ super(_AllCompletedWaiter, self).add_cancelled(future)
+ self._decrement_pending_calls()
+
+class _AcquireFutures(object):
+ """A context manager that does an ordered acquire of Future conditions."""
+
+ def __init__(self, futures):
+ self.futures = sorted(futures, key=id)
+
+ def __enter__(self):
+ for future in self.futures:
+ future._condition.acquire()
+
+ def __exit__(self, *args):
+ for future in self.futures:
+ future._condition.release()
+
+def _create_and_install_waiters(fs, return_when):
+ if return_when == _AS_COMPLETED:
+ waiter = _AsCompletedWaiter()
+ elif return_when == FIRST_COMPLETED:
+ waiter = _FirstCompletedWaiter()
+ else:
+ pending_count = sum(
+ f._state not in [CANCELLED_AND_NOTIFIED, FINISHED] for f in fs)
+
+ if return_when == FIRST_EXCEPTION:
+ waiter = _AllCompletedWaiter(pending_count, stop_on_exception=True)
+ elif return_when == ALL_COMPLETED:
+ waiter = _AllCompletedWaiter(pending_count, stop_on_exception=False)
+ else:
+ raise ValueError("Invalid return condition: %r" % return_when)
+
+ for f in fs:
+ f._waiters.append(waiter)
+
+ return waiter
+
+def as_completed(fs, timeout=None):
+ """An iterator over the given futures that yields each as it completes.
+
+ Args:
+ fs: The sequence of Futures (possibly created by different Executors) to
+ iterate over.
+ timeout: The maximum number of seconds to wait. If None, then there
+ is no limit on the wait time.
+
+ Returns:
+ An iterator that yields the given Futures as they complete (finished or
+ cancelled).
+
+ Raises:
+ TimeoutError: If the entire result iterator could not be generated
+ before the given timeout.
+ """
+ if timeout is not None:
+ end_time = timeout + time.time()
+
+ with _AcquireFutures(fs):
+ finished = set(
+ f for f in fs
+ if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
+ pending = set(fs) - finished
+ waiter = _create_and_install_waiters(fs, _AS_COMPLETED)
+
+ try:
+ for future in finished:
+ yield future
+
+ while pending:
+ if timeout is None:
+ wait_timeout = None
+ else:
+ wait_timeout = end_time - time.time()
+ if wait_timeout < 0:
+ raise TimeoutError(
+ '%d (of %d) futures unfinished' % (
+ len(pending), len(fs)))
+
+ waiter.event.wait(wait_timeout)
+
+ with waiter.lock:
+ finished = waiter.finished_futures
+ waiter.finished_futures = []
+ waiter.event.clear()
+
+ for future in finished:
+ yield future
+ pending.remove(future)
+
+ finally:
+ for f in fs:
+ f._waiters.remove(waiter)
+
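Editor's note, not part of the diff: a small usage sketch of the `as_completed()` generator defined above, which yields futures in completion order rather than submission order. The delays are arbitrary values chosen to make the ordering deterministic.

```python
import time
from concurrent.futures import ThreadPoolExecutor, as_completed

def slow_identity(delay):
    # Sleep for `delay` seconds, then return it.
    time.sleep(delay)
    return delay

with ThreadPoolExecutor(max_workers=3) as pool:
    # Submitted slowest-first; all three run concurrently.
    futures = [pool.submit(slow_identity, d) for d in (0.3, 0.1, 0.2)]
    # as_completed() yields each future as it finishes.
    order = [f.result() for f in as_completed(futures)]

print(order)  # fastest first: [0.1, 0.2, 0.3]
```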
+DoneAndNotDoneFutures = namedtuple(
+ 'DoneAndNotDoneFutures', 'done not_done')
+def wait(fs, timeout=None, return_when=ALL_COMPLETED):
+ """Wait for the futures in the given sequence to complete.
+
+ Args:
+ fs: The sequence of Futures (possibly created by different Executors) to
+ wait upon.
+ timeout: The maximum number of seconds to wait. If None, then there
+ is no limit on the wait time.
+ return_when: Indicates when this function should return. The options
+ are:
+
+ FIRST_COMPLETED - Return when any future finishes or is
+ cancelled.
+ FIRST_EXCEPTION - Return when any future finishes by raising an
+ exception. If no future raises an exception
+ then it is equivalent to ALL_COMPLETED.
+ ALL_COMPLETED - Return when all futures finish or are cancelled.
+
+ Returns:
+ A named 2-tuple of sets. The first set, named 'done', contains the
+ futures that completed (is finished or cancelled) before the wait
+ completed. The second set, named 'not_done', contains uncompleted
+ futures.
+ """
+ with _AcquireFutures(fs):
+ done = set(f for f in fs
+ if f._state in [CANCELLED_AND_NOTIFIED, FINISHED])
+ not_done = set(fs) - done
+
+ if (return_when == FIRST_COMPLETED) and done:
+ return DoneAndNotDoneFutures(done, not_done)
+ elif (return_when == FIRST_EXCEPTION) and done:
+ if any(f for f in done
+ if not f.cancelled() and f.exception() is not None):
+ return DoneAndNotDoneFutures(done, not_done)
+
+ if len(done) == len(fs):
+ return DoneAndNotDoneFutures(done, not_done)
+
+ waiter = _create_and_install_waiters(fs, return_when)
+
+ waiter.event.wait(timeout)
+ for f in fs:
+ f._waiters.remove(waiter)
+
+ done.update(waiter.finished_futures)
+ return DoneAndNotDoneFutures(done, set(fs) - done)
+
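Editor's note, not part of the diff: an illustrative use of the `wait()` function above, showing the `DoneAndNotDoneFutures` pair it returns when `FIRST_COMPLETED` is requested. The sleep durations are arbitrary.

```python
import time
from concurrent.futures import ThreadPoolExecutor, wait, FIRST_COMPLETED

def napper(delay):
    time.sleep(delay)
    return delay

pool = ThreadPoolExecutor(max_workers=2)
fast = pool.submit(napper, 0.05)
slow = pool.submit(napper, 1.0)

# Returns as soon as any future finishes: `fast` lands in done,
# `slow` is still pending and lands in not_done.
done, not_done = wait([fast, slow], return_when=FIRST_COMPLETED)
pool.shutdown()  # waits for `slow` to finish too
```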
+class Future(object):
+ """Represents the result of an asynchronous computation."""
+
+ def __init__(self):
+ """Initializes the future. Should not be called by clients."""
+ self._condition = threading.Condition()
+ self._state = PENDING
+ self._result = None
+ self._exception = None
+ self._waiters = []
+ self._done_callbacks = []
+
+ def _invoke_callbacks(self):
+ for callback in self._done_callbacks:
+ try:
+ callback(self)
+ except Exception:
+ LOGGER.exception('exception calling callback for %r', self)
+
+ def __repr__(self):
+ with self._condition:
+ if self._state == FINISHED:
+ if self._exception:
+ return '<Future at %s state=%s raised %s>' % (
+ hex(id(self)),
+ _STATE_TO_DESCRIPTION_MAP[self._state],
+ self._exception.__class__.__name__)
+ else:
+ return '<Future at %s state=%s returned %s>' % (
+ hex(id(self)),
+ _STATE_TO_DESCRIPTION_MAP[self._state],
+ self._result.__class__.__name__)
+ return '<Future at %s state=%s>' % (
+ hex(id(self)),
+ _STATE_TO_DESCRIPTION_MAP[self._state])
+
+ def cancel(self):
+ """Cancel the future if possible.
+
+ Returns True if the future was cancelled, False otherwise. A future
+ cannot be cancelled if it is running or has already completed.
+ """
+ with self._condition:
+ if self._state in [RUNNING, FINISHED]:
+ return False
+
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ return True
+
+ self._state = CANCELLED
+ self._condition.notify_all()
+
+ self._invoke_callbacks()
+ return True
+
+ def cancelled(self):
+ """Return True if the future was cancelled."""
+ with self._condition:
+ return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]
+
+ def running(self):
+ """Return True if the future is currently executing."""
+ with self._condition:
+ return self._state == RUNNING
+
+ def done(self):
+ """Return True if the future was cancelled or finished executing."""
+ with self._condition:
+ return self._state in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]
+
+ def __get_result(self):
+ if self._exception:
+ raise self._exception
+ else:
+ return self._result
+
+ def add_done_callback(self, fn):
+ """Attaches a callable that will be called when the future finishes.
+
+ Args:
+ fn: A callable that will be called with this future as its only
+ argument when the future completes or is cancelled. The callable
+ will always be called by a thread in the same process in which
+ it was added. If the future has already completed or been
+ cancelled then the callable will be called immediately. These
+ callables are called in the order that they were added.
+ """
+ with self._condition:
+ if self._state not in [CANCELLED, CANCELLED_AND_NOTIFIED, FINISHED]:
+ self._done_callbacks.append(fn)
+ return
+ fn(self)
+
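Editor's note, not part of the diff: a sketch of `add_done_callback()` as documented above. If the future is already finished when the callback is attached, the `fn(self)` branch fires it immediately; otherwise a worker thread fires it on completion.

```python
from concurrent.futures import ThreadPoolExecutor

results = []

def on_done(future):
    # Called with the finished future as its only argument.
    results.append(future.result())

with ThreadPoolExecutor(max_workers=1) as pool:
    f = pool.submit(lambda: 21 * 2)
    f.add_done_callback(on_done)
# Leaving the with-block waits for the worker, so the callback has run.

print(results)  # [42]
```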
+ def result(self, timeout=None):
+ """Return the result of the call that the future represents.
+
+ Args:
+ timeout: The number of seconds to wait for the result if the future
+ isn't done. If None, then there is no limit on the wait time.
+
+ Returns:
+ The result of the call that the future represents.
+
+ Raises:
+ CancelledError: If the future was cancelled.
+ TimeoutError: If the future didn't finish executing before the given
+ timeout.
+ Exception: If the call raised then that exception will be raised.
+ """
+ with self._condition:
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self.__get_result()
+
+ self._condition.wait(timeout)
+
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self.__get_result()
+ else:
+ raise TimeoutError()
+
+ def exception(self, timeout=None):
+ """Return the exception raised by the call that the future represents.
+
+ Args:
+ timeout: The number of seconds to wait for the exception if the
+ future isn't done. If None, then there is no limit on the wait
+ time.
+
+ Returns:
+ The exception raised by the call that the future represents or None
+ if the call completed without raising.
+
+ Raises:
+ CancelledError: If the future was cancelled.
+ TimeoutError: If the future didn't finish executing before the given
+ timeout.
+ """
+
+ with self._condition:
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self._exception
+
+ self._condition.wait(timeout)
+
+ if self._state in [CANCELLED, CANCELLED_AND_NOTIFIED]:
+ raise CancelledError()
+ elif self._state == FINISHED:
+ return self._exception
+ else:
+ raise TimeoutError()
+
+ # The following methods should only be used by Executors and in tests.
+ def set_running_or_notify_cancel(self):
+ """Mark the future as running or process any cancel notifications.
+
+ Should only be used by Executor implementations and unit tests.
+
+ If the future has been cancelled (cancel() was called and returned
+ True) then any threads waiting on the future completing (through calls
+ to as_completed() or wait()) are notified and False is returned.
+
+ If the future was not cancelled then it is put in the running state
+ (future calls to running() will return True) and True is returned.
+
+ This method should be called by Executor implementations before
+ executing the work associated with this future. If this method returns
+ False then the work should not be executed.
+
+ Returns:
+ False if the Future was cancelled, True otherwise.
+
+ Raises:
+ RuntimeError: if this method was already called or if set_result()
+ or set_exception() was called.
+ """
+ with self._condition:
+ if self._state == CANCELLED:
+ self._state = CANCELLED_AND_NOTIFIED
+ for waiter in self._waiters:
+ waiter.add_cancelled(self)
+ # self._condition.notify_all() is not necessary because
+ # self.cancel() triggers a notification.
+ return False
+ elif self._state == PENDING:
+ self._state = RUNNING
+ return True
+ else:
+ LOGGER.critical('Future %s in unexpected state: %s',
+ id(self),
+ self._state)
+ raise RuntimeError('Future in unexpected state')
+
+ def set_result(self, result):
+ """Sets the return value of work associated with the future.
+
+ Should only be used by Executor implementations and unit tests.
+ """
+ with self._condition:
+ self._result = result
+ self._state = FINISHED
+ for waiter in self._waiters:
+ waiter.add_result(self)
+ self._condition.notify_all()
+ self._invoke_callbacks()
+
+ def set_exception(self, exception):
+ """Sets the result of the future as being the given exception.
+
+ Should only be used by Executor implementations and unit tests.
+ """
+ with self._condition:
+ self._exception = exception
+ self._state = FINISHED
+ for waiter in self._waiters:
+ waiter.add_exception(self)
+ self._condition.notify_all()
+ self._invoke_callbacks()
+
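Editor's note, not part of the diff: `set_result()` and `set_exception()` above are the hooks Executor implementations (and, in this commit's context, Tornado's own `tornado.concurrent` integration) use to drive a Future by hand:

```python
from concurrent.futures import Future

f = Future()
assert not f.done()

f.set_result("ready")   # marks FINISHED, wakes waiters, runs callbacks
assert f.done()
assert f.result() == "ready"

g = Future()
g.set_exception(ValueError("boom"))
# exception() returns the stored exception; result() would raise it.
assert isinstance(g.exception(), ValueError)
```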
+class Executor(object):
+ """This is an abstract base class for concrete asynchronous executors."""
+
+ def submit(self, fn, *args, **kwargs):
+ """Submits a callable to be executed with the given arguments.
+
+ Schedules the callable to be executed as fn(*args, **kwargs) and returns
+ a Future instance representing the execution of the callable.
+
+ Returns:
+ A Future representing the given call.
+ """
+ raise NotImplementedError()
+
+ def map(self, fn, *iterables, **kwargs):
+ """Returns an iterator equivalent to map(fn, iter).
+
+ Args:
+ fn: A callable that will take as many arguments as there are
+ passed iterables.
+ timeout: The maximum number of seconds to wait. If None, then there
+ is no limit on the wait time.
+
+ Returns:
+ An iterator equivalent to: map(func, *iterables) but the calls may
+ be evaluated out-of-order.
+
+ Raises:
+ TimeoutError: If the entire result iterator could not be generated
+ before the given timeout.
+ Exception: If fn(*args) raises for any values.
+ """
+ timeout = kwargs.get('timeout')
+ if timeout is not None:
+ end_time = timeout + time.time()
+
+ fs = [self.submit(fn, *args) for args in zip(*iterables)]
+
+ try:
+ for future in fs:
+ if timeout is None:
+ yield future.result()
+ else:
+ yield future.result(end_time - time.time())
+ finally:
+ for future in fs:
+ future.cancel()
+
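Editor's note, not part of the diff: unlike `as_completed()`, the `map()` implementation above yields results in submission order even when the underlying calls complete out of order, and a single `timeout` budget covers the whole iteration.

```python
from concurrent.futures import ThreadPoolExecutor

def double(x):
    return 2 * x

with ThreadPoolExecutor(max_workers=4) as pool:
    # Results come back in input order: [2, 4, 6, 8].
    doubled = list(pool.map(double, [1, 2, 3, 4]))

print(doubled)  # [2, 4, 6, 8]
```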
+ def shutdown(self, wait=True):
+ """Clean-up the resources associated with the Executor.
+
+ It is safe to call this method several times. Otherwise, no other
+ methods can be called after this one.
+
+ Args:
+ wait: If True then shutdown will not return until all running
+ futures have finished executing and the resources used by the
+ executor have been reclaimed.
+ """
+ pass
+
+ def __enter__(self):
+ return self
+
+ def __exit__(self, exc_type, exc_val, exc_tb):
+ self.shutdown(wait=True)
+ return False
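Editor's note, not part of the diff: `__exit__` above calls `shutdown(wait=True)`, so leaving a with-block guarantees all submitted work has finished; a subsequent `submit()` on a shut-down `ThreadPoolExecutor` raises `RuntimeError`.

```python
from concurrent.futures import ThreadPoolExecutor

with ThreadPoolExecutor(max_workers=2) as pool:
    f = pool.submit(pow, 2, 10)

# __exit__ already waited for the worker, so the result is available...
print(f.result())  # 1024

# ...and the pool no longer accepts work.
try:
    pool.submit(pow, 2, 3)
except RuntimeError:
    print("pool is shut down")
```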
101 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/_compat.py
@@ -0,0 +1,101 @@
+from keyword import iskeyword as _iskeyword
+from operator import itemgetter as _itemgetter
+import sys as _sys
+
+
+def namedtuple(typename, field_names):
+ """Returns a new subclass of tuple with named fields.
+
+ >>> Point = namedtuple('Point', 'x y')
+ >>> Point.__doc__ # docstring for the new class
+ 'Point(x, y)'
+ >>> p = Point(11, y=22) # instantiate with positional args or keywords
+ >>> p[0] + p[1] # indexable like a plain tuple
+ 33
+ >>> x, y = p # unpack like a regular tuple
+ >>> x, y
+ (11, 22)
+ >>> p.x + p.y # fields also accessible by name
+ 33
+ >>> d = p._asdict() # convert to a dictionary
+ >>> d['x']
+ 11
+ >>> Point(**d) # convert from a dictionary
+ Point(x=11, y=22)
+ >>> p._replace(x=100) # _replace() is like str.replace() but targets named fields
+ Point(x=100, y=22)
+
+ """
+
+ # Parse and validate the field names. Validation serves two purposes,
+ # generating informative error messages and preventing template injection attacks.
+ if isinstance(field_names, basestring):
+ field_names = field_names.replace(',', ' ').split() # names separated by whitespace and/or commas
+ field_names = tuple(map(str, field_names))
+ for name in (typename,) + field_names:
+ if not all(c.isalnum() or c=='_' for c in name):
+ raise ValueError('Type names and field names can only contain alphanumeric characters and underscores: %r' % name)
+ if _iskeyword(name):
+ raise ValueError('Type names and field names cannot be a keyword: %r' % name)
+ if name[0].isdigit():
+ raise ValueError('Type names and field names cannot start with a number: %r' % name)
+ seen_names = set()
+ for name in field_names:
+ if name.startswith('_'):
+ raise ValueError('Field names cannot start with an underscore: %r' % name)
+ if name in seen_names:
+ raise ValueError('Encountered duplicate field name: %r' % name)
+ seen_names.add(name)
+
+ # Create and fill-in the class template
+ numfields = len(field_names)
+ argtxt = repr(field_names).replace("'", "")[1:-1] # tuple repr without parens or quotes
+ reprtxt = ', '.join('%s=%%r' % name for name in field_names)
+ dicttxt = ', '.join('%r: t[%d]' % (name, pos) for pos, name in enumerate(field_names))
+ template = '''class %(typename)s(tuple):
+ '%(typename)s(%(argtxt)s)' \n
+ __slots__ = () \n
+ _fields = %(field_names)r \n
+ def __new__(_cls, %(argtxt)s):
+ return _tuple.__new__(_cls, (%(argtxt)s)) \n
+ @classmethod
+ def _make(cls, iterable, new=tuple.__new__, len=len):
+ 'Make a new %(typename)s object from a sequence or iterable'
+ result = new(cls, iterable)
+ if len(result) != %(numfields)d:
+ raise TypeError('Expected %(numfields)d arguments, got %%d' %% len(result))
+ return result \n
+ def __repr__(self):
+ return '%(typename)s(%(reprtxt)s)' %% self \n
+ def _asdict(t):
+ 'Return a new dict which maps field names to their values'
+ return {%(dicttxt)s} \n
+ def _replace(_self, **kwds):
+ 'Return a new %(typename)s object replacing specified fields with new values'
+ result = _self._make(map(kwds.pop, %(field_names)r, _self))
+ if kwds:
+ raise ValueError('Got unexpected field names: %%r' %% kwds.keys())
+ return result \n
+ def __getnewargs__(self):
+ return tuple(self) \n\n''' % locals()
+ for i, name in enumerate(field_names):
+ template += ' %s = _property(_itemgetter(%d))\n' % (name, i)
+
+ # Execute the template string in a temporary namespace and
+ # support tracing utilities by setting a value for frame.f_globals['__name__']
+ namespace = dict(_itemgetter=_itemgetter, __name__='namedtuple_%s' % typename,
+ _property=property, _tuple=tuple)
+ try:
+ exec(template, namespace)
+ except SyntaxError:
+ e = _sys.exc_info()[1]
+ raise SyntaxError(e.message + ':\n' + template)
+ result = namespace[typename]
+
+ # For pickling to work, the __module__ variable needs to be set to the frame
+ # where the named tuple is created. Bypass this step in environments where
+ # sys._getframe is not defined (Jython for example).
+ if hasattr(_sys, '_getframe'):
+ result.__module__ = _sys._getframe(1).f_globals.get('__name__', '__main__')
+
+ return result
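Taken together, the template above produces an ordinary tuple subclass with named field access plus the `_make`, `_asdict`, and `_replace` helpers. A minimal sketch of the resulting behaviour, shown here with the stdlib `collections.namedtuple` that this backport mirrors:

```python
from collections import namedtuple

# The generated class is a plain tuple subclass with named accessors.
Point = namedtuple('Point', ['x', 'y'])

p = Point(x=1, y=2)
print(p.x + p.y)            # attribute access -> 3
print(p._asdict())          # field-name -> value mapping
print(p._replace(x=10))     # new instance with one field swapped
print(Point._make([4, 5]))  # build an instance from any iterable
```

Because instances are real tuples, they unpack, compare, and pickle exactly like `(1, 2)` would.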
363 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/process.py
@@ -0,0 +1,363 @@
+# Copyright 2009 Brian Quinlan. All Rights Reserved.
+# Licensed to PSF under a Contributor Agreement.
+
+"""Implements ProcessPoolExecutor.
+
+The following diagram and text describe the data flow through the system:
+
+|======================= In-process =====================|== Out-of-process ==|
+
++----------+ +----------+ +--------+ +-----------+ +---------+
+| | => | Work Ids | => | | => | Call Q | => | |
+| | +----------+ | | +-----------+ | |
+| | | ... | | | | ... | | |
+| | | 6 | | | | 5, call() | | |
+| | | 7 | | | | ... | | |
+| Process | | ... | | Local | +-----------+ | Process |
+| Pool | +----------+ | Worker | | #1..n |
+| Executor | | Thread | | |
+| | +------------+ | | +-----------+ | |
+| | <=> | Work Items | <=> | | <= | Result Q | <= | |
+| | +------------+ | | +-----------+ | |
+| | | 6: call() | | | | ... | | |
+| | | future | | | | 4, result | | |
+| | | ... | | | | 3, except | | |
++----------+ +------------+ +--------+ +-----------+ +---------+
+
+Executor.submit() called:
+- creates a uniquely numbered _WorkItem and adds it to the "Work Items" dict
+- adds the id of the _WorkItem to the "Work Ids" queue
+
+Local worker thread:
+- reads work ids from the "Work Ids" queue and looks up the corresponding
+  _WorkItem from the "Work Items" dict: if the work item has been cancelled then
+ it is simply removed from the dict, otherwise it is repackaged as a
+ _CallItem and put in the "Call Q". New _CallItems are put in the "Call Q"
+ until "Call Q" is full. NOTE: the size of the "Call Q" is kept small because
+ calls placed in the "Call Q" can no longer be cancelled with Future.cancel().
+- reads _ResultItems from "Result Q", updates the future stored in the
+ "Work Items" dict and deletes the dict entry
+
+Process #1..n:
+- reads _CallItems from "Call Q", executes the calls, and puts the resulting
+  _ResultItems in "Result Q"
+"""
+
+from __future__ import with_statement
+import atexit
+import multiprocessing
+import threading
+import weakref
+import sys
+
+from concurrent.futures import _base
+
+try:
+ import queue
+except ImportError:
+ import Queue as queue
+
+__author__ = 'Brian Quinlan (brian@sweetapp.com)'
+
+# Workers are created as daemon threads and processes. This is done to allow the
+# interpreter to exit when there are still idle processes in a
+# ProcessPoolExecutor's process pool (i.e. shutdown() was not called). However,
+# allowing workers to die with the interpreter has two undesirable properties:
+# - The workers would still be running during interpreter shutdown,
+# meaning that they would fail in unpredictable ways.
+# - The workers could be killed while evaluating a work item, which could
+# be bad if the callable being evaluated has external side-effects e.g.
+# writing to a file.
+#
+# To work around this problem, an exit handler is installed which tells the
+# workers to exit when their work queues are empty and then waits until the
+# threads/processes finish.
+
+_threads_queues = weakref.WeakKeyDictionary()
+_shutdown = False
+
+def _python_exit():
+ global _shutdown
+ _shutdown = True
+ items = list(_threads_queues.items())
+ for t, q in items:
+ q.put(None)
+ for t, q in items:
+ t.join()
+
+# Controls how many more calls than processes will be queued in the call queue.
+# A smaller number will mean that processes spend more time idle waiting for
+# work while a larger number will make Future.cancel() succeed less frequently
+# (Futures in the call queue cannot be cancelled).
+EXTRA_QUEUED_CALLS = 1
+
+class _WorkItem(object):
+ def __init__(self, future, fn, args, kwargs):
+ self.future = future
+ self.fn = fn
+ self.args = args
+ self.kwargs = kwargs
+
+class _ResultItem(object):
+ def __init__(self, work_id, exception=None, result=None):
+ self.work_id = work_id
+ self.exception = exception
+ self.result = result
+
+class _CallItem(object):
+ def __init__(self, work_id, fn, args, kwargs):
+ self.work_id = work_id
+ self.fn = fn
+ self.args = args
+ self.kwargs = kwargs
+
+def _process_worker(call_queue, result_queue):
+ """Evaluates calls from call_queue and places the results in result_queue.
+
+ This worker is run in a separate process.
+
+ Args:
+ call_queue: A multiprocessing.Queue of _CallItems that will be read and
+ evaluated by the worker.
+        result_queue: A multiprocessing.Queue of _ResultItems that will be written
+ to by the worker.
+ """
+ while True:
+ call_item = call_queue.get(block=True)
+ if call_item is None:
+ # Wake up queue management thread
+ result_queue.put(None)
+ return
+ try:
+ r = call_item.fn(*call_item.args, **call_item.kwargs)
+ except BaseException:
+ e = sys.exc_info()[1]
+ result_queue.put(_ResultItem(call_item.work_id,
+ exception=e))
+ else:
+ result_queue.put(_ResultItem(call_item.work_id,
+ result=r))
+
+def _add_call_item_to_queue(pending_work_items,
+ work_ids,
+ call_queue):
+ """Fills call_queue with _WorkItems from pending_work_items.
+
+ This function never blocks.
+
+ Args:
+ pending_work_items: A dict mapping work ids to _WorkItems e.g.
+ {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
+ work_ids: A queue.Queue of work ids e.g. Queue([5, 6, ...]). Work ids
+ are consumed and the corresponding _WorkItems from
+ pending_work_items are transformed into _CallItems and put in
+ call_queue.
+ call_queue: A multiprocessing.Queue that will be filled with _CallItems
+ derived from _WorkItems.
+ """
+ while True:
+ if call_queue.full():
+ return
+ try:
+ work_id = work_ids.get(block=False)
+ except queue.Empty:
+ return
+ else:
+ work_item = pending_work_items[work_id]
+
+ if work_item.future.set_running_or_notify_cancel():
+ call_queue.put(_CallItem(work_id,
+ work_item.fn,
+ work_item.args,
+ work_item.kwargs),
+ block=True)
+ else:
+ del pending_work_items[work_id]
+ continue
+
+def _queue_management_worker(executor_reference,
+ processes,
+ pending_work_items,
+ work_ids_queue,
+ call_queue,
+ result_queue):
+ """Manages the communication between this process and the worker processes.
+
+ This function is run in a local thread.
+
+ Args:
+ executor_reference: A weakref.ref to the ProcessPoolExecutor that owns
+ this thread. Used to determine if the ProcessPoolExecutor has been
+ garbage collected and that this function can exit.
+        processes: A list of the multiprocessing.Process instances used as
+ workers.
+ pending_work_items: A dict mapping work ids to _WorkItems e.g.
+ {5: <_WorkItem...>, 6: <_WorkItem...>, ...}
+ work_ids_queue: A queue.Queue of work ids e.g. Queue([5, 6, ...]).
+ call_queue: A multiprocessing.Queue that will be filled with _CallItems
+ derived from _WorkItems for processing by the process workers.
+ result_queue: A multiprocessing.Queue of _ResultItems generated by the
+ process workers.
+ """
+ nb_shutdown_processes = [0]
+ def shutdown_one_process():
+ """Tell a worker to terminate, which will in turn wake us again"""
+ call_queue.put(None)
+ nb_shutdown_processes[0] += 1
+ while True:
+ _add_call_item_to_queue(pending_work_items,
+ work_ids_queue,
+ call_queue)
+
+ result_item = result_queue.get(block=True)
+ if result_item is not None:
+ work_item = pending_work_items[result_item.work_id]
+ del pending_work_items[result_item.work_id]
+
+ if result_item.exception:
+ work_item.future.set_exception(result_item.exception)
+ else:
+ work_item.future.set_result(result_item.result)
+ # Check whether we should start shutting down.
+ executor = executor_reference()
+ # No more work items can be added if:
+ # - The interpreter is shutting down OR
+ # - The executor that owns this worker has been collected OR
+ # - The executor that owns this worker has been shutdown.
+ if _shutdown or executor is None or executor._shutdown_thread:
+            # Since no new work items can be added, it is safe to shut down
+ # this thread if there are no pending work items.
+ if not pending_work_items:
+ while nb_shutdown_processes[0] < len(processes):
+ shutdown_one_process()
+ # If .join() is not called on the created processes then
+ # some multiprocessing.Queue methods may deadlock on Mac OS
+ # X.
+ for p in processes:
+ p.join()
+ call_queue.close()
+ return
+ del executor
+
+_system_limits_checked = False
+_system_limited = None
+def _check_system_limits():
+ global _system_limits_checked, _system_limited
+ if _system_limits_checked:
+ if _system_limited:
+ raise NotImplementedError(_system_limited)
+ _system_limits_checked = True
+ try:
+ import os
+ nsems_max = os.sysconf("SC_SEM_NSEMS_MAX")
+ except (AttributeError, ValueError):
+ # sysconf not available or setting not available
+ return
+ if nsems_max == -1:
+        # indeterminate limit; assume the limit is determined
+ # by available memory only
+ return
+ if nsems_max >= 256:
+ # minimum number of semaphores available
+ # according to POSIX
+ return
+ _system_limited = "system provides too few semaphores (%d available, 256 necessary)" % nsems_max
+ raise NotImplementedError(_system_limited)
+
+class ProcessPoolExecutor(_base.Executor):
+ def __init__(self, max_workers=None):
+ """Initializes a new ProcessPoolExecutor instance.
+
+ Args:
+ max_workers: The maximum number of processes that can be used to
+ execute the given calls. If None or not given then as many
+ worker processes will be created as the machine has processors.
+ """
+ _check_system_limits()
+
+ if max_workers is None:
+ self._max_workers = multiprocessing.cpu_count()
+ else:
+ self._max_workers = max_workers
+
+ # Make the call queue slightly larger than the number of processes to
+ # prevent the worker processes from idling. But don't make it too big
+ # because futures in the call queue cannot be cancelled.
+ self._call_queue = multiprocessing.Queue(self._max_workers +
+ EXTRA_QUEUED_CALLS)
+ self._result_queue = multiprocessing.Queue()
+ self._work_ids = queue.Queue()
+ self._queue_management_thread = None
+ self._processes = set()
+
+ # Shutdown is a two-step process.
+ self._shutdown_thread = False
+ self._shutdown_lock = threading.Lock()
+ self._queue_count = 0
+ self._pending_work_items = {}
+
+ def _start_queue_management_thread(self):
+ # When the executor gets lost, the weakref callback will wake up
+ # the queue management thread.
+ def weakref_cb(_, q=self._result_queue):
+ q.put(None)
+ if self._queue_management_thread is None:
+ self._queue_management_thread = threading.Thread(
+ target=_queue_management_worker,
+ args=(weakref.ref(self, weakref_cb),
+ self._processes,
+ self._pending_work_items,
+ self._work_ids,
+ self._call_queue,
+ self._result_queue))
+ self._queue_management_thread.daemon = True
+ self._queue_management_thread.start()
+ _threads_queues[self._queue_management_thread] = self._result_queue
+
+ def _adjust_process_count(self):
+ for _ in range(len(self._processes), self._max_workers):
+ p = multiprocessing.Process(
+ target=_process_worker,
+ args=(self._call_queue,
+ self._result_queue))
+ p.start()
+ self._processes.add(p)
+
+ def submit(self, fn, *args, **kwargs):
+ with self._shutdown_lock:
+ if self._shutdown_thread:
+ raise RuntimeError('cannot schedule new futures after shutdown')
+
+ f = _base.Future()
+ w = _WorkItem(f, fn, args, kwargs)
+
+ self._pending_work_items[self._queue_count] = w
+ self._work_ids.put(self._queue_count)
+ self._queue_count += 1
+ # Wake up queue management thread
+ self._result_queue.put(None)
+
+ self._start_queue_management_thread()
+ self._adjust_process_count()
+ return f
+ submit.__doc__ = _base.Executor.submit.__doc__
+
+ def shutdown(self, wait=True):
+ with self._shutdown_lock:
+ self._shutdown_thread = True
+ if self._queue_management_thread:
+ # Wake up queue management thread
+ self._result_queue.put(None)
+ if wait:
+ self._queue_management_thread.join()
+        # To reduce the risk of opening too many files, remove references to
+ # objects that use file descriptors.
+ self._queue_management_thread = None
+ self._call_queue = None
+ self._result_queue = None
+ self._processes = None
+ shutdown.__doc__ = _base.Executor.shutdown.__doc__
+
+atexit.register(_python_exit)
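The machinery above — `submit()` registering a `_WorkItem`, the queue management thread feeding `_CallItem`s to the worker processes, and `_ResultItem`s flowing back to resolve futures — is all hidden behind the `Executor` API. A minimal usage sketch (the submitted function must be picklable, hence defined at module top level, and pool creation belongs under a `__main__` guard so child processes can re-import the module safely):

```python
from concurrent.futures import ProcessPoolExecutor

def square(n):
    # Runs in a worker process; must be importable by the children,
    # so it is defined at module top level.
    return n * n

if __name__ == "__main__":
    with ProcessPoolExecutor(max_workers=2) as executor:
        # Each submit() enqueues a work item and returns a Future.
        futures = [executor.submit(square, n) for n in range(5)]
        print([f.result() for f in futures])  # -> [0, 1, 4, 9, 16]
```

Note that once a call has been moved into the call queue it can no longer be cancelled, which is why `EXTRA_QUEUED_CALLS` above is kept small.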
138 misc/virtenv/lib/python2.6/site-packages/concurrent/futures/thread.py
@@ -0,0 +1,138 @@
+# Copyright 2009 Brian Quinlan. All Rights Reserved.
+# Licensed to PSF under a Contributor Agreement.
+
+"""Implements ThreadPoolExecutor."""
+
+from __future__ import with_statement
+import atexit
+import threading
+import weakref
+import sys
+
+from concurrent.futures import _base
+
+try:
+ import queue
+except ImportError:
+ import Queue as queue
+
+__author__ = 'Brian Quinlan (brian@sweetapp.com)'
+
+# Workers are created as daemon threads. This is done to allow the interpreter
+# to exit when there are still idle threads in a ThreadPoolExecutor's thread
+# pool (i.e. shutdown() was not called). However, allowing workers to die with
+# the interpreter has two undesirable properties:
+# - The workers would still be running during interpreter shutdown,
+# meaning that they would fail in unpredictable ways.
+# - The workers could be killed while evaluating a work item, which could
+# be bad if the callable being evaluated has external side-effects e.g.
+# writing to a file.
+#
+# To work around this problem, an exit handler is installed which tells the
+# workers to exit when their work queues are empty and then waits until the
+# threads finish.
+
+_threads_queues = weakref.WeakKeyDictionary()
+_shutdown = False
+
+def _python_exit():
+ global _shutdown
+ _shutdown = True
+ items = list(_threads_queues.items())
+ for t, q in items:
+ q.put(None)
+ for t, q in items:
+ t.join()
+
+atexit.register(_python_exit)
+
+class _WorkItem(object):
+ def __init__(self, future, fn, args, kwargs):
+ self.future = future
+ self.fn = fn
+ self.args = args
+ self.kwargs = kwargs
+
+ def run(self):
+ if not self.future.set_running_or_notify_cancel():
+ return
+
+ try:
+ result = self.fn(*self.args, **self.kwargs)
+ except BaseException:
+ e = sys.exc_info()[1]
+ self.future.set_exception(e)
+ else:
+ self.future.set_result(result)
+
+def _worker(executor_reference, work_queue):
+ try:
+ while True:
+ work_item = work_queue.get(block=True)
+ if work_item is not None:
+ work_item.run()
+ continue
+ executor = executor_reference()
+ # Exit if:
+ # - The interpreter is shutting down OR
+ # - The executor that owns the worker has been collected OR
+ # - The executor that owns the worker has been shutdown.
+ if _shutdown or executor is None or executor._shutdown:
+                # Notify other workers
+ work_queue.put(None)
+ return
+ del executor
+ except BaseException:
+ _base.LOGGER.critical('Exception in worker', exc_info=True)
+
+class ThreadPoolExecutor(_base.Executor):
+ def __init__(self, max_workers):
+ """Initializes a new ThreadPoolExecutor instance.
+
+ Args:
+ max_workers: The maximum number of threads that can be used to
+ execute the given calls.
+ """
+ self._max_workers = max_workers
+ self._work_queue = queue.Queue()
+ self._threads = set()
+ self._shutdown = False
+ self._shutdown_lock = threading.Lock()
+
+ def submit(self, fn, *args, **kwargs):
+ with self._shutdown_lock:
+ if self._shutdown:
+ raise RuntimeError('cannot schedule new futures after shutdown')
+
+ f = _base.Future()
+ w = _WorkItem(f, fn, args, kwargs)
+
+ self._work_queue.put(w)
+ self._adjust_thread_count()
+ return f
+ submit.__doc__ = _base.Executor.submit.__doc__
+
+ def _adjust_thread_count(self):
+ # When the executor gets lost, the weakref callback will wake up
+ # the worker threads.
+ def weakref_cb(_, q=self._work_queue):
+ q.put(None)
+ # TODO(bquinlan): Should avoid creating new threads if there are more
+ # idle threads than items in the work queue.
+ if len(self._threads) < self._max_workers:
+ t = threading.Thread(target=_worker,
+ args=(weakref.ref(self, weakref_cb),
+ self._work_queue))
+ t.daemon = True
+ t.start()
+ self._threads.add(t)
+ _threads_queues[t] = self._work_queue
+
+ def shutdown(self, wait=True):
+ with self._shutdown_lock:
+ self._shutdown = True
+ self._work_queue.put(None)
+ if wait:
+ for t in self._threads:
+ t.join()
+ shutdown.__doc__ = _base.Executor.shutdown.__doc__
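`ThreadPoolExecutor` grows its pool lazily in `_adjust_thread_count()`: one daemon thread per `submit()` until `max_workers` is reached, all pulling from a single shared work queue. A minimal usage sketch:

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def work(n):
    # Any callable works here; threads share the parent's memory,
    # so no pickling is involved.
    return n + 1

with ThreadPoolExecutor(max_workers=4) as executor:
    # Map each Future back to its input so results can be correlated.
    futures = {executor.submit(work, n): n for n in range(8)}
    results = sorted(f.result() for f in as_completed(futures))
    print(results)  # -> [1, 2, 3, 4, 5, 6, 7, 8]
```

Exiting the `with` block calls `shutdown(wait=True)`, which enqueues the `None` sentinel and joins the worker threads, mirroring the `_python_exit` handler above.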
19 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/PKG-INFO
@@ -0,0 +1,19 @@
+Metadata-Version: 1.0
+Name: futures
+Version: 2.1.4
+Summary: Backport of the concurrent.futures package from Python 3.2
+Home-page: http://code.google.com/p/pythonfutures
+Author: Alex Gronholm
+Author-email: alex.gronholm+pypi@nextday.fi
+License: BSD
+Download-URL: http://pypi.python.org/pypi/futures/
+Description: UNKNOWN
+Platform: UNKNOWN
+Classifier: License :: OSI Approved :: BSD License
+Classifier: Development Status :: 5 - Production/Stable
+Classifier: Intended Audience :: Developers
+Classifier: Programming Language :: Python :: 2.5
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.1
16 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/SOURCES.txt
@@ -0,0 +1,16 @@
+setup.cfg
+setup.py
+concurrent/__init__.py
+concurrent/futures/__init__.py
+concurrent/futures/_base.py
+concurrent/futures/_compat.py
+concurrent/futures/process.py
+concurrent/futures/thread.py
+futures/__init__.py
+futures/process.py
+futures/thread.py
+futures.egg-info/PKG-INFO
+futures.egg-info/SOURCES.txt
+futures.egg-info/dependency_links.txt
+futures.egg-info/not-zip-safe
+futures.egg-info/top_level.txt
0  ...tornado-2.4.1-py2.6.egg-info/dependency_links.txt → ...futures-2.1.4-py2.6.egg-info/dependency_links.txt
File renamed without changes
24 misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/installed-files.txt
@@ -0,0 +1,24 @@
+../futures/process.py
+../futures/__init__.py
+../futures/thread.py
+../concurrent/__init__.py
+../concurrent/futures/process.py
+../concurrent/futures/_compat.py
+../concurrent/futures/__init__.py
+../concurrent/futures/_base.py
+../concurrent/futures/thread.py
+../futures/process.pyc
+../futures/__init__.pyc
+../futures/thread.pyc
+../concurrent/__init__.pyc
+../concurrent/futures/process.pyc
+../concurrent/futures/_compat.pyc
+../concurrent/futures/__init__.pyc
+../concurrent/futures/_base.pyc
+../concurrent/futures/thread.pyc
+./
+SOURCES.txt
+not-zip-safe
+PKG-INFO
+dependency_links.txt
+top_level.txt
1  misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/not-zip-safe
@@ -0,0 +1 @@
+
2  misc/virtenv/lib/python2.6/site-packages/futures-2.1.4-py2.6.egg-info/top_level.txt
@@ -0,0 +1,2 @@
+concurrent
+futures
24 misc/virtenv/lib/python2.6/site-packages/futures/__init__.py
@@ -0,0 +1,24 @@
+# Copyright 2009 Brian Quinlan. All Rights Reserved.
+# Licensed to PSF under a Contributor Agreement.
+
+"""Execute computations asynchronously using threads or processes."""
+
+import warnings
+
+from concurrent.futures import (FIRST_COMPLETED,
+ FIRST_EXCEPTION,
+ ALL_COMPLETED,
+ CancelledError,
+ TimeoutError,
+ Future,
+ Executor,
+ wait,
+ as_completed,
+ ProcessPoolExecutor,
+ ThreadPoolExecutor)
+
+__author__ = 'Brian Quinlan (brian@sweetapp.com)'
+
+warnings.warn('The futures package has been deprecated. '
+ 'Use the concurrent.futures package instead.',
+ DeprecationWarning)
1  misc/virtenv/lib/python2.6/site-packages/futures/process.py
@@ -0,0 +1 @@
+from concurrent.futures import ProcessPoolExecutor
1  misc/virtenv/lib/python2.6/site-packages/futures/thread.py
@@ -0,0 +1 @@
+from concurrent.futures import ThreadPoolExecutor
11 misc/virtenv/lib/python2.6/site-packages/tornado-2.4.1-py2.6.egg-info/PKG-INFO
@@ -1,11 +0,0 @@
-Metadata-Version: 1.0
-Name: tornado
-Version: 2.4.1
-Summary: Tornado is an open source version of the scalable, non-blocking web server and and tools that power FriendFeed
-Home-page: http://www.tornadoweb.org/
-Author: Facebook
-Author-email: python-tornado@googlegroups.com
-License: http://www.apache.org/licenses/LICENSE-2.0
-Download-URL: http://github.com/downloads/facebook/tornado/tornado-2.4.1.tar.gz
-Description: UNKNOWN
-Platform: UNKNOWN
135 misc/virtenv/lib/python2.6/site-packages/tornado-3.1-py2.6.egg-info/PKG-INFO
@@ -0,0 +1,135 @@
+Metadata-Version: 1.0
+Name: tornado
+Version: 3.1
+Summary: Tornado is a Python web framework and asynchronous networking library, originally developed at FriendFeed.
+Home-page: http://www.tornadoweb.org/
+Author: Facebook
+Author-email: python-tornado@googlegroups.com
+License: http://www.apache.org/licenses/LICENSE-2.0
+Description: Tornado Web Server
+ ==================
+
+ `Tornado <http://www.tornadoweb.org>`_ is a Python web framework and
+ asynchronous networking library, originally developed at `FriendFeed
+ <http://friendfeed.com>`_. By using non-blocking network I/O, Tornado
+ can scale to tens of thousands of open connections, making it ideal for
+ `long polling <http://en.wikipedia.org/wiki/Push_technology#Long_polling>`_,
+ `WebSockets <http://en.wikipedia.org/wiki/WebSocket>`_, and other
+ applications that require a long-lived connection to each user.
+
+
+ Quick links
+ -----------
+
+ * `Documentation <http://www.tornadoweb.org/en/stable/>`_
+ * `Source (github) <https://github.com/facebook/tornado>`_
+ * `Mailing list <http://groups.google.com/group/python-tornado>`_
+ * `Wiki <https://github.com/facebook/tornado/wiki/Links>`_
+
+ Hello, world
+ ------------
+
+ Here is a simple "Hello, world" example web app for Tornado::
+
+ import tornado.ioloop
+ import tornado.web
+
+ class MainHandler(tornado.web.RequestHandler):
+ def get(self):
+ self.write("Hello, world")
+
+ application = tornado.web.Application([
+ (r"/", MainHandler),
+ ])
+
+ if __name__ == "__main__":
+ application.listen(8888)
+ tornado.ioloop.IOLoop.instance().start()
+
+ This example does not use any of Tornado's asynchronous features; for
+ that see this `simple chat room
+ <https://github.com/facebook/tornado/tree/master/demos/chat>`_.
+
+ Installation
+ ------------
+
+ **Automatic installation**::
+
+ pip install tornado
+
+ Tornado is listed in `PyPI <http://pypi.python.org/pypi/tornado/>`_ and
+ can be installed with ``pip`` or ``easy_install``. Note that the
+ source distribution includes demo applications that are not present
+ when Tornado is installed in this way, so you may wish to download a
+ copy of the source tarball as well.
+
+ **Manual installation**: Download the latest source from `PyPI
+ <http://pypi.python.org/pypi/tornado/>`_.
+
+ .. parsed-literal::
+
+ tar xvzf tornado-$VERSION.tar.gz
+ cd tornado-$VERSION
+ python setup.py build
+ sudo python setup.py install
+
+ The Tornado source code is `hosted on GitHub
+ <https://github.com/facebook/tornado>`_.
+
+ **Prerequisites**: Tornado runs on Python 2.6, 2.7, 3.2, and 3.3. It has
+ no strict dependencies outside the Python standard library, although some
+ features may require one of the following libraries:
+
+ * `unittest2 <https://pypi.python.org/pypi/unittest2>`_ is needed to run
+ Tornado's test suite on Python 2.6 (it is unnecessary on more recent
+ versions of Python)
+ * `concurrent.futures <https://pypi.python.org/pypi/futures>`_ is the
+ recommended thread pool for use with Tornado and enables the use of
+ ``tornado.netutil.ThreadedResolver``. It is needed only on Python 2;
+ Python 3 includes this package in the standard library.
+ * `pycurl <http://pycurl.sourceforge.net>`_ is used by the optional
+ ``tornado.curl_httpclient``. Libcurl version 7.18.2 or higher is required;
+ version 7.21.1 or higher is recommended.
+ * `Twisted <http://www.twistedmatrix.com>`_ may be used with the classes in
+ `tornado.platform.twisted`.
+ * `pycares <https://pypi.python.org/pypi/pycares>`_ is an alternative
+ non-blocking DNS resolver that can be used when threads are not
+ appropriate.
+ * `Monotime <https://pypi.python.org/pypi/Monotime>`_ adds support for
+ a monotonic clock, which improves reliability in environments
+ where clock adjustments are frequent. No longer needed in Python 3.3.
+
+ **Platforms**: Tornado should run on any Unix-like platform, although
+ for the best performance and scalability only Linux (with ``epoll``)
+ and BSD (with ``kqueue``) are recommended (even though Mac OS X is
+ derived from BSD and supports kqueue, its networking performance is
+ generally poor so it is recommended only for development use).
+
+ Discussion and support
+ ----------------------
+
+ You can discuss Tornado on `the Tornado developer mailing list
+ <http://groups.google.com/group/python-tornado>`_, and report bugs on
+ the `GitHub issue tracker
+ <https://github.com/facebook/tornado/issues>`_. Links to additional
+ resources can be found on the `Tornado wiki
+ <https://github.com/facebook/tornado/wiki/Links>`_.
+
+ Tornado is one of `Facebook's open source technologies
+ <http://developers.facebook.com/opensource/>`_. It is available under
+ the `Apache License, Version 2.0
+ <http://www.apache.org/licenses/LICENSE-2.0.html>`_.
+
+ This web site and all documentation is licensed under `Creative
+ Commons 3.0 <http://creativecommons.org/licenses/by/3.0/>`_.
+
+Platform: UNKNOWN
+Classifier: License :: OSI Approved :: Apache Software License
+Classifier: Programming Language :: Python :: 2
+Classifier: Programming Language :: Python :: 2.6
+Classifier: Programming Language :: Python :: 2.7
+Classifier: Programming Language :: Python :: 3
+Classifier: Programming Language :: Python :: 3.2
+Classifier: Programming Language :: Python :: 3.3
+Classifier: Programming Language :: Python :: Implementation :: CPython
+Classifier: Programming Language :: Python :: Implementation :: PyPy
23 ...packages/tornado-2.4.1-py2.6.egg-info/SOURCES.txt → ...e-packages/tornado-3.1-py2.6.egg-info/SOURCES.txt
@@ -1,12 +1,11 @@
MANIFEST.in
-README
+README.rst
runtests.sh
setup.cfg
setup.py
demos/appengine/README
demos/appengine/app.yaml
demos/appengine/blog.py
-demos/appengine/markdown.py
demos/appengine/static/blog.css
demos/appengine/templates/archive.html
demos/appengine/templates/base.html
@@ -18,10 +17,10 @@ demos/appengine/templates/modules/entry.html
demos/auth/authdemo.py
demos/benchmark/benchmark.py
demos/benchmark/chunk_benchmark.py
+demos/benchmark/stack_context_benchmark.py
demos/benchmark/template_benchmark.py
demos/blog/README
demos/blog/blog.py
-demos/blog/markdown.py
demos/blog/schema.sql
demos/blog/static/blog.css
demos/blog/templates/archive.html
@@ -44,6 +43,8 @@ demos/facebook/templates/stream.html
demos/facebook/templates/modules/post.html
demos/helloworld/helloworld.py
demos/s3server/s3server.py
+demos/twitter/home.html
+demos/twitter/twitterdemo.py
demos/websocket/chatdemo.py
demos/websocket/static/chat.css
demos/websocket/static/chat.js
@@ -53,9 +54,8 @@ tornado/__init__.py
tornado/auth.py
tornado/autoreload.py
tornado/ca-certificates.crt
+tornado/concurrent.py
tornado/curl_httpclient.py
-tornado/database.py
-tornado/epoll.c
tornado/escape.py
tornado/gen.py
tornado/httpclient.py
@@ -64,11 +64,13 @@ tornado/httputil.py
tornado/ioloop.py
tornado/iostream.py
tornado/locale.py
+tornado/log.py
tornado/netutil.py
tornado/options.py
tornado/process.py
tornado/simple_httpclient.py
tornado/stack_context.py
+tornado/tcpserver.py
tornado/template.py
tornado/testing.py
tornado/util.py
@@ -81,14 +83,19 @@ tornado.egg-info/dependency_links.txt
tornado.egg-info/top_level.txt
tornado/platform/__init__.py
tornado/platform/auto.py
+tornado/platform/caresresolver.py
tornado/platform/common.py
+tornado/platform/epoll.py
tornado/platform/interface.py
+tornado/platform/kqueue.py
tornado/platform/posix.py
+tornado/platform/select.py
tornado/platform/twisted.py
tornado/platform/windows.py
tornado/test/README
tornado/test/__init__.py
tornado/test/auth_test.py
+tornado/test/concurrent_test.py
tornado/test/curl_httpclient_test.py
tornado/test/escape_test.py
tornado/test/gen_test.py
@@ -99,6 +106,9 @@ tornado/test/import_test.py
tornado/test/ioloop_test.py
tornado/test/iostream_test.py
tornado/test/locale_test.py
+tornado/test/log_test.py
+tornado/test/netutil_test.py
+tornado/test/options_test.cfg
tornado/test/options_test.py
tornado/test/process_test.py
tornado/test/runtests.py
@@ -109,11 +119,14 @@ tornado/test/test.crt
tornado/test/test.key
tornado/test/testing_test.py
tornado/test/twisted_test.py
+tornado/test/util.py
tornado/test/util_test.py
tornado/test/web_test.py
+tornado/test/websocket_test.py
tornado/test/wsgi_test.py
tornado/test/csv_translations/fr_FR.csv
tornado/test/gettext_translations/fr_FR/LC_MESSAGES/tornado_test.mo
tornado/test/gettext_translations/fr_FR/LC_MESSAGES/tornado_test.po
tornado/test/static/robots.txt
+tornado/test/static/dir/index.html
tornado/test/templates/utf8.html
1  misc/virtenv/lib/python2.6/site-packages/tornado-3.1-py2.6.egg-info/dependency_links.txt
@@ -0,0 +1 @@
+
36 .../tornado-2.4.1-py2.6.egg-info/installed-files.txt → ...es/tornado-3.1-py2.6.egg-info/installed-files.txt
@@ -7,18 +7,20 @@
../tornado/httpserver.py
../tornado/escape.py
../tornado/httputil.py
+../tornado/tcpserver.py
../tornado/websocket.py
../tornado/iostream.py
../tornado/template.py
../tornado/auth.py
+../tornado/log.py
../tornado/__init__.py
../tornado/stack_context.py
-../tornado/database.py
../tornado/simple_httpclient.py
../tornado/netutil.py
../tornado/curl_httpclient.py
../tornado/wsgi.py
../tornado/locale.py
+../tornado/concurrent.py
../tornado/httpclient.py
../tornado/web.py
../tornado/gen.py
@@ -30,37 +32,48 @@
../tornado/test/testing_test.py
../tornado/test/auth_test.py
../tornado/test/web_test.py
+../tornado/test/websocket_test.py
+../tornado/test/util.py
../tornado/test/httpserver_test.py
../tornado/test/escape_test.py
../tornado/test/process_test.py
../tornado/test/__init__.py
../tornado/test/wsgi_test.py
../tornado/test/options_test.py
+../tornado/test/concurrent_test.py
../tornado/test/ioloop_test.py
../tornado/test/curl_httpclient_test.py
../tornado/test/stack_context_test.py
+../tornado/test/netutil_test.py
../tornado/test/template_test.py
../tornado/test/import_test.py
../tornado/test/runtests.py
../tornado/test/gen_test.py
../tornado/test/locale_test.py
../tornado/test/httputil_test.py
+../tornado/test/log_test.py
../tornado/platform/interface.py
../tornado/platform/twisted.py
../tornado/platform/posix.py
../tornado/platform/common.py
+../tornado/platform/kqueue.py
+../tornado/platform/caresresolver.py
+../tornado/platform/epoll.py
../tornado/platform/__init__.py
../tornado/platform/auto.py
+../tornado/platform/select.py
../tornado/platform/windows.py
../tornado/ca-certificates.crt
../tornado/test/README
-../tornado/test/test.crt
-../tornado/test/test.key
-../tornado/test/static/robots.txt
-../tornado/test/templates/utf8.html
../tornado/test/csv_translations/fr_FR.csv
../tornado/test/gettext_translations/fr_FR/LC_MESSAGES/tornado_test.mo
../tornado/test/gettext_translations/fr_FR/LC_MESSAGES/tornado_test.po
+../tornado/test/options_test.cfg
+../tornado/test/static/robots.txt
+../tornado/test/static/dir/index.html
+../tornado/test/templates/utf8.html
+../tornado/test/test.crt
+../tornado/test/test.key
../tornado/options.pyc
../tornado/process.pyc
../tornado/testing.pyc
@@ -70,18 +83,20 @@
../tornado/httpserver.pyc
../tornado/escape.pyc
../tornado/httputil.pyc
+../tornado/tcpserver.pyc
../tornado/websocket.pyc
../tornado/iostream.pyc
../tornado/template.pyc
../tornado/auth.pyc
+../tornado/log.pyc
../tornado/__init__.pyc
../tornado/stack_context.pyc
-../tornado/database.pyc
../tornado/simple_httpclient.pyc
../tornado/netutil.pyc
../tornado/curl_httpclient.pyc
../tornado/wsgi.pyc
../tornado/locale.pyc
+../tornado/concurrent.pyc
../tornado/httpclient.pyc
../tornado/web.pyc
../tornado/gen.pyc
@@ -93,27 +108,36 @@
../tornado/test/testing_test.pyc
../tornado/test/auth_test.pyc
../tornado/test/web_test.pyc
+../tornado/test/websocket_test.pyc
+../tornado/test/util.pyc
../tornado/test/httpserver_test.pyc
../tornado/test/escape_test.pyc
../tornado/test/process_test.pyc
../tornado/test/__init__.pyc
../tornado/test/wsgi_test.pyc
../tornado/test/options_test.pyc
+../tornado/test/concurrent_test.pyc
../tornado/test/ioloop_test.pyc
../tornado/test/curl_httpclient_test.pyc
../tornado/test/stack_context_test.pyc
+../tornado/test/netutil_test.pyc
../tornado/test/template_test.pyc
../tornado/test/import_test.pyc
../tornado/test/runtests.pyc
../tornado/test/gen_test.pyc
../tornado/test/locale_test.pyc
../tornado/test/httputil_test.pyc
+../tornado/test/log_test.pyc
../tornado/platform/interface.pyc
../tornado/platform/twisted.pyc
../tornado/platform/posix.pyc
../tornado/platform/common.pyc
+../tornado/platform/kqueue.pyc
+../tornado/platform/caresresolver.pyc
+../tornado/platform/epoll.pyc
../tornado/platform/__init__.pyc
../tornado/platform/auto.pyc
+../tornado/platform/select.pyc
../tornado/platform/windows.pyc
./
SOURCES.txt
0  ...ckages/tornado-2.4.1-py2.6.egg-info/top_level.txt → ...packages/tornado-3.1-py2.6.egg-info/top_level.txt
File renamed without changes
10 misc/virtenv/lib/python2.6/site-packages/tornado/__init__.py
@@ -16,14 +16,14 @@
"""The Tornado web server and tools."""
-from __future__ import absolute_import, division, with_statement
+from __future__ import absolute_import, division, print_function, with_statement
# version is a human-readable version number.
# version_info is a four-tuple for programmatic comparison. The first
# three numbers are the components of the version number. The fourth
# is zero for an official release, positive for a development branch,
-# or negative for a release candidate (after the base version number
-# has been incremented)
-version = "2.4.1"
-version_info = (2, 4, 1, 0)
+# or negative for a release candidate or beta (after the base version
+# number has been incremented)
+version = "3.1"
+version_info = (3, 1, 0, 0)
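The comment in the hunk above describes `version_info` as a four-tuple intended for programmatic comparison. A minimal sketch of such a version gate (the `supports_futures` helper and its `(3, 0)` threshold are illustrative assumptions, not part of Tornado):

```python
# Stand-in for tornado.version_info as vendored by this commit: the first
# three numbers are the version; the fourth is zero for an official release,
# positive for a development branch, negative for a release candidate/beta.
version_info = (3, 1, 0, 0)

def supports_futures(info):
    # Hypothetical check: the Future-based APIs touched by this diff
    # (tornado.concurrent) appeared in the 3.x line.
    return info[:2] >= (3, 0)

print(supports_futures(version_info))  # True for the 3.1 tree vendored here
```

Tuple comparison makes this kind of check concise, which is why the fourth component is signed rather than a string suffix.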
854 misc/virtenv/lib/python2.6/site-packages/tornado/auth.py
@@ -14,15 +14,19 @@
# License for the specific language governing permissions and limitations
# under the License.
-"""Implementations of various third-party authentication schemes.
+"""This module contains implementations of various third-party
+authentication schemes.
-All the classes in this file are class Mixins designed to be used with
-web.py RequestHandler classes. The primary methods for each service are
-authenticate_redirect(), authorize_redirect(), and get_authenticated_user().
-The former should be called to redirect the user to, e.g., the OpenID
-authentication page on the third party service, and the latter should
-be called upon return to get the user data from the data returned by
-the third party service.
+All the classes in this file are class mixins designed to be used with
+the `tornado.web.RequestHandler` class. They are used in two ways:
+
+* On a login handler, use methods such as ``authenticate_redirect()``,
+ ``authorize_redirect()``, and ``get_authenticated_user()`` to
+ establish the user's identity and store authentication tokens to your
+ database and/or cookies.
+* In non-login handlers, use methods such as ``facebook_request()``
+ or ``twitter_request()`` to use the authentication tokens to make
+ requests to the respective services.
They all take slightly different arguments due to the fact all these
services implement authentication and authorization slightly differently.
@@ -30,84 +34,146 @@
Example usage for Google OpenID::
- class GoogleHandler(tornado.web.RequestHandler, tornado.auth.GoogleMixin):
+ class GoogleLoginHandler(tornado.web.RequestHandler,
+ tornado.auth.GoogleMixin):
@tornado.web.asynchronous
+ @tornado.gen.coroutine
def get(self):
if self.get_argument("openid.mode", None):
- self.get_authenticated_user(self.async_callback(self._on_auth))
- return
- self.authenticate_redirect()
-
- def _on_auth(self, user):
- if not user:
- raise tornado.web.HTTPError(500, "Google auth failed")
- # Save the user with, e.g., set_secure_cookie()
+ user = yield self.get_authenticated_user()
+ # Save the user with e.g. set_secure_cookie()
+ else:
+ yield self.authenticate_redirect()
"""
-from __future__ import absolute_import, division, with_statement
+from __future__ import absolute_import, division, print_function, with_statement
import base64
import binascii
+import functools
import hashlib
import hmac
-import logging
import time
-import urllib
-import urlparse
import uuid
+from tornado.concurrent import Future, chain_future, return_future
+from tornado import gen
from tornado import httpclient
from tornado import escape
from tornado.httputil import url_concat
-from tornado.util import bytes_type, b
+from tornado.log import gen_log
+from tornado.util import bytes_type, u, unicode_type, ArgReplacer
+
+try:
+ import urlparse # py2
+except ImportError:
+ import urllib.parse as urlparse # py3
+
+try:
+ import urllib.parse as urllib_parse # py3
+except ImportError:
+ import urllib as urllib_parse # py2
+
+
+class AuthError(Exception):
+ pass
+
+
+def _auth_future_to_callback(callback, future):
+ try:
+ result = future.result()
+ except AuthError as e:
+ gen_log.warning(str(e))
+ result = None
+ callback(result)
+
+
+def _auth_return_future(f):
+ """Similar to tornado.concurrent.return_future, but uses the auth
+ module's legacy callback interface.
+
+ Note that when using this decorator the ``callback`` parameter
+ inside the function will actually be a future.
+ """
+ replacer = ArgReplacer(f, 'callback')
+
+ @functools.wraps(f)
+ def wrapper(*args, **kwargs):
+ future = Future()
+ callback, args, kwargs = replacer.replace(future, args, kwargs)
+ if callback is not None:
+ future.add_done_callback(
+ functools.partial(_auth_future_to_callback, callback))
+ f(*args, **kwargs)
+ return future
+ return wrapper
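The `_auth_return_future` decorator added above depends on Tornado's internal `ArgReplacer`. A simplified, standalone sketch of the same bridging idea (hypothetical names, using the stdlib `concurrent.futures.Future` in place of Tornado's Future) might look like this:

```python
import functools
from concurrent.futures import Future

def simple_return_future(f):
    # Toy stand-in for _auth_return_future: the wrapped function receives a
    # Future where its ``callback`` parameter used to be; a caller-supplied
    # callback, if any, fires once that Future resolves.
    @functools.wraps(f)
    def wrapper(*args, **kwargs):
        future = Future()
        callback = kwargs.pop("callback", None)
        if callback is not None:
            future.add_done_callback(lambda fut: callback(fut.result()))
        f(*args, callback=future, **kwargs)
        return future
    return wrapper

@simple_return_future
def fetch_user(name, callback):
    # ``callback`` is really the Future injected by the decorator,
    # exactly as the docstring above warns.
    callback.set_result({"name": name})

results = []
fetch_user("alice", callback=lambda user: results.append(user))
print(results)  # [{'name': 'alice'}]
```

Callers can thus keep passing legacy callbacks while new code yields the returned Future; the real decorator additionally routes `AuthError` through the Future's exception channel.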
class OpenIdMixin(object):
"""Abstract implementation of OpenID and Attribute Exchange.
- See GoogleMixin below for example implementations.
+ See `GoogleMixin` below for a customized example (which also
+ includes OAuth support).
+
+ Class attributes:
+
+ * ``_OPENID_ENDPOINT``: the identity provider's URI.
"""
+ @return_future
def authenticate_redirect(self, callback_uri=None,
- ax_attrs=["name", "email", "language", "username"]):
- """Returns the authentication URL for this service.
+ ax_attrs=["name", "email", "language", "username"],
+ callback=None):
+ """Redirects to the authentication URL for this service.
After authentication, the service will redirect back to the given
- callback URI.
+ callback URI with additional parameters including ``openid.mode``.
We request the given attributes for the authenticated user by
default (name, email, language, and username). If you don't need
all those attributes for your app, you can request fewer with
the ax_attrs keyword argument.
+
+ .. versionchanged:: 3.1
+ Returns a `.Future` and takes an optional callback. These are
+ not strictly necessary as this method is synchronous,
+ but they are supplied for consistency with
+ `OAuthMixin.authorize_redirect`.
"""
callback_uri = callback_uri or self.request.uri
args = self._openid_args(callback_uri, ax_attrs=ax_attrs)
- self.redirect(self._OPENID_ENDPOINT + "?" + urllib.urlencode(args))
+ self.redirect(self._OPENID_ENDPOINT + "?" + urllib_parse.urlencode(args))
+ callback()
+ @_auth_return_future
def get_authenticated_user(self, callback, http_client=None):
"""Fetches the authenticated user data upon redirect.
This method should be called by the handler that receives the
- redirect from the authenticate_redirect() or authorize_redirect()
- methods.
+ redirect from the `authenticate_redirect()` method (which is
+ often the same as the one that calls it; in that case you would
+ call `get_authenticated_user` if the ``openid.mode`` parameter
+ is present and `authenticate_redirect` if it is not).
+
+ The result of this method will generally be used to set a cookie.
"""
# Verify the OpenID response via direct request to the OP
- args = dict((k, v[-1]) for k, v in self.request.arguments.iteritems())
- args["openid.mode"] = u"check_authentication"
+ args = dict((k, v[-1]) for k, v in self.request.arguments.items())
+ args["openid.mode"] = u("check_authentication")
url = self._OPENID_ENDPOINT
if http_client is None:
http_client = self.get_auth_http_client()
http_client.fetch(url, self.async_callback(
self._on_authentication_verified, callback),
- method="POST", body=urllib.urlencode(args))
+ method="POST", body=urllib_parse.urlencode(args))
def _openid_args(self, callback_uri, ax_attrs=[], oauth_scope=None):
url = urlparse.urljoin(self.request.full_url(), callback_uri)
args = {
"openid.ns": "http://specs.openid.net/auth/2.0",
"openid.claimed_id":
- "http://specs.openid.net/auth/2.0/identifier_select",
+ "http://specs.openid.net/auth/2.0/identifier_select",
"openid.identity":
- "http://specs.openid.net/auth/2.0/identifier_select",
+ "http://specs.openid.net/auth/2.0/identifier_select",
"openid.return_to": url,
"openid.realm": urlparse.urljoin(url, '/'),
"openid.mode": "checkid_setup",
@@ -124,11 +190,11 @@ def _openid_args(self, callback_uri, ax_attrs=[], oauth_scope=None):
required += ["firstname", "fullname", "lastname"]
args.update({
"openid.ax.type.firstname":
- "http://axschema.org/namePerson/first",
+ "http://axschema.org/namePerson/first",
"openid.ax.type.fullname":
- "http://axschema.org/namePerson",
+ "http://axschema.org/namePerson",
"openid.ax.type.lastname":
- "http://axschema.org/namePerson/last",
+ "http://axschema.org/namePerson/last",
})
known_attrs = {
"email": "http://axschema.org/contact/email",
@@ -142,40 +208,40 @@ def _openid_args(self, callback_uri, ax_attrs=[], oauth_scope=None):
if oauth_scope:
args.update({
"openid.ns.oauth":
- "http://specs.openid.net/extensions/oauth/1.0",
+ "http://specs.openid.net/extensions/oauth/1.0",
"openid.oauth.consumer": self.request.host.split(":")[0],
"openid.oauth.scope": oauth_scope,
})
return args
- def _on_authentication_verified(self, callback, response):
- if response.error or b("is_valid:true") not in response.body:
- logging.warning("Invalid OpenID response: %s", response.error or
- response.body)
- callback(None)
+ def _on_authentication_verified(self, future, response):
+ if response.error or b"is_valid:true" not in response.body:
+ future.set_exception(AuthError(
+ "Invalid OpenID response: %s" % (response.error or
+ response.body)))
return
# Make sure we got back at least an email from attribute exchange
ax_ns = None
- for name in self.request.arguments.iterkeys():
+ for name in self.request.arguments:
if name.startswith("openid.ns.") and \
- self.get_argument(name) == u"http://openid.net/srv/ax/1.0":
+ self.get_argument(name) == u("http://openid.net/srv/ax/1.0"):
ax_ns = name[10:]
break
def get_ax_arg(uri):
if not ax_ns:
- return u""
+ return u("")
prefix = "openid." + ax_ns + ".type."
ax_name = None
- for name in self.request.arguments.iterkeys():
+ for name in self.request.arguments.keys():
if self.get_argument(name) == uri and name.startswith(prefix):
part = name[len(prefix):]
ax_name = "openid." + ax_ns + ".value." + part
break
if not ax_name:
- return u""
- return self.get_argument(ax_name, u"")
+ return u("")
+ return self.get_argument(ax_name, u(""))
email = get_ax_arg("http://axschema.org/contact/email")
name = get_ax_arg("http://axschema.org/namePerson")
@@ -194,7 +260,7 @@ def get_ax_arg(uri):
if name:
user["name"] = name
elif name_parts:
- user["name"] = u" ".join(name_parts)
+ user["name"] = u(" ").join(name_parts)
elif email:
user["name"] = email.split("@")[0]
if email:
@@ -206,36 +272,59 @@ def get_ax_arg(uri):
claimed_id = self.get_argument("openid.claimed_id", None)
if claimed_id:
user["claimed_id"] = claimed_id
- callback(user)
+ future.set_result(user)
def get_auth_http_client(self):
- """Returns the AsyncHTTPClient instance to be used for auth requests.
+ """Returns the `.AsyncHTTPClient` instance to be used for auth requests.
- May be overridden by subclasses to use an http client other than
+ May be overridden by subclasses to use an HTTP client other than
the default.
"""
return httpclient.AsyncHTTPClient()
class OAuthMixin(object):
- """Abstract implementation of OAuth.
+ """Abstract implementation of OAuth 1.0 and 1.0a.
- See TwitterMixin and FriendFeedMixin below for example implementations.
- """
+ See `TwitterMixin` and `FriendFeedMixin` below for example implementations,
+ or `GoogleMixin` for an OAuth/OpenID hybrid.
+ Class attributes:
+
+ * ``_OAUTH_AUTHORIZE_URL``: The service's OAuth authorization url.
+ * ``_OAUTH_ACCESS_TOKEN_URL``: The service's OAuth access token url.
+ * ``_OAUTH_VERSION``: May be either "1.0" or "1.0a".
+ * ``_OAUTH_NO_CALLBACKS``: Set this to True if the service requires
+ advance registration of callbacks.
+
+ Subclasses must also override the `_oauth_get_user_future` and
+ `_oauth_consumer_token` methods.
+ """
+ @return_future
def authorize_redirect(self, callback_uri=None, extra_params=None,
- http_client=None):
+ http_client=None, callback=None):
"""Redirects the user to obtain OAuth authorization for this service.
- Twitter and FriendFeed both require that you register a Callback
- URL with your application. You should call this method to log the
- user in, and then call get_authenticated_user() in the handler
- you registered as your Callback URL to complete the authorization
- process.
+ The ``callback_uri`` may be omitted if you have previously
+ registered a callback URI with the third-party service. For
+    some services (including Friendfeed), you must use a
+ previously-registered callback URI and cannot specify a
+ callback via this method.
- This method sets a cookie called _oauth_request_token which is
- subsequently used (and cleared) in get_authenticated_user for
+ This method sets a cookie called ``_oauth_request_token`` which is
+ subsequently used (and cleared) in `get_authenticated_user` for
security purposes.
+
+ Note that this method is asynchronous, although it calls
+ `.RequestHandler.finish` for you so it may not be necessary
+ to pass a callback or use the `.Future` it returns. However,
+ if this method is called from a function decorated with
+ `.gen.coroutine`, you must call it with ``yield`` to keep the