@@ -1,5 +1,5 @@
=================================
-celery - Distributed Task Queue
+Celery - Distributed Task Queue
=================================
.. image:: http://cloud.github.com/downloads/celery/celery/celery_128.png
@@ -15,8 +15,8 @@
--
-What is a Task Queue?
-=====================
+What's a Task Queue?
+====================
Task queues are used as a mechanism to distribute work across threads or
machines.
@@ -25,14 +25,14 @@ A task queue's input is a unit of work, called a task, dedicated worker
processes then constantly monitor the queue for new work to perform.
Celery communicates via messages, usually using a broker
to mediate between clients and workers. To initiate a task a client puts a
message on the queue, the broker then delivers the message to a worker.
A Celery system can consist of multiple workers and brokers, giving way
to high availability and horizontal scaling.
Celery is written in Python, but the protocol can be implemented in any
language. In addition to Python there's node-celery_ for Node.js,
and a `PHP client`_.
Language interoperability can also be achieved
@@ -55,16 +55,16 @@ Celery version 4.0 runs on,
This is the last version to support Python 2.7,
and from the next version (Celery 5.x) Python 3.6 or newer is required.
-If you are running an older version of Python, you need to be running
+If you're running an older version of Python, you need to be running
an older version of Celery:
- Python 2.6: Celery series 3.1 or earlier.
- Python 2.5: Celery series 3.0 or earlier.
- Python 2.4 was Celery series 2.2 or earlier.
Celery is a project with minimal funding,
-so we do not support Microsoft Windows.
-Please do not open any issues related to that platform.
+so we don't support Microsoft Windows.
+Please don't open any issues related to that platform.
*Celery* is usually used with a message broker to send and receive messages.
The RabbitMQ, Redis transports are feature complete,
@@ -77,7 +77,7 @@ across datacenters.
Get Started
===========
-If this is the first time you're trying to use Celery, or you are
+If this is the first time you're trying to use Celery, or you're
new to Celery 4.0 coming from previous versions then you should read our
getting started tutorials:
@@ -184,7 +184,7 @@ integration packages:
| `Tornado`_ | `tornado-celery`_ |
+--------------------+------------------------+
-The integration packages are not strictly necessary, but they can make
+The integration packages aren't strictly necessary, but they can make
development easier, and sometimes they add important hooks like closing
database connections at ``fork``.
@@ -238,7 +238,7 @@ Celery also defines a group of bundles that can be used
to install Celery and the dependencies for a given feature.
You can specify these in your requirements or on the ``pip``
command-line by using brackets. Multiple bundles can be specified by
separating them by commas.
::
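For example, a sketch of that bracket syntax, assuming bundle names (``librabbitmq``, ``redis``, ``auth``, ``msgpack``) that the installation docs of this era list as extras:

```shell
# Install Celery together with one bundle's dependencies:
$ pip install "celery[librabbitmq]"

# Multiple bundles, comma-separated inside the brackets:
$ pip install "celery[librabbitmq,redis,auth,msgpack]"
```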
@@ -334,7 +334,7 @@ You can install it by doing the following,:
# python setup.py install
The last command must be executed as a privileged user if
-you are not currently using a virtualenv.
+you aren't currently using a virtualenv.
.. _celery-installing-from-git:
@@ -372,7 +372,7 @@ Getting Help
Mailing list
------------
-For discussions about the usage, development, and future of celery,
+For discussions about the usage, development, and future of Celery,
please join the `celery-users`_ mailing list.
.. _`celery-users`: http://groups.google.com/group/celery-users/
@@ -409,7 +409,7 @@ Contributing
Development of `celery` happens at GitHub: https://github.com/celery/celery
-You are highly encouraged to participate in the development
+You're highly encouraged to participate in the development
of `celery`. If you don't like GitHub (for some reason) you're welcome
to send regular patches.
@@ -113,7 +113,7 @@ def _patch_gevent():
monkey.patch_all()
if version_info[0] == 0: # pragma: no cover
# Signals aren't working in gevent versions <1.0,
-# and are not monkey patched by patch_all()
+# and aren't monkey patched by patch_all()
_signal = __import__('signal')
_signal.signal = gsignal
@@ -25,12 +25,11 @@
#: Global default app used when no current app.
default_app = None
-#: List of all app instances (weakrefs), must not be used directly.
+#: List of all app instances (weakrefs), mustn't be used directly.
_apps = weakref.WeakSet()
-#: global set of functions to call whenever a new app is finalized
-#: E.g. Shared tasks, and built-in tasks are created
-#: by adding callbacks here.
+#: Global set of functions to call whenever a new app is finalized.
+#: Shared tasks, and built-in tasks are created by adding callbacks here.
_on_app_finalizers = set()
_task_join_will_block = False
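The reason ``_apps`` holds weak references is that merely being tracked must not keep an app alive. A minimal sketch of that behavior in plain Python (no Celery required; the ``App`` class here is a stand-in, not Celery's):

```python
import gc
import weakref

class App:
    """Stand-in for an app instance."""

apps = weakref.WeakSet()

a = App()
apps.add(a)
assert len(apps) == 1

# Dropping the last strong reference removes the entry automatically,
# so the registry never keeps otherwise-dead apps alive.
del a
gc.collect()  # make collection deterministic on non-refcounting runtimes
assert len(apps) == 0
```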
@@ -88,7 +88,7 @@ def disable_trace():
def shared_task(*args, **kwargs):
"""Create shared tasks (decorator).
-This can be used by library authors to create tasks that will work
+This can be used by library authors to create tasks that'll work
for any app environment.
Returns:
@@ -163,7 +163,7 @@ def format(self, indent=0, indent_first=True):
return info[0] + '\n' + textindent('\n'.join(info[1:]), indent)
def select_add(self, queue, **kwargs):
"""Add new task queue that will be consumed from even when
"""Add new task queue that'll be consumed from even when
a subset has been selected using the
:option:`celery worker -Q` option."""
q = self.add(queue, **kwargs)
@@ -184,7 +184,7 @@ def select(self, include):
}
def deselect(self, exclude):
"""Deselect queues so that they will not be consumed from.
"""Deselect queues so that they won't be consumed from.
Arguments:
exclude (Sequence[str], str): Names of queues to avoid
@@ -117,7 +117,7 @@ class Celery(object):
loader (str, type): The loader class, or the name of the loader
class to use. Default is :class:`celery.loaders.app.AppLoader`.
backend (str, type): The result store backend class, or the name of the
backend class to use. Default is the value of the
:setting:`result_backend` setting.
amqp (str, type): AMQP object or class name.
events (str, type): Events object or class name.
@@ -336,7 +336,7 @@ def refresh_feed(url):
a proxy object, so that the act of creating the task is not
performed until the task is used or the task registry is accessed.
-If you are depending on binding to be deferred, then you must
+If you're depending on binding to be deferred, then you must
not access any attributes on the returned object until the
application is fully set up (finalized).
"""
@@ -538,7 +538,7 @@ def setup_security(self, allowed_serializers=None, key=None, cert=None,
digest (str): Digest algorithm used when signing messages.
Default is ``sha1``.
serializer (str): Serializer used to encode messages after
-they have been signed. See :setting:`task_serializer` for
+they've been signed. See :setting:`task_serializer` for
the serializers supported. Default is ``json``.
"""
from celery.security import setup_security
@@ -578,7 +578,7 @@ def autodiscover_tasks(self, packages=None,
to "tasks", which means it look for "module.tasks" for every
module in ``packages``.
force (bool): By default this call is lazy so that the actual
-auto-discovery will not happen until an application imports
+auto-discovery won't happen until an application imports
the default modules. Forcing will cause the auto-discovery
to happen immediately.
"""
@@ -916,7 +916,7 @@ def subclass_with_self(self, Class, name=None, attribute='app',
reverse (str): Reverse path to this object used for pickling
purposes. E.g. for ``app.AsyncResult`` use ``"AsyncResult"``.
keep_reduce (bool): If enabled a custom ``__reduce__``
-implementation will not be provided.
+implementation won't be provided.
"""
Class = symbol_by_name(Class)
reverse = reverse if reverse else Class.__name__
@@ -1054,7 +1054,7 @@ def pool(self):
@property
def current_task(self):
"""The instance of the task that is being executed, or
"""The instance of the task that's being executed, or
:const:`None`."""
return _task_stack.top
@@ -202,7 +202,7 @@ def supports_color(self, colorize=None, logfile=None):
# Windows does not support ANSI color codes.
return False
if colorize or colorize is None:
-# Only use color if there is no active log file
+# Only use color if there's no active log file
# and stderr is an actual terminal.
return logfile is None and isatty(sys.stderr)
return colorize
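That decision can be sketched as a standalone function (plain Python; it omits the Windows check shown above and is not the logger's actual method):

```python
import sys

def supports_color(colorize=None, logfile=None):
    # Explicit False wins; otherwise only use color when there is no
    # active log file and stderr is a real terminal.
    if colorize or colorize is None:
        return logfile is None and sys.stderr.isatty()
    return bool(colorize)

assert supports_color(colorize=False) is False        # explicitly disabled
assert supports_color(logfile='worker.log') is False  # log file, not a tty
```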
@@ -175,7 +175,7 @@ class Task(object):
#: a minute),`'100/h'` (hundred tasks an hour)
rate_limit = None
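The rate-limit strings above ('10/s', '10/m', '100/h') normalize to tasks per second. A hypothetical parser sketching that format (illustrative, not Celery's own implementation):

```python
def rate_per_second(limit):
    # '10/s', '10/m', '100/h' -> tasks per second; falsy means no limit.
    if not limit:
        return 0.0
    n, _, unit = str(limit).partition('/')
    seconds = {'s': 1, 'm': 60, 'h': 3600}[unit or 's']
    return float(n) / seconds

assert rate_per_second('10/m') == 10 / 60
```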
-#: If enabled the worker will not store task state and return values
+#: If enabled the worker won't store task state and return values
#: for this task. Defaults to the :setting:`task_ignore_result`
#: setting.
ignore_result = None
@@ -213,7 +213,7 @@ class Task(object):
#: finished, or waiting to be retried.
#:
#: Having a 'started' status can be useful for when there are long
-#: running tasks and there is a need to report which task is currently
+#: running tasks and there's a need to report which task is currently
#: running.
#:
#: The application default can be overridden using the
@@ -247,9 +247,9 @@ class Task(object):
#: Tuple of expected exceptions.
#:
#: These are errors that are expected in normal operation
-#: and that should not be regarded as a real error by the worker.
+#: and that shouldn't be regarded as a real error by the worker.
#: Currently this means that the state will be updated to an error
-#: state, but the worker will not log the event as an error.
+#: state, but the worker won't log the event as an error.
throws = ()
#: Default task expiry time.
@@ -261,7 +261,7 @@ class Task(object):
#: Task request stack, the current request will be the topmost.
request_stack = None
-#: Some may expect a request to exist even if the task has not been
+#: Some may expect a request to exist even if the task hasn't been
#: called. This should probably be deprecated.
_default_request = None
@@ -362,7 +362,7 @@ def __reduce__(self):
# - simply grabs it from the local registry.
# - in later versions the module of the task is also included,
# - and the receiving side tries to import that module so that
-# - it will work even if the task has not been registered.
+# - it will work even if the task hasn't been registered.
mod = type(self).__module__
mod = mod if mod and mod in sys.modules else None
return (_unpickle_task_v2, (self.name, mod), None)
@@ -405,7 +405,7 @@ def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
expires (float, ~datetime.datetime): Datetime or
seconds in the future for the task should expire.
-The task will not be executed after the expiration time.
+The task won't be executed after the expiration time.
shadow (str): Override task name used in logs/monitoring.
Default is retrieved from :meth:`shadow_name`.
@@ -433,23 +433,23 @@ def apply_async(self, args=None, kwargs=None, task_id=None, producer=None,
argument.
routing_key (str): Custom routing key used to route the task to a
worker server. If in combination with a ``queue`` argument
only used to specify custom routing keys to topic exchanges.
priority (int): The task priority, a number between 0 and 9.
Defaults to the :attr:`priority` attribute.
serializer (str): Serialization method to use.
Can be `pickle`, `json`, `yaml`, `msgpack` or any custom
-serialization method that has been registered
+serialization method that's been registered
with :mod:`kombu.serialization.registry`.
Defaults to the :attr:`serializer` attribute.
compression (str): Optional compression method
to use. Can be one of ``zlib``, ``bzip2``,
or any custom compression methods registered with
-:func:`kombu.compression.register`. Defaults to
-the :setting:`task_compression` setting.
+:func:`kombu.compression.register`.
+Defaults to the :setting:`task_compression` setting.
link (~@Signature): A single, or a list of tasks signatures
to apply if the task returns successfully.
@@ -559,7 +559,7 @@ def retry(self, args=None, kwargs=None, exc=None, throw=True,
Note:
Although the task will never return above as `retry` raises an
exception to notify the worker, we use `raise` in front of the
-retry to convey that the rest of the block will not be executed.
+retry to convey that the rest of the block won't be executed.
Arguments:
args (Tuple): Positional arguments to retry with.
@@ -578,15 +578,15 @@ def retry(self, args=None, kwargs=None, exc=None, throw=True,
eta (~datetime.dateime): Explicit time and date to run the
retry at.
max_retries (int): If set, overrides the default retry limit for
-this execution. Changes to this parameter do not propagate to
-subsequent task retry attempts. A value of :const:`None`, means
-"use the default", so if you want infinite retries you would
+this execution. Changes to this parameter don't propagate to
+subsequent task retry attempts. A value of :const:`None`, means
+"use the default", so if you want infinite retries you'd
have to set the :attr:`max_retries` attribute of the task to
:const:`None` first.
time_limit (int): If set, overrides the default time limit.
soft_time_limit (int): If set, overrides the default soft
time limit.
-throw (bool): If this is :const:`False`, do not raise the
+throw (bool): If this is :const:`False`, don't raise the
:exc:`~@Retry` exception, that tells the worker to mark
the task as being retried. Note that this means the task
will be marked as failed if the task raises an exception,
@@ -760,7 +760,7 @@ def replace(self, sig):
Raises:
~@Ignore: This is always raised, so the best practice
is to always use ``raise self.replace(...)`` to convey
-to the reader that the task will not continue after being replaced.
+to the reader that the task won't continue after being replaced.
"""
chord = self.request.chord
if 'chord' in sig.options:
@@ -798,7 +798,7 @@ def add_to_chord(self, sig, lazy=False):
Arguments:
sig (~@Signature): Signature to extend chord with.
-lazy (bool): If enabled the new task will not actually be called,
+lazy (bool): If enabled the new task won't actually be called,
and ``sig.delay()`` must be called manually.
"""
if not self.request.chord:
@@ -322,7 +322,7 @@ def trace_task(uuid, args, kwargs, request=None):
# retval - is the always unmodified return value.
# state - is the resulting task state.
-# This function is very long because we have unrolled all the calls
+# This function is very long because we've unrolled all the calls
# for performance reasons, and because the function is so long
# we want the main variables (I, and R) to stand out visually from the
# the rest of the variables, so breaking PEP8 is worth it ;)
@@ -539,7 +539,7 @@ def setup_worker_optimizations(app, hostname=None):
hostname = hostname or gethostname()
# make sure custom Task.__call__ methods that calls super
-# will not mess up the request/task stack.
+# won't mess up the request/task stack.
_install_stack_protection()
# all new threads start without a current app, so if an app is not
@@ -593,7 +593,7 @@ def _install_stack_protection():
# they work when tasks are called directly.
#
# The worker only optimizes away __call__ in the case
-# where it has not been overridden, so the request/task stack
+# where it hasn't been overridden, so the request/task stack
# will blow if a custom task class defines __call__ and also
# calls super().
if not getattr(BaseTask, '_stackprotected', False):
@@ -216,7 +216,7 @@ def detect_settings(conf, preconf={}, ignore_keys=set(), prefix=None,
# always use new format if prefix is used.
info, left = _settings_info, set()
-# only raise error for keys that the user did not provide two keys
+# only raise error for keys that the user didn't provide two keys
# for (e.g. both ``result_expires`` and ``CELERY_TASK_RESULT_EXPIRES``).
really_left = {key for key in left if info.convert[key] not in have}
if really_left:
@@ -14,7 +14,7 @@
def repair_uuid(s):
# Historically the dashes in UUIDS are removed from AMQ entity names,
-# but there is no known reason to. Hopefully we'll be able to fix
+# but there's no known reason to. Hopefully we'll be able to fix
# this in v4.0.
return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])
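Since the function is tiny, it's easy to check that the repair round-trips through the standard ``uuid`` module (a self-contained sketch copying the one-liner above):

```python
import uuid

def repair_uuid(s):
    # Re-insert the dashes stripped from a UUID used in an AMQ entity name.
    return '%s-%s-%s-%s-%s' % (s[:8], s[8:12], s[12:16], s[16:20], s[20:])

u = uuid.uuid4()
assert repair_uuid(u.hex) == str(u)  # hex form regains 8-4-4-4-12 dashes
```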
@@ -83,7 +83,7 @@ class Backend(object):
supports_native_join = False
#: If true the backend must automatically expire results.
-#: The daily backend_cleanup periodic task will not be triggered
+#: The daily backend_cleanup periodic task won't be triggered
#: in this case.
supports_autoexpire = False
@@ -141,7 +141,7 @@ def mark_as_failure(self, task_id, exc,
traceback=None, request=None,
store_result=True, call_errbacks=True,
state=states.FAILURE):
"""Mark task as executed with failure. Stores the exception."""
"""Mark task as executed with failure."""
if store_result:
self.store_result(task_id, exc, state,
traceback=traceback, request=request)
@@ -179,8 +179,11 @@ def mark_as_revoked(self, task_id, reason='',
def mark_as_retry(self, task_id, exc, traceback=None,
request=None, store_result=True, state=states.RETRY):
"""Mark task as being retries. Stores the current
exception (if any)."""
"""Mark task as being retries.
Note:
Stores the current exception (if any).
"""
return self.store_result(task_id, exc, state,
traceback=traceback, request=request)
@@ -364,8 +367,11 @@ def delete_group(self, group_id):
return self._delete_group(group_id)
def cleanup(self):
"""Backend cleanup. Is run by
:class:`celery.task.DeleteExpiredTaskMetaTask`."""
"""Backend cleanup.
Note:
This is run by :class:`celery.task.DeleteExpiredTaskMetaTask`.
"""
pass
def process_cleanup(self):
@@ -21,7 +21,7 @@
E_NO_CASSANDRA = """
You need to install the cassandra-driver library to
use the Cassandra backend. See https://github.com/datastax/python-driver
"""
E_NO_SUCH_CASSANDRA_AUTH_PROVIDER = """
@@ -145,8 +145,8 @@ def _get_connection(self, write=False):
auth_provider=self.auth_provider)
self._session = self._connection.connect(self.keyspace)
-# We are forced to do concatenation below, as formatting would
-# blow up on superficial %s that will be processed by Cassandra
+# We're forced to do concatenation below, as formatting would
+# blow up on superficial %s that'll be processed by Cassandra
self._write_stmt = cassandra.query.SimpleStatement(
Q_INSERT_RESULT.format(
table=self.table, expires=self.cqlexpires),
@@ -160,7 +160,7 @@ def _get_connection(self, write=False):
if write:
# Only possible writers "workers" are allowed to issue
# CREATE TABLE. This is to prevent conflicting situations
# where both task-creator and task-executor would issue it
# at the same time.
@@ -53,7 +53,7 @@ def _inner(*args, **kwargs):
return fun(*args, **kwargs)
except (DatabaseError, InvalidRequestError, StaleDataError):
logger.warning(
'Failed operation %s. Retrying %s more times.',
fun.__name__, max_retries - retries - 1,
exc_info=True)
if retries + 1 >= max_retries:
@@ -50,7 +50,7 @@ def __init__(self, url=None, open=open, unlink=os.unlink, sep=os.sep,
self.open = open
self.unlink = unlink
-# Lets verify that we have everything setup right
+# Lets verify that we've everything setup right
self._do_directory_test(b'.fs-backend-' + uuid().encode(encoding))
def _find_path(self, url):
@@ -96,7 +96,7 @@ def __init__(self, app=None, **kwargs):
if not isinstance(config, dict):
raise ImproperlyConfigured(
'MongoDB backend settings should be grouped in a dict')
-config = dict(config) # do not modify original
+config = dict(config) # don't modify original
if 'host' in config or 'port' in config:
# these should take over uri conf
@@ -134,7 +134,7 @@ def _get_connection(self):
if not host:
# The first pymongo.Connection() argument (host) can be
# a list of ['host:port'] elements or a mongodb connection
# URI. If this is the case, don't use self.port
# but let pymongo get the port(s) from the URI instead.
# This enables the use of replica sets and sharding.
# See pymongo.Connection() for more info.
@@ -268,7 +268,7 @@ def collection(self):
collection = self.database[self.taskmeta_collection]
# Ensure an index on date_done is there, if not process the index
# in the background. Once completed cleanup will be much faster
collection.ensure_index('date_done', background='true')
return collection
@@ -278,7 +278,7 @@ def group_collection(self):
collection = self.database[self.groupmeta_collection]
# Ensure an index on date_done is there, if not process the index
# in the background. Once completed cleanup will be much faster
collection.ensure_index('date_done', background='true')
return collection
@@ -171,7 +171,7 @@ def get_task_meta(self, task_id, backlog_limit=1000):
tid = self._get_message_task_id(acc)
prev, latest_by_id[tid] = latest_by_id.get(tid), acc
if prev:
-# backends are not expected to keep history,
+# backends aren't expected to keep history,
# so we delete everything except the most recent state.
prev.ack()
prev = None
@@ -150,7 +150,7 @@ def __lt__(self, other):
# in the scheduler heap, the order is decided by the
# preceding members of the tuple ``(time, priority, entry)``.
#
-# If all that is left to order on is the entry then it can
+# If all that's left to order on is the entry then it can
# just as well be random.
return id(self) < id(other)
return NotImplemented
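The point of that fallback is only to make the heap tuples totally ordered. A standalone sketch with plain ``heapq`` (the ``Entry`` class here is a stand-in for the scheduler entry, not Celery's):

```python
import heapq

class Entry:
    def __lt__(self, other):
        # Arbitrary but consistent tie-breaker: compare identities, so
        # (time, priority, entry) tuples never raise TypeError on ties.
        if isinstance(other, Entry):
            return id(self) < id(other)
        return NotImplemented

a, b = Entry(), Entry()
heap = [(2.0, 5, a), (1.0, 5, b), (1.0, 5, a)]
heapq.heapify(heap)  # ties on (1.0, 5) fall through to Entry.__lt__
when, priority, entry = heapq.heappop(heap)
assert when == 1.0   # earliest scheduled time still pops first
```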
@@ -161,13 +161,13 @@ class Scheduler(object):
The :program:`celery beat` program may instantiate this class
multiple times for introspection purposes, but then with the
-``lazy`` argument set. It is important for subclasses to
+``lazy`` argument set. It's important for subclasses to
be idempotent when this argument is set.
Arguments:
schedule (~celery.schedules.schedule): see :attr:`schedule`.
max_interval (int): see :attr:`max_interval`.
-lazy (bool): Do not set up the schedule.
+lazy (bool): Don't set up the schedule.
"""
Entry = ScheduleEntry
@@ -236,7 +236,7 @@ def is_due(self, entry):
def tick(self, event_t=event_t, min=min,
heappop=heapq.heappop, heappush=heapq.heappush,
heapify=heapq.heapify, mktime=time.mktime):
"""Run a tick, that is one iteration of the scheduler.
"""Run a tick - one iteration of the scheduler.
Executes one due task per call.
@@ -423,8 +423,8 @@ def setup_schedule(self):
try:
self._store = self._open_schedule()
# In some cases there may be different errors from a storage
# backend for corrupted files. Example - DBPageNotFoundError
# exception from bsddb. In such case the file will be
# successfully opened but the error will be raised on first key
# retrieving.
self._store.keys()
@@ -190,7 +190,7 @@ def __init__(self, *args, **kwargs):
self._reconnect()
def note(self, m):
"""Say something to the user. Disabled if :attr:`silent`."""
"""Say something to the user. Disabled if :attr:`silent`."""
if not self.silent:
say(m, file=self.out)
@@ -285,7 +285,7 @@ def ask(self, q, choices, default=None):
Matching is case insensitive.
Arguments:
-q (str): the question to ask (do not include questionark)
+q (str): the question to ask (don't include question mark)
choice (Tuple[str]): tuple of possible choices, must be lowercase.
default (Any): Default value if any.
"""
@@ -13,7 +13,7 @@
.. cmdoption:: -s, --schedule
Path to the schedule database. Defaults to `celerybeat-schedule`.
The extension '.db' may be appended to the filename.
Default is {default}.
@@ -28,7 +28,7 @@
.. cmdoption:: -f, --logfile
Path to log file. If no logfile is specified, `stderr` is used.
.. cmdoption:: -l, --loglevel
@@ -39,7 +39,7 @@
Optional file used to store the process pid.
-The program will not start if this file already exists
+The program won't start if this file already exists
and the pid is still alive.
.. cmdoption:: --uid
@@ -50,13 +50,13 @@
.. cmdoption:: -f, --logfile
Path to log file. If no logfile is specified, `stderr` is used.
.. cmdoption:: --pidfile
Optional file used to store the process pid.
-The program will not start if this file already exists
+The program won't start if this file already exists
and the pid is still alive.
.. cmdoption:: --uid
@@ -471,7 +471,7 @@ class purge(Command):
option_list = Command.option_list + (
Option('--force', '-f', action='store_true',
-help='Do not prompt for verification'),
+help="Don't prompt for verification"),
Option('--queues', '-Q', default=[],
help='Comma separated list of queue names to purge.'),
Option('--exclude-queues', '-X', default=[],
@@ -601,7 +601,7 @@ def call(self, *args, **kwargs):
def run(self, *args, **kwargs):
if not args:
raise self.UsageError(
'Missing {0.name} method. See --help'.format(self))
return self.do_call_method(args, **kwargs)
def _ensure_fanout_supported(self):
@@ -1106,7 +1106,7 @@ def _relocate_args_from_start(self, argv, index=0):
elif value.startswith('-'):
# we eat the next argument even though we don't know
# if this option takes an argument or not.
-# instead we will assume what is the command name in the
+# instead we'll assume what's the command name in the
# return statements below.
try:
nxt = argv[index + 1]
@@ -34,13 +34,13 @@
.. cmdoption:: -f, --logfile
Path to log file. If no logfile is specified, `stderr` is used.
.. cmdoption:: --pidfile
Optional file used to store the process pid.
-The program will not start if this file already exists
+The program won't start if this file already exists
and the pid is still alive.
.. cmdoption:: --uid
@@ -20,7 +20,7 @@
$ # You need to add the same arguments when you restart,
-$ # as these are not persisted anywhere.
+$ # as these aren't persisted anywhere.
$ celery multi restart Leslie -E --pidfile=/var/run/celery/%n.pid
--logfile=/var/run/celery/%n%I.log
@@ -11,7 +11,7 @@
.. cmdoption:: -c, --concurrency
Number of child processes processing the queue. The default
is the number of CPUs available on your system.
.. cmdoption:: -P, --pool
@@ -22,12 +22,12 @@
.. cmdoption:: -n, --hostname
Set custom hostname, e.g. 'w1.%h'. Expands: %h (hostname),
%n (name) and %d, (domain).
.. cmdoption:: -B, --beat
Also run the `celery beat` periodic task scheduler. Please note that
there must only be one instance of this service.
.. cmdoption:: -Q, --queues
@@ -50,7 +50,7 @@
.. cmdoption:: -s, --schedule
Path to the schedule database if running with the `-B` option.
Defaults to `celerybeat-schedule`. The extension ".db" may be
appended to the filename.
.. cmdoption:: -O
@@ -63,13 +63,13 @@
.. cmdoption:: --scheduler
Scheduler class to use. Default is
:class:`celery.beat.PersistentScheduler`
.. cmdoption:: -S, --statedb
Path to the state database. The extension '.db' may
be appended to the filename. Default: {default}
.. cmdoption:: -E, --events
@@ -78,15 +78,15 @@
.. cmdoption:: --without-gossip
-Do not subscribe to other workers events.
+Don't subscribe to other workers events.
.. cmdoption:: --without-mingle
-Do not synchronize with other workers at start-up.
+Don't synchronize with other workers at start-up.
.. cmdoption:: --without-heartbeat
-Do not send event heartbeats.
+Don't send event heartbeats.
.. cmdoption:: --heartbeat-interval
@@ -114,7 +114,7 @@
.. cmdoption:: --maxmemperchild
Maximum amount of resident memory, in KiB, that may be consumed by a
child process before it will be replaced by a new one. If a single
task causes a child process to exceed this limit, the task will be
completed and the child process will be replaced afterwards.
Default: no limit.
@@ -125,7 +125,7 @@
.. cmdoption:: -f, --logfile
Path to log file. If no logfile is specified, `stderr` is used.
.. cmdoption:: -l, --loglevel
@@ -136,7 +136,7 @@
Optional file used to store the process pid.
-The program will not start if this file already exists
+The program won't start if this file already exists
and the pid is still alive.
.. cmdoption:: --uid
@@ -231,7 +231,7 @@ def run(self, hostname=None, pool_cls=None, app=None, uid=None, gid=None,
try:
loglevel = mlevel(loglevel)
except KeyError: # pragma: no cover
self.die('Unknown level {0!r}. Please use one of {1}.'.format(
loglevel, '|'.join(
l for l in LOG_LEVELS if isinstance(l, string_t))))
@@ -294,8 +294,8 @@ def freeze(self, _id=None, group_id=None, chord=None,
root_id=None, parent_id=None):
"""Finalize the signature by adding a concrete task id.
-The task will not be called and you should not call the signature
-twice after freezing it as that will result in two task messages
+The task won't be called and you shouldn't call the signature
+twice after freezing it as that'll result in two task messages
using the same task id.
Returns:
@@ -542,7 +542,7 @@ class chain(Signature):
Arguments:
*tasks (Signature): List of task signatures to chain.
If only one argument is passed and that argument is
-an iterable, then that will be used as the list of signatures
+an iterable, then that'll be used as the list of signatures
to chain instead. This means that you can use a generator
expression.
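The single-iterable convention described above can be sketched with a hypothetical helper (``maybe_unroll`` here is illustrative, not Celery's API):

```python
def maybe_unroll(*tasks):
    # One argument that is itself iterable becomes the task list,
    # which is what lets generator expressions work.
    if len(tasks) == 1 and hasattr(tasks[0], '__iter__'):
        return list(tasks[0])
    return list(tasks)

assert maybe_unroll('sig1', 'sig2', 'sig3') == ['sig1', 'sig2', 'sig3']
assert maybe_unroll(n for n in range(3)) == [0, 1, 2]
```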
@@ -853,7 +853,7 @@ class group(Signature):
Note:
If only one argument is passed, and that argument is an iterable
-then that will be used as the list of tasks instead, which
+then that'll be used as the list of tasks instead, which
means you can use ``group`` with generator expressions.
Example:
@@ -864,8 +864,8 @@ class group(Signature):
Arguments:
*tasks (Signature): A list of signatures that this group will call.
-If there is only one argument, and that argument is an iterable,
-then that will define the list of signatures instead.
+If there's only one argument, and that argument is an iterable,
+then that'll define the list of signatures instead.
**options (Any): Execution options applied to all tasks
in the group.
@@ -904,7 +904,7 @@ def _prepared(self, tasks, partial_args, group_id, root_id, app,
for task in tasks:
if isinstance(task, CallableSignature):
# local sigs are always of type Signature, and we
-# clone them to make sure we do not modify the originals.
+# clone them to make sure we don't modify the originals.
task = task.clone()
else:
# serialized sigs must be converted to Signature.
@@ -969,7 +969,7 @@ def apply_async(self, args=(), kwargs=None, add_to_parent=True,
p.finalize()
# - Special case of group(A.s() | group(B.s(), C.s()))
-# That is, group with single item that is a chain but the
+# That is, group with single item that's a chain but the
# last task in that chain is a group.
#
# We cannot actually support arbitrary GroupResults in chains,
@@ -82,7 +82,7 @@ def unpack_from(fmt, iobuf, unpack=struct.unpack): # noqa
#: Constant sent by child process when started (ready to accept work)
WORKER_UP = 15
-#: A process must have started before this timeout (in secs.) expires.
+#: A process must've started before this timeout (in secs.) expires.
PROC_ALIVE_TIMEOUT = 4.0
SCHED_STRATEGY_PREFETCH = 1
@@ -163,7 +163,7 @@ def _select(readers=None, writers=None, err=None, timeout=0,
Returns:
Tuple[Set, Set, Set]: of ``(readable, writable, again)``, where
``readable`` is a set of fds that have data available for read,
``writable`` is a set of fds that is ready to be written to
``writable`` is a set of fds that's ready to be written to
and ``again`` is a flag that if set means the caller must
throw away the result and call us again.
"""
@@ -307,7 +307,7 @@ def on_stop_not_started(self):
on_state_change = self.on_state_change
join_exited_workers = self.join_exited_workers
# flush the processes outqueues until they have all terminated.
# flush the processes' outqueues until they've all terminated.
outqueues = set(fileno_to_outq)
while cache and outqueues and self._state != TERMINATE:
if check_timeouts is not None:
@@ -386,7 +386,7 @@ def __init__(self, processes=None, synack=False,
# synqueue fileno -> process mapping
self._fileno_to_synq = {}
# We keep track of processes that have not yet
# We keep track of processes that haven't yet
# sent a WORKER_UP message. If a process fails to send
# this message within proc_up_timeout we terminate it
# and hope the next process will recover.
@@ -564,7 +564,7 @@ def verify_process_alive(proc):
def on_process_up(proc):
"""Called when a process has started."""
# If we got the same fd as a previous process then we will also
# If we got the same fd as a previous process then we'll also
# receive jobs in the old buffer, so we need to reset the
# job._write_to and job._scheduled_for attributes used to recover
# message boundaries when processes exit.
@@ -603,7 +603,7 @@ def _remove_from_index(obj, proc, index, remove_fun, callback=None):
try:
if index[fd] is proc:
# fd has not been reused so we can remove it from index.
# fd hasn't been reused so we can remove it from index.
index.pop(fd, None)
except KeyError:
pass
@@ -927,7 +927,7 @@ def _write_ack(fd, ack, callback=None):
def flush(self):
if self._state == TERMINATE:
return
# cancel all tasks that have not been accepted so that NACK is sent.
# cancel all tasks that haven't been accepted so that NACK is sent.
for job in values(self._cache):
if not job._accepted:
job._cancel()
@@ -957,7 +957,7 @@ def flush(self):
for gen in writers:
if (gen.__name__ == '_write_job' and
gen_not_started(gen)):
# has not started writing the job so can
# hasn't started writing the job so can
# discard the task, but we must also remove
# it from the Pool._cache.
try:
@@ -1006,7 +1006,7 @@ def _flush_writer(self, proc, writer):
def get_process_queues(self):
"""Get queues for a new process.
Here we will find an unused slot, as there should always
Here we'll find an unused slot, as there should always
be one available when we start a new process.
"""
return next(q for q, owner in items(self._queues)
@@ -1028,8 +1028,8 @@ def create_process_queues(self):
"""Creates new in, out (and optionally syn) queues,
returned as a tuple."""
# NOTE: Pipes must be set O_NONBLOCK at creation time (the original
# fd), otherwise it will not be possible to change the flags until
# there is an actual reader/writer on the other side.
# fd), otherwise it won't be possible to change the flags until
# there's an actual reader/writer on the other side.
inq = _SimpleQueue(wnonblock=True)
outq = _SimpleQueue(rnonblock=True)
synq = None
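The O_NONBLOCK note above can be illustrated with a small POSIX-only sketch (a hypothetical `nonblocking_pipe` helper, not the `_SimpleQueue` internals):

```python
import fcntl
import os

def nonblocking_pipe():
    # Set O_NONBLOCK on the original fds right after creation, per the
    # note above: flipping the flag later may not be possible until a
    # reader/writer exists on the other side.
    r, w = os.pipe()
    for fd in (r, w):
        flags = fcntl.fcntl(fd, fcntl.F_GETFL)
        fcntl.fcntl(fd, fcntl.F_SETFL, flags | os.O_NONBLOCK)
    return r, w
```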
@@ -1106,7 +1106,7 @@ def _process_cleanup_queues(self, proc):
@staticmethod
def _stop_task_handler(task_handler):
"""Called at shutdown to tell processes that we are shutting down."""
"""Called at shutdown to tell processes that we're shutting down."""
for proc in task_handler.pool:
try:
setblocking(proc.inq._writer, 1)
@@ -1145,14 +1145,14 @@ def _setup_queues(self):
# this is only used by the original pool which uses a shared
# queue for all processes.
# these attributes makes no sense for us, but we will still
# these attributes make no sense for us, but we'll still
# have to initialize them.
self._inqueue = self._outqueue = \
self._quick_put = self._quick_get = self._poll_result = None
def process_flush_queues(self, proc):
"""Flushes all queues, including the outbound buffer, so that
all tasks that have not been started will be discarded.
all tasks that haven't been started will be discarded.
In Celery this is called whenever the transport connection is lost
(consumer restart), and when a process is terminated.
View
@@ -50,7 +50,7 @@ def process_initializer(app, hostname):
platforms.signals.ignore(*WORKER_SIGIGNORE)
platforms.set_mp_process_title('celeryd', hostname=hostname)
# This is for Windows and other platforms not supporting
# fork(). Note that init_worker makes sure it's only
# fork(). Note that init_worker makes sure it's only
# run once per process.
app.loader.init_worker()
app.loader.init_worker_process()
View
@@ -5,7 +5,7 @@
=========================
For long-running :class:`Task`'s, it can be desirable to support
aborting during execution. Of course, these tasks should be built to
aborting during execution. Of course, these tasks should be built to
support abortion specifically.
The :class:`AbortableTask` serves as a base class for all :class:`Task`
@@ -16,7 +16,7 @@
* Consumers (workers) should periodically check (and honor!) the
:meth:`is_aborted` method at controlled points in their task's
:meth:`run` method. The more often, the better.
:meth:`run` method. The more often, the better.
The necessary intermediate communication is dealt with by the
:class:`AbortableTask` implementation.
@@ -71,9 +71,9 @@ def myview(request):
time.sleep(10)
result.abort()
After the `result.abort()` call, the task execution is not
aborted immediately. In fact, it is not guaranteed to abort at all. Keep
checking `result.state` status, or call `result.get(timeout=)` to
After the `result.abort()` call, the task execution isn't
aborted immediately. In fact, it's not guaranteed to abort at all.
Keep checking `result.state` status, or call `result.get(timeout=)` to
have it block until the task is finished.
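The cooperative checking described above can be sketched without Celery. The `aborted` event here is a hypothetical stand-in for the backend-stored flag that `is_aborted()` consults:

```python
import threading

def abortable_loop(aborted, steps=100):
    # Check the abort flag at controlled points, mirroring the
    # periodic self.is_aborted() checks an AbortableTask's run()
    # method is expected to perform.
    done = 0
    for _ in range(steps):
        if aborted.is_set():
            return ('ABORTED', done)
        done += 1
    return ('SUCCESS', done)
```

As the documentation notes, abortion is not guaranteed: if the flag is never checked between the abort request and completion, the task simply finishes.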
.. note::
View
@@ -127,7 +127,7 @@ def move(predicate, connection=None, exchange=None, routing_key=None,
Arguments:
predicate (Callable): Filter function used to decide which messages
to move. Must accept the standard signature of ``(body, message)``
used by Kombu consumer callbacks. If the predicate wants the
used by Kombu consumer callbacks. If the predicate wants the
message to be moved it must return either:
1) a tuple of ``(exchange, routing_key)``, or
View
@@ -172,7 +172,7 @@ def do_quit(self, arg):
do_q = do_exit = do_quit
def set_quit(self):
# this raises a BdbQuit exception that we are unable to catch.
# this raises a BdbQuit exception that we're unable to catch.
sys.settrace(None)
View
@@ -14,7 +14,7 @@
extensions = (...,
'celery.contrib.sphinx')
If you would like to change the prefix for tasks in reference documentation
If you'd like to change the prefix for tasks in reference documentation
then you can change the ``celery_task_prefix`` configuration value:
.. code-block:: python
View
@@ -85,8 +85,8 @@ class EventDispatcher(object):
groups (Sequence[str]): List of groups to send events for.
:meth:`send` will ignore send requests to groups not in this list.
If this is :const:`None`, all events will be sent. Example groups
include ``"task"`` and ``"worker"``.
If this is :const:`None`, all events will be sent.
Example groups include ``"task"`` and ``"worker"``.
enabled (bool): Set to :const:`False` to not actually publish any
events, making :meth:`send` a no-op.
@@ -180,7 +180,7 @@ def publish(self, type, fields, producer,
retry (bool): Retry in the event of connection failure.
retry_policy (Mapping): Map of custom retry policy options.
See :meth:`~kombu.Connection.ensure`.
blind (bool): Don't set logical clock value (also do not forward
blind (bool): Don't set logical clock value (also don't forward
the internal logical clock).
Event (Callable): Event type used to create event.
Defaults to :func:`Event`.
@@ -223,7 +223,7 @@ def send(self, type, blind=False, utcoffset=utcoffset, retry=False,
retry (bool): Retry in the event of connection failure.
retry_policy (Mapping): Map of custom retry policy options.
See :meth:`~kombu.Connection.ensure`.
blind (bool): Don't set logical clock value (also do not forward
blind (bool): Don't set logical clock value (also don't forward
the internal logical clock).
Event (Callable): Event type used to create event,
defaults to :func:`Event`.
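The group filtering described for `EventDispatcher` can be sketched as follows. `should_send` is a hypothetical helper illustrating the documented rule, not the dispatcher's actual code:

```python
def should_send(event_type, groups):
    # Event types are namespaced as '<group>-<name>' (e.g.
    # 'task-received'). A groups value of None means send everything;
    # otherwise only events whose group is listed are published.
    if groups is None:
        return True
    return event_type.split('-', 1)[0] in groups
```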
View
@@ -499,7 +499,7 @@ def run(self):
def capture_events(app, state, display): # pragma: no cover
def on_connection_error(exc, interval):
print('Connection Error: {0!r}. Retry in {1}s.'.format(
print('Connection Error: {0!r}. Retry in {1}s.'.format(
exc, interval), file=sys.stderr)
while 1:
View
@@ -2,7 +2,7 @@
"""Utility to dump events to screen.
This is a simple program that dumps events to the console
as they happen. Think of it like a `tcpdump` for Celery events.
as they happen. Think of it like a `tcpdump` for Celery events.
"""
from __future__ import absolute_import, print_function, unicode_literals
View
@@ -1,9 +1,9 @@
# -*- coding: utf-8 -*-
"""Periodically store events in a database.
Consuming the events as a stream is not always suitable
Consuming the events as a stream isn't always suitable
so this module implements a system to take snapshots of the
state of a cluster at regular intervals. There is a full
state of a cluster at regular intervals. There's a full
implementation of this writing the snapshots to a database
in :mod:`djcelery.snapshots` in the `django-celery` distribution.
"""
View
@@ -110,7 +110,7 @@ def heartbeat_expires(timestamp, freq=60,
expire_window=HEARTBEAT_EXPIRE_WINDOW,
Decimal=Decimal, float=float, isinstance=isinstance):
# some json implementations returns decimal.Decimal objects,
# which are not compatible with float.
# which aren't compatible with float.
freq = float(freq) if isinstance(freq, Decimal) else freq
if isinstance(timestamp, Decimal):
timestamp = float(timestamp)
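A sketch of the computation around this coercion, assuming the default expire window of 200 (interpreted as a percentage of the heartbeat frequency); treat the exact formula as illustrative:

```python
from decimal import Decimal

def heartbeat_expires(timestamp, freq=60, expire_window=200):
    # Coerce Decimal first: Decimal and float can't be mixed in
    # arithmetic, which is the incompatibility noted above.
    freq = float(freq) if isinstance(freq, Decimal) else freq
    if isinstance(timestamp, Decimal):
        timestamp = float(timestamp)
    # The window is a percentage: 200 means the heartbeat expires
    # after twice the heartbeat frequency.
    return timestamp + (freq * (expire_window / 100.0))
```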
@@ -261,7 +261,7 @@ class Task(object):
#: How to merge out of order events.
#: Disorder is detected by logical ordering (e.g. :event:`task-received`
#: must have happened before a :event:`task-failed` event).
#: must've happened before a :event:`task-failed` event).
#:
#: A merge rule consists of a state and a list of fields to keep from
#: that state. ``(RECEIVED, ('name', 'args')``, means the name and args
View
@@ -24,7 +24,7 @@
]
UNREGISTERED_FMT = """\
Task of kind {0} is not registered, please make sure it's imported.\
Task of kind {0} isn't registered, please make sure it's imported.\
"""
@@ -125,7 +125,7 @@ class ImproperlyConfigured(ImportError):
@python_2_unicode_compatible
class NotRegistered(KeyError, CeleryError):
"""The task is not registered."""
"""The task ain't registered."""
def __repr__(self):
return UNREGISTERED_FMT.format(self)
@@ -148,19 +148,19 @@ class TaskRevokedError(CeleryError):
class NotConfigured(CeleryWarning):
"""Celery has not been configured, as no config module has been found."""
"""Celery hasn't been configured, as no config module has been found."""
class AlwaysEagerIgnored(CeleryWarning):
"""send_task ignores :setting:`task_always_eager` option"""
class InvalidTaskError(CeleryError):
"""The task has invalid data or is not properly constructed."""
"""The task has invalid data or ain't properly constructed."""
class IncompleteStream(CeleryError):
"""Found the end of a stream of data, but the data is not yet complete."""
"""Found the end of a stream of data, but the data isn't complete."""
class ChordError(CeleryError):
View
@@ -25,7 +25,7 @@
ERR_NOT_INSTALLED = """\
Environment variable DJANGO_SETTINGS_MODULE is defined
but Django is not installed. Will not apply Django fix-ups!
but Django isn't installed. Won't apply Django fix-ups!
"""
View
@@ -298,7 +298,7 @@ def __unicode__(self):
class PromiseProxy(Proxy):
"""This is a proxy to an object that has not yet been evaulated.
"""This is a proxy to an object that hasn't yet been evaulated.
:class:`Proxy` will evaluate the object each time, while the
promise will only evaluate it once.
View
@@ -79,7 +79,7 @@
"""
ROOT_DISCOURAGED = """\
You are running the worker with superuser privileges, which is
You're running the worker with superuser privileges, which is
absolutely not recommended!
Please specify a different user using the -u option.
@@ -177,12 +177,12 @@ def remove(self):
os.unlink(self.path)
def remove_if_stale(self):
"""Remove the lock if the process is not running.
"""Remove the lock if the process isn't running.
(does not respond to signals)."""
try:
pid = self.read_pid()
except ValueError as exc:
print('Broken pidfile found. Removing it.', file=sys.stderr)
print('Broken pidfile found - Removing it.', file=sys.stderr)
self.remove()
return True
if not pid:
@@ -193,7 +193,7 @@ def remove_if_stale(self):
os.kill(pid, 0)
except os.error as exc:
if exc.errno == errno.ESRCH:
print('Stale pidfile exists. Removing it.', file=sys.stderr)
print('Stale pidfile exists - Removing it.', file=sys.stderr)
self.remove()
return True
return False
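The liveness probe used by `remove_if_stale` above can be sketched as a standalone helper (`pid_is_running` is a hypothetical name for illustration):

```python
import errno
import os

def pid_is_running(pid):
    # Signal 0 does error checking only: ESRCH means the process is
    # gone, so a pidfile pointing at it is stale and safe to remove.
    try:
        os.kill(pid, 0)
    except OSError as exc:
        if exc.errno == errno.ESRCH:
            return False
        raise
    return True
```

Note that `os.kill(pid, 0)` can also raise `EPERM` when the process exists but belongs to another user; re-raising there is deliberate, since the process is not stale.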
@@ -229,7 +229,7 @@ def create_pidlock(pidfile):
"""Create and verify pidfile.
If the pidfile already exists the program exits with an error message,
however if the process it refers to is not running anymore, the pidfile
however if the process it refers to isn't running anymore, the pidfile
is deleted and the program continues.
This function will automatically install an :mod:`atexit` handler
@@ -363,14 +363,14 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
The ability to write to this file
will be verified before the process is detached.
pidfile (str): Optional pid file.
The pidfile will not be created,
The pidfile won't be created,
as this is the responsibility of the child. But the process will
exit if the pid lock exists and the pid written is still running.
uid (int, str): Optional user id or user name to change
effective privileges to.
gid (int, str): Optional group id or group name to change
effective privileges to.
umask (str, int): Optional umask that will be effective in
umask (str, int): Optional umask that'll be effective in
the child process.
workdir (str): Optional new working directory.
fake (bool): Don't actually detach, intended for debugging purposes.
@@ -384,7 +384,7 @@ def detached(logfile=None, pidfile=None, uid=None, gid=None, umask=0,
... uid='nobody'):
... # Now in detached child process with effective user set to nobody,
... # and we know that our logfile can be written to, and that
... # the pidfile is not locked.
... # the pidfile isn't locked.
... pidlock = create_pidlock('/var/run/app.pid')
...
... # Run the program
@@ -446,7 +446,7 @@ def parse_gid(gid):
def _setgroups_hack(groups):
""":fun:`setgroups` may have a platform-dependent limit,
and it is not always possible to know in advance what this limit
and it's not always possible to know in advance what this limit
is, so we use this ugly hack stolen from glibc."""
groups = groups[:]
@@ -559,7 +559,7 @@ def maybe_drop_privileges(uid=None, gid=None):
class Signals(object):
"""Convenience interface to :mod:`signals`.
If the requested signal is not supported on the current platform,
If the requested signal isn't supported on the current platform,
the operation will be ignored.
Example:
View
@@ -149,7 +149,7 @@ def get(self, timeout=None, propagate=True, interval=0.5,
propagate (bool): Re-raise exception if the task failed.
interval (float): Time to wait (in seconds) before retrying to
retrieve the result. Note that this does not have any effect
when using the RPC/redis result store backends, as they do not
when using the RPC/redis result store backends, as they don't
use polling.
no_ack (bool): Enable amqp no ack (automatically acknowledge
message). If this is :const:`False` then the message will
@@ -158,7 +158,7 @@ def get(self, timeout=None, propagate=True, interval=0.5,
parent tasks.
Raises:
celery.exceptions.TimeoutError: if `timeout` is not
celery.exceptions.TimeoutError: if `timeout` isn't
:const:`None` and the result does not arrive within
`timeout` seconds.
Exception: If the remote call raised an exception then that
@@ -416,7 +416,7 @@ def state(self):
*SUCCESS*
The task executed successfully. The :attr:`result` attribute
The task executed successfully. The :attr:`result` attribute
then contains the task's return value.
"""
return self._get_task_meta()['status']
@@ -474,7 +474,7 @@ def remove(self, result):
"""Remove result from the set; it must be a member.
Raises:
KeyError: if the result is not a member.
KeyError: if the result isn't a member.
"""
if isinstance(result, string_t):
result = self.app.AsyncResult(result)
@@ -505,7 +505,7 @@ def successful(self):
Returns:
bool: true if all of the tasks finished
successfully (i.e. did not raise an exception).
successfully (i.e. didn't raise an exception).
"""
return all(result.successful() for result in self.results)
@@ -647,7 +647,7 @@ def join(self, timeout=None, propagate=True, interval=0.5,
No results will be returned by this function if a callback
is specified. The order of results is also arbitrary when a
callback is used. To get access to the result object for
a particular id you will have to generate an index first:
a particular id you'll have to generate an index first:
``index = {r.id: r for r in gres.results.values()}``
Or you can create new result objects on the fly:
``result = app.AsyncResult(task_id)`` (both will
@@ -657,7 +657,7 @@ def join(self, timeout=None, propagate=True, interval=0.5,
*will not be acknowledged*).
Raises:
celery.exceptions.TimeoutError: if ``timeout`` is not
celery.exceptions.TimeoutError: if ``timeout`` isn't
:const:`None` and the operation takes longer than ``timeout``
seconds.
"""
@@ -953,7 +953,7 @@ def result_from_tuple(r, app=None):
return app.GroupResult(
res, [result_from_tuple(child, app) for child in nodes],
)
# previously did not include parent
# previously didn't include parent
id, parent = res if isinstance(res, (list, tuple)) else (res, None)
if parent:
parent = result_from_tuple(parent, app)
View
@@ -27,7 +27,7 @@
schedstate = namedtuple('schedstate', ('is_due', 'next'))
CRON_PATTERN_INVALID = """\
Invalid crontab pattern. Valid range is {min}-{max}. \
Invalid crontab pattern. Valid range is {min}-{max}. \
'{value}' was found.\
"""
@@ -174,7 +174,7 @@ def to_local(self, dt):
class crontab_parser(object):
"""Parser for Crontab expressions. Any expression of the form 'groups'
"""Parser for Crontab expressions. Any expression of the form 'groups'
(see BNF grammar below) is accepted and expanded to a set of numbers.
These numbers represent the units of time that the Crontab needs to
run on:
@@ -300,7 +300,7 @@ class crontab(schedule):
periodic task entry to add :manpage:`crontab(5)`-like scheduling.
Like a :manpage:`cron(5)`-job, you can specify units of time of when
you would like the task to execute. It is a reasonably complete
you'd like the task to execute. It's a reasonably complete
implementation of :command:`cron`'s features, so it should provide a fair
degree of scheduling needs.
@@ -361,7 +361,7 @@ class crontab(schedule):
The Celery app instance.
It is important to realize that any day on which execution should
It's important to realize that any day on which execution should
occur must be represented by entries in all three of the day and
month attributes. For example, if ``day_of_week`` is 0 and
``day_of_month`` is every seventh day, only months that begin
@@ -399,8 +399,8 @@ def _expand_cronspec(cronspec, max_, min_=0):
And convert it to an (expanded) set representing all time unit
values on which the Crontab triggers. Only in case of the base
type being :class:`str`, parsing occurs. (It is fast and
happens only once for each Crontab instance, so there is no
type being :class:`str`, parsing occurs. (It's fast and
happens only once for each Crontab instance, so there's no
significant performance overhead involved.)
For the other base types, merely Python type conversions happen.
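The expansion rule described above can be sketched with a toy `expand_cronspec` (a simplified illustration; the real parser also supports ranges like `1-5` and day/month names):

```python
def expand_cronspec(spec, max_, min_=0):
    # Ints pass through as singleton sets; strings such as '*',
    # '*/15' or '1,5' are parsed; any other iterable converts
    # directly to a set.
    if isinstance(spec, int):
        return {spec}
    if isinstance(spec, str):
        if spec == '*':
            return set(range(min_, max_ + min_))
        if spec.startswith('*/'):
            return set(range(min_, max_ + min_, int(spec[2:])))
        return {int(part) for part in spec.split(',')}
    return set(spec)
```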
@@ -740,7 +740,7 @@ def remaining_estimate(self, last_run_at):
start=last_run_at_utc, use_center=self.use_center,
)
except self.ephem.CircumpolarError: # pragma: no cover
# Sun will not rise/set today. Check again tomorrow
# Sun won't rise/set today. Check again tomorrow
# (specifically, after the next anti-transit).
next_utc = (
self.cal.next_antitransit(self.ephem.Sun()) +
View
@@ -25,7 +25,7 @@
UNREADY_STATES
~~~~~~~~~~~~~~
Set of states meaning the task result is not ready (has not been executed).
Set of states meaning the task result is not ready (hasn't been executed).
.. state:: EXCEPTION_STATES
View
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""Old deprecated task module.
This is the old task module, it should not be used anymore,
This is the old task module, it shouldn't be used anymore,
import from the main 'celery' module instead.
If you're looking for the decorator implementation then that's in
``celery.app.base.Celery.task``.
View
@@ -62,7 +62,7 @@ def __new__(cls, name, bases, attrs):
new = super(TaskType, cls).__new__
task_module = attrs.get('__module__') or '__main__'
# - Abstract class: abstract attribute should not be inherited.
# - Abstract class: abstract attribute shouldn't be inherited.
abstract = attrs.pop('abstract', None)
if abstract or not attrs.get('autoregister', True):
return new(cls, name, bases, attrs)
@@ -92,13 +92,13 @@ def __new__(cls, name, bases, attrs):
# an app is created multiple times due to modules
# imported under multiple names.
# Hairy stuff, here to be compatible with 2.x.
# People should not use non-abstract task classes anymore,
# People shouldn't use non-abstract task classes anymore,
# use the task decorator.
from celery._state import connect_on_app_finalize
unique_name = '.'.join([task_module, name])
if unique_name not in cls._creation_count:
# the creation count is used as a safety
# so that the same task is not added recursively
# so that the same task isn't added recursively
# to the set of constructors.
cls._creation_count[unique_name] = 1
connect_on_app_finalize(_CompatShared(
View
@@ -126,7 +126,7 @@ def shutdown(self):
self.assertIsNone(x._connection)
self.assertIsNone(x._session)
x.process_cleanup() # should not raise
x.process_cleanup() # shouldn't raise
def test_please_free_memory(self):
# Ensure that Cluster object IS shut down.
View
@@ -44,7 +44,7 @@
should be: "teardown"\
"""
CASE_LOG_REDIRECT_EFFECT = """\
Test {0} did not disable LoggingProxy for {1}\
Test {0} didn't disable LoggingProxy for {1}\
"""
CASE_LOG_LEVEL_EFFECT = """\
Test {0} Modified the level of the root logger\
View
@@ -208,7 +208,7 @@ def test_on_task_postrun(self):
f.close_database.assert_called()
f.close_cache.assert_called()
# when a task is eager, do not close connections
# when a task is eager, don't close connections
with patch.object(f, 'close_cache'):
task.request.is_eager = True
with patch.object(f, 'close_database'):
View
@@ -76,7 +76,7 @@ class AlwaysReady(TSR):
cb.type.apply_async.assert_called_with(
([2, 4, 8, 6],), {}, task_id=cb.id,
)
# did not retry
# didn't retry
self.assertFalse(retry.call_count)
def test_deps_ready_fails(self):
@@ -114,7 +114,7 @@ class Failed(TSR):
with self._chord_context(Failed) as (cb, retry, fail_current):
cb.type.apply_async.assert_not_called()
# did not retry
# didn't retry
self.assertFalse(retry.call_count)
fail_current.assert_called()
self.assertEqual(
View
@@ -1,7 +1,7 @@
# -*- coding: utf-8 -*-
"""Utility functions.
Do not import from here directly anymore, as these are only
Don't import from here directly anymore, as these are only
here for backwards compatibility.
"""
from __future__ import absolute_import, print_function, unicode_literals
View
@@ -424,7 +424,7 @@ class LimitedSet(object):
but the set should not grow unbounded.
``maxlen`` is enforced at all times, so if the limit is reached
we will also remove non-expired items.
we'll also remove non-expired items.
You can also configure ``minlen``, which is the minimal residual size
of the set.
@@ -495,7 +495,7 @@ def __init__(self, maxlen=0, expires=0, data=None, minlen=0):
raise ValueError('expires cannot be negative!')
def _refresh_heap(self):
"""Time consuming recreating of heap. Do not run this too often."""
"""Time consuming recreating of heap. Don't run this too often."""
self._heap[:] = [entry for entry in values(self._data)]
heapify(self._heap)
@@ -546,7 +546,7 @@ def update(self, other):
self.add(obj)
def discard(self, item):
# mark an existing item as removed. If KeyError is not found, pass.
# mark an existing item as removed. If the item isn't found, pass.
self._data.pop(item, None)
self._maybe_refresh_heap()
pop_value = discard
@@ -568,7 +568,7 @@ def purge(self, now=None):
while len(self._data) > self.minlen >= 0:
inserted_time, _ = self._heap[0]
if inserted_time + self.expires > now:
break # oldest item has not expired yet
break # oldest item hasn't expired yet
self.pop()
def pop(self, default=None):
View
@@ -43,11 +43,11 @@ def Callable(deprecation=None, removal=None,
Arguments:
deprecation (str): Version that marks first deprecation, if this
argument is not set a ``PendingDeprecationWarning`` will be
argument isn't set a ``PendingDeprecationWarning`` will be
emitted instead.
removal (str): Future version when this feature will be removed.
alternative (str): Instructions for an alternative solution (if any).
description (str): Description of what is being deprecated.
description (str): Description of what's being deprecated.
"""
def _inner(fun):
View
@@ -76,7 +76,7 @@ class BoundMethodWeakref(object): # pragma: no cover
class attribute pointing to all live
BoundMethodWeakref objects indexed by the class's
`calculate_key(target)` method applied to the target
objects. This weak value dictionary is used to
objects. This weak value dictionary is used to
short-circuit creation so that multiple references
to the same (object, function) pair produce the
same BoundMethodWeakref instance.
@@ -91,7 +91,7 @@ def __new__(cls, target, on_delete=None, *arguments, **named):
Basically this method of construction allows us to
short-circuit creation of references to already-
referenced instance methods. The key corresponding
to the target is calculated, and if there is already
to the target is calculated, and if there's already
an existing reference, that is returned, with its
deletionMethods attribute updated. Otherwise the
new instance is created and registered in the table
@@ -174,7 +174,7 @@ def __repr__(self):
return str(self)
def __bool__(self):
"""Whether we are still a valid reference"""
"""Whether we're still a valid reference"""
return self() is not None
__nonzero__ = __bool__ # py2
@@ -222,7 +222,7 @@ class BoundNonDescriptorMethodWeakref(BoundMethodWeakref): # pragma: no cover
... return 'foo'
>>> A.bar = foo
But this shouldn't be a common use case. So, on platforms where methods
But this shouldn't be a common use case. So, on platforms where methods
aren't descriptors (such as Jython) this implementation has the
advantage of working in most cases.
"""
@@ -241,7 +241,7 @@ def __init__(self, target, on_delete=None):
on_delete (Callable): Optional callback which will be called
when this weak reference ceases to be valid
(i.e. either the object or the function is garbage
collected). Should take a single argument,
collected). Should take a single argument,
which will be passed a pointer to this object.
"""
assert getattr(target.__self__, target.__name__) == target
@@ -265,7 +265,7 @@ def __call__(self):
function = self.weak_fun()
if function is not None:
# Using curry() would be another option, but it erases the
# "signature" of the function. That is, after a function is
# "signature" of the function. That is, after a function is
# curried, the inspect module can't be used to determine how
# many arguments the function expects, nor what keyword
# arguments it supports, and pydispatcher needs this
View
@@ -57,7 +57,7 @@ def connect(self, *args, **kwargs):
Arguments:
receiver (Callable): A function or an instance method which is to
receive signals. Receivers must be hashable objects.
receive signals. Receivers must be hashable objects.
if weak is :const:`True`, then receiver must be
weak-referenceable (more precisely :func:`saferef.safe_ref()`
@@ -75,11 +75,11 @@ def connect(self, *args, **kwargs):
weak (bool): Whether to use weak references to the receiver.
By default, the module will attempt to use weak references to
the receiver objects. If this parameter is false, then strong
the receiver objects. If this parameter is false, then strong
references will be used.
dispatch_uid (Hashable): An identifier used to uniquely identify a
particular instance of a receiver. This will usually be a
particular instance of a receiver. This will usually be a
string, though it may be anything hashable.
"""
def _handle_options(sender=None, weak=True, dispatch_uid=None):
@@ -121,12 +121,12 @@ def disconnect(self, receiver=None, sender=None, weak=True,
dispatch_uid=None):
"""Disconnect receiver from sender for signal.
If weak references are used, disconnect need not be called. The
receiver will be removed from dispatch automatically.
If weak references are used, disconnect needn't be called.
The receiver will be removed from dispatch automatically.
Arguments:
receiver (Callable): The registered receiver to disconnect. May be
none if `dispatch_uid` is specified.
receiver (Callable): The registered receiver to disconnect.
May be none if `dispatch_uid` is specified.
sender (Any): The registered sender to disconnect.
@@ -154,8 +154,8 @@ def send(self, sender, **named):
have all receivers called if a receiver raises an error.
Arguments:
sender (Any): The sender of the signal. Either a specific
object or :const:`None`.
sender (Any): The sender of the signal.
Either a specific object or :const:`None`.
**named (Any): Named arguments which will be passed to receivers.
Returns:
View
@@ -83,7 +83,7 @@ def first(predicate, it):
"""Return the first element in ``iterable`` that ``predicate`` gives a
:const:`True` value for.
If ``predicate`` is None it will return the first item that is not
If ``predicate`` is None it will return the first item that's not
:const:`None`.
"""
return next(
View
@@ -240,7 +240,7 @@ def close(self):
self.closed = True
def isatty(self):
"""Always return :const:`False`. Just here for file support."""
"""Always return :const:`False`. Just here for file support."""
return False
View
@@ -33,7 +33,7 @@
def worker_direct(hostname):
"""Return :class:`kombu.Queue` that is a direct route to
"""Return :class:`kombu.Queue` that's a direct route to
a worker by hostname.
Arguments:
View
@@ -21,7 +21,7 @@ def mro_lookup(cls, attr, stop=set(), monkey_patched=[]):
stop (Set[Any]): A set of types that if reached will stop
the search.
monkey_patched (Sequence): Use one of the stop classes
if the attributes module origin is not in this list.
if the attributes module origin isn't in this list.
Used to detect monkey patched attributes.
Returns:
@@ -53,11 +53,11 @@ class FallbackContext(object):
@contextmanager
def connection_or_default_connection(connection=None):
if connection:
# user already has a connection, should not close
# user already has a connection, shouldn't close
# after use
yield connection
else:
# must have new connection, and also close the connection
# must have a new connection, and also close the connection
# after the block returns
with create_new_connection() as connection:
yield connection
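The fallback pattern shown in this hunk can be sketched as a generic context manager (a hypothetical `connection_or_default`, parameterized on a factory rather than Celery's connection API):

```python
from contextlib import contextmanager

@contextmanager
def connection_or_default(connection=None, create_new=None):
    # Reuse a caller-supplied connection without closing it; only a
    # connection we created ourselves is closed when the block exits.
    if connection is not None:
        yield connection
    else:
        with create_new() as connection:
            yield connection
```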
View
@@ -7,7 +7,7 @@
- Sets are represented the Python 3 way: ``{1, 2}`` vs ``set([1, 2])``.
- Unicode strings do not have the ``u'`` prefix, even on Python 2.
- Empty set formatted as ``set()`` (Python 3), not ``set([])`` (Python 2).
- Longs do not have the ``L`` suffix.
- Longs don't have the ``L`` suffix.
Very slow with no limits, super quick with limits.
"""
View
@@ -45,7 +45,7 @@ def subclass_exception(name, parent, module): # noqa
def find_pickleable_exception(exc, loads=pickle.loads,
dumps=pickle.dumps):
"""With an exception instance, iterate over its super classes (by MRO)
and find the first super exception that is pickleable. It does
and find the first super exception that's pickleable. It does
not go below :exc:`Exception` (i.e. it skips :exc:`Exception`,
:class:`BaseException` and :class:`object`). If that happens
you should use :exc:`UnpickleableException` instead.
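The superclass search the docstring describes can be sketched like this (a simplified reconstruction: it skips the exception's own class, walks the MRO, and stops before reaching :exc:`Exception`):

```python
import pickle

def find_pickleable_exception(exc):
    """Return an instance of the nearest pickleable superclass of
    ``exc``, or None.  Never descends to Exception/BaseException/object.
    """
    stop = (Exception, BaseException, object)
    for supercls in type(exc).__mro__[1:]:
        if supercls in stop:
            break
        try:
            superexc = supercls(*getattr(exc, 'args', ()))
            # only accept it if it survives a pickle round-trip
            pickle.loads(pickle.dumps(superexc))
        except Exception:
            continue
        return superexc
    return None

class BadExc(ValueError):
    """Unpickleable: unpickling would call BadExc(msg) without the
    required second argument."""
    def __init__(self, msg, resource):
        super().__init__(msg)
        self.resource = resource
```

For ``BadExc('boom', object())`` the nearest pickleable superclass is :exc:`ValueError`, so a plain ``ValueError('boom')`` is returned.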
View
@@ -245,7 +245,7 @@ def top(self):
@python_2_unicode_compatible
class LocalManager(object):
"""Local objects cannot manage themselves. For that you need a local
"""Local objects cannot manage themselves. For that you need a local
manager. You can pass a local manager multiple locals or add them
later by appending them to ``manager.locals``. Every time the manager
cleans up, it will clean up all the data left in the locals for this
@@ -311,6 +311,6 @@ def __len__(self):
else:
# - See #706
# since each thread has its own greenlet we can just use those as
# identifiers for the context. If greenlets are not available we
# identifiers for the context. If greenlets aren't available we
# fall back to the current thread ident.
LocalStack = _LocalStack # noqa
View
@@ -49,15 +49,15 @@
SHUTDOWN_SOCKET_TIMEOUT = 5.0
SELECT_UNKNOWN_QUEUE = """\
Trying to select queue subset of {0!r}, but queue {1} is not
Trying to select queue subset of {0!r}, but queue {1} isn't
defined in the `task_queues` setting.
If you want to automatically declare unknown queues you can
enable the `task_create_missing_queues` setting.
"""
DESELECT_UNKNOWN_QUEUE = """\
Trying to deselect queue subset of {0!r}, but queue {1} is not
Trying to deselect queue subset of {0!r}, but queue {1} isn't
defined in the `task_queues` setting.
"""
@@ -120,7 +120,7 @@ def setup_instance(self, queues=None, ready_callback=None, pidfile=None,
self.loglevel = mlevel(self.loglevel)
self.ready_callback = ready_callback or self.on_consumer_ready
# this connection is not established, only used for params
# this connection isn't established, only used for params
self._conninfo = self.app.connection_for_read()
self.use_eventloop = (
self.should_use_eventloop() if use_eventloop is None
View
@@ -24,7 +24,7 @@
"""
W_POOL_SETTING = """
The worker_pool setting should not be used to select the eventlet/gevent
The worker_pool setting shouldn't be used to select the eventlet/gevent
pools, instead you *must use the -P* argument so that patches are applied
as early as possible.
"""
View
@@ -65,7 +65,7 @@
"""
UNKNOWN_FORMAT = """\
Received and deleted unknown message. Wrong destination?!?
Received and deleted unknown message. Wrong destination?!?
The full contents of the message body was: %s
"""
@@ -76,7 +76,7 @@
The message has been ignored and discarded.
Did you remember to import the module containing this task?
Or maybe you are using relative imports?
Or maybe you're using relative imports?
Please see http://bit.ly/gLye1c for more information.
The full contents of the message body was:
View
@@ -144,7 +144,7 @@ def revoke(state, task_id, terminate=False, signal=None, **kwargs):
Keyword Arguments:
terminate (bool): Also terminate the process if the task is active.
signal (str): Name of signal to use for terminate. E.g. ``KILL``.
signal (str): Name of signal to use for terminate. E.g. ``KILL``.
"""
# supports list argument since 3.1
task_ids, task_id = set(maybe_list(task_id) or []), None
View
@@ -52,7 +52,7 @@ def asynloop(obj, connection, consumer, blueprint, hub, qos,
raise WorkerLostError('Could not start worker processes')
# consumer.consume() may have prefetched up to our
# limit - drain an event so we are in a clean state
# limit - drain an event so we're in a clean state
# prior to starting our event loop.
if connection.transport.driver_type == 'amqp':
hub.call_soon(_quick_drain, connection)
@@ -74,7 +74,7 @@ def asynloop(obj, connection, consumer, blueprint, hub, qos,
elif should_terminate is not None and should_stop is not False:
raise WorkerTerminate(should_terminate)
# We only update QoS when there is no more messages to read.
# We only update QoS when there are no more messages to read.
# This groups together qos calls, and makes sure that remote
# control commands will be prioritized over task messages.
if qos.prev != qos.value:
View
@@ -341,7 +341,7 @@ def on_failure(self, exc_info, send_failed_event=True, return_ok=False):
if isinstance(exc, Retry):
return self.on_retry(exc_info)
# These are special cases where the process would not have had
# These are special cases where the process wouldn't have had
# time to write the result.
if isinstance(exc, Terminated):
self._announce_revoked(
View
@@ -6,7 +6,7 @@
Welcome!
This document is fairly extensive and you are not really expected
This document is fairly extensive and you aren't really expected
to study this in detail for small contributions;
The most important rule is that contributing must be easy
@@ -17,7 +17,7 @@ If you're reporting a bug you should read the Reporting bugs section
below to ensure that your bug report contains enough information
to successfully diagnose the issue, and if you're contributing code
you should try to mimic the conventions you see surrounding the code
you are working on, but in the end all patches will be cleaned up by
you're working on, but in the end all patches will be cleaned up by
the person merging the changes so don't worry too much.
.. contents::
@@ -28,8 +28,8 @@ the person merging the changes so don't worry too much.
Community Code of Conduct
=========================
The goal is to maintain a diverse community that is pleasant for everyone.
That is why we would greatly appreciate it if everyone contributing to and
The goal is to maintain a diverse community that's pleasant for everyone.
That's why we would greatly appreciate it if everyone contributing to and
interacting with the community also followed this Code of Conduct.
The Code of Conduct covers our behavior as members of the community,
@@ -46,22 +46,22 @@ Be considerate.
---------------
Your work will be used by other people, and you in turn will depend on the
work of others. Any decision you take will affect users and colleagues, and
work of others. Any decision you take will affect users and colleagues, and
we expect you to take those consequences into account when making decisions.
Even if it's not obvious at the time, our contributions to Celery will impact
the work of others. For example, changes to code, infrastructure, policy,
the work of others. For example, changes to code, infrastructure, policy,
documentation and translations during a release may negatively impact
others' work.
Be respectful.
--------------
The Celery community and its members treat one another with respect. Everyone
can make a valuable contribution to Celery. We may not always agree, but
disagreement is no excuse for poor behavior and poor manners. We might all
The Celery community and its members treat one another with respect. Everyone
can make a valuable contribution to Celery. We may not always agree, but
disagreement is no excuse for poor behavior and poor manners. We might all
experience some frustration now and then, but we cannot allow that frustration
to turn into a personal attack. It's important to remember that a community
where people feel uncomfortable or threatened is not a productive one. We
to turn into a personal attack. It's important to remember that a community
where people feel uncomfortable or threatened isn't a productive one. We
expect members of the Celery community to be respectful when dealing with
other contributors as well as with people outside the Celery project and with
users of Celery.
@@ -70,11 +70,11 @@ Be collaborative.
-----------------
Collaboration is central to Celery and to the larger free software community.
We should always be open to collaboration. Your work should be done
We should always be open to collaboration. Your work should be done
transparently and patches from Celery should be given back to the community
when they are made, not just when the distribution releases. If you wish
when they're made, not just when the distribution releases. If you wish
to work on new code for existing upstream projects, at least keep those
projects informed of your ideas and progress. It may not be possible to
projects informed of your ideas and progress. It may not be possible to
get consensus from upstream, or even from your colleagues about the correct
implementation for an idea, so don't feel obliged to have that agreement
before you begin, but at least keep the outside world informed of your work,
@@ -85,29 +85,29 @@ When you disagree, consult others.
----------------------------------
Disagreements, both political and technical, happen all the time and
the Celery community is no exception. It is important that we resolve
the Celery community is no exception. It's important that we resolve
disagreements and differing views constructively and with the help of the
community and community process. If you really want to go a different
community and community process. If you really want to go a different
way, then we encourage you to make a derivative distribution or alternate
set of packages that still build on the work we've done to utilize as common
of a core as possible.
When you are unsure, ask for help.
----------------------------------
When you're unsure, ask for help.
---------------------------------
Nobody knows everything, and nobody is expected to be perfect. Asking
Nobody knows everything, and nobody is expected to be perfect. Asking
questions avoids many problems down the road, and so questions are
encouraged. Those who are asked questions should be responsive and helpful.
encouraged. Those who are asked questions should be responsive and helpful.
However, when asking a question, care must be taken to do so in an appropriate
forum.
Step down considerately.
------------------------
Developers on every project come and go and Celery is no different. When you
Developers on every project come and go and Celery is no different. When you
leave or disengage from the project, in whole or in part, we ask that you do
so in a way that minimizes disruption to the project. This means you should
tell people you are leaving and take the proper steps to ensure that others
so in a way that minimizes disruption to the project. This means you should
tell people you're leaving and take the proper steps to ensure that others
can pick up where you leave off.
.. _reporting-bugs:
@@ -174,12 +174,12 @@ and participate in the discussion.
2) **Determine if your bug is really a bug.**
You should not file a bug if you are requesting support. For that you can use
You shouldn't file a bug if you're requesting support. For that you can use
the :ref:`mailing-list`, or :ref:`irc-channel`.
3) **Make sure your bug hasn't already been reported.**
Search through the appropriate Issue tracker. If a bug like yours was found,
Search through the appropriate Issue tracker. If a bug like yours was found,
check if you have new information that could be reported to help
the developers fix the bug.
@@ -192,7 +192,7 @@ celery, billiard, kombu, amqp and vine.
5) **Collect information about the bug.**
To have the best chance of having a bug fixed, we need to be able to easily
reproduce the conditions that caused it. Most of the time this information
reproduce the conditions that caused it. Most of the time this information
will be from a Python traceback message, though some bugs might be in design,
spelling or other errors on the website/docs/code.
@@ -202,12 +202,12 @@ spelling or other errors on the website/docs/code.
etc.), the version of your Python interpreter, and the version of Celery,
and related packages that you were running when the bug occurred.
C) If you are reporting a race condition or a deadlock, tracebacks can be
C) If you're reporting a race condition or a deadlock, tracebacks can be
hard to get or might not be that useful. Try to inspect the process to
get more diagnostic data. Some ideas:
* Enable celery's :ref:`breakpoint signal <breakpoint_signal>` and use it
to inspect the process's state. This will allow you to open a
* Enable Celery's :ref:`breakpoint signal <breakpoint_signal>` and use it
to inspect the process's state. This will allow you to open a
:mod:`pdb` session.
* Collect tracing data using `strace`_ (Linux),
:command:`dtruss` (macOS), and :command:`ktrace` (BSD),
@@ -252,7 +252,7 @@ issue tracker.
* :pypi:`librabbitmq`: https://github.com/celery/librabbitmq/issues
* :pypi:`django-celery`: https://github.com/celery/django-celery/issues
If you are unsure of the origin of the bug you can ask the
If you're unsure of the origin of the bug you can ask the
:ref:`mailing-list`, or just use the Celery issue tracker.
Contributors guide to the code base
@@ -328,7 +328,7 @@ Maintenance branches
--------------------
Maintenance branches are named after the version, e.g. the maintenance branch
for the 2.2.x series is named ``2.2``. Previously these were named
for the 2.2.x series is named ``2.2``. Previously these were named
``releaseXX-maint``.
The versions we currently maintain are:
@@ -346,7 +346,7 @@ Archived branches
Archived branches are kept for preserving history only,
and theoretically someone could provide patches for these if they depend
on a series that is no longer officially supported.
on a series that's no longer officially supported.
An archived version is named ``X.Y-archived``.
@@ -368,17 +368,17 @@ Feature branches
----------------
Major new features are worked on in dedicated branches.
There is no strict naming requirement for these branches.
There's no strict naming requirement for these branches.
Feature branches are removed once they have been merged into a release branch.
Feature branches are removed once they've been merged into a release branch.
Tags
====
Tags are used exclusively for tagging releases. A release tag is
Tags are used exclusively for tagging releases. A release tag is
named with the format ``vX.Y.Z``, e.g. ``v2.3.1``.
Experimental releases contain an additional identifier ``vX.Y.Z-id``, e.g.
``v3.0.0-rc1``. Experimental tags may be removed after the official release.
``v3.0.0-rc1``. Experimental tags may be removed after the official release.
.. _contributing-changes:
@@ -390,7 +390,7 @@ Working on Features & Patches
Contributing to Celery should be as simple as possible,
so none of these steps should be considered mandatory.
You can even send in patches by email if that is your preferred
You can even send in patches by email if that's your preferred
work method. We won't like you any less, any contribution you make
is always appreciated!
@@ -506,7 +506,7 @@ When your feature/bugfix is complete you may want to submit
a pull requests so that it can be reviewed by the maintainers.
Creating pull requests is easy, and also let you track the progress
of your contribution. Read the `Pull Requests`_ section in the GitHub
of your contribution. Read the `Pull Requests`_ section in the GitHub
Guide to learn how this is done.
You can also attach pull requests to existing issues by following
@@ -549,7 +549,7 @@ The coverage XML output will then be located at :file:`coverage.xml`
Running the tests on all supported Python versions
~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~~
There is a :pypi:`tox` configuration file in the top directory of the
There's a :pypi:`tox` configuration file in the top directory of the
distribution.
To run the tests for all supported Python versions simply execute:
@@ -591,7 +591,7 @@ After building succeeds the documentation is available at :file:`_build/html`.
Verifying your contribution
---------------------------
To use these tools you need to install a few dependencies. These dependencies
To use these tools you need to install a few dependencies. These dependencies
can be found in :file:`requirements/pkgutils.txt`.
Installing the dependencies:
@@ -631,7 +631,7 @@ reference please execute:
If files are missing you can add them by copying an existing reference file.
If the module is internal it should be part of the internal reference
located in :file:`docs/internals/reference/`. If the module is public
located in :file:`docs/internals/reference/`. If the module is public
it should be located in :file:`docs/reference/`.
For example if reference is missing for the module ``celery.worker.awesome``
@@ -724,7 +724,7 @@ is following the conventions.
.. _`PEP-257`: http://www.python.org/dev/peps/pep-0257/
* Lines should not exceed 78 columns.
* Lines shouldn't exceed 78 columns.
You can enforce this in :command:`vim` by setting the ``textwidth`` option:
@@ -777,12 +777,12 @@ is following the conventions.
from __future__ import absolute_import
* If the module uses the :keyword:`with` statement and must be compatible
with Python 2.5 (celery is not) then it must also enable that::
with Python 2.5 (celery isn't) then it must also enable that::
from __future__ import with_statement
* Every future import must be on its own line, as older Python 2.5
releases did not support importing multiple features on the
releases didn't support importing multiple features on the
same future import line::
# Good
@@ -792,12 +792,12 @@ is following the conventions.
# Bad
from __future__ import absolute_import, with_statement
(Note that this rule does not apply if the package does not include
(Note that this rule doesn't apply if the package doesn't include
support for Python 2.5)
* Note that we use "new-style" relative imports when the distribution
does not support Python versions below 2.5
doesn't support Python versions below 2.5
This requires Python 2.5 or later:
@@ -827,7 +827,7 @@ that require third-party libraries must be added.
pycassa
These are pip requirement files so you can have version specifiers and
multiple packages are separated by newline. A more complex example could
multiple packages are separated by newline. A more complex example could
be:
.. code-block:: text
@@ -862,7 +862,7 @@ that require third-party libraries must be added.
That's all that needs to be done, but remember that if your feature
adds additional configuration options then these need to be documented
in :file:`docs/configuration.rst`. Also all settings need to be added to the
in :file:`docs/configuration.rst`. Also all settings need to be added to the
:file:`celery/app/defaults.py` module.
Result backends require a separate section in the :file:`docs/configuration.rst`
@@ -877,7 +877,7 @@ This is a list of people that can be contacted for questions
regarding the official git repositories, PyPI packages,
and Read the Docs pages.
If the issue is not an emergency then it is better
If the issue isn't an emergency then it's better
to :ref:`report an issue <reporting-bugs>`.
@@ -990,7 +990,7 @@ Promise/deferred implementation.
------------
Fork of multiprocessing containing improvements
that will eventually be merged into the Python stdlib.
that'll eventually be merged into the Python stdlib.
:git: https://github.com/celery/billiard
:CI: http://travis-ci.org/#!/celery/billiard/
@@ -1087,7 +1087,7 @@ The version number must be updated two places:
* :file:`docs/include/introduction.txt`
After you have changed these files you must render
the :file:`README` files. There is a script to convert sphinx syntax
the :file:`README` files. There's a script to convert sphinx syntax
to generic reStructured Text syntax, and the make target `readme`
does this for you:
View
@@ -9,7 +9,7 @@ by Ask Solem
Copyright |copy| 2009-2016, Ask Solem.
All rights reserved. This material may be copied or distributed only
All rights reserved. This material may be copied or distributed only
subject to the terms and conditions set forth in the `Creative Commons
Attribution-ShareAlike 4.0 International
<http://creativecommons.org/licenses/by-sa/4.0/legalcode>`_ license.
View
@@ -12,9 +12,9 @@ Using Celery with Django
Previous versions of Celery required a separate library to work with Django,
but since 3.1 this is no longer the case. Django is supported out of the
box now so this document only contains a basic way to integrate Celery and
Django. You will use the same API as non-Django users so it's recommended that
you read the :ref:`first-steps` tutorial
first and come back to this tutorial. When you have a working example you can
Django. You'll use the same API as non-Django users so it's recommended
that you read the :ref:`first-steps` tutorial
first and come back to this tutorial. When you have a working example you can
continue to the :ref:`next-steps` guide.
To use Celery with your Django project you must first define
@@ -36,7 +36,7 @@ that defines the Celery instance:
.. literalinclude:: ../../examples/django/proj/celery.py
Then you need to import this app in your :file:`proj/proj/__init__.py`
module. This ensures that the app is loaded when Django starts
module. This ensures that the app is loaded when Django starts
so that the ``@shared_task`` decorator (mentioned later) will use it:
:file:`proj/proj/__init__.py`:
@@ -49,7 +49,7 @@ both the app and tasks, like in the :ref:`tut-celery` tutorial.
Let's break down what happens in the first module,
first we import absolute imports from the future, so that our
``celery.py`` module will not clash with the library:
``celery.py`` module won't clash with the library:
.. code-block:: python
@@ -63,7 +63,7 @@ for the :program:`celery` command-line program:
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')
You don't need this line, but it saves you from always passing in the
settings module to the celery program. It must always come before
settings module to the ``celery`` program. It must always come before
creating the app instances, which is what we do next:
.. code-block:: python
@@ -74,7 +74,7 @@ This is our instance of the library, you can have many instances
but there's probably no reason for that when using Django.
We also add the Django settings module as a configuration source
for Celery. This means that you don't have to use multiple
for Celery. This means that you don't have to use multiple
configuration files, and instead configure Celery directly
from the Django settings; but you can also separate them if wanted.
@@ -110,13 +110,13 @@ of your installed apps, following the ``tasks.py`` convention::
- models.py
This way you do not have to manually add the individual modules
to the :setting:`CELERY_IMPORTS <imports>` setting. The ``lambda`` so that the
This way you don't have to manually add the individual modules
to the :setting:`CELERY_IMPORTS <imports>` setting. The ``lambda`` so that the
auto-discovery can happen only when needed, and so that importing your
module will not evaluate the Django settings object.
module won't evaluate the Django settings object.
Finally, the ``debug_task`` example is a task that dumps
its own request information. This is using the new ``bind=True`` task option
its own request information. This is using the new ``bind=True`` task option
introduced in Celery 3.1 to easily refer to the current task instance.
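Put together, the ``proj/proj/celery.py`` module walked through above looks roughly like this (a sketch of the Celery 3.1-era layout; ``proj`` is the example project name):

```python
from __future__ import absolute_import

import os

from celery import Celery
from django.conf import settings

# Must come before the app instance is created, so the ``celery``
# program finds the Django settings without extra arguments.
os.environ.setdefault('DJANGO_SETTINGS_MODULE', 'proj.settings')

app = Celery('proj')

# Use the Django settings module as the Celery configuration source.
app.config_from_object('django.conf:settings')

# The lambda defers evaluation, so importing this module won't
# evaluate the Django settings object.
app.autodiscover_tasks(lambda: settings.INSTALLED_APPS)

@app.task(bind=True)
def debug_task(self):
    print('Request: {0!r}'.format(self.request))
```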
Using the ``@shared_task`` decorator
@@ -155,27 +155,20 @@ To use this with your project you need to follow these four steps:
2. Add ``djcelery`` to ``INSTALLED_APPS``.
3. Create the celery database tables.
3. Create the Celery database tables.
This step will create the tables used to store results
when using the database result backend and the tables used
by the database periodic task scheduler. You can skip
by the database periodic task scheduler. You can skip
this step if you don't use these.
If you are using Django 1.7+ or south_, you'll want to:
Create the tables by migrating your database:
.. code-block:: console
$ python manage.py migrate djcelery
For those who are on Django 1.6 or lower and not using south, a normal
``syncdb`` will work:
.. code-block:: console
$ python manage.py syncdb
4. Configure celery to use the :pypi:`django-celery` backend.
4. Configure Celery to use the :pypi:`django-celery` backend.
For the database backend you must use:
@@ -213,10 +206,10 @@ To use this with your project you need to follow these four steps:
Starting the worker process
===========================
In a production environment you will want to run the worker in the background
In a production environment you'll want to run the worker in the background
as a daemon - see :ref:`daemonizing` - but for testing and
development it is useful to be able to start a worker instance by using the
:program:`celery worker` manage command, much as you would use Django's
:program:`celery worker` manage command, much as you'd use Django's
:command:`manage.py runserver`:
.. code-block:: console