
django-db-queue

Simple database-backed job queue. Jobs are defined in your settings and processed by management commands.

Asynchronous tasks are run via a job queue. This system is designed to support multi-step job workflows.

Supported and tested against:

  • Django 1.11 and 2.2
  • Python 3.5, 3.6, 3.7 and 3.8

This package may still work with older versions of Django and Python but they aren't explicitly supported.

Getting Started

Installation

Install from PyPI using pip:

pip install django-db-queue

Add django_dbq to your installed apps:

INSTALLED_APPS = (
    ...
    'django_dbq',
)
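
django_dbq ships a Job model, so run the usual Django migration step to create its database table:

python manage.py migrate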

Describe your job

In e.g. project.common.jobs:

import logging
import time

logger = logging.getLogger(__name__)

def my_task(job):
    logger.info("Working hard...")
    time.sleep(10)
    logger.info("Job's done!")

Set up your job

In project.settings:

JOBS = {
    'my_job': {
        'tasks': ['project.common.jobs.my_task']
    },
}

Hooks

Failure Hooks

When an unhandled exception is raised by a job, a failure hook will be called, if one exists, enabling you to clean up any state left behind by the failed job. Failure hooks run in your worker process.

A failure hook receives the failed Job instance along with the unhandled exception raised by your failed job as its arguments. Here's an example:

def my_task_failure_hook(job, e):
    # delete some temporary files on the filesystem,
    # log the exception, etc.
    pass

To ensure this hook gets run, simply add a failure_hook key to your job config like so:

JOBS = {
    'my_job': {
        'tasks': ['project.common.jobs.my_task'],
        'failure_hook': 'project.common.jobs.my_task_failure_hook'
    },
}

Creation Hooks

You can also run creation hooks, which happen just after the creation of your Job instances and are executed in the process in which the job was created, not the worker process.

A creation hook receives your Job instance as its only argument. Here's an example:

def my_task_creation_hook(job):
    # configure something before running your job,
    # e.g. seed job.workspace with default values
    pass

To ensure this hook gets run, simply add a creation_hook key to your job config like so:

JOBS = {
    'my_job': {
        'tasks': ['project.common.jobs.my_task'],
        'creation_hook': 'project.common.jobs.my_task_creation_hook'
    },
}

Start the worker

In another terminal:

python manage.py worker

Create a job

Using the name you configured for your job in your settings, create an instance of Job.

from django_dbq.models import Job

Job.objects.create(name='my_job')

Prioritising jobs

Sometimes it is necessary for certain jobs to take precedence over others. For example, you may have a worker whose primary purpose is dispatching fairly important emails to users. However, once an hour, you may need to run a really important job which needs to be done on time and cannot wait in the queue while dozens of emails are dispatched ahead of it.

In order to make sure that an important job is run before others, you can set the priority field to an integer higher than 0 (the default). For example:

Job.objects.create(name='normal_job')
Job.objects.create(name='important_job', priority=1)
Job.objects.create(name='critical_job', priority=2)

Jobs are ordered by priority (highest to lowest), then by creation time (oldest to newest), and processed in that order.
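
The selection logic is, in effect, equivalent to an ordering like the following (an illustrative sketch, not the library's actual query; it assumes the Job model's created timestamp field and ignores queue and state filtering):

next_job = Job.objects.order_by('-priority', 'created').first()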

Terminology

Job

The top-level abstraction of a standalone piece of work. Jobs are stored in the database (i.e. they are represented as Django model instances).

Task

Jobs are processed to completion by tasks. These are simply Python functions, which must take a single argument - the Job instance being processed. A single job will often require processing by more than one task to be completed fully. Creating the task functions is the responsibility of the developer. For example:

def my_task(job):
    logger.info("Doing some hard work")
    do_some_hard_work()  # your own business logic

Workspace

The workspace is an area that tasks within a single job can use to communicate with each other. It is implemented as a Python dictionary, available on the job instance passed to tasks as job.workspace. The initial workspace of a job can be empty, or can contain some parameters that the tasks require (for example, API access tokens, account IDs etc). A single task can edit the workspace, and the modified workspace will be passed on to the next task in the sequence. For example:

def my_first_task(job):
    job.workspace['message'] = 'Hello, task 2!'

def my_second_task(job):
    logger.info("Task 1 says: %s" % job.workspace['message'])

When creating a Job, the workspace is passed as a keyword argument. Since the workspace is persisted in the database, its contents should be JSON-serializable:

Job.objects.create(name='my_job', workspace={'key': value})

Worker process

A worker process is a long-running process, implemented as a Django management command, which is responsible for executing the tasks associated with a job. There may be many worker processes running concurrently in the final system. Worker processes wait for a new job to be created in the database, and call each associated task in the correct sequence. A worker can be started using python manage.py worker, and a single worker instance is included in the development Procfile.

Configuration

Jobs are configured in the Django settings.py file. The JOBS setting is a dictionary mapping a job name (e.g. import_cats) to a list of one or more task function paths. For example:

JOBS = {
    'import_cats': [
        'apps.cat_importer.import_cats.step_one',
        'apps.cat_importer.import_cats.step_two',
    ],
}
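
As a sketch of what the multi-step tasks above might look like (the step bodies and the fetch_cat_ids/import_cats helpers are hypothetical), each step can pass data to the next via the workspace:

def step_one(job):
    # fetch the raw data and stash it for the next task
    job.workspace['cat_ids'] = fetch_cat_ids()  # hypothetical helper

def step_two(job):
    # process the data stored by step_one
    import_cats(job.workspace['cat_ids'])  # hypothetical helper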

Job states

Jobs have a state field which can have one of the following values:

  • NEW (has been created, waiting for a worker process to run the next task)
  • READY (has run a task before, awaiting a worker process to run the next task)
  • PROCESSING (a task is currently being processed by a worker)
  • COMPLETE (all job tasks have completed successfully)
  • FAILED (a job task failed)
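
For example, assuming the state field stores these names as strings, you can inspect a job's progress after creating it:

job = Job.objects.create(name='my_job')
job.refresh_from_db()  # pick up any changes made by a worker
print(job.state)  # e.g. 'NEW', or 'COMPLETE' once all tasks have run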

API

Model methods

Job.get_queue_depths

If you need to programmatically get the depth of any queue, you can do the following:

from django_dbq.models import Job

...

Job.objects.create(name='do_work', workspace={})
Job.objects.create(name='do_other_work', queue_name='other_queue', workspace={})

queue_depths = Job.get_queue_depths()
print(queue_depths)  # {"default": 1, "other_queue": 1}

Important: When checking queue depths, do not assume that the key for your queue will always be available. Queue depths of zero won't be included in the dict returned by this method.
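
With that in mind, a safe way to read a depth is a plain dict lookup with a default:

depth = queue_depths.get('other_queue', 0)  # 0 when no jobs are queued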

Management commands

manage.py delete_old_jobs

There is a management command, manage.py delete_old_jobs, which deletes any jobs from the database which are in state COMPLETE or FAILED and were created more than 24 hours ago. This could be run, for example, as a cron task, to ensure the jobs table remains at a reasonable size.
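
For example, a crontab entry along these lines (all paths are hypothetical) would prune old jobs hourly:

0 * * * * cd /srv/myproject && /srv/myproject/env/bin/python manage.py delete_old_jobs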

manage.py create_job

For debugging/development purposes, a simple management command is supplied to create jobs:

manage.py create_job <job_name> --queue_name 'my_queue_name' --workspace '{"key": "value"}'

The workspace flag is optional. If supplied, it must be a valid JSON string.

The queue_name flag is optional and defaults to default.

manage.py worker

To start a worker:

manage.py worker [queue_name] [--rate_limit]
  • queue_name is optional, and will default to default
  • The --rate_limit flag is optional, and will default to 1. It is the minimum number of seconds that must have elapsed before a subsequent job can be run.
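
For example, to run a worker against a hypothetical queue called emails, processing at most one job every five seconds:

python manage.py worker emails --rate_limit 5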

manage.py queue_depth

If you'd like to check your queue depth from the command line, you can run manage.py queue_depth [queue_name [queue_name ...]], which reports the number of jobs in the NEW or READY states for each queue given.

Important: If you misspell a queue name, or supply the name of a queue which has no jobs, a depth of 0 will be returned.

Testing

It may be necessary to supply a DATABASE_PORT environment variable.
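
For example (a hypothetical invocation, assuming the test suite runs via Django's test runner against a local database on a non-default port):

DATABASE_PORT=5433 python manage.py test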

Code of conduct

For guidelines regarding the code of conduct when contributing to this repository, please review https://www.dabapps.com/open-source/code-of-conduct/