Database-based asynchronous priority queue system -- Extracted from Shopify

Delayed::Job

Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.

It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks. Amongst those tasks are:

  • sending massive newsletters
  • image resizing
  • http downloads
  • updating smart collections
  • updating solr, our search server, after product changes
  • batch imports
  • spam checks

Changes

  • Added a check for perform arity: if it takes one argument, an info hash is passed in.
  • Added recurring tasks, with the option to execute a certain number of times and with a given period between each run.
  • 1.7 Added failed_at column which can optionally be set after a certain number of failed job attempts. By default failed job attempts are destroyed after about a month.
  • 1.6 Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
  • 1.5 Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking, which enables us to run as many worker processes as we need to speed up queue processing.
  • 1.0 Initial release

Setup

The library revolves around a delayed_jobs table which looks as follows:

  create_table :delayed_jobs, :force => true do |table|
    table.integer  :priority, :default => 0
    table.integer  :attempts, :default => 0
    table.text     :handler
    table.string   :last_error
    table.datetime :run_at
    table.datetime :locked_at
    table.datetime :failed_at
    table.string   :locked_by
    table.string   :description
    table.boolean  :recur, :default => false
    table.integer  :period, :limit => 11
    table.integer  :executions_left
    table.timestamps
  end
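
One way to create this table is an ordinary Rails migration wrapping the schema above; a minimal sketch, assuming a Rails 2-era app (the migration class name is only illustrative):

  class CreateDelayedJobs < ActiveRecord::Migration
    def self.up
      create_table :delayed_jobs, :force => true do |table|
        table.integer  :priority, :default => 0
        table.integer  :attempts, :default => 0
        table.text     :handler
        table.string   :last_error
        table.datetime :run_at
        table.datetime :locked_at
        table.datetime :failed_at
        table.string   :locked_by
        table.string   :description
        table.boolean  :recur, :default => false
        table.integer  :period, :limit => 11
        table.integer  :executions_left
        table.timestamps
      end
    end

    def self.down
      drop_table :delayed_jobs
    end
  end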

Usage

Jobs are simple Ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table. If the perform method takes one parameter, a hash is passed in with some read-only info about the job being run (specifically, the locked_by, recur, period, executions_left, description, attempts, and last_error attributes).

Job objects are serialized to yaml so that they can later be resurrected by the job runner.

  class NewsletterJob < Struct.new(:description)
    def perform
      puts "Hello World!" # Code here
    end
  end

  Delayed::Job.enqueue NewsletterJob.new("Here is the description.")
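
If perform accepts a single argument, the worker passes in the info hash described above. A minimal sketch, assuming the hash is keyed by the attribute names listed earlier (whether the keys are symbols is an assumption):

  class AuditedNewsletterJob < Struct.new(:description)
    def perform(info)
      # info carries read-only job attributes such as :attempts and :last_error (symbol keys assumed)
      puts "Attempt ##{info[:attempts]}: #{description}"
      puts "Last error: #{info[:last_error]}" if info[:last_error]
    end
  end

  Delayed::Job.enqueue AuditedNewsletterJob.new("Here is the description.")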

There is also a second way to get jobs in the queue: send_later.

BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, description, massive_csv)

And another way: recur_later. This schedules a recurring job to start at time and repeat every period.

BatchImporter.new(Shop.find(1)).recur_later(:import_massive_csv, description, time, period, executions_left, priority, massive_csv)
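
For example, reusing the names from the example above with purely hypothetical values (the period is assumed to be in seconds, matching the integer column):

  importer = BatchImporter.new(Shop.find(1))

  # method, description, first run, period, executions left, priority, method args...
  importer.recur_later(:import_massive_csv, "Nightly CSV import",
                       Time.now + 86_400, 86_400, 30, 0, massive_csv)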

This will simply create a Delayed::PerformableMethod job in the jobs table, which serializes all the parameters you pass to it. There are some special smarts for Active Record objects, which are stored as their text representation and loaded fresh from the database when the job is actually run later.
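
As an illustration of the serialization round trip (not the plugin's exact internals), a plain job such as the NewsletterJob above is dumped to YAML for the handler column and loaded back before perform is called:

  require 'yaml'

  # Illustration only: round-trip the NewsletterJob from the example above through YAML.
  handler  = YAML.dump(NewsletterJob.new("Here is the description."))  # what ends up in delayed_jobs.handler
  restored = YAML.unsafe_load(handler)  # plain YAML.load on Rubies before 3.1
  restored.perform                      # prints "Hello World!"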

Running the jobs

You can invoke rake jobs:work which will start working off jobs. You can cancel the rake task with CTRL-C.

You can also run jobs by writing a simple script/job_runner and invoking it externally:


  #!/usr/bin/env ruby
  require File.dirname(__FILE__) + '/../config/environment'
  
  Delayed::Worker.new.start  

Cleaning up

You can invoke rake jobs:clear to delete all jobs in the queue.
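
If you prefer to do this from code, for example in script/console or a test teardown, the rake task is roughly equivalent to deleting every row (a sketch, not necessarily the task's exact body):

  Delayed::Job.delete_all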