h1. Delayed::Job

Delayed::Job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background.

It is a direct extraction from Shopify, where the job table is responsible for a multitude of core tasks. Amongst those tasks are:

* sending massive newsletters
* image resizing
* http downloads
* updating smart collections
* updating solr, our search server, after product changes
* batch imports
* spam checks

h2. Setup

The library revolves around a delayed_jobs table which looks as follows:

<pre><code>
create_table :delayed_jobs, :force => true do |table|
  table.integer  :priority, :default => 0
  table.integer  :attempts, :default => 0
  table.text     :handler
  table.string   :last_error
  table.datetime :run_at
  table.datetime :locked_at
  table.datetime :failed_at
  table.string   :locked_by
  table.timestamps
end
</code></pre>

h2. Usage

Jobs are simple ruby objects with a method called perform. Any object which responds to perform can be stuffed into the jobs table. Job objects are serialized to yaml so that they can later be resurrected by the job runner.

<pre><code>
class NewsletterJob < Struct.new(:text, :emails)
  def perform
    emails.each { |e| NewsletterMailer.deliver_text_to_email(text, e) }
  end
end

Delayed::Job.enqueue NewsletterJob.new('lorem ipsum...', Customers.find(:all).collect(&:email))
</code></pre>
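
Because jobs are plain objects serialized with YAML, the round trip through the handler column can be sketched without the library itself. @EchoJob@ below is a hypothetical stand-in for a real job class, not part of Delayed::Job:

```ruby
require 'yaml'

# EchoJob is a hypothetical stand-in for a real job class; any object
# responding to #perform works the same way.
EchoJob = Struct.new(:message) do
  def perform
    message.upcase
  end
end

job     = EchoJob.new('hello')
payload = YAML.dump(job)  # roughly what ends up in the handler column

# The runner later turns the stored text back into a live object.
# (unsafe_load is needed on modern Psych to revive arbitrary classes.)
restored = YAML.respond_to?(:unsafe_load) ? YAML.unsafe_load(payload) : YAML.load(payload)
restored.perform  # => "HELLO"
```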

There is also a second way to get jobs in the queue: send_later.

<pre><code>
BatchImporter.new(Shop.find(1)).send_later(:import_massive_csv, massive_csv)
</code></pre>

This will simply create a Delayed::PerformableMethod job in the jobs table which serializes all the parameters you pass to it. There are some special smarts for active record objects, which are stored as their text representation and loaded from the database fresh when the job is actually run later.
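
The idea behind send_later can be sketched in a few lines: capture a receiver, a method name, and arguments, then replay the call later. @PerformableSketch@ is an illustrative name, not the real Delayed::PerformableMethod class:

```ruby
# Illustrative sketch of the idea behind Delayed::PerformableMethod:
# store a receiver, a method name, and arguments, then replay the
# call when the job runner gets around to it.
PerformableSketch = Struct.new(:object, :method_name, :args) do
  def perform
    object.send(method_name, *args)
  end
end

job = PerformableSketch.new([3, 1, 2], :sort, [])
job.perform  # => [1, 2, 3]
```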

h2. Running the jobs

You can invoke @rake jobs:work@ which will start working off jobs. You can cancel the rake task with @CTRL-C@.

You can also run by writing a simple @script/job_runner@, and invoking it externally:

<pre><code>
#!/usr/bin/env ruby
require File.dirname(__FILE__) + '/../config/environment'

Delayed::Worker.new.start
</code></pre>
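
What the worker does internally boils down to a loop that takes the next job, runs its perform method, and records failures. @TinyWorker@ below is a simplified, in-memory sketch under assumed names; the real Delayed::Worker also locks rows so that several workers can run in parallel, and reschedules failed jobs instead of just counting them:

```ruby
# TinyWorker: a simplified, in-memory sketch of a job work loop.
# A plain array stands in for the delayed_jobs table.
class TinyWorker
  def initialize(queue)
    @queue = queue
  end

  # Run every queued job once; return [succeeded, failed] counts.
  def work_off
    done = failed = 0
    while (job = @queue.shift)
      begin
        job.perform
        done += 1
      rescue StandardError
        failed += 1  # the real worker records last_error and retries later
      end
    end
    [done, failed]
  end
end

GoodJob = Class.new { def perform; end }
BadJob  = Class.new { def perform; raise 'boom'; end }

worker = TinyWorker.new([GoodJob.new, BadJob.new, GoodJob.new])
worker.work_off  # => [2, 1]
```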

h3. Cleaning up

You can invoke @rake jobs:clear@ to delete all jobs in the queue.

h3. Changes

* 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.

* 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.

* 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.

* 1.2.0: Added #send_later to Object for simpler job creation

* 1.0.0: Initial release