Commit
Small changes in the way the gems are built
Tobias Lütke committed Dec 16, 2008
1 parent b419b69 commit f2ea93c
Showing 3 changed files with 17 additions and 13 deletions.
2 changes: 0 additions & 2 deletions HISTORY.txt

This file was deleted.

19 changes: 12 additions & 7 deletions README.textile
@@ -11,13 +11,6 @@ It is a direct extraction from Shopify where the job table is responsible for a
* updating solr, our search server, after product changes
* batch imports
* spam checks

-h2. Changes
-
-* 1.7 Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
-* 1.6 Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
-* 1.5 Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking, which enables us to run as many worker processes as we need to speed up queue processing.
-* 1.0 Initial release

h2. Setup

@@ -74,3 +67,15 @@ You can also run by writing a simple @script/job_runner@, and invoking it externally.
h3. Cleaning up

You can invoke @rake jobs:clear@ to delete all jobs in the queue.
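The @rake jobs:clear@ task referred to above simply empties the job table. A minimal, self-contained sketch of such a task (the in-memory Delayed::Job stand-in and the task body are assumptions for illustration, not delayed_job's actual code):

```ruby
require "rake"
extend Rake::DSL

# Stand-in for the database-backed job model. In the real plugin this
# would be Delayed::Job backed by the delayed_jobs table; an in-memory
# array keeps the sketch runnable on its own.
module Delayed
  class Job
    @jobs = ["import", "spam_check"]
    class << self
      attr_reader :jobs

      def delete_all
        @jobs.clear
      end
    end
  end
end

# A jobs:clear task in the spirit of the one the README mentions.
namespace :jobs do
  desc "Clear the delayed_job queue"
  task :clear do
    Delayed::Job.delete_all
  end
end

Rake::Task["jobs:clear"].invoke
puts Delayed::Job.jobs.size  # => 0
```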

+h3. Changes
+
+* 1.7.0: Added failed_at column which can optionally be set after a certain amount of failed job attempts. By default failed job attempts are destroyed after about a month.
+
+* 1.6.0: Renamed locked_until to locked_at. We now store when we start a given job instead of how long it will be locked by the worker. This allows us to get a reading on how long a job took to execute.
+
+* 1.5.0: Job runners can now be run in parallel. Two new database columns are needed: locked_until and locked_by. This allows us to use pessimistic locking instead of relying on row level locks. This enables us to run as many worker processes as we need to speed up queue processing.
+
+* 1.2.0: Added #send_later to Object for simpler job creation
+
+* 1.0.0: Initial release
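The #send_later addition noted under 1.2.0 can be illustrated with a toy version of the pattern (a sketch only; the class and method bodies here are assumptions, not delayed_job's implementation):

```ruby
# Toy sketch of the send_later pattern: instead of calling a method now,
# record the receiver, method name, and arguments so a worker can replay
# the call later. (Illustrative only, not delayed_job's actual classes.)
class PerformableMethod
  def initialize(object, method, args)
    @object, @method, @args = object, method, args
  end

  # A worker would call this after picking the job off the queue.
  def perform
    @object.send(@method, *@args)
  end
end

class Object
  # The real #send_later enqueues a job; this sketch just returns it.
  def send_later(method, *args)
    PerformableMethod.new(self, method, args)
  end
end

job = "hello".send_later(:upcase)
puts job.perform  # => "HELLO"
```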
9 changes: 5 additions & 4 deletions delayed_job.gemspec
@@ -1,24 +1,25 @@
+version = File.read('README.textile').scan(/^\*\s+([\d\.]+)/).flatten

Gem::Specification.new do |s|
s.name = "delayed_job"
-s.version = "0.1.7"
+s.version = version.first
s.date = "2008-11-28"
s.summary = "Database-backed asynchronous priority queue system -- Extracted from Shopify"
s.email = "tobi@leetsoft.com"
s.homepage = "http://github.com/tobi/delayed_job/tree/master"
s.description = "Delayed_job (or DJ) encapsulates the common pattern of asynchronously executing longer tasks in the background. It is a direct extraction from Shopify where the job table is responsible for a multitude of core tasks."
-s.authors = ["Tobias Lütke", "Justin Knowlden"]
+s.authors = ["Tobias Lütke"]

# s.bindir = "bin"
# s.executables = ["delayed_job"]
# s.default_executable = "delayed_job"

s.has_rdoc = false
s.rdoc_options = ["--main", "README.textile"]
-s.extra_rdoc_files = ["HISTORY.txt", "README.textile"]
+s.extra_rdoc_files = ["README.textile"]

# run git ls-files to get an updated list
s.files = %w[
-HISTORY.txt
MIT-LICENSE
README.textile
delayed_job.gemspec
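The @version = File.read(...)@ line added to the gemspec above derives the gem version from the first changelog bullet in README.textile, so the version is no longer hard-coded. A small demonstration of how that scan behaves on a README shaped like the one in this commit (the sample string is illustrative):

```ruby
# Sample text mirroring the README's changes list (illustrative data).
readme = <<~TEXTILE
  h3. Changes

  * 1.7.0: Added failed_at column

  * 1.6.0: Renamed locked_until to locked_at
TEXTILE

# scan returns one capture group per matching bullet line; flatten
# yields a flat list of version strings in README order, newest first.
versions = readme.scan(/^\*\s+([\d\.]+)/).flatten
puts versions.first  # => "1.7.0" (what the gemspec assigns to s.version)
```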
