
Commit b015618

This works with SQLServer and appears to be generally less database-specific and fragile.
superchris committed Feb 27, 2013
1 parent a0d9807 commit b015618
Showing 1 changed file with 7 additions and 15 deletions.
22 changes: 7 additions & 15 deletions lib/delayed/backend/active_record.rb
@@ -52,22 +52,14 @@ def self.reserve(worker, max_run_time = Worker.max_run_time)
         nextScope = nextScope.scoped.by_priority.limit(1)
 
         now = self.db_time_now
 
-        if ::ActiveRecord::Base.connection.adapter_name == "PostgreSQL"
-          # Custom SQL required for PostgreSQL because postgres does not support UPDATE...LIMIT
-          # This locks the single record 'FOR UPDATE' in the subquery (http://www.postgresql.org/docs/9.0/static/sql-select.html#SQL-FOR-UPDATE-SHARE)
-          # Note: active_record would attempt to generate UPDATE...LIMIT like sql for postgres if we use a .limit() filter, but it would not use
-          # 'FOR UPDATE' and we would have many locking conflicts
-          quotedTableName = ::ActiveRecord::Base.connection.quote_table_name(self.table_name)
-          subquerySql = nextScope.lock(true).select('id').to_sql
-          reserved = self.find_by_sql(["UPDATE #{quotedTableName} SET locked_at = ?, locked_by = ? WHERE id IN (#{subquerySql}) RETURNING *",now,worker.name])
-          return reserved[0]
-        else
-          # This works on MySQL and other DBs that support UPDATE...LIMIT. It uses separate queries to lock and return the job
-          count = nextScope.update_all(:locked_at => now, :locked_by => worker.name)
-          return nil if count == 0
-          return self.where(:locked_at => now, :locked_by => worker.name).first
-        end
+        job = nextScope.first
+        return unless job
+        job.with_lock do
+          job.locked_at = now
+          job.locked_by = worker.name
+          job.save!
+        end
+        job
       end
 
       # Lock this job for this worker.
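For context, with_lock is ActiveRecord's pessimistic-locking helper: it opens a transaction and calls lock!, which re-reads the row with SELECT ... FOR UPDATE (on databases that support it) before yielding to the block. A rough sketch of what the new reservation code relies on, reusing job, now and worker from the diff above (an illustration only, not the library source):

        # Sketch: roughly what job.with_lock { ... } does under the hood.
        job.transaction do
          job.lock!                    # reload the record while holding a row-level lock
          job.locked_at = now          # same assignments as inside with_lock above
          job.locked_by = worker.name
          job.save!
        end

Because the lock is taken per record rather than through database-specific UPDATE...LIMIT or RETURNING syntax, the same code path can run on SQLServer, MySQL and PostgreSQL, which is what the commit message is getting at.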

1 comment on commit b015618

@aripollak

I think this still contains a race condition - two workers can get to the with_lock at the same time with the same job, and they'll both update the same job & return it in sequence, so the job would get processed twice. Since with_lock reloads the job with the lock, I think this can be fixed by just putting this after the with_lock:

return if job.locked_at || job.locked_by
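A sketch of one way to apply that suggestion, reading "after the with_lock" as the first statement inside the block, where the freshly reloaded locked_at/locked_by values are visible. The surrounding names (nextScope, now, worker) come from the diff above; this is the commenter's proposal as interpreted here, not code from the commit:

        job = nextScope.first
        return unless job
        job.with_lock do
          # with_lock has just re-read the row under a row-level lock, so these
          # columns reflect whatever a competing worker already committed.
          return if job.locked_at || job.locked_by  # bail out; the job is already taken
          job.locked_at = now
          job.locked_by = worker.name
          job.save!
        end
        job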
