
INFO keys not persisted when job is enqueued #602

Closed
kochalex opened this issue May 7, 2021 · 3 comments · Fixed by #645

kochalex commented May 7, 2021

Describe the bug
With lock_info enabled, the :INFO key does not remain in Redis when a job is first enqueued. It does remain, however, after a second attempt to enqueue an identical job runs into the existing lock, and only then does the lock info show up in the web UI.

Expected behavior
With lock_info enabled, the INFO key is created when the job is created and remains until the lock expires or is deleted.

Current behavior
The :INFO key is set to expire in 1000 ms when a lock is created. If the same job is enqueued a second time, the :INFO key is re-created without an expiry and persists.

Worker class

module Jobs
  class SyncDevice
    include Sidekiq::Worker
    sidekiq_options queue: :deploy,
                    lock: :until_and_while_executing,
                    retry: 0,
                    lock_info: true

    def perform(device_id, trigger, retry_count = 0)
      # ...
    end
  end
end

Additional context

Thanks for the awesome gem. This is not a huge issue, since it only affects debugging info, but it does seem to be a bug. From reading the code, I'm not sure whether lock.lua should be setting the TTL at all, or whether it is the Ruby code's concern to update it afterwards. Here's some debugging output that shows what is happening:

Enqueue the job for the first time & print some debug info:

puts Jobs::SyncDevice.perform_async(1, 'test'); puts redis.keys; sleep(2); puts redis.keys;
b78b6988b47d990124d37517
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO
uniquejobs:digests
queues
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a
uniquejobs:changelog
queue:deploy
uniquejobs:digests
queues
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a
uniquejobs:changelog
queue:deploy

Output from monitoring redis:

1620344154.538431 [0 127.0.0.1:62371] "select" "1"
1620344154.538795 [1 127.0.0.1:62371] "hexists" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "b78b6988b47d990124d37517"
1620344154.539771 [1 127.0.0.1:62371] "info"
1620344154.588174 [1 127.0.0.1:62371] "script" "load" "-------- BEGIN keys ---------\nlocal digest    = KEYS[1]\nlocal queued    = KEYS[2]\nlocal primed    = KEYS[3]\nlocal locked    = KEYS[4]\nlocal info      = KEYS[5]\nlocal changelog = KEYS[6]\nlocal digests   = KEYS[7]\n-------- END keys ---------\n\n\n-------- BEGIN lock arguments ---------\nlocal job_id    = ARGV[1]      -- The job_id that was previously primed\nlocal pttl      = tonumber(ARGV[2])\nlocal lock_type = ARGV[3]\nlocal limit     = tonumber(ARGV[4])\n-------- END lock arguments -----------\n\n\n--------  BEGIN injected arguments --------\nlocal current_time = tonumber(ARGV[5])\nlocal debug_lua    = ARGV[6] == \"true\"\nlocal max_history  = tonumber(ARGV[7])\nlocal script_name  = tostring(ARGV[8]) .. \".lua\"\n---------  END injected arguments ---------\n\n\n--------  BEGIN Variables --------\nlocal queued_count = redis.call(\"LLEN\", queued)\nlocal locked_count = redis.call(\"HLEN\", locked)\nlocal within_limit = limit > locked_count\nlocal limit_exceeded = not within_limit\n--------   END Variables  --------\n\n\n--------  BEGIN local functions --------\nlocal function toversion(version)\n  local _, _, maj, min, pat = string.find(version, \"(%d+)%.(%d+)%.(%d+)\")\n\n  return {\n    [\"version\"] = version,\n    [\"major\"]   = tonumber(maj),\n    [\"minor\"]   = tonumber(min),\n    [\"patch\"]   = tonumber(pat)\n  }\nend\n\nlocal function toboolean(val)\n  val = tostring(val)\n  return val == \"1\" or val == \"true\"\nend\n\nlocal function log_debug( ... )\n  if debug_lua ~= true then return end\n\n  local result = \"\"\n  for _,v in ipairs(arg) do\n    result = result .. \" \" .. tostring(v)\n  end\n  redis.log(redis.LOG_DEBUG, script_name .. \" -\" ..  
result)\nend\n\nlocal function log(message, prev_jid)\n  if not max_history or max_history == 0 then return end\n  local entry = cjson.encode({digest = digest, job_id = job_id, script = script_name, message = message, time = current_time, prev_jid = prev_jid })\n\n  log_debug(\"ZADD\", changelog, current_time, entry);\n  redis.call(\"ZADD\", changelog, current_time, entry);\n  local total_entries = redis.call(\"ZCARD\", changelog)\n  local removed_entries = redis.call(\"ZREMRANGEBYRANK\", changelog, max_history, -1)\n  if removed_entries > 0 then\n    log_debug(\"Removing\", removed_entries , \"entries from changelog (total entries\", total_entries, \"exceeds max_history:\", max_history ..\")\");\n  end\n  log_debug(\"PUBLISH\", changelog, entry);\n  redis.call(\"PUBLISH\", changelog, entry);\nend\n\n----------  END local functions ----------\n\n\n--------  BEGIN queue.lua --------\nlog_debug(\"BEGIN queue with key:\", digest, \"for job:\", job_id)\n\nif redis.call(\"HEXISTS\", locked, job_id) == 1 then\n  log_debug(\"HEXISTS\", locked, job_id, \"== 1\")\n  log(\"Duplicate\")\n  return job_id\nend\n\nlocal prev_jid = redis.call(\"GET\", digest)\nlog_debug(\"job_id:\", job_id, \"prev_jid:\", prev_jid)\nif not prev_jid or prev_jid == false then\n  log_debug(\"SET\", digest, job_id)\n  redis.call(\"SET\", digest, job_id)\nelseif prev_jid == job_id then\n  log_debug(digest, \"already queued with job_id:\", job_id)\n  log(\"Duplicate\")\n  return job_id\nelse\n  -- TODO: Consider constraining the total count of both locked and queued?\n  if within_limit and queued_count < limit then\n    log_debug(\"Within limit:\", digest, \"(\",  locked_count, \"of\", limit, \")\", \"queued (\", queued_count, \"of\", limit, \")\")\n    log_debug(\"SET\", digest, job_id, \"(was\", prev_jid, \")\")\n    redis.call(\"SET\", digest, job_id)\n  else\n    log_debug(\"Limit exceeded:\", digest, \"(\",  locked_count, \"of\", limit, \")\")\n    log(\"Limit exceeded\", prev_jid)\n    return 
prev_jid\n  end\nend\n\nlog_debug(\"LPUSH\", queued, job_id)\nredis.call(\"LPUSH\", queued, job_id)\n\n-- The Sidekiq client should only set pttl for until_expired\n-- The Sidekiq server should set pttl for all other jobs\nif pttl and pttl > 0 then\n  log_debug(\"PEXPIRE\", digest, pttl)\n  redis.call(\"PEXPIRE\", digest, pttl)\n  log_debug(\"PEXPIRE\", queued, pttl)\n  redis.call(\"PEXPIRE\", queued, pttl)\nend\n\nlog(\"Queued\")\nlog_debug(\"END queue with key:\", digest, \"for job:\", job_id)\nreturn job_id\n--------  END queue.lua --------\n"
1620344154.589922 [1 127.0.0.1:62371] "evalsha" "abfb0cdb69a6141c58e02b5b32eb398e6ec23327" "7" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "uniquejobs:changelog" "uniquejobs:digests" "b78b6988b47d990124d37517" "0" "until_and_while_executing" "1" "1620344154.5395799" "false" "1000" "queue" "6.2.2"
1620344154.593629 [1 lua] "LLEN" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED"
1620344154.593914 [1 lua] "HLEN" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED"
1620344154.593937 [1 lua] "HEXISTS" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "b78b6988b47d990124d37517"
1620344154.593948 [1 lua] "GET" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a"
1620344154.593958 [1 lua] "SET" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a" "b78b6988b47d990124d37517"
1620344154.593973 [1 lua] "LPUSH" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "b78b6988b47d990124d37517"
1620344154.595343 [1 lua] "ZADD" "uniquejobs:changelog" "1620344154.5395799" "{\"message\":\"Queued\",\"job_id\":\"b78b6988b47d990124d37517\",\"time\":1620344154.5396,\"script\":\"queue.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344154.595395 [1 lua] "ZCARD" "uniquejobs:changelog"
1620344154.595457 [1 lua] "ZREMRANGEBYRANK" "uniquejobs:changelog" "1000" "-1"
1620344154.595470 [1 lua] "PUBLISH" "uniquejobs:changelog" "{\"message\":\"Queued\",\"job_id\":\"b78b6988b47d990124d37517\",\"time\":1620344154.5396,\"script\":\"queue.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344154.595939 [1 127.0.0.1:62371] "set" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "{\"worker\":\"Jobs::SyncDevice\",\"queue\":\"deploy\",\"limit\":null,\"timeout\":0,\"ttl\":null,\"type\":\"until_and_while_executing\",\"lock_args\":[1,\"test\"],\"time\":1620344154.595767}"
1620344154.596150 [1 127.0.0.1:62371] "rpoplpush" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED"
1620344154.637286 [1 127.0.0.1:62371] "script" "load" "-------- BEGIN keys ---------\nlocal digest    = KEYS[1]\nlocal queued    = KEYS[2]\nlocal primed    = KEYS[3]\nlocal locked    = KEYS[4]\nlocal info      = KEYS[5]\nlocal changelog = KEYS[6]\nlocal digests   = KEYS[7]\n-------- END keys ---------\n\n\n-------- BEGIN lock arguments ---------\nlocal job_id       = ARGV[1]\nlocal pttl         = tonumber(ARGV[2])\nlocal lock_type    = ARGV[3]\nlocal limit        = tonumber(ARGV[4])\n-------- END lock arguments -----------\n\n\n--------  BEGIN injected arguments --------\nlocal current_time = tonumber(ARGV[5])\nlocal debug_lua    = ARGV[6] == \"true\"\nlocal max_history  = tonumber(ARGV[7])\nlocal script_name  = tostring(ARGV[8]) .. \".lua\"\nlocal redisversion = ARGV[9]\n---------  END injected arguments ---------\n\n\n--------  BEGIN local functions --------\nlocal function toversion(version)\n  local _, _, maj, min, pat = string.find(version, \"(%d+)%.(%d+)%.(%d+)\")\n\n  return {\n    [\"version\"] = version,\n    [\"major\"]   = tonumber(maj),\n    [\"minor\"]   = tonumber(min),\n    [\"patch\"]   = tonumber(pat)\n  }\nend\n\nlocal function toboolean(val)\n  val = tostring(val)\n  return val == \"1\" or val == \"true\"\nend\n\nlocal function log_debug( ... )\n  if debug_lua ~= true then return end\n\n  local result = \"\"\n  for _,v in ipairs(arg) do\n    result = result .. \" \" .. tostring(v)\n  end\n  redis.log(redis.LOG_DEBUG, script_name .. \" -\" ..  
result)\nend\n\nlocal function log(message, prev_jid)\n  if not max_history or max_history == 0 then return end\n  local entry = cjson.encode({digest = digest, job_id = job_id, script = script_name, message = message, time = current_time, prev_jid = prev_jid })\n\n  log_debug(\"ZADD\", changelog, current_time, entry);\n  redis.call(\"ZADD\", changelog, current_time, entry);\n  local total_entries = redis.call(\"ZCARD\", changelog)\n  local removed_entries = redis.call(\"ZREMRANGEBYRANK\", changelog, max_history, -1)\n  if removed_entries > 0 then\n    log_debug(\"Removing\", removed_entries , \"entries from changelog (total entries\", total_entries, \"exceeds max_history:\", max_history ..\")\");\n  end\n  log_debug(\"PUBLISH\", changelog, entry);\n  redis.call(\"PUBLISH\", changelog, entry);\nend\n\n----------  END local functions ----------\n\n\n---------  BEGIN lock.lua ---------\nlog_debug(\"BEGIN lock digest:\", digest, \"job_id:\", job_id)\n\nif redis.call(\"HEXISTS\", locked, job_id) == 1 then\n  log_debug(locked, \"already locked with job_id:\", job_id)\n  log(\"Duplicate\")\n\n  log_debug(\"LREM\", queued, -1, job_id)\n  redis.call(\"LREM\", queued, -1, job_id)\n\n  log_debug(\"LREM\", primed, 1, job_id)\n  redis.call(\"LREM\", primed, 1, job_id)\n\n  return job_id\nend\n\nlocal locked_count   = redis.call(\"HLEN\", locked)\nlocal within_limit   = limit > locked_count\nlocal limit_exceeded = not within_limit\n\nif limit_exceeded then\n  log_debug(\"Limit exceeded:\", digest, \"(\",  locked_count, \"of\", limit, \")\")\n  log(\"Limited\")\n  return nil\nend\n\nlog_debug(\"ZADD\", digests, current_time, digest)\nredis.call(\"ZADD\", digests, current_time, digest)\n\n-- set the locked key\nlog_debug(\"HSET\", locked, job_id, current_time)\nredis.call(\"HSET\", locked, job_id, current_time)\n\nlog_debug(\"LREM\", queued, -1, job_id)\nredis.call(\"LREM\", queued, -1, job_id)\n\nlog_debug(\"LREM\", primed, 1, job_id)\nredis.call(\"LREM\", primed, 1, 
job_id)\n\n-- The Sidekiq client sets pttl\nif pttl and pttl > 0 then\n  log_debug(\"PEXPIRE\", digest, pttl)\n  redis.call(\"PEXPIRE\", digest, pttl)\n\n  log_debug(\"PEXPIRE\", locked, pttl)\n  redis.call(\"PEXPIRE\", locked, pttl)\nend\n\nlog_debug(\"PEXPIRE\", queued, 1000)\nredis.call(\"PEXPIRE\", queued, 1000)\n\nlog_debug(\"PEXPIRE\", primed, 1000)\nredis.call(\"PEXPIRE\", primed, 1000)\n\nlog_debug(\"PEXPIRE\", info, 1000)\nredis.call(\"PEXPIRE\", info, 1000)\n\nlog(\"Locked\")\nlog_debug(\"END lock digest:\", digest, \"job_id:\", job_id)\nreturn job_id\n----------  END lock.lua  ----------\n"
1620344154.643260 [1 127.0.0.1:62371] "evalsha" "5d9a536538eb41d3b1b3b8b2f4997ae8f347562f" "7" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "uniquejobs:changelog" "uniquejobs:digests" "b78b6988b47d990124d37517" "0" "until_and_while_executing" "1" "1620344154.596246" "false" "1000" "lock" "6.2.2"
1620344154.643325 [1 lua] "HEXISTS" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "b78b6988b47d990124d37517"
1620344154.643339 [1 lua] "HLEN" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED"
1620344154.643349 [1 lua] "ZADD" "uniquejobs:digests" "1620344154.596246" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a"
1620344154.643365 [1 lua] "HSET" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "b78b6988b47d990124d37517" "1620344154.596246"
1620344154.643385 [1 lua] "LREM" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "-1" "b78b6988b47d990124d37517"
1620344154.643396 [1 lua] "LREM" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED" "1" "b78b6988b47d990124d37517"
1620344154.643410 [1 lua] "PEXPIRE" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "1000"
1620344154.643419 [1 lua] "PEXPIRE" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED" "1000"
1620344154.643426 [1 lua] "PEXPIRE" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "1000"
1620344154.643441 [1 lua] "ZADD" "uniquejobs:changelog" "1620344154.596246" "{\"message\":\"Locked\",\"job_id\":\"b78b6988b47d990124d37517\",\"time\":1620344154.5962,\"script\":\"lock.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344154.643464 [1 lua] "ZCARD" "uniquejobs:changelog"
1620344154.643469 [1 lua] "ZREMRANGEBYRANK" "uniquejobs:changelog" "1000" "-1"
1620344154.643476 [1 lua] "PUBLISH" "uniquejobs:changelog" "{\"message\":\"Locked\",\"job_id\":\"b78b6988b47d990124d37517\",\"time\":1620344154.5962,\"script\":\"lock.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344154.643739 [1 127.0.0.1:62371] "multi"
1620344154.643749 [1 127.0.0.1:62371] "sadd" "queues" "deploy"
1620344154.643760 [1 127.0.0.1:62371] "lpush" "queue:deploy" "{\"class\":\"Jobs::SyncDevice\",\"args\":[1,\"test\"],\"retry\":0,\"queue\":\"deploy\",\"lock\":\"until_and_while_executing\",\"lock_info\":true,\"jid\":\"b78b6988b47d990124d37517\",\"created_at\":1620344154.532043,\"lock_timeout\":0,\"lock_ttl\":null,\"lock_prefix\":\"uniquejobs\",\"lock_args\":[1,\"test\"],\"lock_digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\",\"enqueued_at\":1620344154.643614}"
1620344154.643800 [1 127.0.0.1:62371] "exec"
1620344154.644346 [0 127.0.0.1:62372] "select" "1"
1620344154.644438 [1 127.0.0.1:62372] "keys" "*"
1620344156.647645 [1 127.0.0.1:62372] "keys" "*"
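
Reading the monitor output above: the Ruby client SETs the :INFO key with no TTL, but lock.lua then unconditionally PEXPIREs it to 1000 ms, so it vanishes shortly after the first enqueue. A minimal in-memory sketch of that ordering (FakeRedis is invented here purely for illustration; it is not the gem's code):

```ruby
# Minimal in-memory stand-in for the Redis commands seen in the monitor
# output above. FakeRedis is hypothetical and only models SET / PEXPIRE /
# KEYS well enough to show the ordering problem.
class FakeRedis
  Entry = Struct.new(:value, :expires_at)

  def initialize
    @store = {}
  end

  def set(key, value)
    @store[key] = Entry.new(value, nil) # a plain SET never expires
  end

  def pexpire(key, msec)
    entry = @store[key]
    entry.expires_at = Time.now + msec / 1000.0 if entry
  end

  def keys(now = Time.now)
    @store.select { |_, e| e.expires_at.nil? || e.expires_at > now }.keys
  end
end

redis = FakeRedis.new
info = "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO"

# First enqueue: the Ruby client SETs the INFO key persistently...
redis.set(info, '{"worker":"Jobs::SyncDevice"}')
# ...but lock.lua then unconditionally PEXPIREs it to 1000 ms:
redis.pexpire(info, 1000)

redis.keys.include?(info)               # => true (right away)
redis.keys(Time.now + 2).include?(info) # => false (gone after ~2 s)
```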

Enqueue the job again:

puts Jobs::SyncDevice.perform_async(1, 'test'); puts redis.keys; sleep(2); puts redis.keys;

uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO
uniquejobs:digests
queues
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a
uniquejobs:changelog
queue:deploy
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO
uniquejobs:digests
queues
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED
uniquejobs:fa68834fe2208f9f94ab236dd5ef392a
uniquejobs:changelog
queue:deploy

Redis monitoring output:

1620344378.968682 [1 127.0.0.1:62371] "hexists" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "0750f1b1783085a4143d75cc"
1620344378.968915 [1 127.0.0.1:62371] "evalsha" "abfb0cdb69a6141c58e02b5b32eb398e6ec23327" "7" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "uniquejobs:changelog" "uniquejobs:digests" "0750f1b1783085a4143d75cc" "0" "until_and_while_executing" "1" "1620344378.9688058" "false" "1000" "queue" "6.2.2"
1620344378.969021 [1 lua] "LLEN" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED"
1620344378.969037 [1 lua] "HLEN" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED"
1620344378.969050 [1 lua] "HEXISTS" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:LOCKED" "0750f1b1783085a4143d75cc"
1620344378.969062 [1 lua] "GET" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a"
1620344378.969085 [1 lua] "ZADD" "uniquejobs:changelog" "1620344378.9688058" "{\"message\":\"Limit exceeded\",\"job_id\":\"0750f1b1783085a4143d75cc\",\"prev_jid\":\"b78b6988b47d990124d37517\",\"time\":1620344378.9688,\"script\":\"queue.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344378.969124 [1 lua] "ZCARD" "uniquejobs:changelog"
1620344378.969131 [1 lua] "ZREMRANGEBYRANK" "uniquejobs:changelog" "1000" "-1"
1620344378.969139 [1 lua] "PUBLISH" "uniquejobs:changelog" "{\"message\":\"Limit exceeded\",\"job_id\":\"0750f1b1783085a4143d75cc\",\"prev_jid\":\"b78b6988b47d990124d37517\",\"time\":1620344378.9688,\"script\":\"queue.lua\",\"digest\":\"uniquejobs:fa68834fe2208f9f94ab236dd5ef392a\"}"
1620344378.969322 [1 127.0.0.1:62371] "set" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:INFO" "{\"worker\":\"Jobs::SyncDevice\",\"queue\":\"deploy\",\"limit\":null,\"timeout\":0,\"ttl\":null,\"type\":\"until_and_while_executing\",\"lock_args\":[1,\"test\"],\"time\":1620344378.96924}"
1620344378.969492 [1 127.0.0.1:62371] "rpoplpush" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:QUEUED" "uniquejobs:fa68834fe2208f9f94ab236dd5ef392a:PRIMED"
1620344378.970444 [1 127.0.0.1:62372] "keys" "*"
1620344380.974849 [1 127.0.0.1:62372] "keys" "*"
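
The second run shows why the key survives this time: queue.lua returns the previous jid ("Limit exceeded"), the subsequent rpoplpush yields nothing, lock.lua never runs, and the :INFO key the client just re-SET keeps no TTL. A hedged sketch of that control flow, inferred from the monitor log (the enqueue method and its parameters are hypothetical, not the gem's code):

```ruby
# Hypothetical sketch of the client-side flow inferred from the two
# monitor logs above; names and structure are invented for illustration.
def enqueue(existing_lock_holder:, store:, info_key:)
  # The Ruby client always SETs INFO with no TTL first.
  store[info_key] = { value: "lock info", ttl_ms: nil }

  if existing_lock_holder
    # queue.lua logged "Limit exceeded" and returned prev_jid, so the
    # client never primes the lock and lock.lua never runs...
    return :duplicate # ...leaving INFO with no expiry
  end

  # First enqueue: lock.lua runs and PEXPIREs the bookkeeping keys.
  store[info_key][:ttl_ms] = 1000
  :locked
end

store = {}
enqueue(existing_lock_holder: false, store: store, info_key: :info)
store[:info][:ttl_ms] # => 1000 (first enqueue: INFO will expire)

enqueue(existing_lock_holder: true, store: store, info_key: :info)
store[:info][:ttl_ms] # => nil (second enqueue: INFO persists)
```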
mhenrixon (Owner) commented
The problem is that your particular lock is actually two different ones. It locks first when the client pushes the job to the queue; when the server picks the job up, it releases that lock and then creates a different one held only while your worker is doing the work.

I agree that information is lost this way and that it would be cool to keep the lock info around until the job is either unlocked or completed.

It won't happen overnight though...
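
The handoff described above can be sketched roughly as follows (all names are invented; this is not SidekiqUniqueJobs' implementation):

```ruby
# Rough sketch of the two-lock handoff described above for
# :until_and_while_executing. LockHandoff and its methods are
# hypothetical, purely to illustrate why the queue-time info is lost.
class LockHandoff
  attr_reader :info

  def initialize
    @info = nil
  end

  def client_push(info)
    @info = info # the queue-time lock stores its info...
  end

  def server_pickup
    @info = nil  # ...which is cleared when that lock is released
  end

  def run(&work)
    @info = "runtime lock" # a fresh lock exists only around the work
    work.call
  ensure
    @info = nil
  end
end

lock = LockHandoff.new
lock.client_push("queued by client")
lock.server_pickup
lock.info # => nil: the queue-time info is gone before perform even runs
```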

mhenrixon self-assigned this Jun 29, 2021
mhenrixon added a commit that referenced this issue Oct 8, 2021
* Prevent too eager cleanup of lock info

Close #589
Close #602

* Mandatory rubocop commit
mhenrixon (Owner) commented
Released as v7.1.8
