
Jobs get delayed #328

Open
mahmoudawadeen opened this issue Feb 3, 2021 · 5 comments

Comments

@mahmoudawadeen

Hello,

Background:

sidekiq: 5.2.3
sidekiq-scheduler: 3.0.1
redis: 5.0.2
jruby: 9.1.17.0

We run a single container dedicated to sidekiq-scheduler. The Sidekiq configuration of that container (the client) listens to an always-empty queue, so the container only schedules jobs and never processes any. Multiple Sidekiq worker containers listen to the queues referenced in the schedule file. All jobs in the schedule use cron syntax, and sidekiq-scheduler and all Sidekiq workers are connected to the same Redis database.
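For reference, the scheduler container is wired up roughly like the sketch below (the job name, class, and queue are placeholders, not our real schedule):

```ruby
# config/initializers/sidekiq_scheduler.rb
Sidekiq.configure_server do |config|
  config.on(:startup) do
    # Every entry uses cron syntax; worker containers elsewhere
    # consume the target queues ("reports" is a placeholder).
    Sidekiq.schedule = {
      'nightly_report' => {
        'cron'  => '0 2 * * *',        # 02:00 every day
        'class' => 'NightlyReportJob',
        'queue' => 'reports'
      }
    }
    SidekiqScheduler::Scheduler.instance.reload_schedule!
  end
end
```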

Issue:

Most of the time the jobs run at the scheduled time, but occasionally they are delayed.

Investigation:

We looked at one instance of this behavior and compared the timestamp the job actually ran at (see the Kibana screenshot) against the timestamps in the pushed sorted set in Redis (see the Redis screenshot). The pushed set (sidekiq:sidekiq-scheduler:pushed:<job_class>) contained an entry for the delayed job with the expected timestamp, i.e. the scheduler appears to have registered the push on time.

[screenshot: job execution timestamps in Kibana]

[screenshot: the sidekiq:sidekiq-scheduler:pushed:<job_class> sorted set in Redis]
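In case it helps anyone reproduce the check, we read the pushed set roughly like this (the job class is a placeholder; as far as we can tell, the scheduler stores the enqueue epoch as both member and score):

```ruby
require 'redis'

redis = Redis.new(url: ENV.fetch('REDIS_URL', 'redis://localhost:6379/0'))

# Last five entries of the scheduler's "pushed" set for a placeholder job class;
# the score is the epoch time at which the scheduler registered the push.
redis.zrange('sidekiq:sidekiq-scheduler:pushed:NightlyReportJob', -5, -1, with_scores: true)
     .each { |member, score| puts "#{member} pushed at #{Time.at(score)}" }
```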

Question
We don't believe the workers are a bottleneck. Is there anything in sidekiq-scheduler that would make it behave this way? Are there any metrics we could look at to identify whether the issue is in Sidekiq or in sidekiq-scheduler?

@stale

stale bot commented Jun 3, 2021

This issue has been automatically marked as stale because it has not had recent activity. It will be closed if no further activity occurs.

@stale stale bot added the stale The issue or PR has been inactive label Jun 3, 2021
@marcelolx marcelolx removed the stale The issue or PR has been inactive label Jun 3, 2021
@stale stale bot added the stale The issue or PR has been inactive label Oct 2, 2021
@marcelolx marcelolx removed the stale The issue or PR has been inactive label Oct 2, 2021
@stale stale bot added the stale The issue or PR has been inactive label Jan 9, 2022
@marcelolx marcelolx removed the stale The issue or PR has been inactive label Jan 9, 2022
@stale stale bot added the stale The issue or PR has been inactive label Apr 16, 2022
@marcelolx marcelolx removed the stale The issue or PR has been inactive label Apr 16, 2022
@bpo

bpo commented Apr 20, 2022

@mahmoudawadeen a few questions for you:

  1. Is this still an issue for you, or did you come to some sort of resolution?
  2. How frequently does this happen?
  3. Are you able to monitor the processing latency for the affected queue(s) over the period when this happens? (e.g. graph something like Sidekiq::Queue.new("high").latency; see the sketch after this list)
  4. Are you able to modify the job to accept and log metadata? Search for the include_metadata config in the README; it attaches the timestamp, from the scheduler's perspective, of when the job was enqueued to the arguments of the job being performed (also sketched below).
  5. Finally, any chance the VM that the scheduler is running on is extremely resource-constrained around the time this happens?
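If it's useful, here is a rough sketch of (3) and (4) together; the queue name and job class are placeholders, and (4) assumes include_metadata is enabled on the schedule entry:

```ruby
require 'sidekiq'
require 'sidekiq/api'

# (3) Periodically log processing latency for an affected queue
#     ("reports" is a placeholder queue name).
Thread.new do
  loop do
    latency = Sidekiq::Queue.new('reports').latency
    Sidekiq.logger.info("reports queue latency: #{latency.round(2)}s")
    sleep 60
  end
end

# (4) With include_metadata: true, sidekiq-scheduler appends a metadata hash
#     (including 'scheduled_at') as the job's last argument.
class NightlyReportJob
  include Sidekiq::Worker

  def perform(metadata = {})
    scheduled_at = Time.at(metadata['scheduled_at'].to_f)
    Sidekiq.logger.info("scheduled at #{scheduled_at}, observed lag #{(Time.now - scheduled_at).round(2)}s")
    # ... actual work ...
  end
end
```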
