
Slow SQL update when delayed_job table gets large #650

Open
brchristian opened this issue Apr 17, 2014 · 6 comments

brchristian commented Apr 17, 2014

I've been noticing my delayed_job workers going incredibly slowly (about 1 job per second), so I looked at the logs to see what might be up. It seems that 90%+ of the time per job is spent on delayed_job bookkeeping, which suggests something is amiss.

Here's the line in question:

SQL (811.3ms) UPDATE `delayed_jobs` SET `locked_at` = '2014-04-17 22:32:20', `locked_by` = 'host:b38f770a-f3f3-4b2a-8c66-7c8eebdb7fea pid:2' WHERE ((run_at <= '2014-04-17 22:32:20' AND (locked_at IS NULL OR locked_at < '2014-04-17 18:32:20') OR locked_by = 'host:b38f770a-f3f3-4b2a-8c66-7c8eebdb7fea pid:2') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1

Am I somehow missing a db index or something? I'm guessing that this command isn't supposed to take 800ms!
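One way to check (a diagnostic sketch, not something from the thread; the `locked_by` string is a placeholder) is to EXPLAIN the SELECT equivalent of the locking query and see whether MySQL reports an index lookup or a full table scan:

```sql
-- Hypothetical diagnostic: EXPLAIN the SELECT form of the locking query.
-- 'host:example pid:1' is a placeholder for the worker's locked_by value.
EXPLAIN SELECT id FROM delayed_jobs
WHERE ((run_at <= NOW()
        AND (locked_at IS NULL OR locked_at < NOW() - INTERVAL 4 HOUR))
       OR locked_by = 'host:example pid:1')
  AND failed_at IS NULL
ORDER BY priority ASC, run_at ASC
LIMIT 1;
```

If the plan shows `type: ALL` with no `possible_keys`, every job pickup is scanning the whole table, which would match 800ms on a large table.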

mumoshu commented Aug 1, 2014

Hi @brchristian

I have encountered the same issue.
Have you resolved it?

In our specific case, each query like the one below takes several seconds.

UPDATE `delayed_jobs` SET `locked_at` = '2014-07-31 02:14:49', `locked_by` = 'delayed_job.2 host:<OUR_HOST_HERE> pid:989' WHERE ((run_at <= '2014-07-31 02:14:49' AND (locked_at IS NULL OR locked_at < '2014-07-30 22:14:49') OR locked_by = 'delayed_job.2 host:<OUR_HOST_HERE> pid:989') AND failed_at IS NULL) ORDER BY priority ASC, run_at ASC LIMIT 1
fguillen commented Jun 6, 2016

Duplicate of #581.

Still not solved :/

brchristian (author) commented Jun 6, 2016

@mumoshu Unfortunately, two years later I have not resolved this issue.

@fguillen My best guess is just that it’s a complicated query, using constraints on run_at, locked_at, locked_by, and failed_at, and then sorting by priority and run_at. That’s a lot! Perhaps some kind of composite index would do the job here.
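For reference, the migration that delayed_job's generator produces includes a composite index on (priority, run_at), which covers the ORDER BY; the second, broader index below targeting the lock predicate is only a hedged guess, not something tested in this thread:

```sql
-- Index from delayed_job's generated migration
-- (add_index :delayed_jobs, [:priority, :run_at], name: "delayed_jobs_priority"):
CREATE INDEX delayed_jobs_priority ON delayed_jobs (priority, run_at);

-- Speculative composite for the WHERE clause (an untested assumption,
-- including the index name):
CREATE INDEX idx_delayed_jobs_pickup ON delayed_jobs (locked_by, locked_at, failed_at);
```

Whether the optimizer can actually use an index here depends on the OR in the predicate; the leading OR branch often forces a scan regardless.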

TomK32 commented Feb 18, 2018

I had the same problem with Mongoid; it turned out I was missing some of the indexes. With them in place, it is a lot faster.

gcv commented Oct 13, 2019

For MySQL 5.6, adding an index on the failed_at column helped considerably.
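A minimal sketch of that change; the index name follows the usual Rails convention and is an assumption, not taken from the comment:

```sql
-- Single-column index on failed_at, as described above.
CREATE INDEX index_delayed_jobs_on_failed_at ON delayed_jobs (failed_at);
```

Since `failed_at IS NULL` holds for nearly every live row, how much this helps likely depends on the MySQL version and optimizer; the comment above reports a considerable improvement on 5.6.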
