This repository has been archived by the owner on Sep 20, 2022. It is now read-only.

super slow #41

Open
cypriss opened this issue Jul 10, 2016 · 1 comment

Comments

cypriss commented Jul 10, 2016

I benchmarked a few job processing libraries that use Redis as a backend: https://github.com/gocraft/work#benchmarks

Most can do about 10,000 to 20,000 jobs/second. Using default options, the albrow/jobs library clocked in at 40 jobs/second. I was able to increase that by raising the number of workers and the batch size, but I wasn't sure what you would consider good values for those parameters.

Benchmark code: https://github.com/gocraft/work/blob/master/benches/bench_jobs/main.go

I'd love it if you could review my benchmark to see if I made a glaring mistake, and also what your thoughts are about good params for batch size.

albrow (Owner) commented Jan 10, 2017

@cypriss Thank you for reporting this issue, and I'm sorry it has taken so long to reply. Jobs has not been a main focus for me for quite some time now. Having looked into it, I now understand why this is happening.

Jobs performance scales terribly with queue size. If you look at the Lua script for popping the next available jobs off the queue, there are two ZUNIONSTORE commands and one ZINTERSTORE command. The performance of these commands is sensitive to the size of the input sets (O(N)) and the size of the resulting output set (O(M*log(M))). When the queue is large, both the input and output sets have high cardinality and therefore the script to pop jobs off the queue runs incredibly slowly. I should have benchmarked Jobs with larger queue sizes in order to catch this issue sooner.
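To make the cost profile concrete, here is a minimal in-memory Go sketch of the pattern the current Lua script follows (the `job` type, function names, and the single-slice stand-in for the Redis sorted set are illustrative assumptions, not the actual Jobs code): every member of the queue must be copied and sorted by priority score before the top n jobs can be taken, so each pop is O(N*log(N)) in the queue size.

```go
package main

import (
	"fmt"
	"sort"
)

// job mirrors a member of the priority sorted set: a job id plus its
// priority score. (Hypothetical type for illustration.)
type job struct {
	id       string
	priority int
}

// popNextJobs sketches the cost described above: like the
// ZUNIONSTORE/ZINTERSTORE-based script, it touches every member of the
// queue (O(N)) and orders the result by score (O(N*log(N))) just to
// take the next n jobs, no matter how small n is.
func popNextJobs(queue []job, n int) (popped, rest []job) {
	merged := make([]job, len(queue)) // stands in for ZUNIONSTORE over the input sets
	copy(merged, queue)
	sort.Slice(merged, func(i, j int) bool { // sorted-set ordering, highest score first
		return merged[i].priority > merged[j].priority
	})
	if n > len(merged) {
		n = len(merged)
	}
	return merged[:n], merged[n:]
}

func main() {
	queue := []job{{"a", 1}, {"b", 5}, {"c", 3}, {"d", 5}}
	popped, rest := popNextJobs(queue, 2)
	fmt.Println(len(popped), len(rest)) // both priority-5 jobs are popped
}
```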

There are a few ways we could fix this, including:

  1. Keep all existing features and rewrite the commands in pop_next_jobs.lua so that they perform better with large queue sizes. I'm afraid the best we might be able to achieve here is O(N) performance instead of O(N*log(N)), which still wouldn't be competitive, considering other libraries are able to pop jobs in a way that scales independently of queue size. The problem is that Jobs currently allows arbitrary priority levels and stores them in a sorted set where the members are job ids and the scores are the corresponding priorities.
  2. Don't allow arbitrary priority levels. We can change Jobs so that the number of priorities is a parameter set during initialization. If we do this, we can have a single set for each priority level and consume from them in order (highest priority to lowest priority) until we have consumed the desired number of jobs. This will make the performance of pop_next_jobs.lua dependent only on the number of priority levels and the number of jobs consumed (O(P + J), I think). Users who don't need different priority levels can set it to 1 and expect similar performance to competing libraries.

I believe 2 is the best option but would involve breaking backwards compatibility. I can't make any guarantees about when I will have time to do this, but I would be more than happy to review a PR if someone wants to take this on.
