Request: custom queue limit identifier #8
Comments
I've thought about this scenario. That was the reason why, when sidekiq-limit_fetch was created, it supported two modes: local and global. In local mode there was no interaction with other processes and nothing was stored in Redis; everything was done with local semaphores (built on mutexes). But since most people expected to be able to pause and query queues from the console, I dropped local mode. I'm not sure it would have helped you anyway.

I think the current global mode, with support for per-process limits, would indeed be more useful, and it would probably be trivial to implement. I'll try to highlight the places which need to be changed:

https://github.com/brainopia/sidekiq-limit_fetch/blob/master/lib/sidekiq/limit_fetch/global/selector.rb#L69 retrieves a queue limit. To support per-process limits we need to differentiate them from usual limits. Underneath, we could have used a separate field, but instead of adding an extra field we can store them in the same place as usual limits and use another format to differentiate. E.g., we could continue to store global limits in the current format (as a number), and per-process limits would be numbers with a prefix such as "p". This logic would go on line 69 of the referenced file.

The second part would be to count, on line 75, not all locks but only the values which match worker_name (it's passed into the Lua script, so it's already available there: https://github.com/brainopia/sidekiq-limit_fetch/blob/master/lib/sidekiq/limit_fetch/global/selector.rb#L54).

I hope that sounds understandable. If you wish you can implement it, or I can 😉
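The prefix scheme described above could be sketched like this in plain Ruby. This is only an illustration of the idea, not the gem's actual code; the helper name `parse_limit` and the `"p"` prefix are hypothetical:

```ruby
# Hypothetical sketch: global limits are stored as a plain number
# (e.g. "10"), per-process limits with a "p" prefix (e.g. "p10"),
# both living in the same Redis value.
def parse_limit(raw)
  return nil if raw.nil?

  if raw.start_with?("p")
    { mode: :per_process, value: Integer(raw[1..]) }
  else
    { mode: :global, value: Integer(raw) }
  end
end
```

Storing both formats in one key keeps the Lua script's key layout unchanged; only the parsing logic (line 69 in the referenced file) has to branch on the prefix.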
Happy to hear you think it would be a fairly trivial change! I'm pretty sure I understand more or less what you mean, though it will take me a little time to familiarize myself with the code (and Lua ;) ).

How would you go about specifying that custom prefix? My vague idea was to create an option that can be set in the worker's class:

```ruby
class SomeWorker
  include Sidekiq::Worker
  sidekiq_options queue: :encoding, limit_fetch_key: Proc.new { ENV['HOSTNAME'] }

  def perform(*args)
    # ...
  end
end
```

Ideally it's not something that goes into the sidekiq.yml file, so I don't need to have a different version of that file on each server.
Currently each Sidekiq process is identified inside sidekiq-limit_fetch with a uuid. It's available inside the Lua script as the worker_name variable. In the future we could support custom identifiers, but that would be a separate feature ;)
Ah, I see. I have multiple Sidekiq instances running on each server (4 instances x 30 concurrency), so each instance would have a different uuid, but this is still better than a global limit.
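The per-process counting described earlier can be simulated in plain Ruby. Again, this is a sketch of the idea rather than the gem's Lua implementation; the function name and lock representation are invented for illustration:

```ruby
# Hypothetical sketch: with a per-process limit, a process may acquire
# another lock only if the number of locks *it* already holds is below
# the limit, regardless of how many locks other processes hold.
# Each held lock is recorded under the holder's worker_name (a uuid).
def can_acquire?(locks, worker_name, per_process_limit)
  locks.count(worker_name) < per_process_limit
end

locks = ["uuid-a", "uuid-a", "uuid-b"]
can_acquire?(locks, "uuid-a", 2)  # false: uuid-a already holds 2 locks
can_acquire?(locks, "uuid-b", 2)  # true: uuid-b holds only 1
```

With 4 instances per server, a per-process limit of N would mean up to 4×N concurrent jobs on each server, which is worth factoring into the chosen limit.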
Sorry for the delay, I'll take a look at implementing this feature today or tomorrow :)
No worries! I was also meaning to look into this, but just haven't had time. Looking forward to it :)
https://github.com/brainopia/sidekiq-limit_fetch#limits-per-process – here you go :)
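For readers following along, the README section linked above documents the shipped feature. As a hedged sketch (the exact method name should be verified against the current README, not taken from this thread), setting a per-process limit on the `:encoding` queue would look something like:

```ruby
# Per the "Limits per process" README section: cap each Sidekiq
# process at 2 concurrent :encoding jobs, independent of other
# processes. Verify the API name against the current README.
Sidekiq::Queue['encoding'].process_limit = 2
```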
Thanks so much! I'm deploying it now :) |
Hi there,

I'm making heavy use of sidekiq-limit_fetch in a cluster of Sidekiq worker servers, and a requirement has cropped up.

I have some CPU-intensive jobs, which I've relegated to their own queue, let's call it `:encoding`, and I have placed a limit of 10 on that queue. Unless I'm mistaken, I believe this means that globally only 10 of these jobs can be running at any given time.

My problem is that I have multiple Sidekiq servers, so I'd like that to be a per-server limit, not a global one. This would allow me to scale the `:encoding` queue by adding more servers without ever overloading a single server. I was picturing some sort of unique key I could use to identify each server (possibly the hostname) that would be used when checking how many workers are busy on a queue.

I'd be happy to fork and implement this myself, but wanted to check first to ensure this isn't already possible. If not, any advice on where in the code I should look to get started would be most welcome.