can't make autoscaler work with multiple queues #9
I have a setup that is very similar. I am also unable to get autoscaler to start my workers, but it does shut them down just fine. My Procfile looks like this:
I have no idea what else to try. I'm considering switching back to HireFireApp.
@friendlycredit, I think the issue is that you have background jobs trigger jobs in a different worker. My application doesn't do this (my import jobs are triggered manually or by the scheduler), so as a consequence the example code doesn't configure scale-up on the background jobs. I believe that you need to add …
@pboling can you open a separate issue to save us some confusion? Please provide your Sidekiq middleware configuration. Another good place to start would be to get https://github.com/JustinLove/autoscaler_sample working, and then make a fork with your configuration that demonstrates the problem.
@JustinLove My issue seems to be literally the same one as @friendlycredit's, so I am trying your advice to him now. Our jobs schedule other jobs too.
@JustinLove Just to verify there wasn't a typo: you are saying add `config.client_middleware` to the `Sidekiq.configure_server` block? I already have a `config.server_middleware` block in my `Sidekiq.configure_server` block:
Are you saying I need the …
Middleware and configuration are two separate things that often happen to be entangled.

Configurations are alternates depending on how Sidekiq is run: the server configuration applies when you start the Sidekiq CLI, and the client configuration applies any other time. The `configure_*` methods are conveniences that streamline setting the configuration for each case. Their execution is mutually exclusive: only one of the two runs in any given process.

Middleware exists in two chains. The chains are disjoint, but both can be configured and running in the same process. The client chain is executed when pushing a job, and the server chain is executed when running a job. So when running a worker, the client chain still executes whenever that worker pushes jobs of its own, which is why the client middleware must also be added inside `configure_server`.

If the duplication offends you, the version of Sidekiq I'm looking at passes …

See also:
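Concretely, that advice could be sketched like this. This is only a sketch based on the autoscaler README of this era; the queue name, the 60-second idle timeout, and HerokuScaler's no-argument constructor (reading app name and API key from the environment) are assumptions, not a verbatim copy of anyone's config.

```ruby
require 'autoscaler/sidekiq'
require 'autoscaler/heroku_scaler'

# Assumed: HerokuScaler picks up the app name and API key from ENV.
heroku = Autoscaler::HerokuScaler.new

Sidekiq.configure_client do |config|
  # Runs in web/console processes: pushing a job scales the worker up.
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
  end
end

Sidekiq.configure_server do |config|
  # Runs in the Sidekiq process: scale down after 60 idle seconds.
  config.server_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Server, heroku, 60
  end
  # Crucially, also install the client middleware here, so that jobs
  # a worker pushes onto other queues still trigger scale-up.
  config.client_middleware do |chain|
    chain.add Autoscaler::Sidekiq::Client, 'default' => heroku
  end
end
```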
@pboling Are you no longer using …
@JustinLove That is correct - I removed `low` from the Procfile. I guess that could be my problem? It'll take me a while to digest your explanation. I'll take a look at those links. Thanks for the help!
@JustinLove does autoscaler scale workers when running …? I am still unable to scale up, but scaling down is working fine. I have updated my gist with the current code. UPDATE: I've been watching my queues, and I think I may have something bad in my sidekiq commands in the Procfile, so I'm reading up on that now.
Right now it only interacts with the Heroku API. The autoscaler_sample project has some notes on remote-controlling a Heroku instance from your local machine for testing purposes. I made HerokuScaler a separate object, so in principle you could write a scaler that spawned and killed processes locally. I'll look at the gist later. I'll have to review this, but I believe there is still some spooky action at a distance, because only one active flag is being used; if your …
I made a sample configuration using your gist: https://github.com/JustinLove/autoscaler_sample/tree/pboling Things work fine in the basic case, so there are no fundamental configuration errors. I removed the final call to ActiveRecord since the sample has no database, and I didn't try running it with Puma. I was able to prevent other processes from scaling down by hitting 'low' occasionally, so the theoretical entanglement has actually been observed. I was also seeing spurious scale-to-1 for the low process, so something is off there.
I just pushed 0.2.1, which should eliminate the known crosstalk issues. There is also a StubScaler that can be used for local testing. The only thing it does is print a message, but it lets you check whether scaling is being triggered without a Heroku puppet app.
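In the same spirit as StubScaler: since a scaler is just an object the middleware drives through a `workers` accessor, a minimal recording stand-in (hypothetical, not part of the gem) can verify that scaling is being triggered in a plain unit test:

```ruby
# Hypothetical stand-in scaler: records every scale request instead of
# calling the Heroku API, so a test can assert scaling was triggered.
class RecordingScaler
  attr_reader :calls

  def initialize
    @workers = 0
    @calls = []
  end

  # The middleware reads and writes the worker count via this pair.
  def workers
    @workers
  end

  def workers=(count)
    @calls << count
    @workers = count
  end
end

scaler = RecordingScaler.new
scaler.workers = 1 # what client middleware would do when a job is pushed
scaler.workers = 0 # what server middleware would do once queues empty
puts scaler.calls.inspect # => [1, 0]
```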
Great! I'll give the new version a shot and report back. On the previous version I also saw strange random scale-to-1 messages when I could have sworn it was already scaled to 1, which I think is the same thing you mentioned.
Hi,
I have a Heroku application with 3 Sidekiq workers on 3 different queues and I'm trying to make it work with this great gem.
My workers are configured to use the following queues: `enrich`, `mass_enrich`, and `import` (all given as strings, not symbols).
That's my sidekiq.rb conf file:
and that's my Procfile:
My user can only send jobs to the mass_enrich queue. This worker in turn, after performing some filtering and normalization, will send some enrich jobs (can be hundreds or thousands). The EnrichWorker will perform the enrichment and might send an import job.
I've separated to 3 queues because I wanted to ensure that mass_enrichment requests are not stuck behind a lot of enrichments, and that import will always happen at a slow pace without hurting enrichment capacity.
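The pipeline described above could be sketched like this. The class names and the helper methods `item_ids_for` / `needs_import?` are assumptions for illustration; only the queue assignments matter.

```ruby
require 'sidekiq'

class MassEnrichWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'mass_enrich'

  def perform(batch_id)
    # Filter/normalize, then fan out hundreds or thousands of enrich jobs.
    item_ids_for(batch_id).each { |id| EnrichWorker.perform_async(id) }
  end
end

class EnrichWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'enrich'

  def perform(item_id)
    # Enrich the item; may hand off to the slow-paced import queue.
    ImportWorker.perform_async(item_id) if needs_import?(item_id)
  end
end

class ImportWorker
  include Sidekiq::Worker
  sidekiq_options queue: 'import'

  def perform(item_id)
    # Import at a deliberately slow pace on its own queue.
  end
end
```

Since every stage pushes to a different queue from inside a worker process, only the client middleware installed in `configure_server` can trigger scale-up for the downstream queues.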
What happens is that I send the mass_enrichment job and autoscaler starts the mass_enrich process. It handles the job, and I can see that Sidekiq now has waiting jobs on the enrich queue, but then nothing happens.
When I manually send a dummy enrich job (via the rails console), the enrich process starts and handles all of them, but then the same thing happens with the import job.
Is there any problem with workers enqueuing jobs for other workers? Are there other approaches that would solve my problem?
Many thanks, and if I can help in any way, please feel free to ask. I'm not a ruby/rails expert and don't know the middleware layer very well but will be happy to try.
Zach