Currently, Solid Queue doesn't boot if the ActiveRecord connection pool is smaller than the estimated number of threads, because of the following check:
```ruby
# lib/solid_queue/configuration.rb, lines 91–96 at 176721e
def ensure_correctly_sized_thread_pool
  if (db_pool_size = SolidQueue::Record.connection_pool&.size) && db_pool_size < estimated_number_of_threads
    errors.add(:base, "Solid Queue is configured to use #{estimated_number_of_threads} threads but the " +
      "database connection pool is #{db_pool_size}. Increase it in `config/database.yml`")
  end
end
```
Though I can see an argument for making this an advisory limitation, there doesn't appear to be any reason for a hard cap.
We are currently monkey patching Solid Queue to remove this limitation via the following in our `bin/jobs` file:

```ruby
module SolidQueue
  class Configuration
    # No-op the pool size validation so worker threads can outnumber connections.
    def ensure_correctly_sized_thread_pool
    end
  end
end
```
This allows us to run our "parallel_io" queue with 150 worker threads but only 15 database connections without issues.
This queue mostly handles OpenAI API calls, which typically take 6+ seconds to complete. So while 150 worker threads sounds like a lot, they spend nearly all their time waiting on slow HTTP requests, and the overall database throughput is quite low.
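To illustrate why many IO-bound threads can share a much smaller connection pool, here is a minimal Ruby sketch (not Solid Queue code; a `SizedQueue` of tokens stands in for the ActiveRecord connection pool, and `sleep` stands in for the slow HTTP call):

```ruby
POOL_SIZE = 15   # simulated database connections
THREADS   = 150  # worker threads

# Popping a token blocks whenever all 15 "connections" are checked out,
# mimicking connection pool checkout.
pool = SizedQueue.new(POOL_SIZE)
POOL_SIZE.times { |i| pool << i }

done = Queue.new

workers = Array.new(THREADS) do
  Thread.new do
    sleep(rand * 0.05)    # slow HTTP call; no connection held here
    conn = pool.pop       # check out a connection only briefly
    begin
      done << conn        # stand-in for a quick INSERT/UPDATE
    ensure
      pool << conn        # return the connection immediately
    end
  end
end

workers.each(&:join)
puts done.size
```

Because each thread holds a connection only for the brief write at the end, all 150 jobs complete while never needing more than 15 connections at once.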
Allowing a connection pool smaller than the number of worker threads would also resolve #627.
There is also the async scheduler work done in #728, which may provide an even more efficient solution for high-IO workloads. In the meantime, a configuration statement or `bin/jobs` argument allowing arbitrary connection pool vs. worker thread sizes should be a super simple improvement.
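One possible shape for such an option (entirely hypothetical; neither the `skip_thread_pool_check` name nor this setting exists in Solid Queue today) might be:

```ruby
# config/initializers/solid_queue.rb — hypothetical API sketch only
Rails.application.configure do
  # Would downgrade the pool-size check from a boot error to a warning,
  # letting IO-heavy queues run more threads than database connections.
  config.solid_queue.skip_thread_pool_check = true
end
```

Any spelling would do; the point is an opt-out that turns the hard cap into an advisory warning.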