
Prevent overwhelming of worker nodes by dynamically sizing thread pools #469

Open
JamesRTaylor opened this issue Oct 9, 2020 · 2 comments

JamesRTaylor (Contributor)

Instead of fixing the local transfer thread pool and bookkeeper thread pool at 4096 threads, they should be sized dynamically based on the formula that @stagraqubole outlined here:

    rubix.pool.size.max=P
    number-of-nodes=N
    max-threads=P*N

So in a 100-node cluster with rubix.pool.size.max=4, this lowers the thread pool size from 4096 to 400.

Rather than an absolute size, a config could be introduced that expresses a percentage increase/decrease relative to this dynamically calculated size.

Having two thread pools of 4096 threads each, on top of the work a worker node is already doing, leads to worker nodes becoming unresponsive.
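The P×N sizing above, combined with the suggested percentage adjustment, could be sketched as follows. This is a minimal illustration only; the method name and the percentage parameter are hypothetical, not actual RubiX configuration or API:

```java
// Hypothetical sketch of dynamic thread pool sizing: max-threads = P * N,
// optionally adjusted up or down by a configured percentage.
// Not an actual RubiX API; names here are illustrative.
public class DynamicPoolSizer {

    /**
     * @param poolSizeMaxPerNode  per-node pool size (rubix.pool.size.max, i.e. P)
     * @param nodeCount           number of worker nodes in the cluster (N)
     * @param adjustmentPercent   proposed config: percentage above/below P * N
     */
    static int computeMaxThreads(int poolSizeMaxPerNode, int nodeCount,
                                 double adjustmentPercent) {
        int base = poolSizeMaxPerNode * nodeCount;              // P * N
        long adjusted = Math.round(base * (1.0 + adjustmentPercent / 100.0));
        return (int) Math.max(1, adjusted);                     // never size to zero
    }

    public static void main(String[] args) {
        // 100-node cluster with rubix.pool.size.max=4: 400 threads instead of 4096
        System.out.println(computeMaxThreads(4, 100, 0.0));
        // Same cluster with 25% headroom via the proposed percentage config
        System.out.println(computeMaxThreads(4, 100, 25.0));
    }
}
```

With a formula like this, the out-of-the-box default would track cluster size instead of relying on a fixed 4096 that only suits very large clusters.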

@JamesRTaylor JamesRTaylor changed the title Size the local transfer thread pool and bookeeper thread pool dynamically Prevent overwhelming of worker nodes by dynamically sizing thread pools Oct 9, 2020
sopel39 (Contributor) commented Oct 9, 2020

Should the default be lower than 4096 then, e.g. 512? @stagraqubole?

JamesRTaylor (Contributor, Author) commented Oct 9, 2020

The correct sizing of the pool is closely tied to the number of worker nodes. Sizing too small causes many more queries to time out, while sizing too large can cause the node to become unresponsive. With @stagraqubole's help, we tuned our cluster of 110 worker nodes with the following config values to find the right balance and solve this issue:

    rubix.pool.size.max=8
    rubix.local.transfer.max-threads=1200
    rubix.cache.bookkeeper.max-threads=1200
    rubix.pool.wait.timeout=200

This took a lot of trial and error, though. To improve the out-of-the-box experience, it'd be good if the thread pool sizes were dynamically determined, with a config value expressed not as an absolute size but as a percentage above/below the calculated size.
