Should we remove the listener thread pool? #53049
Comments
Pinging @elastic/es-core-infra (:Core/Infra/Core)
I missed an additional use of this thread pool: refresh listeners are also called back on it. We discussed this during the distributed team sync, agreed that we should aim to reduce the number of thread pools in Elasticsearch, and settled on a plan.
+1. Since this issue is still labeled team-discuss and it was discussed during the distributed sync, is there something specific that needs further discussion?
I've removed the label. 🙂
@original-brownbear Thanks for being my review buddy on this one, and for your helpful comments on the PRs!
Previously, this thread pool was used primarily by the transport client to ensure that listeners are not called back on network threads.
We also adopted it for global checkpoint listeners: shard-level callbacks that are executed when the shard's global checkpoint advances past a per-listener threshold.
With the removal of the transport client, this leaves global checkpoint listeners as the only use of the listener thread pool.
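The dispatch pattern described above can be sketched generically: a registry holds (threshold, callback) pairs, and when the global checkpoint advances, satisfied callbacks are handed to a dedicated executor so the notifying thread (typically a network thread) never runs user code directly. This is an illustrative sketch only, not Elasticsearch's actual GlobalCheckpointListeners implementation; all class and method names here are hypothetical.

```java
import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ExecutorService;
import java.util.function.LongConsumer;

// Hypothetical sketch of threshold-based checkpoint listeners that are
// always called back on a dedicated executor, never on the caller's thread.
final class CheckpointListeners {
    private record Entry(long threshold, LongConsumer listener) {}

    private final ExecutorService executor;
    private final List<Entry> entries = new ArrayList<>();
    private long globalCheckpoint = -1;

    CheckpointListeners(ExecutorService executor) {
        this.executor = executor;
    }

    synchronized void addListener(long threshold, LongConsumer listener) {
        if (globalCheckpoint >= threshold) {
            // Already satisfied: still dispatch asynchronously for consistency.
            final long current = globalCheckpoint;
            executor.execute(() -> listener.accept(current));
        } else {
            entries.add(new Entry(threshold, listener));
        }
    }

    synchronized void checkpointAdvanced(long newCheckpoint) {
        globalCheckpoint = newCheckpoint;
        // Fire and remove every listener whose threshold has been reached;
        // callbacks run on the executor, not on this (notifying) thread.
        entries.removeIf(e -> {
            if (newCheckpoint >= e.threshold()) {
                executor.execute(() -> e.listener().accept(newCheckpoint));
                return true;
            }
            return false;
        });
    }
}
```

The key property is that `checkpointAdvanced` only enqueues work; whichever pool backs the executor (today, the listener pool) absorbs the callback cost.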
I would love to remove another thread pool from Elasticsearch.
Should we remove the listener thread pool from Elasticsearch? This would be a breaking change for any user who has manually configured the size of this thread pool. However, such configuration would be unexpected, since tuning this thread pool should not be necessary.
Therefore, while this would be a breaking change, we expect the impact on the user base to be extremely minimal.
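For context, the manual configuration that removal would break looks along these lines in `elasticsearch.yml`. The exact keys shown (`thread_pool.listener.size`, `thread_pool.listener.queue_size`) are an assumption about the listener pool's settings namespace, shown for illustration:

```yaml
# Hypothetical example: explicitly sizing the listener thread pool.
# If the pool is removed, nodes with these settings would fail to start,
# since Elasticsearch rejects unknown settings.
thread_pool:
  listener:
    size: 4
    queue_size: 1000
```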