LUCENE-9074: Slice Allocation Circuit Breakers in IndexSearcher #1049

Closed
wants to merge 3 commits into from

Conversation

atris (Contributor) commented Dec 2, 2019

This commit introduces accounting for the queue length of the ExecutorService
used to perform concurrent search when allocating slices for an
IndexSearcher. It also introduces an abstraction for defining custom
parameters for sealing bulkheads under heavy node stress, allowing more
predictable latency behaviour under varying load.
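
For illustration only, a minimal sketch of what a queue-length based trigger could look like (the class name and threshold below are hypothetical and not necessarily the API in this PR; only hasCircuitBreakerTriggered appears in the diff):

import java.util.concurrent.ThreadPoolExecutor;

// Hypothetical sketch: trips once the executor's work queue exceeds a configured limit.
class QueueLengthCircuitBreaker {
  private final ThreadPoolExecutor executor;
  private final int maxQueuedTasks; // illustrative threshold, not from the PR

  QueueLengthCircuitBreaker(ThreadPoolExecutor executor, int maxQueuedTasks) {
    this.executor = executor;
    this.maxQueuedTasks = maxQueuedTasks;
  }

  // Return true if the circuit breaker condition has triggered, false otherwise
  boolean hasCircuitBreakerTriggered() {
    return executor.getQueue().size() > maxQueuedTasks;
  }
}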

@atris atris requested a review from jpountz December 2, 2019 10:45
atris (Contributor Author) commented Dec 4, 2019

Raised another iteration -- please review and let me know your comments.

/**
 * Return true if the circuit breaker condition has triggered,
 * false otherwise
 */
boolean hasCircuitBreakerTriggered();

Contributor:

Maybe we should move the logic of assigning tasks to threads to this class to give it more flexibility instead of just exposing whether the pool is running over capacity. I'm thinking of something that could look like void invokeAll(Collection<Runnable> tasks) (similar to ForkJoinPool), which could then decide to merge some runnables together, run some of them on the current thread, etc.
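
As a rough sketch of that idea, assuming a plain ThreadPoolExecutor and a made-up saturation threshold (class and field names here are illustrative, not this PR's implementation), such an invokeAll could fall back to the caller thread when the pool is over capacity:

import java.util.Collection;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative only -- not the implementation in this PR.
class SliceTaskExecutor {
  private final ThreadPoolExecutor executor;
  private final int maxQueuedTasks; // hypothetical saturation threshold

  SliceTaskExecutor(ThreadPoolExecutor executor, int maxQueuedTasks) {
    this.executor = executor;
    this.maxQueuedTasks = maxQueuedTasks;
  }

  // Submit each task to the pool; once the queue looks saturated,
  // run the remaining tasks on the caller thread instead of queueing more work.
  void invokeAll(Collection<Runnable> tasks) {
    for (Runnable task : tasks) {
      if (executor.getQueue().size() > maxQueuedTasks) {
        task.run();
      } else {
        executor.execute(task);
      }
    }
  }
}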

Contributor Author:

Would that also include creating the actual slices? We would need this class to be involved in that process as well, since it can control the number of slices being created. Or maybe we create the slices as we do today, but run multiple slices on the caller thread when things are hot?

Contributor:

I was thinking the latter indeed, having more and more slices in a single task as the number of entries in the queue increases.
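
One way to picture that (purely illustrative, with a made-up growth rule; nothing below is from the PR): pack more leaf slices into each task as the executor queue grows.

import java.util.ArrayList;
import java.util.List;
import java.util.concurrent.ThreadPoolExecutor;

// Illustrative grouping only; the actual policy would be tuned differently.
class SliceGrouper {
  // Pack more slices into each task as the executor queue fills up:
  // 1 slice per task when idle, 2 per task at 10 queued tasks, 3 at 20, and so on.
  static <T> List<List<T>> group(List<T> slices, ThreadPoolExecutor executor) {
    int slicesPerTask = 1 + executor.getQueue().size() / 10;
    List<List<T>> tasks = new ArrayList<>();
    for (int i = 0; i < slices.size(); i += slicesPerTask) {
      tasks.add(slices.subList(i, Math.min(i + slicesPerTask, slices.size())));
    }
    return tasks;
  }
}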

atris (Contributor Author) commented Jan 27, 2020

Superseded by #1214

@atris atris closed this Jan 27, 2020