Introduce an incremental bounce #767
Conversation
@@ -362,9 +386,19 @@ private void drainTaskCleanupQueue() {
      return;
    }

    Map<String, Integer> incrementalBounceRemainingInstanceMap = new HashMap<>();
Can you clarify what this map does? Does it track how many tasks still need to bounce per request?
When doing our cleanup, each task doesn't know what has happened to the task before it. This map lets us keep track of how many cleaning tasks were already killed for a request during this cleanup run (i.e. if we only have room to kill one cleaning task in our incremental bounce, by the time we reach the second task we'll know that slot is already used and we shouldn't kill it).
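To make that concrete, here is a minimal sketch of how such a per-request counter could be consulted and decremented while draining the cleanup queue. Everything besides the map itself (the `TaskCleanup` class, `countKillableNow`, and the loop body) is an illustrative assumption, not the actual Singularity implementation.

```java
import java.util.HashMap;
import java.util.Map;

public class IncrementalBounceSketch {

  // A task queued for cleanup; only the fields this sketch needs (assumed shape).
  static class TaskCleanup {
    final String requestId;
    final String taskId;

    TaskCleanup(String requestId, String taskId) {
      this.requestId = requestId;
      this.taskId = taskId;
    }
  }

  // Per request: how many old instances we may still kill during this single
  // pass over the cleanup queue.
  private final Map<String, Integer> incrementalBounceRemainingInstanceMap = new HashMap<>();

  // Hypothetical helper: how many replacement tasks are already healthy, and
  // therefore how many old tasks we could kill right now.
  private int countKillableNow(String requestId) {
    return 1; // placeholder for "one healthy replacement is up"
  }

  void drainTaskCleanupQueue(Iterable<TaskCleanup> cleanups) {
    for (TaskCleanup cleanup : cleanups) {
      // Seed the counter the first time we see this request in this run.
      int remaining = incrementalBounceRemainingInstanceMap
          .computeIfAbsent(cleanup.requestId, this::countKillableNow);

      if (remaining <= 0) {
        // Earlier tasks for the same request already used up the budget,
        // so leave this task running until more replacements are healthy.
        continue;
      }

      // "Kill" the old task (placeholder) and record that one slot was used,
      // so the next task for this request in the same run sees the update.
      System.out.println("killing old task " + cleanup.taskId);
      incrementalBounceRemainingInstanceMap.put(cleanup.requestId, remaining - 1);
    }
  }
}
```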
LGTM
Introduce an incremental bounce
Currently, in order to bounce a request that has SEPARATE* placement, you need to be running instance_count * 2 slaves. This can be quite a pain for requests with many instances. This PR creates a new type of bounce where we shut down old tasks as new ones are spun up (instead of waiting for all new tasks to be ready first). Using this, you only need a minimum of instance_count + 1 slaves to bounce the same request.
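For intuition, here is a rough sketch (not Singularity code; the names and the toy simulation are illustrative assumptions) comparing the peak number of slaves needed under SEPARATE placement for a standard bounce versus an incremental one.

```java
public class BounceSlaveUsage {

  // Standard bounce: all replacements start before any old task is killed,
  // so old and new tasks coexist and we need instanceCount * 2 slaves.
  static int peakSlavesStandard(int instanceCount) {
    return instanceCount * 2;
  }

  // Incremental bounce: start one replacement, wait for it to be healthy,
  // then kill one old task before starting the next replacement. At most one
  // "extra" task exists at a time, so instanceCount + 1 slaves suffice.
  static int peakSlavesIncremental(int instanceCount) {
    int old = instanceCount;
    int fresh = 0;
    int peak = old;
    while (old > 0) {
      fresh++;                           // launch one replacement on a free slave
      peak = Math.max(peak, old + fresh); // old + new tasks running concurrently
      old--;                             // replacement healthy -> kill one old task
    }
    return peak;                         // ends up as instanceCount + 1
  }

  public static void main(String[] args) {
    int n = 10;
    System.out.println("standard bounce peak slaves:    " + peakSlavesStandard(n));    // 20
    System.out.println("incremental bounce peak slaves: " + peakSlavesIncremental(n)); // 11
  }
}
```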