
Bug 1809665: Start graceful shutdown on SIGTERM #94

Merged

Conversation

@smarterclayton (Contributor) commented Mar 1, 2020

Graceful shutdown in Kubernetes requires:

  1. Wait for the frontend to be taken out of rotation by kube-proxy (fast, 5s or so) and external load balancers (slow, 45s is reasonable across all clouds) so that new connections are not allowed
  2. Stop accepting new connections
  3. Drain existing connections within some timeout
  4. Exit with 0 if draining completes before the timeout; exit with 1 after the timeout expires with connections still outstanding

This PR updates the router to implement the above logic.

When the main process receives SIGTERM or SIGINT, it waits ROUTER_GRACEFUL_SHUTDOWN_DELAY (default: 45) seconds and then signals the reload script with ROUTER_SHUTDOWN=true that a graceful termination is requested. The reload-haproxy script then sends USR1 to all child haproxy processes and waits ROUTER_MAX_SHUTDOWN_TIMEOUT or MAX_RELOAD_WAIT_TIME (default: 30) seconds before sending TERM to the child processes. If TERM is sent, the script exits with 1, indicating that not all processes completed their work.

Clients with long-running requests should set ROUTER_MAX_SHUTDOWN_TIMEOUT as appropriate to ensure all connections exit cleanly.
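
Roughly, the flow above looks like the following Go sketch (illustrative only: the exec path and overall structure are simplified assumptions; only the environment variable names and defaults come from this PR):

package main

import (
    "log"
    "os"
    "os/exec"
    "os/signal"
    "strconv"
    "syscall"
    "time"
)

func main() {
    // 1. Wait for SIGTERM or SIGINT from the kubelet.
    stop := make(chan os.Signal, 1)
    signal.Notify(stop, syscall.SIGTERM, syscall.SIGINT)
    <-stop

    // 2. Wait ROUTER_GRACEFUL_SHUTDOWN_DELAY (default 45s) so kube-proxy and
    // external load balancers take this endpoint out of rotation first.
    delay := 45 * time.Second
    if v, err := strconv.Atoi(os.Getenv("ROUTER_GRACEFUL_SHUTDOWN_DELAY")); err == nil {
        delay = time.Duration(v) * time.Second
    }
    log.Printf("Shutdown requested, waiting %s for new connections to cease", delay)
    time.Sleep(delay)

    // 3. Ask the reload script to drain by setting ROUTER_SHUTDOWN=true. The
    // script sends USR1 to the haproxy children, waits ROUTER_MAX_SHUTDOWN_TIMEOUT
    // (or MAX_RELOAD_WAIT_TIME, default 30s), then sends TERM and exits 1 if
    // connections are still outstanding.
    cmd := exec.Command("reload-haproxy") // path simplified for illustration
    cmd.Env = append(os.Environ(), "ROUTER_SHUTDOWN=true")
    if err := cmd.Run(); err != nil {
        os.Exit(1) // drain did not complete before the timeout
    }
    os.Exit(0)
}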

@openshift-ci-robot openshift-ci-robot added the size/L Denotes a PR that changes 100-499 lines, ignoring generated files. label Mar 1, 2020
@openshift-ci-robot openshift-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Mar 1, 2020
@smarterclayton force-pushed the signal_shutdown branch 2 times, most recently from 7fc4d56 to 1b7654c on March 2, 2020
@smarterclayton (Contributor Author) commented Mar 2, 2020

Depends on openshift/cluster-ingress-operator#366 (we're getting aborted early)

@smarterclayton (Contributor Author)

Testing an upgrade from this PR + 366 to itself (changed images only): https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/19735. It got 2s of disruption in the console, which should be fixed by openshift/console-operator#385.

@smarterclayton (Contributor Author)

Remaining change here: the pod needs to start reporting not ready as soon as it is marked deleted. Otherwise I'm testing the logging and output.

@knobunc knobunc requested review from ironcladlou, frobware, Miciah and danehans and removed request for pecameron March 2, 2020 19:23
@knobunc (Contributor) commented Mar 2, 2020

This approach, and the code, looks good to me. I'm avoiding saying lgtm in case you intend to add the not-ready reporting before it gets merged.

@smarterclayton (Contributor Author)

Yeah, I want to see all three of the PRs working together before I merge any one.

@smarterclayton (Contributor Author)

/hold

@openshift-ci-robot openshift-ci-robot added the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 2, 2020
@smarterclayton (Contributor Author)

/hold cancel

Verified the health check during shutdown with:

$ oc rsh -n openshift-ingress deploy/router-default
sh-4.2$ kill 1
sh-4.2$ curl http://localhost:1936/healthz/ready
[+]backend-http ok
[+]has-synced ok
[-]process-running failed: reason withheld
healthz check failed
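
For reference, the behavior being checked above amounts to roughly the following Go sketch (hypothetical names and wiring; the router's actual healthz integration is more involved):

package main

import (
    "log"
    "net/http"
    "sync/atomic"
)

// shuttingDown would be set to true by the signal handler once SIGTERM arrives.
var shuttingDown atomic.Bool

func readyHandler(w http.ResponseWriter, r *http.Request) {
    if shuttingDown.Load() {
        // Report not-ready so load balancers take this endpoint out of rotation.
        http.Error(w, "healthz check failed", http.StatusServiceUnavailable)
        return
    }
    w.Write([]byte("ok"))
}

func main() {
    http.HandleFunc("/healthz/ready", readyHandler)
    log.Fatal(http.ListenAndServe(":1936", nil))
}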

Verified that route ingresses remain available during upgrade when testing all three PRs together: https://prow.svc.ci.openshift.org/view/gcs/origin-ci-test/logs/release-openshift-origin-installer-e2e-aws-upgrade/19806

This is ready for review.

@openshift-ci-robot openshift-ci-robot removed the do-not-merge/hold Indicates that a PR should not merge because someone has issued a /hold command. label Mar 3, 2020
pkg/router/shutdown/shutdown.go (outdated)
// which is closed on one of these signals. If a second signal is caught, the program
// is terminated with exit code 1.
func SetupSignalHandler() <-chan struct{} {
close(onlyOneSignalHandler) // panics when called twice
Contributor

What could call SetupSignalHandler twice? Is the purpose of this line to guard against future coding errors?

@frobware (Contributor) Mar 3, 2020

Contributor Author

This is the same code as genericapiserver. It's not appropriate for router to take a dependency on it, and this code is straightforward.

Contributor Author

Yes, the panic guards against coding errors.
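
For context, the genericapiserver-style pattern referenced here looks roughly like the sketch below; only the lines quoted in the excerpt above are known to match this PR exactly, the rest is the standard upstream shape:

package shutdown

import (
    "os"
    "os/signal"
    "syscall"
)

// onlyOneSignalHandler is closed by SetupSignalHandler; closing an already
// closed channel panics, which is what guards against registering twice.
var onlyOneSignalHandler = make(chan struct{})

// SetupSignalHandler registers for SIGTERM and SIGINT. A stop channel is returned
// which is closed on one of these signals. If a second signal is caught, the program
// is terminated with exit code 1.
func SetupSignalHandler() <-chan struct{} {
    close(onlyOneSignalHandler) // panics when called twice

    stop := make(chan struct{})
    c := make(chan os.Signal, 2)
    signal.Notify(c, syscall.SIGTERM, syscall.SIGINT)
    go func() {
        <-c
        close(stop) // first signal: begin graceful shutdown
        <-c
        os.Exit(1) // second signal: exit immediately
    }()
    return stop
}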

images/router/haproxy/reload-haproxy (outdated)
fi
readonly wait_time=${ROUTER_MAX_SHUTDOWN_TIMEOUT:-${MAX_RELOAD_WAIT_TIME:-$max_wait_time}}
# Ask the old haproxy processes to finish their existing connections and exit.
kill -USR1 $old_pids
# Poll for up to $wait_time seconds for the old processes to go away.
for i in $( seq 1 $wait_time ); do
Contributor

Not sure it matters enough to worry about, but the following would be more precise:

  stop=$((SECONDS + wait_time))
  while ((SECONDS < stop)); do
    # ...

Contributor Author

Hrm, that construct looks much harder to understand for a casual reader. Why is it more precise?

Contributor

Because the condition is true once $wait_time seconds have passed since the assignment. In contrast, sleeping for 1 second n times fails to account for execution time. It probably does not matter here, as the only commands are builtins and pidof, which should be fast; it would matter more for a command like, say, curl, where the loop may spend more time executing commands than sleeping. That is why I say it probably does not matter enough to worry about, and if the for plus seq is easier to read, we can run with that.

Contributor Author

Yeah, I'd prefer to use the simpler construct if we don't have a concrete reason to avoid it (waiting slightly longer should not be an issue).
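
To make the precision trade-off concrete (a Go illustration only; the actual script stays in bash): a fixed-count sleep loop runs for the sleep total plus whatever each check costs, while a deadline-based loop stops once the deadline passes regardless of per-iteration cost.

package main

import (
    "fmt"
    "time"
)

// waitFixedCount sleeps 1s per iteration, so total elapsed time is roughly
// waitTime seconds plus whatever time check() takes on each pass.
func waitFixedCount(waitTime int, check func() bool) bool {
    for i := 0; i < waitTime; i++ {
        if check() {
            return true
        }
        time.Sleep(time.Second)
    }
    return false
}

// waitDeadline stops once waitTime seconds have elapsed since the deadline was
// computed, regardless of how long each check() call takes.
func waitDeadline(waitTime int, check func() bool) bool {
    deadline := time.Now().Add(time.Duration(waitTime) * time.Second)
    for time.Now().Before(deadline) {
        if check() {
            return true
        }
        time.Sleep(time.Second)
    }
    return false
}

func main() {
    drained := func() bool { return false } // stand-in for "old haproxy pids are gone"
    fmt.Println(waitFixedCount(2, drained), waitDeadline(2, drained))
}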

pkg/cmd/infra/router/template.go (outdated)
// endpoint out of rotation.
delay := getIntervalFromEnv("ROUTER_GRACEFUL_SHUTDOWN_DELAY", 45)
log.Info(fmt.Sprintf("Shutdown requested, waiting %s for new connections to cease", delay))
time.Sleep(delay)
Contributor

The thing that sends the signal that triggers the graceful shutdown is the kubelet, when the pod is marked for deletion, right? As soon as a pod is marked for deletion, the pod is removed from endpoints, so it should not be receiving new connections. So once the endpoints controller updates the endpoints in response to the pod's deletion and the service proxy updates in response to the endpoints update, we're really just waiting for already established connections to drain, right? Where does the 45-second delay come from?

Contributor Author

No, you're waiting for distributed load balancers to take you out of rotation.

  1. Pod marked for deletion in etcd
  2. Delete notification propagates to all consumers
  3. Consumers stop directing new traffic to the endpoint

1 is fast. 2 may take up to 5-10s depending on load. 3 takes as long as any global load balancer in front of the service needs to detect a not-ready service, which is (unhealthy checks + 1) * check interval, or 32s for GCP. See https://docs.google.com/document/d/1BUmtdTth49V02UZ5EjRvJ92A5vjF8wMJMSdPb1Wz3wQ/edit# for an explanation (that will become part of openshift/enhancements).
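
(For illustration only, using assumed values rather than confirmed GCP defaults: with an unhealthy threshold of 3 checks and an 8-second check interval, (3 + 1) * 8s = 32s before the load balancer stops sending new connections.)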

pkg/router/template/router.go (outdated)
@smarterclayton smarterclayton changed the title Start graceful shutdown on SIGTERM Bug 1809665: Start graceful shutdown on SIGTERM Mar 3, 2020
@openshift-ci-robot openshift-ci-robot added the bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. label Mar 3, 2020
@openshift-ci-robot (Contributor)

@smarterclayton: This pull request references Bugzilla bug 1809665, which is invalid:

  • expected the bug to be in one of the following states: NEW, ASSIGNED, ON_DEV, POST, POST, but it is MODIFIED instead

Comment /bugzilla refresh to re-evaluate validity if changes to the Bugzilla bug are made, or edit the title of this pull request to link to a different bug.

In response to this:

Bug 1809665: Start graceful shutdown on SIGTERM

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

@smarterclayton (Contributor Author)

4.4 bug is 1809667 and 4.3 bug is 1809668

@smarterclayton (Contributor Author)

/cherry-pick release-4.4

@openshift-cherrypick-robot

@smarterclayton: once the present PR merges, I will cherry-pick it on top of release-4.4 in a new PR and assign it to you.

In response to this:

/cherry-pick release-4.4


@smarterclayton (Contributor Author)

/cherry-pick release-4.3

@smarterclayton (Contributor Author) commented Mar 3, 2020

Both jobs passed with zero disruption (just verifying I didn't break the graceful shutdown with the changes to reload-haproxy).

@smarterclayton (Contributor Author)

/retest

1 similar comment

@smarterclayton (Contributor Author)

Hrm, that's super weird. AWS install fails because it can't find the haproxy-config.template. That's scary.

@smarterclayton (Contributor Author)

/retest

@Miciah (Contributor) commented Mar 3, 2020

/lgtm

@openshift-ci-robot openshift-ci-robot added the lgtm Indicates that a PR is ready to be merged. label Mar 3, 2020
@smarterclayton (Contributor Author)

/retest

@openshift-ci-robot (Contributor)

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: Miciah, smarterclayton

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

3 similar comments

@smarterclayton (Contributor Author)

/bugzilla refresh

@openshift-ci-robot (Contributor)

@smarterclayton: This pull request references Bugzilla bug 1809665, which is valid. The bug has been updated to refer to the pull request using the external bug tracker.

3 validation(s) were run on this bug
  • bug is open, matching expected state (open)
  • bug target release (4.5.0) matches configured target release for branch (4.5.0)
  • bug is in the state POST, which is one of the valid states (NEW, ASSIGNED, ON_DEV, POST, POST)

In response to this:

/bugzilla refresh


@openshift-ci-robot openshift-ci-robot added bugzilla/valid-bug Indicates that a referenced Bugzilla bug is valid for the branch this PR is targeting. and removed bugzilla/invalid-bug Indicates that a referenced Bugzilla bug is invalid for the branch this PR is targeting. labels Mar 4, 2020
@openshift-bot (Contributor)

/retest

Please review the full test history for this PR and help us cut down flakes.

3 similar comments

@openshift-cherrypick-robot

@smarterclayton: new pull request created: #97

In response to this:

/cherry-pick release-4.4


@openshift-cherrypick-robot

@smarterclayton: new pull request created: #98

In response to this:

/cherry-pick release-4.3
