Cluster shutdown "events" #16337
Comments
@bgrant0607 @brendandburns @smarterclayton
Related #4630
Is it the pod (the LB plugin?) that creates the forwarding rule?
Ideally there would be some Kubernetes resources that represented the underlying infrastructure resources, such that deleting the former would cause the latter to be deleted.
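For illustration, a minimal sketch of the cascading-deletion idea described above, using the owner-reference / garbage-collection mechanism Kubernetes later grew (all names here are illustrative, and this only covers in-cluster objects, not the underlying cloud resources):

```go
package main

import (
	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
)

// dependentConfigMap is a sketch (all names are illustrative): the ConfigMap
// lists the Service as its owner, so the garbage collector deletes the
// ConfigMap automatically once the owning Service is deleted.
func dependentConfigMap(owner *corev1.Service) *corev1.ConfigMap {
	controller := true
	return &corev1.ConfigMap{
		ObjectMeta: metav1.ObjectMeta{
			Name:      "lb-settings",
			Namespace: owner.Namespace,
			OwnerReferences: []metav1.OwnerReference{{
				APIVersion: "v1",
				Kind:       "Service",
				Name:       owner.Name,
				UID:        owner.UID,
				Controller: &controller,
			}},
		},
	}
}
```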
See also #13515
My example was with a HEAD kube cluster, so it's the service controller (which runs as part of kube-controller-manager) that creates the forwarding rule for a Service of Type=LoadBalancer, not the L7 LB plugin pod.
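For context, a hedged sketch (written against current client-go; the kubeconfig path and all names are assumptions) of creating the kind of object involved: once a Service of Type=LoadBalancer exists, the service controller in kube-controller-manager asks the cloud provider to provision the forwarding rule for it.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	// Build a client from the local kubeconfig (path is an assumption).
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// A Service of Type=LoadBalancer: the service controller, not the
	// Ingress/L7 pod, asks the cloud provider for the forwarding rule.
	svc := &corev1.Service{
		ObjectMeta: metav1.ObjectMeta{Name: "demo-lb"},
		Spec: corev1.ServiceSpec{
			Type:     corev1.ServiceTypeLoadBalancer,
			Selector: map[string]string{"app": "demo"},
			Ports:    []corev1.ServicePort{{Port: 80}},
		},
	}
	created, err := client.CoreV1().Services("default").Create(context.TODO(), svc, metav1.CreateOptions{})
	if err != nil {
		panic(err)
	}
	fmt.Println("created service:", created.Name)
}
```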
Yes, GKE still really wants this. It's a pretty big wart.
Just receiving a delete is insufficient; in the case of a pod controller, I need to differentiate that SIGTERM from a preemption. A new field, as suggested in some of the other issues, might help, because I can use the grace period to check it in the apiserver.
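A sketch of the gap being described: a controller can observe that a pod is being deleted and with what grace period, but not why. The annotation below is purely hypothetical and stands in for the kind of new field being requested.

```go
package main

import (
	"fmt"

	corev1 "k8s.io/api/core/v1"
)

// deletionReason illustrates what a controller can and cannot see today.
// DeletionTimestamp and DeletionGracePeriodSeconds say *that* a pod is being
// deleted; nothing standard says *why* (user delete, preemption, or cluster
// shutdown). The "shutdown-reason" annotation is purely hypothetical.
func deletionReason(pod *corev1.Pod) string {
	if pod.DeletionTimestamp == nil {
		return "not being deleted"
	}
	if reason, ok := pod.Annotations["example.com/shutdown-reason"]; ok { // hypothetical annotation
		return reason // e.g. "cluster-shutdown" vs "preemption"
	}
	if pod.DeletionGracePeriodSeconds != nil {
		return fmt.Sprintf("deleted with %ds grace period, reason unknown", *pod.DeletionGracePeriodSeconds)
	}
	return "deleted, reason unknown"
}
```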
Another possible solution would be to map the "Steps" in the simple cluster setup proposal (https://docs.google.com/document/d/1v68yStV2O6aHuRuT3AlWnbe6vkUCtzyC5I7Elhgik3o/edit?ts=561ee618) to runlevels persisted through the apiserver, and to add a shutdown level just like init systems have, hence the title.
Naively, deleting the Ingress object should trigger the controller Pod to tear down the GCLB resources it created.
Re. pods deleted last: another use case for finalizers. #3585
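As a sketch of that pattern (the finalizer name is hypothetical), a controller could add a finalizer to a Service so that, after deletion, the object lingers until the external load balancer has been torn down and the finalizer removed:

```go
package main

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

// ensureLBFinalizer adds a (hypothetical) finalizer to a Service. While the
// finalizer is present, deleting the Service only marks it for deletion; the
// controller can then tear down the external load balancer before removing
// the finalizer and letting the object go away.
func ensureLBFinalizer(ctx context.Context, client kubernetes.Interface, ns, name string) error {
	svc, err := client.CoreV1().Services(ns).Get(ctx, name, metav1.GetOptions{})
	if err != nil {
		return err
	}
	const finalizer = "example.com/cloud-loadbalancer" // hypothetical finalizer name
	for _, f := range svc.Finalizers {
		if f == finalizer {
			return nil // already present
		}
	}
	svc.Finalizers = append(svc.Finalizers, finalizer)
	_, err = client.CoreV1().Services(ns).Update(ctx, svc, metav1.UpdateOptions{})
	return err
}
```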
Also vaguely related to #7459
@bprashanth I'm marking this P2 but feel free to upgrade to P1 if you think it's higher priority.
Also related to #10179
Note that new resources cannot be created in namespaces that are being deleted.
And we already treat the default and kube-system namespaces specially (and openshift-infra in OpenShift).
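A minimal sketch of the behavior noted in the two comments above, assuming current client-go and an illustrative namespace name: the NamespaceLifecycle admission plugin rejects new objects in a terminating namespace, and the same plugin also refuses to delete default and kube-system.

```go
package main

import (
	"context"
	"fmt"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
	"k8s.io/client-go/tools/clientcmd"
)

func main() {
	cfg, err := clientcmd.BuildConfigFromFlags("", clientcmd.RecommendedHomeFile)
	if err != nil {
		panic(err)
	}
	client := kubernetes.NewForConfigOrDie(cfg)

	// "doomed-ns" is an illustrative namespace assumed to already be in the
	// Terminating phase; the NamespaceLifecycle admission plugin rejects
	// creation of new objects in it.
	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "demo"}}
	_, err = client.CoreV1().ConfigMaps("doomed-ns").Create(context.TODO(), cm, metav1.CreateOptions{})
	if apierrors.IsForbidden(err) {
		fmt.Println("rejected: namespace is terminating:", err)
	}
}
```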
Issues go stale after 30d of inactivity. Prevent issues from auto-closing with a /lifecycle frozen comment. If this issue is safe to close now, please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Should this issue be consolidated with something else, or kept open for the tear down of GCLB resources?
/remove-lifecycle stale
/cc @k4leung4
I'd like to delete certain external resources (loadbalancers) only when the cluster is being torn down, not when the user deletes the pod or the scheduler preempts it. Currently, scripts like kube-down aren't Kubernetes-aware; they just start nuking cluster resources using gcloud, meaning the following happens:
Having at least the following guarantees when there's a pending shutdown will make life a little easier:
@thockin