Need an API call for "teardown all external resources" #4630
Comments
Can you please explain what this API would look like? I'm not sure that widening the API for a central component is the right approach.
Just to be clear - I think that removal of internal services should be handled by the master itself, but I see no reason why we need an API call to delete external resources for user-defined services.
First, the meta-point: kube-down.sh isn't the only setup and teardown

The second meta-point: today ELBs are the biggest issue. There are

As far as what the API looks like, we were envisioning something like:

The API should probably be best-effort semantics, since it's going down
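Purely as a hypothetical illustration (the envisioned API is not spelled out in this thread), a best-effort `/teardown` endpoint might look roughly like the sketch below. Every name here is invented; this is not the authors' actual proposal.

```go
// Hypothetical sketch of a best-effort "/teardown" endpoint on the master.
// All names are invented for illustration.
package main

import (
	"fmt"
	"log"
	"net/http"
)

// Stubs standing in for real cloud-provider calls.
func listLoadBalancers() []string          { return []string{"lb-a", "lb-b"} }
func deleteLoadBalancer(name string) error { return nil }

// teardownExternalResources deletes every external resource the cluster
// created (today, external load balancers). It collects errors instead of
// aborting, since the cluster is going away regardless.
func teardownExternalResources() []error {
	var errs []error
	for _, lb := range listLoadBalancers() {
		if err := deleteLoadBalancer(lb); err != nil {
			errs = append(errs, fmt.Errorf("deleting load balancer %q: %v", lb, err))
		}
	}
	return errs
}

func main() {
	http.HandleFunc("/teardown", func(w http.ResponseWriter, r *http.Request) {
		// Best-effort semantics: report failures but never block teardown.
		if errs := teardownExternalResources(); len(errs) > 0 {
			http.Error(w, fmt.Sprintf("%d resources not deleted", len(errs)), http.StatusInternalServerError)
			return
		}
		w.WriteHeader(http.StatusOK)
	})
	log.Fatal(http.ListenAndServe(":8080", nil))
}
```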
I just saw your next comment. Why a distinction between user services and
Also, to be clear, the API can just outright delete the services, too. I don't see a reason it has to delete the underlying resources versus the services themselves, since it's running in the shutdown path. If we really want to be delicate, we can terminate all objects (#1535) first.
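For illustration, "outright delete the services" could look roughly like the following with client-go; the loop and function name are assumptions, not code from this thread:

```go
// Sketch: delete every Service in every namespace via the normal API,
// letting the service controller tear down the underlying cloud resources.
// Assumes client setup (in-cluster or kubeconfig) happens elsewhere.
package teardown

import (
	"context"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteAllServices(ctx context.Context, client kubernetes.Interface) error {
	svcs, err := client.CoreV1().Services(metav1.NamespaceAll).List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, svc := range svcs.Items {
		if err := client.CoreV1().Services(svc.Namespace).Delete(ctx, svc.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	return nil
}
```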
I'll be happy to work on this. I hope no one is working on it yet.
Implementation of the master call "/teardown", which removes all external resources used by the Kubernetes cluster (currently, external load balancers are removed). Related to kubernetes#4630.
We need to support deletion of clusters, both for GKE and for e.g. e2e tests. The master will create and delete various CloudProvider objects, so it should own those objects. We need a way to (a) cease accepting new objects, (b) change the desired state for existing objects to does-not-exist, (c) block until the CloudProvider reconciler (or equivalent) finishes actually deleting all of those, and then (d) delete the master. Given that the master knows what it created and has code to delete such things (for whatever version of k8s it's running), we should use the master itself to clean up a cluster that needs to be deleted, not require some out-of-band tool to do so.
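A minimal sketch of that (a)-(d) sequence, with every type and method invented for illustration (no such master API exists in this thread):

```go
package teardown

import "context"

// Master abstracts the operations the comment above asks the master to own.
// This interface is hypothetical, not a real Kubernetes API.
type Master interface {
	StopAdmittingWrites()                                 // (a) lame-duck: stop accepting new objects
	MarkAllCloudObjectsDeleted()                          // (b) desired state -> does-not-exist
	WaitForCloudResourcesGone(ctx context.Context) error  // (c) block on the reconciler
	DeleteSelf(ctx context.Context) error                 // (d) only now remove the master VM
}

func tearDownCluster(ctx context.Context, m Master) error {
	m.StopAdmittingWrites()
	m.MarkAllCloudObjectsDeleted()
	if err := m.WaitForCloudResourcesGone(ctx); err != nil {
		return err
	}
	return m.DeleteSelf(ctx)
}
```

The key design point the comment argues for is the ordering: (d) must not run until (c) completes, otherwise the code that knows how to delete the cloud resources is gone before the resources are.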
Discussion is occurring in #5025.
I don't think that this makes the 1.0 cut.
@brendandburns Without this, we orphan resources in GCE on cluster delete. And if e.g. a user spins up a new cluster with the same name, there will be all sorts of fun from dangling rules. I think this falls under operational reliability.
I don't think a 'protected' field is necessary for this. Today we have RBAC, finalizers, and GC. I think a client with super-admin powers could delete all namespaces and then wait for the namespace count to go to zero. (There's probably a corner case or two around the default and kube-system namespaces that this would turn up.)
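As a sketch of that super-admin path, assuming client-go and cluster-admin credentials (the polling loop and interval are our choices, not part of the comment):

```go
// Sketch: delete all namespaces, then poll until none remain.
// Deleting a namespace triggers its finalizers and GC of its contents.
package teardown

import (
	"context"
	"time"

	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/client-go/kubernetes"
)

func deleteAllNamespaces(ctx context.Context, client kubernetes.Interface) error {
	nss, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
	if err != nil {
		return err
	}
	for _, ns := range nss.Items {
		// default and kube-system may refuse deletion or be recreated --
		// the corner case flagged in the comment above.
		if err := client.CoreV1().Namespaces().Delete(ctx, ns.Name, metav1.DeleteOptions{}); err != nil {
			return err
		}
	}
	// Wait for the namespace count to reach zero.
	for {
		nss, err := client.CoreV1().Namespaces().List(ctx, metav1.ListOptions{})
		if err != nil {
			return err
		}
		if len(nss.Items) == 0 {
			return nil
		}
		select {
		case <-ctx.Done():
			return ctx.Err()
		case <-time.After(5 * time.Second):
		}
	}
}
```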
I might be missing it, but I haven't found a way to delete
Issues go stale after 90d of inactivity. Prevent issues from auto-closing with an /lifecycle frozen comment. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/remove-lifecycle stale
It still isn't possible to delete the following namespaces:

We also don't have a way to put the apiserver into a lame-duck mode to prevent new namespaces from being created during cluster teardown.
/lifecycle frozen
Seems like a feature request for api-machinery that could eventually land in kubectl (sig-cli). /sig cli api-machinery
See #4627 / #4530: These are both the wrong approach, as also noted in #4411 (comment). We need to delete these things on the master, prior to deleting the VM itself. For system add-ons, this is basically the API hook necessary for #3579 cleanup, but it's also required for any user services that were created.