Kubernetes namespaces stuck in terminating state #19317
Comments
cc @derekwaynecarr. Do you think the namespace controller is in some sort of infinite loop?
Can you paste the output for: kubectl get namespace/openshift -o json? I assume openshift is no longer running on your cluster?
Smells like different components have different ideas about the finalizer list? Does rebooting controller-manager change anything?
kubectl get ns openshift -o json
Interestingly the finalizer is set to openshift.io/origin. I tried deleting the finalizer out of the namespace using kubectl edit, but it still remains in another get operation.
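For anyone following along, a minimal sketch of the inspection (the openshift.io/origin finalizer name is from this thread; the output lines are illustrative). Note that kubectl edit appearing to ignore the change is expected: spec.finalizers on a namespace can only be written through the namespace's finalize subresource.

```bash
# show the finalizers that block deletion; the namespace stays in
# Terminating until this list is empty
kubectl get ns openshift -o jsonpath='{.spec.finalizers}'
# illustrative output: ["openshift.io/origin"]

kubectl get ns openshift -o jsonpath='{.status.phase}'
# illustrative output: Terminating
```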
This also happens with the one other namespace I manually created in OpenShift with the projects system:
I'm not actually using OpenShift anymore, so these namespaces are pretty much stuck in my prod cluster until I can figure out how to get past this.
Deleted the controller-manager pod and the associated pause pod and restarted kubelet on the master. The containers were re-created.
@kubernetes/rh-cluster-infra
@paralin - there is no code issue, but maybe I can look to improve the openshift example clean-up scripts or document the steps. When you created a project in openshift, it created a namespace for that project and annotated the namespace with a finalizer. A quick fix:
That will remove the lock that blocks the namespace from being completely terminated, and you should quickly see that the namespace is removed from your system. Closing the issue, but feel free to comment if you continue to have problems or hit me up on slack.
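The actual commands of the quick fix were lost in this copy of the thread; a minimal sketch of the approach it describes, assuming a 2016-era master serving the insecure API on 127.0.0.1:8080 (on a secured cluster, run kubectl proxy and use port 8001 instead):

```bash
# dump the stuck namespace to a file
kubectl get namespace openshift -o json > openshift-ns.json

# hand-edit openshift-ns.json and delete "openshift.io/origin"
# from the .spec.finalizers array

# PUT the edited object to the finalize subresource; a plain
# kubectl edit/apply cannot persist changes to spec.finalizers
curl -H "Content-Type: application/json" -X PUT \
  --data-binary @openshift-ns.json \
  http://127.0.0.1:8080/api/v1/namespaces/openshift/finalize
```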
I'm facing the same issue
I have deleted the project named "gitlab" via the Openshift Origin web console, but it is not removed. As suggested by @derekwaynecarr, I did the following
and
but it is still not removed.
I'm facing the same problem in GKE. Bouncing the cluster definitely fixes the issue (the stuck namespaces are immediately terminated).
I believe this issue still exists in the v1.3 release. Manually removing the finalizer doesn't seem to help.
Several hours later, it still remains.
Only after I completely restarted the master server were all "terminating" namespaces gone...
Still happening in v1.4.0 too...
I'm hitting the error with 1.3.5 as well....
@derekwaynecarr can we reopen this?
At least in my case, it might be an API issue...?
kube-apiserver:
kube-controller-manager:
In my case, a ThirdPartyResource had been left behind in etcd. The stuck namespaces were removed after deleting it like this.
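The command itself was lost here; a sketch of the kind of cleanup meant, for the 1.3–1.7 era when ThirdPartyResource still existed (the resource name is a placeholder):

```bash
# list leftover ThirdPartyResources (the precursor to CRDs, removed in 1.8)
kubectl get thirdpartyresources

# delete the stale one so the namespace controller can finish;
# the name below is illustrative
kubectl delete thirdpartyresource cron-tab.stable.example.com
```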
The problem with thirdpartyresources is not the same as the original one; I think we need to create a new issue.
created: #37278
We are hitting this issue atm on 1.4.6.
I am using
This is fixed in
Getting this issue as well: I deleted a namespace, and it shows as permanently "Terminating". Removing the finalizer made no difference, and there are no pods running in the ns.
Same here in
Have you solved it? |
I have not, no. Incredible that such an issue has been unfixed for over three years. |
Can we reopen this issue? |
Same issue here. Rancher 2.2.4, Kubernetes 1.13.5. We have a namespace stuck in a removing state; it does not have any resources inside, but there is no way to remove it.
same issue here. |
Same issue.
In my case, this was a deployment of
No subordinate resources, and the ns config has a finalizer. I'd rather it be fine with not validating this API before deleting the ns.
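If the hang is caused by an aggregated API whose backend is gone (the namespace controller must list every resource type, including aggregated ones, before it can finish), this check is worth running; a sketch, not from the original comment, and the APIService name is a placeholder:

```bash
# find aggregated APIServices that are not Available; a False entry
# prevents the namespace controller from enumerating resources
kubectl get apiservices | grep -i false

# either fix the backing service or delete the dead APIService;
# the name below is illustrative
kubectl delete apiservice v1beta1.metrics.k8s.io
```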
Resolved using the script from this comment.
I can't believe this issue still persists, and it's a dice roll what the actual cause is. Perhaps Kubernetes should be a little more specific about which finalizers it is waiting on? CRDs and other namespaces should NOT cause a namespace deletion to stick. I'm flabbergasted that this has been a problem since 1.8. The only sure-fire way I've ever been able to get this stupid problem to go away is to restart the entire control plane, which is ridiculous.
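Partial good news on the visibility complaint: from Kubernetes 1.16 the namespace controller records what it is waiting on in the namespace's status conditions; a sketch of how to read them (the namespace name is a placeholder):

```bash
# conditions such as NamespaceDeletionDiscoveryFailure or
# NamespaceFinalizersRemaining name exactly what blocks deletion
kubectl get namespace my-stuck-ns -o jsonpath='{.status.conditions}'
```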
Facing the same issue on Azure kubernetes 1.13.7. |
Same issue on aws with kops kubernetes v1.12.8 |
same issue on azure k8s v1.14.1 |
This is clearly still an issue. Can it be re-opened? |
GKE 1.13.7 same issue |
The (old but effective) comment from @derekwaynecarr did the trick for me; the only missing step for me was
Same issue on 1.15; nothing worked in my case.
Have you tried to delete it using the following script?
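The script did not survive this copy of the thread; a minimal sketch of the widely circulated kill-kube-ns variant, assuming kubectl and jq are installed (names and ports are illustrative):

```bash
#!/usr/bin/env bash
# usage: ./kill-kube-ns <namespace>
# Force-finalizes a namespace by emptying spec.finalizers via the
# finalize subresource. Use only when normal cleanup is impossible.
set -euo pipefail

NS="$1"
TMP="$(mktemp)"

# dump the namespace with an emptied finalizer list
kubectl get namespace "$NS" -o json | jq '.spec.finalizers = []' > "$TMP"

# open an authenticated local proxy to the API server
kubectl proxy --port=8001 &
PROXY_PID=$!
sleep 2

# PUT the edited object to the finalize subresource
curl -s -H "Content-Type: application/json" -X PUT \
  --data-binary "@$TMP" \
  "http://127.0.0.1:8001/api/v1/namespaces/$NS/finalize"

kill "$PROXY_PID"
rm -f "$TMP"
```

Keep in mind this only clears the symptom: whatever resource or API the finalizer was waiting on is still there.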
None of these suggestions worked, and after realising I could be wasting hours trying to fix it, I concluded that if many equally capable devs had tried before me, I would be wasting my time on something so insignificant. Just deleting the Kubernetes data and settings (through the Docker Desktop client) worked fine. You should have scripts for setting up your clusters and stuff anyway, so no harm there if you're in a dev environment.
Got the issue on EKS (Kubernetes version: 1.15).
The kill-kube-ns script works for me. Thank you.
I use
I am unable to delete the ns; it is still showing in the Terminating state, and there's no field where the finalizer option is mentioned. Can someone please help resolve the problem? cat fleet-system.json
I came across this issue when cleaning up our staging cluster, which our developers use a lot.
I tried to delete some namespaces from my kubernetes cluster, but they've been stuck in Terminating state for over a month.
The openshift namespaces were made as part of the example in this repo for running Openshift under Kube.
There's nothing in any of these namespaces (I used get on every resource type and they're all empty).
So what's holding up the terminate?
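One way to double-check the "every resource type" claim, since a single leftover object keeps a namespace alive; a sketch that needs a modern kubectl (api-resources arrived around 1.11, well after this issue was filed):

```bash
# enumerate every namespaced resource type the API server knows about
# and list any instances still present in the namespace
kubectl api-resources --verbs=list --namespaced -o name \
  | xargs -n 1 kubectl get --show-kind --ignore-not-found -n openshift
```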
The kube cluster is healthy:
The versions are:
The server version corresponds to this commit: paralin@d9ab692
Compiled from source. Cluster was built using kube-up to GCE with the following env:
Any ideas?