Kubernetes namespaces stuck in terminating state #19317

Closed
paralin opened this issue Jan 6, 2016 · 66 comments

@paralin
Contributor

paralin commented Jan 6, 2016

I tried to delete some namespaces from my kubernetes cluster, but they've been stuck in Terminating state for over a month.

kubectl get ns
NAME              LABELS    STATUS        AGE
myproject         <none>    Active        12d
default           <none>    Active        40d
anotherproject    <none>    Terminating   40d
openshift         <none>    Terminating   40d
openshift-infra   <none>    Terminating   40d

The openshift namespaces were created as part of the example in this repo for running OpenShift under Kube.

There's nothing in any of these namespaces (I used get on every resource type and they're all empty).

So what's holding up the termination?

The kube cluster is healthy:

NAME                 STATUS    MESSAGE              ERROR
etcd-1               Healthy   {"health": "true"}   
controller-manager   Healthy   ok                   
scheduler            Healthy   ok                   
etcd-0               Healthy   {"health": "true"}   

The versions are:

Client Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.4.208+c39262c9915b0b", GitCommit:"c39262c9915b0b1c493de66f37c49f3ef587cd97", GitTreeState:"clean"}
Server Version: version.Info{Major:"1", Minor:"2+", GitVersion:"v1.2.0-alpha.4.166+d9ab692edc08a2", GitCommit:"d9ab692edc08a279396b29efb4d7b1e6248dfb60", GitTreeState:"clean"}

The server version corresponds to this commit: paralin@d9ab692

Compiled from source. Cluster was built using kube-up to GCE with the following env:

export KUBERNETES_PROVIDER=gce
export KUBE_GCE_ZONE=us-central1-b
export MASTER_SIZE=n1-standard-1
export MINION_SIZE=n1-standard-2
export NUM_MINIONS=3

export KUBE_ENABLE_NODE_AUTOSCALER=true
export KUBE_AUTOSCALER_MIN_NODES=3
export KUBE_AUTOSCALER_MAX_NODES=3

export KUBE_ENABLE_DAEMONSETS=true
export KUBE_ENABLE_DEPLOYMENTS=true

export KUBE_ENABLE_INSECURE_REGISTRY=true

Any ideas?

@ncdc
Member

ncdc commented Jan 8, 2016

cc @derekwaynecarr. Do you think the namespace controller is in some sort of infinite loop?

@derekwaynecarr
Member

Can you paste the output for:

kubectl get namespace/openshift -o json

I assume openshift is no longer running on your cluster? Is there any content in that namespace?


@j3ffml j3ffml added the sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. label Jan 8, 2016
@lavalamp lavalamp added team/control-plane and removed sig/api-machinery Categorizes an issue or PR as relevant to SIG API Machinery. labels Jan 26, 2016
@lavalamp
Member

Smells like different components have different ideas about the finalizer list? Does rebooting controller-manager change anything?

@paralin
Contributor Author

paralin commented Jan 26, 2016

kubectl get ns openshift -o json

{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "openshift",
        "selfLink": "/api/v1/namespaces/openshift",
        "uid": "0a659292-94af-11e5-855c-42010af00002",
        "resourceVersion": "14645862",
        "creationTimestamp": "2015-11-27T02:32:01Z",
        "deletionTimestamp": "2015-12-25T03:20:25Z",
        "annotations": {
            "openshift.io/sa.scc.mcs": "s0:c6,c0",
            "openshift.io/sa.scc.supplemental-groups": "1000030000/10000",
            "openshift.io/sa.scc.uid-range": "1000030000/10000"
        }
    },
    "spec": {
        "finalizers": [
            "openshift.io/origin"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

Interestingly the finalizer is set to openshift.io/origin.

I tried deleting the finalizer from the namespace using kubectl edit, but it reappears on the next get.
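
As far as I can tell that matches how the API behaves: a namespace's spec.finalizers is only honoured through the namespaces/finalize subresource, so a plain edit/update keeps the old list. A quick way to confirm which finalizer is still set (a minimal sketch against the namespace above):

$ kubectl get namespace openshift -o jsonpath='{.spec.finalizers}'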

@paralin
Contributor Author

paralin commented Jan 26, 2016

This also happens with the one other namespace I manually created in OpenShift with the projects system:

Error from server: Namespace "dotabridge-dev" cannot be updated: The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

I'm not actually using OpenShift anymore so these namespaces are pretty much stuck in my prod cluster until I can figure out how to get past this.

@paralin
Contributor Author

paralin commented Jan 26, 2016

Deleted the controller-manager pod and the associated pause pod and restarted kubelet on the master. The containers were re-created, kubectl get cs shows everything as healthy, but the namespaces remain.

@davidopp davidopp added this to the v1.2 milestone Feb 4, 2016
@davidopp davidopp added the priority/important-soon Must be staffed and worked on either currently, or very soon, ideally in time for the next release. label Feb 4, 2016
@davidopp
Member

davidopp commented Feb 4, 2016

@kubernetes/rh-cluster-infra

@derekwaynecarr derekwaynecarr self-assigned this Feb 4, 2016
@derekwaynecarr
Member

@paralin - this is not a code issue, but maybe I can improve the OpenShift example's clean-up scripts or document the steps. When you created a project in OpenShift, it created a namespace for that project and added a finalizer token to it, meaning that before the namespace can be deleted, an external agent must remove its lock to confirm it has finished its own clean-up. Since you are no longer running OpenShift, its agent never removed the lock or took part in the termination flow.

A quick fix:

# find each namespace impacted
$ kubectl get namespaces -o json | grep "openshift.io/origin"
$ kubectl get namespace <ns> -o json > temp.json
# vi temp.json and remove the finalizer entry for "openshift.io/origin"
# for example
{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "testing",
        "selfLink": "/api/v1/namespaces/testing",
        "uid": "33074e57-cb72-11e5-9d3d-28d2444e470d",
        "resourceVersion": "234",
        "creationTimestamp": "2016-02-04T19:05:04Z",
        "deletionTimestamp": "2016-02-04T19:05:54Z"
    },
    "spec": {
        "finalizers": [
            "openshift.io/origin"  <--- remove me
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

$ curl -H "Content-Type: application/json" -X PUT --data-binary @temp.json http://127.0.0.1:8080/api/v1/namespaces/<name_of_namespace>/finalize
# wait a moment, and you should see your namespace removed
$ kubectl get namespaces 

That will remove the lock that blocks the namespace from being completely terminated, and you should quickly see that the namespace is removed from your system.
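
If the API server does not expose the insecure 127.0.0.1:8080 port, the same PUT can go through an authenticated proxy instead (a sketch, assuming kubectl proxy is listening on its default port 8001):

$ kubectl proxy &
$ curl -H "Content-Type: application/json" -X PUT --data-binary @temp.json http://127.0.0.1:8001/api/v1/namespaces/<name_of_namespace>/finalize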

Closing the issue, but feel free to comment if you continue to have problems or hit me up on slack.

@balakrishnangithub

balakrishnangithub commented Jul 12, 2016

I'm facing the same issue

# oc version
oc v1.3.0-alpha.2
kubernetes v1.3.0-alpha.1-331-g0522e63

I deleted the project named "gitlab" via the OpenShift Origin web console, but it was not removed.

As suggested by @derekwaynecarr, I did the following:

# kubectl get namespace gitlab -o json > temp.json
# cat temp.json
{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "gitlab",
        "selfLink": "/api/v1/namespaces/gitlab",
        "uid": "cd86c372-481e-11e6-aebc-408d5c676116",
        "resourceVersion": "3115",
        "creationTimestamp": "2016-07-12T10:53:01Z",
        "deletionTimestamp": "2016-07-12T11:11:36Z",
        "annotations": {
            "openshift.io/description": "",
            "openshift.io/display-name": "GitLab",
            "openshift.io/requester": "developer",
            "openshift.io/sa.scc.mcs": "s0:c8,c7",
            "openshift.io/sa.scc.supplemental-groups": "1000070000/10000",
            "openshift.io/sa.scc.uid-range": "1000070000/10000"
        }
    },
    "spec": {
        "finalizers": [
            "kubernetes"   <---removed
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}

and

# curl -k -H "Content-Type: application/json" -X PUT --data-binary @temp.json https://10.28.27.65:8443/api/v1/namespaces/gitlab/finalize
{
  "kind": "Status",
  "apiVersion": "v1",
  "metadata": {},
  "status": "Failure",
  "message": "User \"system:anonymous\" cannot update namespaces/finalize in project \"gitlab\"",
  "reason": "Forbidden",  <--- seems like nothing happened
  "details": {
    "name": "gitlab",
    "kind": "namespaces/finalize"
  },
  "code": 403
}

but the namespace was removed anyway.
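
The Forbidden response means the curl call went in as system:anonymous. One way around that (a sketch, assuming you are logged in with oc as a user allowed to update namespaces/finalize) is to pass your session token:

# TOKEN=$(oc whoami -t)
# curl -k -H "Authorization: Bearer $TOKEN" -H "Content-Type: application/json" -X PUT --data-binary @temp.json https://10.28.27.65:8443/api/v1/namespaces/gitlab/finalize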

@jchauncey

I'm facing the same problem in GKE. Bouncing the cluster definitely fixes the issue (the namespaces are immediately terminated).

@linfan

linfan commented Aug 2, 2016

I believe this issue still exists in the v1.3 release.

Manually removing the finalizer doesn't seem to help.

$ curl -H "Content-Type: application/json" -X PUT --data-binary @temp.json http://ip-172-31-14-177:8080/api/v1/namespaces/limit/finalize
{
  "kind": "Namespace",
  "apiVersion": "v1",
  "metadata": {
    "name": "limit",
    "selfLink": "/api/v1/namespaces/limit/finalize",
    "uid": "caf5daa5-57f8-11e6-9e7e-0ad69bcef303",
    "resourceVersion": "10171",
    "creationTimestamp": "2016-08-01T15:01:14Z",
    "deletionTimestamp": "2016-08-02T04:30:24Z"
  },
  "spec": {},
  "status": {
    "phase": "Terminating"
  }
}

Several hours later, it still remains.

$ kubectl get namespaces
NAME                STATUS         AGE
...                 ...
limit               Terminating   13h

Only after I completely restarted the master server were all the "Terminating" namespaces gone...

@monaka

monaka commented Oct 10, 2016

Still happening in v1.4.0 too...

$ kubectl get ns openmct -o json
{
    "kind": "Namespace",
    "apiVersion": "v1",
    "metadata": {
        "name": "openmct",
        "selfLink": "/api/v1/namespaces/openmct",
        "uid": "34124209-8e8d-11e6-8260-000d3a505da6",
        "resourceVersion": "11957259",
        "creationTimestamp": "2016-10-10T01:59:39Z",
        "deletionTimestamp": "2016-10-10T02:13:46Z",
        "labels": {
            "heritage": "deis"
        }
    },
    "spec": {
        "finalizers": [
            "kubernetes"
        ]
    },
    "status": {
        "phase": "Terminating"
    }
}
$ kubectl version
Client Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0", GitCommit:"a16c0a7f71a6f93c7e0f222d961f4675cd97a46b", GitTreeState:"clean", BuildDate:"2016-09-26T18:16:57Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"4", GitVersion:"v1.4.0+coreos.2", GitCommit:"672d0ab602ada99c100e7f18ecbbdcea181ef008", GitTreeState:"clean", BuildDate:"2016-09-30T05:49:34Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

@jsloyer

jsloyer commented Oct 10, 2016

I'm hitting the error with 1.3.5 as well...

$kubectl version
Client Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.5", GitCommit:"b0deb2eb8f4037421077f77cb163dbb4c0a2a9f5", GitTreeState:"clean", BuildDate:"2016-08-11T20:29:08Z", GoVersion:"go1.6.2", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"3", GitVersion:"v1.3.6", GitCommit:"ae4550cc9c89a593bcda6678df201db1b208133b", GitTreeState:"clean", BuildDate:"2016-09-22T01:52:27Z", GoVersion:"go1.6.3", Compiler:"gc", Platform:"linux/amd64"}

@derekwaynecarr can we reopen this?

@monaka

monaka commented Oct 17, 2016

At least in my case, it might be an API issue...?

$ kubectl get ns | grep Terminating | wc -l
7

kube-apiserver:

E1017 01:55:35.954834       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:35.959011       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:36.772335       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:37.248079       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:38.254651       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:38.584616       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:55:39.171880       1 errors.go:63] apiserver received an error that is not an unversioned.Status: no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"

kube-controller-manager

E1017 01:50:36.002533       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:36.040668       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:37.066455       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:37.102275       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:38.229602       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:38.602775       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"
E1017 01:50:39.181639       1 namespace_controller.go:163] no kind "DeleteOptions" is registered for version "net.alpha.kubernetes.io/v1alpha1"

@monaka

monaka commented Nov 7, 2016

In my case, a ThirdPartyResource had been left behind in etcd. The stuck namespaces were removed after deleting it like this:

etcdctl rm /registry/thirdpartyresources/default/network-policy.net.alpha.kubernetes.io
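
To see what is actually left behind before deleting anything, listing the registry prefix first may help (a sketch, assuming the same etcd v2-style registry layout as the command above):

$ etcdctl ls --recursive /registry/thirdpartyresources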

@zhouhaibing089
Contributor

zhouhaibing089 commented Nov 22, 2016

The thirdpartyresources problem is not the same as the original one; I think we need to create a new issue.

@zhouhaibing089
Contributor

created: #37278

@pidah

pidah commented Nov 22, 2016

we are hitting this issue atm on 1.4.6;
edit: actually our issue is #37278

@hectorj2f

we are hitting this issue atm on 1.5.2

#37554
#37278

@hectorj2f

I am using v1.5.2 and the problem seems to be fixed. I am able to delete namespaces.

@nikhita
Member

nikhita commented Mar 28, 2017

This is fixed in v1.5.2. Please see: #37278 (comment)

@chancecarey

Getting this issue in 1.14.3.

I deleted a namespace, and it shows as permanently "Terminating". Removing the finalizer made no difference, and there are no pods running in the ns.

@Bregor

Bregor commented Jun 21, 2019

Same here in 1.15.0

@thusihaveheard

Getting this issue in 1.14.3.

I deleted a namespace, and it shows as permanently "Terminating". Removing the finalizer made no difference, and there are no pods running in the ns.

Have you solved it?

@chancecarey

Getting this issue in 1.14.3.
I deleted a namespace, and it shows as permanently "Terminating". Removing the finalizer made no difference, and there are no pods running in the ns.

Have you solved it?

I have not, no. Incredible that such an issue has been unfixed for over three years.

@rabun788

Can we reopen this issue?

@ghost

ghost commented Jul 12, 2019

Same Issue Here

Rancher 2.2.4

Kubernetes 1.13.5

We have a namespace stuck in the Removing state. It does not have any resources inside, but there is no way to remove it.

@eduardorj

same issue here.

@difabion

Same issue.

$ kubectl version
Client Version: version.Info{Major:"1", Minor:"14", GitVersion:"v1.14.2", GitCommit:"66049e3b21efe110454d67df4fa62b08ea79a19b", GitTreeState:"clean", BuildDate:"2019-05-16T18:55:03Z", GoVersion:"go1.12.5", Compiler:"gc", Platform:"darwin/amd64"}
Server Version: version.Info{Major:"1", Minor:"12", GitVersion:"v1.12.8", GitCommit:"a89f8c11a5f4f132503edbc4918c98518fd504e3", GitTreeState:"clean", BuildDate:"2019-04-23T04:41:47Z", GoVersion:"go1.10.8", Compiler:"gc", Platform:"linux/amd64"}

In my case, this was a deployment of cert-manager that I was working on; I only got as far as creating the namespace, adding a label, and deploying the CRDs in line with Jetstack's installation docs. Deleting the CRDs was fine, but the namespace is stuck in status Terminating. The kube-controller-manager logs have this repeating roughly every 30s:

W0715 22:02:18.409286       1 garbagecollector.go:647] failed to discover some groups: map[admission.certmanager.k8s.io/v1beta1:the server is currently unable to handle the request]
E0715 22:02:27.390081       1 resource_quota_controller.go:430] unable to retrieve the complete list of server APIs: admission.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request
E0715 22:02:27.830397       1 memcache.go:134] couldn't get resource list for admission.certmanager.k8s.io/v1beta1: the server is currently unable to handle the request

No subordinate resources; the namespace config has a kubernetes finalizer in the spec and nothing else.

I'd rather it were fine with not validating this API before deleting the namespace.
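
This looks like the namespace controller being blocked by failed API discovery for the leftover aggregated API. A sketch of clearing it, assuming the stale APIService is named v1beta1.admission.certmanager.k8s.io (confirm with the first command before deleting anything):

$ kubectl get apiservice | grep False
$ kubectl delete apiservice v1beta1.admission.certmanager.k8s.io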

@difabion

Resolved using the script from this comment.

@dansl19

dansl19 commented Jul 26, 2019

In my case, it was the cert-manager namespace that was stuck in the terminating state. It could not be deleted because of CRDs that had been created earlier but were not shown by kubectl get all. The script linked by @difabion did the job. Thanks @difabion.

@farvour

farvour commented Jul 31, 2019

I can't believe this issue still persists, and that figuring out the actual cause is a dice roll. Perhaps Kubernetes should be a little more specific about which finalizers it is waiting on? CRDs and other namespaces should NOT cause a namespace deletion to stick. I'm flabbergasted that this has been a problem since 1.8.

The only sure-fire way I've ever been able to get this stupid problem to go away is to restart the entire control plane, which is ridiculous.

@singhania

Facing the same issue on Azure kubernetes 1.13.7.

@wweir

wweir commented Aug 27, 2019

Same issue on aws with kops kubernetes v1.12.8

@vvrnv

vvrnv commented Oct 2, 2019

same issue on azure k8s v1.14.1

@chancecarey

This is clearly still an issue. Can it be re-opened?

@bakayolo

Server Version: version.Info{Major:"1", Minor:"13+", GitVersion:"v1.13.7-gke.24", GitCommit:"2ce02ef1754a457ba464ab87dba9090d90cf0468", GitTreeState:"clean", BuildDate:"2019-08-12T22:05:28Z", GoVersion:"go1.11.5b4", Compiler:"gc", Platform:"linux/amd64"}

GKE 1.13.7 same issue

@marcelloromani

The (old but effective) comment from @derekwaynecarr did the trick for me

#19317 (comment)

The only missing step for me was running kubectl proxy and changing the port number accordingly (8001 instead of 8080).
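
For completeness, on a reasonably recent cluster the proxy/curl step can be skipped; assuming your kubectl supports replace --raw, something like this hits the same finalize subresource with your normal credentials:

$ kubectl get namespace <ns> -o json | jq '.spec.finalizers = []' > temp.json
$ kubectl replace --raw "/api/v1/namespaces/<ns>/finalize" -f temp.json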

@iahmad-khan

Same issue on 1.15, nothing worked in my case.

kubectl delete ns test
Error from server (Conflict): Operation cannot be fulfilled on namespaces "test": The system is ensuring all content is removed from this namespace.  Upon completion, this namespace will automatically be purged by the system.

@Slutzky

Slutzky commented Jan 14, 2020

Same issue on 1.15, nothing worked in my case.

kubectl delete ns test

Error from server (Conflict): Operation cannot be fulfilled on namespaces "test": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.

Have you tried to delete it using the following script?

#!/bin/bash

###############################################################################
# Copyright (c) 2018 Red Hat Inc
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Eclipse Public License 2.0 which is available at
# http://www.eclipse.org/legal/epl-2.0
#
# SPDX-License-Identifier: EPL-2.0
###############################################################################

set -eo pipefail

die() { echo "$*" 1>&2 ; exit 1; }

need() {
	which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

# checking pre-reqs

need "jq"
need "curl"
need "kubectl"

PROJECT="$1"
shift

test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"

kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy () {
	kill $PROXY_PID
}
trap killproxy EXIT

sleep 1 # give the proxy a second

kubectl get namespace "$PROJECT" -o json | jq 'del(.spec.finalizers[] | select("kubernetes"))' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/$PROJECT/finalize && echo "Killed namespace: $PROJECT"

# proxy will get killed by the trap
Usage: ./kill-kube-ns <name of the namespace>

@gxm651182644

In my case, it was the cert-manager namespace that was stuck in the terminating state. It could not be deleted because of CRDs that had been created earlier but were not shown by kubectl get all. The script linked by @difabion did the job. Thanks @difabion.

Have you solved it?

@joekendal

joekendal commented Jul 13, 2020

None of these suggestions worked, and after realising I could be looking at wasting hours trying to fix it, I came to the conclusion that if many equally capable devs had tried before me, I would be wasting my time on something so insignificant. Just deleting the Kubernetes data and settings (through the Docker Desktop client) worked fine. You should have scripts for setting up your clusters and stuff anyway, so no harm there if you're in a dev environment.

@tp6m4fu6250071

Same issue on 1.15, nothing worked in my case.

kubectl delete ns test

Error from server (Conflict): Operation cannot be fulfilled on namespaces "test": The system is ensuring all content is removed from this namespace. Upon completion, this namespace will automatically be purged by the system.

Have you tried to delete it using the following script?

#!/bin/bash

###############################################################################
# Copyright (c) 2018 Red Hat Inc
#
# See the NOTICE file(s) distributed with this work for additional
# information regarding copyright ownership.
#
# This program and the accompanying materials are made available under the
# terms of the Eclipse Public License 2.0 which is available at
# http://www.eclipse.org/legal/epl-2.0
#
# SPDX-License-Identifier: EPL-2.0
###############################################################################

set -eo pipefail

die() { echo "$*" 1>&2 ; exit 1; }

need() {
	which "$1" &>/dev/null || die "Binary '$1' is missing but required"
}

# checking pre-reqs

need "jq"
need "curl"
need "kubectl"

PROJECT="$1"
shift

test -n "$PROJECT" || die "Missing arguments: kill-ns <namespace>"

kubectl proxy &>/dev/null &
PROXY_PID=$!
killproxy () {
	kill $PROXY_PID
}
trap killproxy EXIT

sleep 1 # give the proxy a second

kubectl get namespace "$PROJECT" -o json | jq 'del(.spec.finalizers[] | select("kubernetes"))' | curl -s -k -H "Content-Type: application/json" -X PUT -o /dev/null --data-binary @- http://localhost:8001/api/v1/namespaces/$PROJECT/finalize && echo "Killed namespace: $PROJECT"

# proxy will get killed by the trap
Usage: ./kill-kube-ns <name of the namespace>

I got the issue on EKS (Kubernetes version: 1.15).
kubectl version:

Client Version: version.Info{Major:"1", Minor:"18", GitVersion:"v1.18.4", GitCommit:"c96aede7b5205121079932896c4ad89bb93260af", GitTreeState:"clean", BuildDate:"2020-06-17T11:41:22Z", GoVersion:"go1.13.9", Compiler:"gc", Platform:"linux/amd64"}
Server Version: version.Info{Major:"1", Minor:"15+", GitVersion:"v1.15.11-eks-065dce", GitCommit:"065dcecfcd2a91bd68a17ee0b5e895088430bd05", GitTreeState:"clean", BuildDate:"2020-07-16T01:44:47Z", GoVersion:"go1.12.17", Compiler:"gc", Platform:"linux/amd64"}

The kill-kube-ns script works for me. Thank you.

@tangyouze

I used kubectl edit namespace ****, removed the finalizer part, and saved.

rancher/rancher#14715 (comment)

@sharma-raj

sharma-raj commented Jan 11, 2021

I am unable to delete the namespace; it is still showing in the Terminating state, and there is no field where the finalizer option is mentioned. Can someone please help resolve the problem?

cat fleet-system.json
{
    "apiVersion": "v1",
    "kind": "Namespace",
    "metadata": {
        "annotations": {
            "cattle.io/status": "{\"Conditions\":[{\"Type\":\"ResourceQuotaInit\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2021-01-08T09:14:26Z\"},{\"Type\":\"InitialRolesPopulated\",\"Status\":\"True\",\"Message\":\"\",\"LastUpdateTime\":\"2021-01-08T09:14:31Z\"}]}",
            "field.cattle.io/projectId": "c-s8hs9:p-5bk2c",
            "kubectl.kubernetes.io/last-applied-configuration": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{},\"name\":\"fleet-system\"},\"spec\":{\"finalizers\":[]}}\n",
            "lifecycle.cattle.io/create.namespace-auth": "true",
            "objectset.rio.cattle.io/applied": "{\"apiVersion\":\"v1\",\"kind\":\"Namespace\",\"metadata\":{\"annotations\":{\"objectset.rio.cattle.io/id\":\"fleet-agent-bootstrap\"},\"labels\":{\"objectset.rio.cattle.io/hash\":\"f399d0b310fbfb28e9667312fdc7a33954e2b8c8\"},\"name\":\"fleet-system\"}}",
            "objectset.rio.cattle.io/id": "fleet-agent-bootstrap"
        },
        "creationTimestamp": "2021-01-08T09:14:25Z",
        "deletionGracePeriodSeconds": 0,
        "deletionTimestamp": "2021-01-11T08:41:43Z",
        "finalizers": [
            "controller.cattle.io/namespace-auth"
        ],
        "labels": {
            "field.cattle.io/projectId": "p-5bk2c",
            "objectset.rio.cattle.io/hash": "f399d0b310fbfb28e9667312fdc7a33954e2b8c8"
        },
        "managedFields": [
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:labels": {
                            "f:field.cattle.io/projectId": {}
                        }
                    }
                },
                "manager": "agent",
                "operation": "Update",
                "time": "2021-01-08T09:14:25Z"
            },
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            ".": {},
                            "f:objectset.rio.cattle.io/applied": {},
                            "f:objectset.rio.cattle.io/id": {}
                        },
                        "f:labels": {
                            ".": {},
                            "f:objectset.rio.cattle.io/hash": {}
                        }
                    }
                },
                "manager": "fleetcontroller",
                "operation": "Update",
                "time": "2021-01-08T09:14:25Z"
            },
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            "f:cattle.io/status": {},
                            "f:field.cattle.io/projectId": {},
                            "f:lifecycle.cattle.io/create.namespace-auth": {}
                        },
                        "f:finalizers": {
                            ".": {},
                            "v:\"controller.cattle.io/namespace-auth\"": {}
                        }
                    }
                },
                "manager": "rancher",
                "operation": "Update",
                "time": "2021-01-08T09:14:30Z"
            },
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:status": {
                        "f:phase": {}
                    }
                },
                "manager": "kube-controller-manager",
                "operation": "Update",
                "time": "2021-01-11T08:42:10Z"
            },
            {
                "apiVersion": "v1",
                "fieldsType": "FieldsV1",
                "fieldsV1": {
                    "f:metadata": {
                        "f:annotations": {
                            "f:kubectl.kubernetes.io/last-applied-configuration": {}
                        }
                    }
                },
                "manager": "kubectl",
                "operation": "Update",
                "time": "2021-01-11T10:56:28Z"
            }
        ],
        "name": "fleet-system",
        "resourceVersion": "88004811",
        "selfLink": "/api/v1/namespaces/fleet-system",
        "uid": "af190fda-969c-4fbc-b233-81fa52449411"
    },
    "spec": {},
    "status": {
        "conditions": [
            {
                "lastTransitionTime": "2021-01-11T08:41:48Z",
                "message": "All resources successfully discovered",
                "reason": "ResourcesDiscovered",
                "status": "False",
                "type": "NamespaceDeletionDiscoveryFailure"
            },
            {
                "lastTransitionTime": "2021-01-11T08:41:48Z",
                "message": "All legacy kube types successfully parsed",
                "reason": "ParsedGroupVersions",
                "status": "False",
                "type": "NamespaceDeletionGroupVersionParsingFailure"
            },
            {
                "lastTransitionTime": "2021-01-11T08:41:48Z",
                "message": "All content successfully deleted, may be waiting on finalization",
                "reason": "ContentDeleted",
                "status": "False",
                "type": "NamespaceDeletionContentFailure"
            },
            {
                "lastTransitionTime": "2021-01-11T08:42:09Z",
                "message": "All content successfully removed",
                "reason": "ContentRemoved",
                "status": "False",
                "type": "NamespaceContentRemaining"
            },
            {
                "lastTransitionTime": "2021-01-11T08:41:48Z",
                "message": "All content-preserving finalizers finished",
                "reason": "ContentHasNoFinalizers",
                "status": "False",
                "type": "NamespaceFinalizersRemaining"
            }
        ],
        "phase": "Terminating"
    }
}
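
In this dump spec.finalizers is already empty and the status conditions say all content has been removed; what is left is the controller.cattle.io/namespace-auth entry under metadata.finalizers. Unlike spec.finalizers, the metadata list can be cleared with an ordinary patch (a sketch; normally Rancher's own controller should remove this finalizer, so clearing it by hand may leave Rancher-side state behind):

$ kubectl patch namespace fleet-system --type=merge -p '{"metadata":{"finalizers":[]}}'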

@KornKalle

I came across this issue when cleaning up our staging cluster, which our developers use a lot.
We use a cluster with nodes provisioned through Rancher at DigitalOcean.
For the other people ending up here after googling this issue and looking for an easy way to remove these namespaces, I will leave the shell script I've written for these cases here; please use it with care:

#!/bin/bash
# This script deletes namespaces created through Rancher with dangling finalizers
namespace=undefined

# path to your kube config file, e.g.: ~/.kube/config
kubeconfig=
# URL of your rancher, e.g.: https://rancher.example.com
rancher_url=
# ID of the cluster, will be found in the URL of the cluster start page: https://rancher.example.com/c/<CLUSTER_ID>/monitoring
cluster_id=

# Your Rancher Bearer Token generated at 'APIs & Keys' in Rancher
RANCHER_BEARER=

# Ask which namespace will be deleted
echo "Enter Namespace you want to delete:"
read namespace

echo "Get Namespace $namespace"
kubectl --kubeconfig $kubeconfig get ns $namespace -o json > $namespace.json

# Removes the whole "Spec" block of the namespace
echo "Removing spec block"
sed -i -e '/\"spec\"/,/}/ d; /^$/d' $namespace.json

# Push namespace back, will be deleted immediately if already dangling
echo "Send edited json file back to rancher"
curl -k -H "Content-Type: application/json" -H "Authorization: Bearer $RANCHER_BEARER" -X PUT --data-binary @$namespace.json $rancher_url/k8s/clusters/$cluster_id/api/v1/namespaces/$namespace/finalize
