
Can't delete namespaces from test environment #880

Open
ekuefler opened this issue Mar 30, 2020 · 18 comments
Labels
kind/feature Categorizes issue or PR as related to a new feature. lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete.

@ekuefler
Contributor

I'd like to use namespaces to isolate my tests from one another. For example, I want to create a new namespace with a random name in my BeforeEach and delete it in AfterEach so that individual tests don't have to worry about cleaning up after themselves.

However, I've observed that namespace deletion doesn't work when using envtest. When I delete a namespace, the deletion timestamps on resources within that namespace remain unset, and the namespace stays in the Terminating phase seemingly forever. The latter is true even if the namespace is empty (e.g. if I delete it immediately after creating it).

Is this behavior expected? Is whatever mechanism that normally executes namespace deletion missing from envtest?
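
For reference, a minimal sketch of the pattern being described, assuming a Ginkgo suite where k8sClient and ctx are already set up in the suite bootstrap (variable names here are illustrative, not from this issue):

	var testNamespace *corev1.Namespace

	BeforeEach(func() {
		testNamespace = &corev1.Namespace{
			ObjectMeta: metav1.ObjectMeta{
				// Let the API server pick a random suffix for isolation.
				GenerateName: "test-",
			},
		}
		Expect(k8sClient.Create(ctx, testNamespace)).To(Succeed())
	})

	AfterEach(func() {
		// Under envtest this is where things stall: the Delete call succeeds,
		// but the namespace stays in Terminating because nothing finalizes it.
		Expect(k8sClient.Delete(ctx, testNamespace)).To(Succeed())
	})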

@djzager
Contributor

djzager commented Apr 18, 2020

@omaskery

omaskery commented May 27, 2020

Whilst I appreciate this is expected behaviour, it would be handy to know what the advice is for separating tests from one another. @ekuefler have you found a way forward?

I too am trying to keep my tests independent in envtest. My operator queries all resources of a given kind, so I need to ensure its queries are not populated with resources from previous tests! :)

Edit: for future time-travelling envtest writers, I eventually got what I wanted with the following approach (caveat emptor: I only just wrote this and may yet find issues, your mileage may vary 😄):

	var createdResources []runtime.Object

	// Tests register anything they create; AfterEach tears it all down.
	deleteResourceAfterTest := func(o runtime.Object) {
		createdResources = append(createdResources, o)
	}

	BeforeEach(func() {
		log.Info("resetting created resources list")
		createdResources = nil
	})

	AfterEach(func() {
		// Delete in reverse creation order so dependents are removed
		// before the resources they depend on.
		for i := len(createdResources) - 1; i >= 0; i-- {
			r := createdResources[i]
			key, err := client.ObjectKeyFromObject(r)
			Expect(err).To(Succeed())
			log.Info("deleting resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
			Expect(k8sClient.Delete(ctx, r)).To(Succeed())

			// Namespaces never finish terminating under envtest (there is no
			// kube-controller-manager), so only wait for non-namespace
			// resources to actually disappear.
			_, isNamespace := r.(*corev1.Namespace)
			if !isNamespace {
				log.Info("waiting for resource to disappear", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
				Eventually(func() error {
					return k8sClient.Get(ctx, key, r)
				}, timeout, interval).Should(HaveOccurred())
				log.Info("deleted resource", "namespace", key.Namespace, "name", key.Name, "test", CurrentGinkgoTestDescription().FullTestText)
			}
		}
	})
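
Usage is then to register each object right after creating it, e.g. (hypothetical resource, same assumptions as above):

	cm := &corev1.ConfigMap{ObjectMeta: metav1.ObjectMeta{Name: "test-config", Namespace: "default"}}
	Expect(k8sClient.Create(ctx, cm)).To(Succeed())
	deleteResourceAfterTest(cm)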

@DirectXMan12
Contributor

/kind feature
/priority important-longterm

@k8s-ci-robot k8s-ci-robot added kind/feature Categorizes issue or PR as related to a new feature. priority/important-longterm Important over the long term, but may not be staffed and/or may need multiple releases to complete. labels Jul 1, 2020
@vincepri
Member

/milestone Next

@k8s-ci-robot k8s-ci-robot added this to the Next milestone Jul 22, 2020
@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Oct 20, 2020
@fejta-bot

Stale issues rot after 30d of inactivity.
Mark the issue as fresh with /remove-lifecycle rotten.
Rotten issues close after an additional 30d of inactivity.

If this issue is safe to close now please do so with /close.

Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
/lifecycle rotten

@k8s-ci-robot k8s-ci-robot added lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Nov 19, 2020
@alkar

alkar commented Dec 10, 2020

/remove-lifecycle rotten

@k8s-ci-robot k8s-ci-robot removed the lifecycle/rotten Denotes an issue or PR that has aged beyond stale and will be auto-closed. label Dec 10, 2020
@orirawlings

Here is the work-around I implemented in my test helper functions. Basically, I use the discovery API to optimistically delete all namespaced resources in the namespace I'm trying to delete and then I update the /finalize subresource of the namespace to remove the kubernetes finalizer. The idea here is that it is a (perhaps poor) approximation of what kube-controller-manager would normally do to finalize a terminating namespace.

I brought in k8s.io/client-go in order to access the /finalize subresource and to interact with the discovery API (though I suspect the latter is also possible with the standard controller-runtime client).

import (
	"context"
	"strings"

	. "github.com/onsi/gomega"

	corev1 "k8s.io/api/core/v1"
	apierrors "k8s.io/apimachinery/pkg/api/errors"
	metav1 "k8s.io/apimachinery/pkg/apis/meta/v1"
	"k8s.io/apimachinery/pkg/apis/meta/v1/unstructured"
	"k8s.io/apimachinery/pkg/runtime"
	"k8s.io/apimachinery/pkg/runtime/schema"
	"k8s.io/client-go/kubernetes"
	"sigs.k8s.io/controller-runtime/pkg/client"
)

func deleteAll(objs ...runtime.Object) {
	ctx := context.Background()
	clientGo, err := kubernetes.NewForConfig(testEnv.Config)
	Expect(err).ShouldNot(HaveOccurred())
	for _, obj := range objs {
		Expect(client.IgnoreNotFound(k8sClient.Delete(ctx, obj))).Should(Succeed())

		if ns, ok := obj.(*corev1.Namespace); ok {
			// Normally the kube-controller-manager would handle finalization
			// and garbage collection of namespaces, but with envtest, we aren't
			// running a kube-controller-manager. Instead we're gonna approximate
			// (poorly) the kube-controller-manager by explicitly deleting some
			// resources within the namespace and then removing the `kubernetes`
			// finalizer from the namespace resource so it can finish deleting.
			// Note that any resources within the namespace that we don't
			// successfully delete could reappear if the namespace is ever
			// recreated with the same name.

			// Look up all namespaced resources under the discovery API
			_, apiResources, err := clientGo.Discovery().ServerGroupsAndResources()
			Expect(err).ShouldNot(HaveOccurred())
			namespacedGVKs := make(map[string]schema.GroupVersionKind)
			for _, apiResourceList := range apiResources {
				defaultGV, err := schema.ParseGroupVersion(apiResourceList.GroupVersion)
				Expect(err).ShouldNot(HaveOccurred())
				for _, r := range apiResourceList.APIResources {
					if !r.Namespaced || strings.Contains(r.Name, "/") {
						// skip non-namespaced and subresources
						continue
					}
					gvk := schema.GroupVersionKind{
						Group:   defaultGV.Group,
						Version: defaultGV.Version,
						Kind:    r.Kind,
					}
					if r.Group != "" {
						gvk.Group = r.Group
					}
					if r.Version != "" {
						gvk.Version = r.Version
					}
					namespacedGVKs[gvk.String()] = gvk
				}
			}

			// Delete all namespaced resources in this namespace
			for _, gvk := range namespacedGVKs {
				var u unstructured.Unstructured
				u.SetGroupVersionKind(gvk)
				err := k8sClient.DeleteAllOf(ctx, &u, client.InNamespace(ns.Name))
				Expect(client.IgnoreNotFound(ignoreMethodNotAllowed(err))).ShouldNot(HaveOccurred())
			}

			Eventually(func() error {
				key, err := client.ObjectKeyFromObject(ns)
				if err != nil {
					return err
				}
				if err := k8sClient.Get(ctx, key, ns); err != nil {
					return client.IgnoreNotFound(err)
				}
				// remove `kubernetes` finalizer
				const kubernetes = "kubernetes"
				finalizers := []corev1.FinalizerName{}
				for _, f := range ns.Spec.Finalizers {
					if f != kubernetes {
						finalizers = append(finalizers, f)
					}
				}
				ns.Spec.Finalizers = finalizers

				// We have to use the k8s.io/client-go library here to expose
				// ability to patch the /finalize subresource on the namespace
				_, err = clientGo.CoreV1().Namespaces().Finalize(ns)
				return err
			}, timeout, interval).Should(Succeed())
		}

		Eventually(func() metav1.StatusReason {
			key, _ := client.ObjectKeyFromObject(obj)
			if err := k8sClient.Get(ctx, key, obj); err != nil {
				return apierrors.ReasonForError(err)
			}
			return ""
		}, timeout, interval).Should(Equal(metav1.StatusReasonNotFound))
	}
}

func ignoreMethodNotAllowed(err error) error {
	if err != nil {
		if apierrors.ReasonForError(err) == metav1.StatusReasonMethodNotAllowed {
			return nil
		}
	}
	return err
}
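
A suite would then call this from teardown, e.g. (assuming a testNamespace object created in setup is still in scope):

	AfterEach(func() {
		deleteAll(testNamespace)
	})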

@coderanger
Contributor

If this is a priority for your testing setup, I would strongly recommend using Kind or K3s as your test control plane rather than the default one launched by envtest. That will provide a more complete setup, including kube-controller-manager and the related namespace and gc controllers.

@orirawlings

If this is a priority for your testing setup, I would strongly recommend using Kind or K3s as your test control plane rather than the default one launched by envtest. That will provide a more complete setup, including kube-controller-manager and the related namespace and gc controllers.

I was having a little trouble tracking down documentation for launching kind or k3s clusters from envtest. Is there example documentation anywhere?

Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest is it correct to assume that one would exec kind prior to starting envtest, and then just ensure envtest is passed the correct rest.Config and UseExistingCluster = true?

@coderanger
Contributor

Based on what I'm gleaning from https://pkg.go.dev/sigs.k8s.io/controller-runtime/pkg/envtest is it correct to assume that one would exec kind prior to starting envtest, and then just ensure envtest is passed the correct rest.Config and UseExistingCluster = true?

Yep, that. It will use your default kubeconfig, so set whatever context you need before starting, and then enable UseExistingCluster.
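
For the record, a minimal sketch of that setup, assuming a kind cluster was created out of band (e.g. with kind create cluster) and the current kubeconfig context points at it:

	useExistingCluster := true
	testEnv := &envtest.Environment{
		// Skip launching a local apiserver/etcd and talk to the cluster
		// from the current kubeconfig context instead.
		UseExistingCluster: &useExistingCluster,
	}
	cfg, err := testEnv.Start()
	Expect(err).NotTo(HaveOccurred())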

@fejta-bot

Issues go stale after 90d of inactivity.
Mark the issue as fresh with /remove-lifecycle stale.
Stale issues rot after an additional 30d of inactivity and eventually close.

If this issue is safe to close now please do so with /close.

Send feedback to sig-contributor-experience at kubernetes/community.
/lifecycle stale

@k8s-ci-robot k8s-ci-robot added the lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. label Mar 23, 2021
@coderanger
Contributor

I think we can close this ticket, as running kube-controller-manager from envtest is out of scope and better served by UseExistingCluster as mentioned above (in combination with tools like Kind or K3s, or other minimalist Kubernetes environments).

/close

@k8s-ci-robot
Contributor

@coderanger: Closing this issue.

In response to this:

I think we can close this ticket, as running kube-controller-manager from envtest is out of scope and better served by UseExistingCluster as mentioned above (in combination with tools like Kind or K3s, or other minimalist Kubernetes environments).

/close

Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.

clobaa pushed a commit to clobaa/gonmap that referenced this issue Mar 25, 2021
- and longer timeout
- this doesn't work properly yet
- there's an issue with envtest
- more info: kubernetes-sigs/controller-runtime#880
czunker added a commit to mondoohq/mondoo-operator that referenced this issue Jun 3, 2022
Remove resource cleanup, because it is not supported:
kubernetes-sigs/controller-runtime#880

Implement check for Service Account in workloads.

Signed-off-by: Christian Zunker <christian@mondoo.com>
@camilamacedo86
Member

camilamacedo86 commented Aug 3, 2022

I am re-opening this one since it is not sorted out and was marked priority/important-longterm.
Also, adding the frozen label so the bot does not close it until it is addressed.

@camilamacedo86 camilamacedo86 added lifecycle/frozen Indicates that an issue or PR should not be auto-closed due to staleness. and removed lifecycle/stale Denotes an issue or PR has remained open with no activity and has become stale. labels Aug 3, 2022
@wallrj
Contributor

wallrj commented Oct 26, 2022

The documentation moved:

From: https://book.kubebuilder.io/reference/testing/envtest.html?highlight=envtest#testing-considerations

To: https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation

@detro

detro commented May 31, 2024

Out of curiosity: is there an explanation anywhere of why envtest behaves this way?
I couldn't find it in https://book.kubebuilder.io/reference/envtest.html#namespace-usage-limitation.
The behavior seems odd, and I'm a fan of knowing the why behind things :)

@sbueringer
Member

sbueringer commented May 31, 2024

Wild guess: because we don't run a kube-controller-manager in envtest (which also causes a bunch of other limitations, e.g. no object garbage collection).
