panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Service #10015
Comments
This issue is currently awaiting triage. If Ingress contributors determine this is a relevant issue, they will accept it by applying the triage/accepted label. Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
/remove-kind bug
Hi, firstly, can you kindly answer all the questions that are visible in the template but were skipped? That info may provide context and details that will help the analysis. Importantly, if you can write a step-by-step guide that anyone can copy/paste on a kind or minikube cluster, it will help make progress on this issue.
Hi there, I tried to provide as much information as I could, and I will keep updating it as more information becomes available to me. Thanks.
Could you please try upgrading your ingress-nginx controller to the latest version?
Hi there, we do plan to upgrade, but we would also like to understand the root cause to make sure it has been fixed in more recent versions. Thank you.
In fact, version 1.4 is no longer supported, so I hope you can upgrade before we confirm again.
This is stale, but we won't close it automatically; just bear in mind the maintainers may be busy with other tasks and will reach your issue ASAP. If you have any question or request to prioritize this, please reach out.
I think this problem is related to the informer shutting down for some reason and pushing this type of message.
I've also experienced this issue, though I don't have a good way to reproduce it. |
I encountered something similar when my Pod's ClusterRole was missing the watch permission on Pods (here, probably Services).
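If you want to rule out RBAC, one quick check is kubectl auth can-i; the service account and namespace below are the defaults from the official manifests, so adjust them to your install:

kubectl auth can-i watch services --as=system:serviceaccount:ingress-nginx:ingress-nginx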
What happened:
We noticed several occasions when pods panicked with "interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Service". Checking the original source code, it looks like the type assertion doesn't handle the case where obj could be a cache.DeletedFinalStateUnknown tombstone. I am sorry if this issue has already been reported by others.
Thanks.
E0525 04:03:38.226398 7 runtime.go:79] Observed a panic: &runtime.TypeAssertionError{_interface:(*runtime._type)(0x17c10e0), concrete:(*runtime._type)(0x18a3e00), asserted:(*runtime._type)(0x1a23800), missingMethod:""} (interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Service)
goroutine 125 [running]:
k8s.io/apimachinery/pkg/util/runtime.logPanic({0x1822d40?, 0xc003232660})
k8s.io/apimachinery@v0.25.2/pkg/util/runtime/runtime.go:75 +0x99
k8s.io/apimachinery/pkg/util/runtime.HandleCrash({0x0, 0x0, 0x203020203030202?})
k8s.io/apimachinery@v0.25.2/pkg/util/runtime/runtime.go:49 +0x75
panic({0x1822d40, 0xc003232660})
runtime/panic.go:884 +0x212
k8s.io/ingress-nginx/internal/ingress/controller/store.New.func21({0x18a3e00?, 0xc001d820e0?})
k8s.io/ingress-nginx/internal/ingress/controller/store/store.go:772 +0xde
k8s.io/client-go/tools/cache.ResourceEventHandlerFuncs.OnDelete(...)
k8s.io/client-go@v0.25.2/tools/cache/controller.go:246
k8s.io/client-go/tools/cache.(*processorListener).run.func1()
k8s.io/client-go@v0.25.2/tools/cache/shared_informer.go:820 +0xaf
k8s.io/apimachinery/pkg/util/wait.BackoffUntil.func1(0x0?)
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:157 +0x3e
k8s.io/apimachinery/pkg/util/wait.BackoffUntil(0xc000659f38?, {0x1ccc140, 0xc00056ea20}, 0x1, 0xc000520cc0)
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:158 +0xb6
k8s.io/apimachinery/pkg/util/wait.JitterUntil(0x0?, 0x3b9aca00, 0x0, 0x0?, 0xc000659f88?)
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:135 +0x89
k8s.io/apimachinery/pkg/util/wait.Until(...)
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:92
k8s.io/client-go/tools/cache.(*processorListener).run(0xc0007b3a80?)
k8s.io/client-go@v0.25.2/tools/cache/shared_informer.go:812 +0x6b
k8s.io/apimachinery/pkg/util/wait.(*Group).Start.func1()
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:75 +0x5a
created by k8s.io/apimachinery/pkg/util/wait.(*Group).Start
k8s.io/apimachinery@v0.25.2/pkg/util/wait/wait.go:73 +0x85
panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Service [recovered]
panic: interface conversion: interface {} is cache.DeletedFinalStateUnknown, not *v1.Service
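For reference, the usual client-go pattern for a delete handler unwraps the tombstone before asserting the concrete type. A minimal sketch of that pattern, with illustrative names rather than the controller's actual code:

package main

import (
	corev1 "k8s.io/api/core/v1"
	"k8s.io/client-go/tools/cache"
)

// onServiceDelete tolerates the cache.DeletedFinalStateUnknown tombstone
// that an informer delivers when its watch missed the actual delete event.
func onServiceDelete(obj interface{}) {
	svc, ok := obj.(*corev1.Service)
	if !ok {
		// The object may be a tombstone wrapping the last known state.
		tombstone, ok := obj.(cache.DeletedFinalStateUnknown)
		if !ok {
			return // neither a Service nor a tombstone; ignore
		}
		svc, ok = tombstone.Obj.(*corev1.Service)
		if !ok {
			return // the tombstone carried something other than a Service
		}
	}
	_ = svc // proceed with normal delete handling for the Service
}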
What you expected to happen:
The delete handler should recognize the cache.DeletedFinalStateUnknown tombstone and handle it gracefully instead of panicking and crashing the pod.
NGINX Ingress controller version (exec into the pod and run nginx-ingress-controller --version): the controller-v1.4.0 tag
Kubernetes version (use kubectl version): v1.25.7
Environment:
- Cloud provider or hardware configuration: GCP
- OS (e.g. from /etc/os-release): not provided
- Kernel (e.g. uname -a): not provided
- Install tools: it's a GKE managed Kubernetes cluster
- Basic cluster related info (kubectl version, kubectl get nodes -o wide): not provided
How was the ingress-nginx-controller installed (helm ls -A | grep -i ingress, helm -n <ingresscontrollernamespace> get values <helmreleasename>):
We didn't use Helm to install the chart; instead we applied a YAML manifest.
Current state of the controller (kubectl describe ingressclasses, kubectl -n <ingresscontrollernamespace> get all -o wide, kubectl -n <ingresscontrollernamespace> describe po <ingresscontrollerpodname>, kubectl -n <ingresscontrollernamespace> describe svc <ingresscontrollerservicename>): not provided
Current state of the ingress object, if applicable (kubectl -n <appnamespace> get all,ing -o wide, kubectl -n <appnamespace> describe ing <ingressname>): not provided
Others (kubectl describe ... of any custom configmap(s) created and in use): not provided
Below is the code snippet that I got from ingress-nginx/internal/ingress/controller/store/store.go at tag controller-v1.4.0, where I think the panic happens.
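From the stack trace, store.go:772 at that tag is a DeleteFunc doing an unchecked type assertion; reconstructed from the panic message rather than copied verbatim, it looks roughly like:

DeleteFunc: func(obj interface{}) {
	svc := obj.(*corev1.Service) // panics when obj is a cache.DeletedFinalStateUnknown tombstone
	...
},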
How to reproduce this issue:
Anything else we need to know:
It's not a critical issue for us as the pod can restore after crashing.