reconciler: updateDevicePath() panic: invalid memory address or nil pointer dereference #86722
1.13 is very old. Is it possible to try a recent release and see if the panic persists? Thanks
/remove-kind bug
what patch version is this? is your API server older than 1.13? try matching the kubelet and api-server versions. the minimum version in the support skew is 1.15.x, so please upgrade. there are only two changes in
@neolit123: Those labels are not set on the issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
The version is 1.13.0, and it merges the bug fix from 1.13.6.
< 1.15 is out of support at this point, so it would help if you could confirm whether the issue is still present in versions within the support skew. Is there anything else you can tell us?
given this call passes:
(abdda3f) i'm going to assume that what is causing the panic on this line is a nil pointer. looking at the backtrace in the OP and the history of the file, i don't see any changes that could have fixed such a panic. @kubernetes/sig-storage-bugs
In kubelet.go, kubeClient is passed from:
Earlier there is a check:
I am thinking of the following fix:
@tedyu
@h4ghhh
It seems kubeDeps.KubeClient might be nil in case of error.
I don't find such a log... The whole system was upgrading. The apiserver had not been working at that time, but the kubelet started running first.
Ah, is it possible that the kubelet was running in standalone mode? If the kubelet wasn't running in standalone mode, I'm not seeing how the client could be nil.
+1
Yes, the kubelet was running in standalone mode, then?
Only rc.updateDevicePath() uses KubeClient. I am open to not running the reconciler if KubeClient is nil (see #86795).
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /close. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
Rotten issues close after 30d of inactivity. Send feedback to sig-testing, kubernetes/test-infra and/or fejta.
@fejta-bot: Closing this issue. In response to this:
What happened:
The kubelet's reconciler panics on start.
```
4773 reconciler.go:154] Reconciler: start to sync state
E1211 00:37:16.826560   84773 runtime.go:69] Observed a panic: "invalid memory address or nil pointer dereference" (runtime error: invalid memory address or nil pointer dereference)
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:76
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:65
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/runtime/runtime.go:51
/usr/local/go/src/runtime/asm_amd64.s:522
/usr/local/go/src/runtime/panic.go:513
/usr/local/go/src/runtime/panic.go:82
/usr/local/go/src/runtime/signal_unix.go:390
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:563
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:600
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:419
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:330
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:155
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:133
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:134
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/k8s.io/apimachinery/pkg/util/wait/wait.go:88
/go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/pkg/kubelet/volumemanager/reconciler/reconciler.go:143
/usr/local/go/src/runtime/asm_amd64.s:1333
```
What you expected to happen:
No panic.
How to reproduce it (as minimally and precisely as possible):
I don't know...
Anything else we need to know?:
Environment:
- Kubernetes version (use `kubectl version`): 1.13
- OS (e.g: `cat /etc/os-release`):
- Kernel (e.g. `uname -a`):

/sig node
/sig storage
/kind bug