container-runtime and container-runtime-endpoint flags crash kubelet #182

Open
jforman opened this Issue Feb 16, 2018 · 1 comment


jforman commented Feb 16, 2018

The Kubernetes v1.9.3 kubelet, running on the CoreOS beta channel, crashes when started with the flags suggested for rktlet integration.

I tried to follow the getting-started guide by running rktlet on my CoreOS worker nodes and adding the suggested flags to the kubelet [1], but it seems that

--container-runtime=remote \
--container-runtime-endpoint=unix:///var/run/rktlet.sock \

cause the kubelet to crash.
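
For context, a minimal systemd drop-in that adds these flags would look roughly like the following. This is a sketch only: the drop-in path, the kubelet-wrapper invocation, and the image tag are illustrative rather than my exact unit, and the other flags a working kubelet needs are omitted; the two --container-runtime* flags are the ones from the guide.

# Sketch: /etc/systemd/system/kubelet.service.d/20-rktlet.conf
# (all other required kubelet flags omitted for brevity)
[Service]
Environment=KUBELET_IMAGE_TAG=v1.9.3
ExecStart=
ExecStart=/usr/lib/coreos/kubelet-wrapper \
  --container-runtime=remote \
  --container-runtime-endpoint=unix:///var/run/rktlet.sock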


Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: I0216 16:44:44.692495   17477 handler.go:325] Added event &{/system.slice/etcd-member.service 2018-02-16 16:39:15.421687849 +0000 UTC containerCreation {<nil>}}
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: panic: runtime error: invalid memory address or nil pointer dereference
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: [signal SIGSEGV: segmentation violation code=0x1 addr=0x20 pc=0x1906a40]
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: goroutine 321 [running]:
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/rkt.(*rktContainerHandler).Start(0xc4202c9960)
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/container/rkt/handler.go:186 +0x30
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).housekeeping(0xc420120480)
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:429 +0x4b
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]: created by k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager.(*containerData).Start
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net kubelet[17477]:         /go/src/k8s.io/kubernetes/_output/dockerized/go/src/k8s.io/kubernetes/vendor/github.com/google/cadvisor/manager/container.go:106 +0x3f
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net rkt[727]: 2018/02/16 16:44:44 transport: http2Server.HandleStreams failed to read frame: read tcp 127.0.0.1:15441->127.0.0.1:33170: read: connection reset by peer
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net systemd[1]: kubelet.service: Main process exited, code=exited, status=2/INVALIDARGUMENT
Feb 16 16:44:44 corea-worker0.obfuscated.domain.net systemd[1]: kubelet.service: Failed with result 'exit-code'.

[1]: https://github.com/kubernetes-incubator/rktlet/blob/master/docs/getting-started-guide.md#configure-kubernetes-to-use-rktlet

jforman commented Feb 16, 2018

It looks like the flag that causes the crash is --container-runtime. If I change it back to 'rkt', the kubelet does not crash.
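
Concretely, with everything else in the unit unchanged, the kubelet stays up when the flags read:

--container-runtime=rkt \
--container-runtime-endpoint=unix:///var/run/rktlet.sock \

(presumably --container-runtime-endpoint is simply ignored when the runtime is not 'remote').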
