Virt-handler fails to start in a kind k8s cluster on x86_64 #7638
Comments
Hi @zhlhahaha,
I had the same problem. I ran the init container manually and got an error:
$ docker run quay.io/kubevirt/virt-launcher:v0.53.0 /bin/sh -c node-labeller.sh
standard_init_linux.go:228: exec user process caused: operation not permitted
Worth noting: my environment is an Ubuntu 20.04 virtual machine.
Linux XXX 5.13.0-40-generic #45~20.04.1-Ubuntu SMP Mon Apr 4 09:38:31 UTC 2022 x86_64 x86_64 x86_64 GNU/Linux
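Since the thread later points at AppArmor on Ubuntu as the likely culprit, a quick host-side check can be sketched as follows. This is a hedged diagnostic, not a confirmed fix; the commented-out docker invocation at the end is an assumption about how one would re-test the failing init container unconfined, using Docker's documented `--security-opt apparmor=unconfined` flag.

```shell
# Check whether AppArmor is active on this host. The "operation not
# permitted" from the init container is consistent with AppArmor
# blocking the exec on Ubuntu hosts.
if [ -f /sys/module/apparmor/parameters/enabled ] && \
   grep -q Y /sys/module/apparmor/parameters/enabled 2>/dev/null; then
  apparmor_status="enabled"
else
  apparmor_status="disabled or absent"
fi
echo "AppArmor: ${apparmor_status}"

# If AppArmor is enabled, re-running the failing init container
# unconfined is a quick way to confirm it as the culprit
# (shown for reference only; requires Docker on the host):
#   docker run --rm --security-opt apparmor=unconfined \
#     quay.io/kubevirt/virt-launcher:v0.53.0 /bin/sh -c node-labeller.sh
```

If the command succeeds when run unconfined but fails otherwise, that points at the host's AppArmor policy rather than at KubeVirt itself.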
Switched to version v0.51.0: docker run --rm quay.io/kubevirt/virt-launcher:v0.51.0 "/bin/sh -c node-labeller.sh"
{"component":"virt-launcher","level":"info","msg":"Collected all requested hook sidecar sockets","pos":"manager.go:76","timestamp":"2022-05-13T06:47:53.270416Z"}
{"component":"virt-launcher","level":"info","msg":"Sorted all collected sidecar sockets per hook point based on their priority and name: map[]","pos":"manager.go:79","timestamp":"2022-05-13T06:47:53.270487Z"}
panic: open /var/run/libvirt/libvirtd.conf: no such file or directory
goroutine 1 [running]:
main.main()
cmd/virt-launcher/virt-launcher.go:421 +0x170e
{"component":"virt-launcher","level":"info","msg":"Reaped pid 15 with status 512","pos":"virt-launcher.go:554","timestamp":"2022-05-13T06:47:53.284728Z"}
{"component":"virt-launcher","level":"error","msg":"dirty virt-launcher shutdown: exit-code 2","pos":"virt-launcher.go:572","timestamp":"2022-05-13T06:47:53.284849Z"}
{"component":"virt-launcher","level":"error","msg":"error when checking for istio-proxy presence","pos":"virt-launcher.go:662","reason":"Get \"http://localhost:15021/healthz/ready\": dial tcp 127.0.0.1:15021: connect: connection refused","timestamp":"2022-05-13T06:47:53.286784Z"}
{"component":"virt-launcher","level":"error","msg":"error when checking for istio-proxy presence","pos":"virt-launcher.go:662","reason":"Get \"http://localhost:15021/healthz/ready\": dial tcp 127.0.0.1:15021: connect: connection refused","timestamp":"2022-05-13T06:47:53.298675Z"}
{"component":"virt-launcher","level":"error","msg":"error when checking for istio-proxy presence","pos":"virt-launcher.go:662","reason":"Get \"http://localhost:15021/healthz/ready\": dial tcp 127.0.0.1:15021: connect: connection refused","timestamp":"2022-05-13T06:47:53.354727Z"}
{"component":"virt-launcher","level":"error","msg":"error when checking for istio-proxy presence","pos":"virt-launcher.go:662","reason":"Get \"http://localhost:15021/healthz/ready\": dial tcp 127.0.0.1:15021: connect: connection refused","timestamp":"2022-05-13T06:47:53.623807Z"}
Got exactly the same issue as @zhlhahaha and @drawdy mentioned.
I found a solution. Virt-handler is now working on my Kubernetes after these steps:
I switched to CentOS 7.9 and everything works fine.
Verified. @xpivarc, maybe we can add this to the documentation.
It seems that the failure only happens on Ubuntu. |
I would support it. @zhlhahaha @vasiliy-ul Do you think there is anything we can improve to streamline the interaction with AppArmor?
@xpivarc, we could potentially leverage the AppArmor support in k8s: https://kubernetes.io/docs/tutorials/security/apparmor/ I.e. by using the annotation:
# apply apparmor profile to container
container.apparmor.security.beta.kubernetes.io/<container_name>: localhost/<apparmor-profile-name>
# run unconfined
container.apparmor.security.beta.kubernetes.io/<container_name>: unconfined
This approach has one implication though: a pod with that annotation can only be scheduled on an AppArmor-enabled node, so it may cause issues with mixed clusters. Also, writing an AppArmor profile for libvirt/qemu is a bit tricky. AFAIK libvirt handles AppArmor for qemu internally.
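To make the annotation concrete, a minimal pod spec using the unconfined variant could look like the sketch below. The pod name is hypothetical; the container name `compute` is an assumption about how the virt-launcher pod names its main container, and the annotation key follows the Kubernetes AppArmor tutorial linked above.

```yaml
apiVersion: v1
kind: Pod
metadata:
  name: virt-launcher-example   # hypothetical name for illustration
  annotations:
    # run the "compute" container without AppArmor confinement
    container.apparmor.security.beta.kubernetes.io/compute: unconfined
spec:
  containers:
  - name: compute
    image: quay.io/kubevirt/virt-launcher:v0.53.0
```

In practice KubeVirt would have to stamp this annotation onto the virt-launcher pods it generates, which is exactly the kind of integration discussed in the following comment.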
@vasiliy-ul Thanks for the write-up. So the only downside I see is the lack of AppArmor support in Kubernetes, or mixing AppArmor and SELinux; I am not sure how common or useful it is to run both in one cluster. I think it would be good if KubeVirt integrates at least with what Kubernetes supports, as I see many issues like this one.
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale |
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten |
Rotten issues close after 30d of inactivity. /close |
@kubevirt-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
I followed the instructions in https://kubevirt.io/quickstart_kind/ to create a kind k8s cluster and then tried to deploy KubeVirt on it, but virt-handler failed to start during the init process, as follows:
I could not find any helpful logs. Here is the output from
kubectl describe pods
What you expected to happen:
KubeVirt starts successfully in the kind k8s cluster.
How to reproduce it (as minimally and precisely as possible):
Just follow the instructions in https://kubevirt.io/quickstart_kind/
Environment:
- KubeVirt version (virtctl version): v0.52.0
- Kubernetes version (kubectl version): 1.23.4
- OS kernel (uname -a): Linux dell 4.18.0-25-generic