panic: fork/exec /usr/sbin/virtqemud: errno 0 #9465
I think your k8s version
UPD: Same as #9441
The error message "Failed to connect socket to '/var/run/libvirt/virtqemud-sock': No such file or directory" indicates that the libvirt daemon is not running or is not accessible. This could be due to changes in the KubeVirt deployment or configuration, or due to changes in the underlying system configuration.
I have the same problem. Can't find this file: /var/run/libvirt/virtqemud-sock
The error about
I would try to run the virt-launcher container and exec
@kvaps, what container runtime are you using in your cluster? Is it docker? I am able to reproduce this error with docker, but it works well with containerd (also likely it works fine with cri-o since it is used in CI).
@vasiliy-ul Did you see any difference between docker and containerd? Can you check /proc/$/status?
@xpivarc, what difference do you mean? Apart from this issue with launching a VM all seems the same.
What process to check?
Mainly the capabilities of the process (parent, or self if you are trying it from bash) and the permissions on that binary.
The process which tries to launch virtqemud.
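A quick way to do that check (a sketch; `capsh` and `getcap` come from the libcap tools and may not be installed in the container image):

```shell
# Show the capability sets of the current shell process.
# CapInh/CapPrm/CapEff/CapBnd are hex bitmasks printed by the kernel.
grep '^Cap' /proc/$$/status

# Translate a mask into capability names, if capsh is available.
command -v capsh >/dev/null && capsh --decode=0000000000000400 || true

# Check whether the binary itself carries file capabilities.
command -v getcap >/dev/null && getcap /usr/sbin/virtqemud || true
```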
I checked:

```
43,44c43,44
< CapPrm: 0000000000000400
< CapEff: 0000000000000400
---
> CapPrm: 0000000000000000
> CapEff: 0000000000000000
56,57c56,57
```

This is
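For reference, those masks decode by hand: each capability is one bit, and `CAP_NET_BIND_SERVICE` is capability number 10, so a mask of `0x400` (= `1 << 10`) contains exactly that single capability. A minimal sketch:

```shell
# List which bits are set in a capability mask from /proc/<pid>/status.
mask=0x0000000000000400
for bit in $(seq 0 63); do
    if [ $(( (mask >> bit) & 1 )) -eq 1 ]; then
        echo "bit $bit is set"   # bit 10 == CAP_NET_BIND_SERVICE
    fi
done
# prints: bit 10 is set
```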
@vasiliy-ul Thank you!
The pod does have this capability requested (not on the pod, but in the container `securityContext`):

```yaml
securityContext:
  allowPrivilegeEscalation: false
  capabilities:
    add:
    - NET_BIND_SERVICE
    drop:
    - ALL
  privileged: false
  runAsGroup: 107
  runAsNonRoot: true
  runAsUser: 107
```

In both cases, I used the same KubeVirt
Well, in the case of docker we still see the capability in the bounding set, so I don't think docker ignored it. In this case the file capability is ignored (almost as if the runtime executed a binary that does not have the capability set), but I can't imagine why it would be. I wonder what type of fs is used here? For the
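The fs question matters because file capabilities are stored in the `security.capability` extended attribute on the binary, so both the attribute and the filesystem backing it can be inspected (a sketch; `getcap` is from libcap and may not be installed, and the virtqemud path is taken from the error above):

```shell
# File capabilities live in the security.capability xattr; getcap reads it.
command -v getcap >/dev/null && getcap /usr/sbin/virtqemud || true

# Show the filesystem type backing the directory; overlayfs image layers
# can behave differently from a plain ext4/xfs mount here.
stat -f -c %T /usr/sbin
```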
Hm... After poking around in the moby source code, I found this interesting commit: moby/moby@0d9a37d. More precisely, this code snippet: https://github.com/moby/moby/blob/7c93e4a09be1a11012ecba0dc612115cd4a79233/oci/oci.go#L30-L36.
Comparing to containerd:
Take a look at moby/moby#45491 (comment)
It seems to me that, actually, docker handles that correctly. WDYT?
Issues go stale after 90d of inactivity. If this issue is safe to close now please do so with /lifecycle stale
@vasiliy-ul they fixed this in moby: moby/moby#45491 (PR moby/moby#45511). Any chance this can get pulled into KubeVirt to resolve?
@k8scoder192
Stale issues rot after 30d of inactivity. If this issue is safe to close now please do so with /lifecycle rotten
Rotten issues close after 30d of inactivity. /close
@kubevirt-bot: Closing this issue. In response to this:
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
What happened:
KubeVirt v0.59.0 enabled rootless mode by default. This makes my VMs unable to start:
What you expected to happen:
VM able to start without errors
How to reproduce it (as minimally and precisely as possible):
Not sure what exactly is wrong; I tried to compile a version without patches and faced the same behavior. The `Root` feature gate is not enabled.

Additional context:
When I enable the `Root` feature gate, everything starts working as it should.

Environment:
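For reference, the workaround mentioned above: the `Root` feature gate is enabled through the KubeVirt CR (a sketch, assuming the standard install in the `kubevirt` namespace):

```yaml
apiVersion: kubevirt.io/v1
kind: KubeVirt
metadata:
  name: kubevirt
  namespace: kubevirt
spec:
  configuration:
    developerConfiguration:
      featureGates:
        - Root
```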
- KubeVirt version: v0.59.0-dirty
- Kubernetes version: v1.23.17
- Kernel (output of `uname -a`): 5.15.0-25-generic