VMI error: failed to find a sourceFile in containerDisk........../disk: no such file or directory #5861

Closed
vaibhavraizada-hcl opened this issue Jun 18, 2021 · 6 comments · Fixed by #5872
Labels
kind/bug triage/accepted Indicates an issue or PR is ready to be actively worked on.

Comments

vaibhavraizada-hcl commented Jun 18, 2021

Is this a BUG REPORT or FEATURE REQUEST?:

/kind bug
What happened:
The VMI failed with the error below:

failed to find a sourceFile in containerDisk containerdisk: Failed to check /proc/1/root/home/docker/node003/docker/devicemapper/mnt/d3575d97a42289078ac894d2002ad8e8d90a8d424a16d8d84bd8f7223e0a31c1/disk for disks: open /proc/1/root/home/docker/node003/docker/devicemapper/mnt/d3575d97a42289078ac894d2002ad8e8d90a8d424a16d8d84bd8f7223e0a31c1/disk: no such file or directory

What you expected to happen:

The VMI to be in the Running state.

How to reproduce it (as minimally and precisely as possible):
I have a CentOS 8 VM which is diskless.
I am trying to execute the attached VMI specification. The VMI is created, but after some time it goes into the Failed state.
When I execute `kubectl describe vmi <vmi_name> -n kubevirt-demo`, I see the above error under 'Events':

[screenshot: kubectl describe vmi output showing the containerDisk error under Events]

Anything else we need to know?:
The VM on which I am trying to create the VMI is diskless. Let me know if my attached VMI specification needs any changes for this.

I am able to successfully run the same VMI specification on another VM which is not diskless.

Environment:

  • KubeVirt version (use virtctl version): 0.35.0
  • Kubernetes version (use kubectl version): 1.21.0
  • VM or VMI specifications: diskless VM
  • Cloud provider or hardware configuration: on-prem
  • OS (e.g. from /etc/os-release): CentOS 8
  • Kernel (e.g. uname -a): Linux node003 4.18.0-240.22.1.el8_3.x86_64 #1 SMP Thu Apr 8 19:01:30 UTC 2021 x86_64 x86_64 x86_64 GNU/Linux
  • Install tools:
  • Others:

vasiliy-ul commented Jun 18, 2021

> KubeVirt version (use virtctl version): 0.35.0

I remember there was a similar issue reported some time ago: #4613. 0.35.0 is a pretty old version; I think the problem was fixed around ~0.37.0. Could you maybe try the latest KubeVirt release, 0.42.1?

@vaibhavraizada-hcl (Author)

@vasiliy-ul I installed version 0.42.1. The previous error is now gone, but I am seeing a new error:

Warning SyncFailed 108s virt-handler server error. command SyncVMI failed: "LibvirtError(Code=1, Domain=10, Message='internal error: process exited while connecting to monitor: 2021-06-18T07:07:55.989747Z qemu-kvm: -blockdev {\"driver\":\"file\",\"filename\":\"/var/run/kubevirt-ephemeral-disks/cloud-init-data/kubevirt-demo/vmi-centos7/noCloud.iso\",\"node-name\":\"libvirt-1-storage\",\"cache\":{\"direct\":true,\"no-flush\":false},\"auto-read-only\":true,\"discard\":\"unmap\"}: Could not open '/var/run/kubevirt-ephemeral-disks/cloud-init-data/kubevirt-demo/vmi-centos7/noCloud.iso': filesystem does not support O_DIRECT')"

@vasiliy-ul (Contributor)

You can try setting cache: writethrough in your VMI spec for the disks. Though according to the docs https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#disk-device-cache it should automatically detect what the underlying fs actually supports.
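
For reference, a minimal sketch of where that setting lives in a VMI spec. The disk/volume names, the demo image, and the memory request are illustrative placeholders, not taken from the reporter's attached spec:

```yaml
apiVersion: kubevirt.io/v1
kind: VirtualMachineInstance
metadata:
  name: vmi-centos7
spec:
  domain:
    devices:
      disks:
      - name: containerdisk
        cache: writethrough   # force the cache mode instead of relying on auto-detection
        disk:
          bus: virtio
      - name: cloudinitdisk
        cache: writethrough   # the cloud-init ISO is the disk hitting the O_DIRECT error here
        disk:
          bus: virtio
    resources:
      requests:
        memory: 1Gi
  volumes:
  - name: containerdisk
    containerDisk:
      image: quay.io/kubevirt/cirros-container-disk-demo   # placeholder image
  - name: cloudinitdisk
    cloudInitNoCloud:
      userData: |
        #cloud-config
```

Setting `cache` explicitly bypasses the automatic detection that probes the backing file with O_DIRECT.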


rmohr commented Jun 18, 2021

> You can try setting cache: writethrough in your VMI spec for the disks. Though according to the docs https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#disk-device-cache it should automatically detect what the underlying fs actually supports.

Maybe we don't check it for all disk types. Definitely sounds like a bug.

rmohr added the triage/accepted and kind/bug labels on Jun 18, 2021

vasiliy-ul commented Jun 18, 2021

I think the issue here is that `SetDriverCacheMode` is called in `preStartHook`, while the ISO is actually generated later in `SyncVMI`. And `checkDirectIOFlag` returns true in case the file is not found:

```go
func checkDirectIOFlag(path string) bool {
	// check if fs where disk.img file is located or block device
	// support direct i/o
	// #nosec No risk for path injection. No information can be exposed to attacker
	f, err := os.OpenFile(path, syscall.O_RDONLY|syscall.O_DIRECT, 0)
	if err != nil && !os.IsNotExist(err) {
		return false
	}
	defer util.CloseIOAndCheckErr(f, nil)
	return true
}
```

So eventually the driver cache mode is not detected for the cloudinit disk.
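
To make the failure mode concrete, here is a small, self-contained sketch (my own illustration of the idea, not the actual change from #5872) of a probe that reports a missing file back to the caller instead of silently treating ENOENT as O_DIRECT support:

```go
// Hypothetical sketch: probe O_DIRECT support only against a file that
// already exists, surfacing ENOENT so the caller can retry after the
// cloud-init ISO has been generated. Linux-only (syscall.O_DIRECT).
package main

import (
	"fmt"
	"os"
	"syscall"
)

// supportsDirectIO reports whether the filesystem backing path accepts
// O_DIRECT opens. Unlike checkDirectIOFlag above, a missing file is
// returned as an error instead of being treated as support.
func supportsDirectIO(path string) (bool, error) {
	f, err := os.OpenFile(path, syscall.O_RDONLY|syscall.O_DIRECT, 0)
	if err != nil {
		if os.IsNotExist(err) {
			return false, err // file not created yet; caller should retry later
		}
		return false, nil // file exists but the fs rejects O_DIRECT (e.g. tmpfs)
	}
	defer f.Close()
	return true, nil
}

func main() {
	// Example path taken from the error message in this thread.
	ok, err := supportsDirectIO("/var/run/kubevirt-ephemeral-disks/cloud-init-data/kubevirt-demo/vmi-centos7/noCloud.iso")
	fmt.Println(ok, err)
}
```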

@vaibhavraizada-hcl (Author)

> You can try setting cache: writethrough in your VMI spec for the disks. Though according to the docs https://kubevirt.io/user-guide/virtual_machines/disks_and_volumes/#disk-device-cache it should automatically detect what the underlying fs actually supports.

I tried this and it works.
After adding cache: writethrough to the VMI spec, it is running successfully.

Thanks a lot!
