
Support qcow backing file #5573

Merged: 8 commits into cloud-hypervisor:main from feature/qcow-backing-file, Jul 19, 2023

Conversation

yukiiiteru
Contributor

Currently, the qcow format does not support backing files. I noticed that the qcow implementation comes from CrosVM, and CrosVM now supports qcow backing files, so I backported that support in this PR.

This PR does the following:

  • Merge the `qcow`, `vhdx` and `block_util` crates into a single `block` crate
  • Add a `FixedVhd` type for `BlockBackend`
  • Introduce a `BlockBackend` trait for generic ops (e.g., read, write and seek); the backing file can be any type that implements this trait
  • Backport the qcow backing file feature
  • Add an integration test case for qcow backing files

I also fixed a bug I found while testing `QcowHeader` with a backing file name 😂 some values that don't belong to v2 were written into the v2 header, and since those values are never read back, the previous unit tests always passed.
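For illustration, a minimal sketch of that bug class, with hypothetical field names rather than the actual `QcowHeader` layout: v3-only fields must be gated on the header version during serialization, otherwise they silently corrupt a v2 header and, because nothing reads them back, no test catches it.

```rust
use std::io::{self, Write};

// Hypothetical header layout for illustration only; the real QcowHeader
// has many more fields.
struct QcowHeader {
    version: u32,
    backing_file_offset: u64,
    backing_file_size: u32,
    // v3-only field (illustrative):
    incompatible_features: u64,
}

impl QcowHeader {
    fn write_to<W: Write>(&self, w: &mut W) -> io::Result<()> {
        w.write_all(&self.version.to_be_bytes())?;
        w.write_all(&self.backing_file_offset.to_be_bytes())?;
        w.write_all(&self.backing_file_size.to_be_bytes())?;
        // The fix: only emit v3 fields for v3 headers. Writing them
        // unconditionally corrupts a v2 header, and since they are never
        // read back, unit tests keep passing.
        if self.version >= 3 {
            w.write_all(&self.incompatible_features.to_be_bytes())?;
        }
        Ok(())
    }
}
```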

@yukiiiteru yukiiiteru requested a review from a team as a code owner July 6, 2023 13:38

[features]
default = []
authors = ["The Cloud Hypervisor Authors", "The Chromium OS Authors"]
Contributor Author


I don't know if there is any problem with updating this here 🤔

@yukiiiteru yukiiiteru force-pushed the feature/qcow-backing-file branch 3 times, most recently from 589a706 to 9e3b671, July 7, 2023 08:33
@yukiiiteru
Contributor Author

I made too many careless mistakes with the file names in the integration tests 😢

@yukiiiteru yukiiiteru force-pushed the feature/qcow-backing-file branch 2 times, most recently from 69b36ae to ee259be, July 7, 2023 10:43
@yukiiiteru
Contributor Author

It seems that the integration tests behave differently on different architectures 😢

Tests on x86_64 always download these images (maybe they are deleted somewhere), but tests on aarch64 don't download them because they already exist on the machine.

Is there any way to delete them on the test machine, e.g., via a web shell?

Or how about just deleting them at the beginning of the test?

The following are the logs from the tests:

aarch64
[2023-07-07T10:46:18.960Z] + update_workloads
[2023-07-07T10:46:18.960Z] + cp scripts/sha1sums-aarch64 /root/workloads
[2023-07-07T10:46:18.960Z] + BIONIC_OS_IMAGE_DOWNLOAD_NAME=bionic-server-cloudimg-arm64.img
[2023-07-07T10:46:18.960Z] + BIONIC_OS_IMAGE_DOWNLOAD_URL=https://cloud-hypervisor.azureedge.net/bionic-server-cloudimg-arm64.img
[2023-07-07T10:46:18.960Z] + BIONIC_OS_DOWNLOAD_IMAGE=/root/workloads/bionic-server-cloudimg-arm64.img
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/bionic-server-cloudimg-arm64.img ']'
[2023-07-07T10:46:18.960Z] + BIONIC_OS_RAW_IMAGE_NAME=bionic-server-cloudimg-arm64.raw
[2023-07-07T10:46:18.960Z] + BIONIC_OS_RAW_IMAGE=/root/workloads/bionic-server-cloudimg-arm64.raw
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/bionic-server-cloudimg-arm64.raw ']'
[2023-07-07T10:46:18.960Z] + BIONIC_OS_QCOW2_IMAGE_UNCOMPRESSED_NAME=bionic-server-cloudimg-arm64.qcow2
[2023-07-07T10:46:18.960Z] + BIONIC_OS_QCOW2_UNCOMPRESSED_IMAGE=/root/workloads/bionic-server-cloudimg-arm64.qcow2
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/bionic-server-cloudimg-arm64.qcow2 ']'
[2023-07-07T10:46:18.960Z] + FOCAL_OS_RAW_IMAGE_NAME=focal-server-cloudimg-arm64-custom-20210929-0.raw
[2023-07-07T10:46:18.960Z] + FOCAL_OS_RAW_IMAGE_DOWNLOAD_URL=https://cloud-hypervisor.azureedge.net/focal-server-cloudimg-arm64-custom-20210929-0.raw
[2023-07-07T10:46:18.960Z] + FOCAL_OS_RAW_IMAGE=/root/workloads/focal-server-cloudimg-arm64-custom-20210929-0.raw
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-arm64-custom-20210929-0.raw ']'
[2023-07-07T10:46:18.960Z] + FOCAL_OS_QCOW2_IMAGE_UNCOMPRESSED_NAME=focal-server-cloudimg-arm64-custom-20210929-0.qcow2
[2023-07-07T10:46:18.960Z] + FOCAL_OS_QCOW2_IMAGE_UNCOMPRESSED_DOWNLOAD_URL=https://cloud-hypervisor.azureedge.net/focal-server-cloudimg-arm64-custom-20210929-0.qcow2
[2023-07-07T10:46:18.960Z] + FOCAL_OS_QCOW2_UNCOMPRESSED_IMAGE=/root/workloads/focal-server-cloudimg-arm64-custom-20210929-0.qcow2
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-arm64-custom-20210929-0.qcow2 ']'
[2023-07-07T10:46:18.960Z] + FOCAL_OS_QCOW2_IMAGE_BACKING_FILE_NAME=focal-server-cloudimg-arm64-custom-20210929-0-backing.qcow2
[2023-07-07T10:46:18.960Z] + FOCAL_OS_QCOW2_BACKING_FILE_IMAGE=/root/workloads/focal-server-cloudimg-arm64-custom-20210929-0-backing.qcow2
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-arm64-custom-20210929-0-backing.qcow2 ']'
[2023-07-07T10:46:18.960Z] + JAMMY_OS_RAW_IMAGE_NAME=jammy-server-cloudimg-arm64-custom-20220329-0.raw
[2023-07-07T10:46:18.960Z] + JAMMY_OS_RAW_IMAGE_DOWNLOAD_URL=https://cloud-hypervisor.azureedge.net/jammy-server-cloudimg-arm64-custom-20220329-0.raw
[2023-07-07T10:46:18.960Z] + JAMMY_OS_RAW_IMAGE=/root/workloads/jammy-server-cloudimg-arm64-custom-20220329-0.raw
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/jammy-server-cloudimg-arm64-custom-20220329-0.raw ']'
[2023-07-07T10:46:18.960Z] + JAMMY_OS_QCOW2_IMAGE_UNCOMPRESSED_NAME=jammy-server-cloudimg-arm64-custom-20220329-0.qcow2
[2023-07-07T10:46:18.960Z] + JAMMY_OS_QCOW2_IMAGE_UNCOMPRESSED_DOWNLOAD_URL=https://cloud-hypervisor.azureedge.net/jammy-server-cloudimg-arm64-custom-20220329-0.qcow2
[2023-07-07T10:46:18.960Z] + JAMMY_OS_QCOW2_UNCOMPRESSED_IMAGE=/root/workloads/jammy-server-cloudimg-arm64-custom-20220329-0.qcow2
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/jammy-server-cloudimg-arm64-custom-20220329-0.qcow2 ']'
[2023-07-07T10:46:18.960Z] + ALPINE_MINIROOTFS_URL=http://dl-cdn.alpinelinux.org/alpine/v3.11/releases/aarch64/alpine-minirootfs-3.11.3-aarch64.tar.gz
[2023-07-07T10:46:18.960Z] + ALPINE_MINIROOTFS_TARBALL=/root/workloads/alpine-minirootfs-aarch64.tar.gz
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/alpine-minirootfs-aarch64.tar.gz ']'
[2023-07-07T10:46:18.960Z] + ALPINE_INITRAMFS_IMAGE=/root/workloads/alpine_initramfs.img
[2023-07-07T10:46:18.960Z] + '[' '!' -f /root/workloads/alpine_initramfs.img ']'
[2023-07-07T10:46:18.960Z] + pushd /root/workloads
[2023-07-07T10:46:18.960Z] ~/workloads /cloud-hypervisor
x86_64
[2023-07-07T10:50:01.299Z] + cp scripts/sha1sums-x86_64 /root/workloads
[2023-07-07T10:50:01.299Z] ++ curl --silent https://api.github.com/repos/cloud-hypervisor/rust-hypervisor-firmware/releases/latest
[2023-07-07T10:50:01.299Z] ++ grep browser_download_url
[2023-07-07T10:50:01.299Z] ++ grep -o 'https://.*[^ "]'
[2023-07-07T10:50:01.552Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:01.552Z] + FW_URL=https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.4.2/hypervisor-fw
[2023-07-07T10:50:01.552Z] + FW=/root/workloads/hypervisor-fw
[2023-07-07T10:50:01.552Z] + '[' '!' -f /root/workloads/hypervisor-fw ']'
[2023-07-07T10:50:01.552Z] + pushd /root/workloads
[2023-07-07T10:50:01.552Z] + wget --quiet https://github.com/cloud-hypervisor/rust-hypervisor-firmware/releases/download/0.4.2/hypervisor-fw
[2023-07-07T10:50:02.111Z] /cloud-hypervisor
[2023-07-07T10:50:02.111Z] 
[2023-07-07T10:50:02.111Z] real	0m0.537s
[2023-07-07T10:50:02.111Z] user	0m0.003s
[2023-07-07T10:50:02.111Z] sys	0m0.006s
[2023-07-07T10:50:02.111Z] + popd
[2023-07-07T10:50:02.111Z] ++ curl --silent https://api.github.com/repos/cloud-hypervisor/edk2/releases/latest
[2023-07-07T10:50:02.111Z] ++ grep browser_download_url
[2023-07-07T10:50:02.111Z] ++ grep -o 'https://.*[^ "]'
[2023-07-07T10:50:02.364Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:02.364Z] + OVMF_FW_URL=https://github.com/cloud-hypervisor/edk2/releases/download/ch-92c79b2901/CLOUDHV.fd
[2023-07-07T10:50:02.364Z] + OVMF_FW=/root/workloads/CLOUDHV.fd
[2023-07-07T10:50:02.364Z] + '[' '!' -f /root/workloads/CLOUDHV.fd ']'
[2023-07-07T10:50:02.364Z] + pushd /root/workloads
[2023-07-07T10:50:02.364Z] + wget --quiet https://github.com/cloud-hypervisor/edk2/releases/download/ch-92c79b2901/CLOUDHV.fd
[2023-07-07T10:50:02.921Z] 
[2023-07-07T10:50:02.921Z] real	0m0.514s
[2023-07-07T10:50:02.921Z] user	0m0.005s
[2023-07-07T10:50:02.921Z] sys	0m0.014s
[2023-07-07T10:50:02.921Z] + popd
[2023-07-07T10:50:02.921Z] + FOCAL_OS_IMAGE_NAME=focal-server-cloudimg-amd64-custom-20210609-0.qcow2
[2023-07-07T10:50:02.921Z] + FOCAL_OS_IMAGE_URL=https://cloud-hypervisor.azureedge.net/focal-server-cloudimg-amd64-custom-20210609-0.qcow2
[2023-07-07T10:50:02.921Z] + FOCAL_OS_IMAGE=/root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.qcow2
[2023-07-07T10:50:02.921Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.qcow2 ']'
[2023-07-07T10:50:02.921Z] + pushd /root/workloads
[2023-07-07T10:50:02.921Z] + wget --quiet https://cloud-hypervisor.azureedge.net/focal-server-cloudimg-amd64-custom-20210609-0.qcow2
[2023-07-07T10:50:02.921Z] /cloud-hypervisor
[2023-07-07T10:50:02.921Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:24.768Z] 
[2023-07-07T10:50:24.768Z] real	0m19.010s
[2023-07-07T10:50:24.768Z] user	0m1.261s
[2023-07-07T10:50:24.768Z] sys	0m2.893s
[2023-07-07T10:50:24.768Z] + popd
[2023-07-07T10:50:24.768Z] + FOCAL_OS_RAW_IMAGE_NAME=focal-server-cloudimg-amd64-custom-20210609-0.raw
[2023-07-07T10:50:24.768Z] + FOCAL_OS_RAW_IMAGE=/root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.raw
[2023-07-07T10:50:24.768Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.raw ']'
[2023-07-07T10:50:24.768Z] + pushd /root/workloads
[2023-07-07T10:50:24.768Z] + qemu-img convert -p -f qcow2 -O raw focal-server-cloudimg-amd64-custom-20210609-0.qcow2 focal-server-cloudimg-amd64-custom-20210609-0.raw
[2023-07-07T10:50:24.768Z] /cloud-hypervisor
[2023-07-07T10:50:24.768Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:24.768Z]     (0.00/100%)
......
[2023-07-07T10:50:24.768Z] /cloud-hypervisor
[2023-07-07T10:50:24.768Z] 
[2023-07-07T10:50:24.768Z] real	0m0.848s
[2023-07-07T10:50:24.768Z] user	0m0.065s
[2023-07-07T10:50:24.768Z] sys	0m1.085s
[2023-07-07T10:50:24.768Z] + popd
[2023-07-07T10:50:24.768Z] + FOCAL_OS_QCOW_BACKING_FILE_IMAGE_NAME=focal-server-cloudimg-amd64-custom-20210609-0-backing.qcow2
[2023-07-07T10:50:24.768Z] + FOCAL_OS_QCOW_BACKING_FILE_IMAGE=/root/workloads/focal-server-cloudimg-amd64-custom-20210609-0-backing.qcow2
[2023-07-07T10:50:24.768Z] + '[' '!' -f /root/workloads/focal-server-cloudimg-amd64-custom-20210609-0-backing.qcow2 ']'
[2023-07-07T10:50:24.768Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:24.768Z] + pushd /root/workloads
[2023-07-07T10:50:24.768Z] + qemu-img create -f qcow2 -b /root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.qcow2 -F qcow2 focal-server-cloudimg-amd64-custom-20210609-0-backing.qcow2
[2023-07-07T10:50:24.768Z] Formatting 'focal-server-cloudimg-amd64-custom-20210609-0-backing.qcow2', fmt=qcow2 size=2361393152 backing_file=/root/workloads/focal-server-cloudimg-amd64-custom-20210609-0.qcow2 backing_fmt=qcow2 cluster_size=65536 lazy_refcounts=off refcount_bits=16
[2023-07-07T10:50:24.768Z] /cloud-hypervisor
[2023-07-07T10:50:24.768Z] 
[2023-07-07T10:50:24.768Z] real	0m0.021s
[2023-07-07T10:50:24.768Z] user	0m0.005s
[2023-07-07T10:50:24.768Z] sys	0m0.000s
[2023-07-07T10:50:24.768Z] + popd
[2023-07-07T10:50:24.768Z] + JAMMY_OS_IMAGE_NAME=jammy-server-cloudimg-amd64-custom-20230119-0.qcow2
[2023-07-07T10:50:24.768Z] + JAMMY_OS_IMAGE_URL=https://cloud-hypervisor.azureedge.net/jammy-server-cloudimg-amd64-custom-20230119-0.qcow2
[2023-07-07T10:50:24.768Z] + JAMMY_OS_IMAGE=/root/workloads/jammy-server-cloudimg-amd64-custom-20230119-0.qcow2
[2023-07-07T10:50:24.768Z] + '[' '!' -f /root/workloads/jammy-server-cloudimg-amd64-custom-20230119-0.qcow2 ']'
[2023-07-07T10:50:24.768Z] + pushd /root/workloads
[2023-07-07T10:50:24.768Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:24.768Z] + wget --quiet https://cloud-hypervisor.azureedge.net/jammy-server-cloudimg-amd64-custom-20230119-0.qcow2
[2023-07-07T10:50:46.615Z] /cloud-hypervisor
[2023-07-07T10:50:46.615Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:46.615Z] 
[2023-07-07T10:50:46.615Z] real	0m22.267s
[2023-07-07T10:50:46.615Z] user	0m1.405s
[2023-07-07T10:50:46.615Z] sys	0m3.428s
[2023-07-07T10:50:46.615Z] + popd
[2023-07-07T10:50:46.615Z] + JAMMY_OS_RAW_IMAGE_NAME=jammy-server-cloudimg-amd64-custom-20230119-0.raw
[2023-07-07T10:50:46.615Z] + JAMMY_OS_RAW_IMAGE=/root/workloads/jammy-server-cloudimg-amd64-custom-20230119-0.raw
[2023-07-07T10:50:46.615Z] + '[' '!' -f /root/workloads/jammy-server-cloudimg-amd64-custom-20230119-0.raw ']'
[2023-07-07T10:50:46.615Z] + pushd /root/workloads
[2023-07-07T10:50:46.615Z] + qemu-img convert -p -f qcow2 -O raw jammy-server-cloudimg-amd64-custom-20230119-0.qcow2 jammy-server-cloudimg-amd64-custom-20230119-0.raw
[2023-07-07T10:50:46.615Z]     (0.00/100%)
......
[2023-07-07T10:50:46.615Z] 
[2023-07-07T10:50:46.615Z] real	0m1.033s
[2023-07-07T10:50:46.615Z] user	0m0.077s
[2023-07-07T10:50:46.615Z] sys	0m1.411s
[2023-07-07T10:50:46.615Z] + popd
[2023-07-07T10:50:46.615Z] + ALPINE_MINIROOTFS_URL=http://dl-cdn.alpinelinux.org/alpine/v3.11/releases/x86_64/alpine-minirootfs-3.11.3-x86_64.tar.gz
[2023-07-07T10:50:46.615Z] /cloud-hypervisor
[2023-07-07T10:50:46.615Z] + ALPINE_MINIROOTFS_TARBALL=/root/workloads/alpine-minirootfs-x86_64.tar.gz
[2023-07-07T10:50:46.615Z] + '[' '!' -f /root/workloads/alpine-minirootfs-x86_64.tar.gz ']'
[2023-07-07T10:50:46.615Z] + pushd /root/workloads
[2023-07-07T10:50:46.615Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:46.615Z] + wget --quiet http://dl-cdn.alpinelinux.org/alpine/v3.11/releases/x86_64/alpine-minirootfs-3.11.3-x86_64.tar.gz -O /root/workloads/alpine-minirootfs-x86_64.tar.gz
[2023-07-07T10:50:46.615Z] 
[2023-07-07T10:50:46.615Z] real	0m0.022s
[2023-07-07T10:50:46.615Z] user	0m0.000s
[2023-07-07T10:50:46.615Z] sys	0m0.006s
[2023-07-07T10:50:46.615Z] + popd
[2023-07-07T10:50:46.615Z] + ALPINE_INITRAMFS_IMAGE=/root/workloads/alpine_initramfs.img
[2023-07-07T10:50:46.615Z] + '[' '!' -f /root/workloads/alpine_initramfs.img ']'
[2023-07-07T10:50:46.615Z] + pushd /root/workloads
[2023-07-07T10:50:46.615Z] + mkdir alpine-minirootfs
[2023-07-07T10:50:46.615Z] /cloud-hypervisor
[2023-07-07T10:50:46.615Z] ~/workloads /cloud-hypervisor
[2023-07-07T10:50:46.615Z] + tar xf /root/workloads/alpine-minirootfs-x86_64.tar.gz -C alpine-minirootfs

@michael2012z
Member

It seems that the integration tests behave differently on different architectures

Indeed, the disk images are handled in different ways.

The x86_64 tests run in a VM; a new VM instance is created for each integration run, so when a test starts there are no images in the VM and the images are always downloaded fresh. The aarch64 tests run on a bare-metal server, where we don't remove the images every time because they can be reused.

I removed the old focal-server-cloudimg-arm64-custom-20210929-0-backing.qcow2 from the server and re-triggered the CI. Now the test passed.

@yukiiiteru
Contributor Author

It seems that the integration tests behave differently on different architectures

Indeed, the disk images are handled in different ways.

The x86_64 tests run in a VM; a new VM instance is created for each integration run, so when a test starts there are no images in the VM and the images are always downloaded fresh. The aarch64 tests run on a bare-metal server, where we don't remove the images every time because they can be reused.

I removed the old focal-server-cloudimg-arm64-custom-20210929-0-backing.qcow2 from the server and re-triggered the CI. Now the test passed.

Got it, thanks!

@yukiiiteru yukiiiteru force-pushed the feature/qcow-backing-file branch 5 times, most recently from 931ca4c to 8f2d5fb, July 12, 2023 05:02
@yukiiiteru
Contributor Author

My code conflicted with the `main` branch, so I rebased it.

There are also some new warnings with the latest beta Rust toolchain; the latest three commits fix them.

@yukiiiteru yukiiiteru force-pushed the feature/qcow-backing-file branch 2 times, most recently from 7183268 to 196ac18, July 12, 2023 06:50
@yukiiiteru
Contributor Author

There are too many clippy warnings with the beta toolchain 😢 I'm sorry for using so many CI resources.

Member

@likebreath likebreath left a comment


Thank you for the contribution. Would you please send the clippy fixes separately in a new PR?

Sorry, this is not my area of expertise. Let's wait for Rob to come back next week for a proper review. Thank you.

@yukiiiteru
Contributor Author

Thank you for the contribution. Would you please send the clippy fixes separately in a new PR?

Sorry, this is not my area of expertise. Let's wait for Rob to come back next week for a proper review. Thank you.

Sure, a new PR (#5590) has been created.

This PR will not be updated for now; I will rebase it after that one is merged.

@rbradford
Member

@wfly1998 Please rebase to drop your clippy changes

Member

@rbradford rbradford left a comment


This change seems very reasonable to me - thanks!

This commit introduces a generic `FixedVhd` type for the sync and async
disk I/O types, which use it directly instead of maintaining a file
themselves.

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
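As a rough sketch of the shape this takes (the field names and footer handling below are assumptions, not the actual implementation):

```rust
use std::fs::File;
use std::io::{self, Seek, SeekFrom};

// A fixed VHD is raw disk data followed by a 512-byte footer, so the
// virtual disk size is the file length minus the footer.
pub struct FixedVhd {
    file: File,
    size: u64,
}

impl FixedVhd {
    pub fn new(mut file: File) -> io::Result<Self> {
        let len = file.seek(SeekFrom::End(0))?;
        Ok(Self {
            file,
            size: len.saturating_sub(512),
        })
    }

    pub fn size(&self) -> u64 {
        self.size
    }
}
```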
This commit merges the `qcow`, `vhdx` and `block_util` crates into a
single `block` crate, which allows `qcow` to use functions from
`block_util` without introducing a circular crate dependency.

This commit is based on crosvm implementation:
https://chromium.googlesource.com/crosvm/crosvm/+/f2eecc4152eca8d395566cffa2c102ec090a152d

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
This commit introduces the trait `BlockBackend` with generic ops
including read, write and seek, providing a common I/O interface for
the block types without going through `DiskFile` and `AsyncIo`.

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
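A minimal sketch of the shape such a trait could take, assuming it builds on the std::io read/write/seek vocabulary (the actual trait in the `block` crate may differ in names and signatures):

```rust
use std::io::{Read, Result, Seek, Write};

// Anything that can be read, written and seeked like a disk can serve
// as a backend, including another qcow file acting as a backing file.
pub trait BlockBackend: Read + Write + Seek + Send {
    /// Virtual size of the disk in bytes.
    fn size(&self) -> Result<u64>;
}
```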
Fixes: 3f02cca

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
This commit allows opening a qcow file with a backing file; the backing
file can be any type implementing `BlockBackend`.

This commit is based on crosvm implementation:
https://chromium.googlesource.com/crosvm/crosvm/+/9ca6039b030a5c83062cfec9a5ff52f42814fa13

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
Reads of qcow files with backing files fall through to the backing
file when the cluster is not allocated. As of this change, a write will
still trash the cluster and hide any data already present.

This commit is based on crosvm implementation:
https://chromium.googlesource.com/crosvm/crosvm/+/d8144a56e26ca09e2c7ff97ed63c57e7e7965674

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
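Conceptually (a toy model, not the actual cluster machinery), the read path looks like this:

```rust
use std::collections::HashMap;

// Toy model: allocated clusters live in `overlay`; `backing` reads a
// whole cluster from the backing file by index.
fn read_cluster(
    overlay: &HashMap<u64, Vec<u8>>,
    backing: Option<&dyn Fn(u64) -> Vec<u8>>,
    cluster_size: usize,
    index: u64,
) -> Vec<u8> {
    match overlay.get(&index) {
        Some(data) => data.clone(),
        None => match backing {
            // Unallocated cluster: fall through to the backing file.
            Some(read_backing) => read_backing(index),
            // No backing file: sparse clusters read as zeros.
            None => vec![0u8; cluster_size],
        },
    }
}
```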
This preserves any data that the backing file had in a cluster when
writing to a subset of that cluster. Such writes incur a performance
penalty when creating new clusters if a backing file is present.

This commit is based on crosvm implementation:
https://chromium.googlesource.com/crosvm/crosvm/+/5ad3bc345904b252efd6dd2ef4853f5ee06ae3c5

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
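In the same toy model as above, the copy-on-write step before a partial write to an unallocated cluster would look roughly like this (names are illustrative):

```rust
use std::collections::HashMap;

// Toy model of the copy-on-write step: seed a newly allocated cluster
// from the backing file before applying a partial write, so the bytes
// around the write keep the backing file's data.
fn write_subcluster(
    overlay: &mut HashMap<u64, Vec<u8>>,
    backing_read: impl Fn(u64) -> Vec<u8>,
    index: u64,
    offset: usize,
    buf: &[u8],
) {
    let cluster = overlay
        .entry(index)
        // This read of the backing file is the performance penalty the
        // commit message mentions; it only happens on first allocation.
        .or_insert_with(|| backing_read(index));
    cluster[offset..offset + buf.len()].copy_from_slice(buf);
}
```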
This test case creates a new qcow2 file using the Ubuntu image as its
backing file, and boots a virtual machine from this image file.

Signed-off-by: Yu Li <liyu.yukiteru@bytedance.com>
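Roughly, the test mirrors the `qemu-img` invocation visible in the CI log above; a sketch (the actual test helpers differ):

```rust
use std::process::Command;

// Create an overlay qcow2 whose backing file is the base image; a guest
// booted from the overlay should then see the base image's contents.
fn create_backed_image(base: &str, overlay: &str) {
    let status = Command::new("qemu-img")
        .args(["create", "-f", "qcow2", "-b", base, "-F", "qcow2", overlay])
        .status()
        .expect("failed to run qemu-img");
    assert!(status.success());
    // The test then boots cloud-hypervisor with `--disk path=<overlay>`
    // and checks that the guest comes up and its filesystem is intact.
}
```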
@yukiiiteru
Contributor Author

@rbradford I have done the rebase, but one of the integration tests is failing and I don't know how to fix it.

@rbradford
Member

@rbradford I have done the rebase, but one of the integration tests is failing and I don't know how to fix it.

It was a flake.

@rbradford rbradford merged commit 4ef388b into cloud-hypervisor:main Jul 19, 2023
@yukiiiteru yukiiiteru deleted the feature/qcow-backing-file branch July 20, 2023 11:34