This repository has been archived by the owner on Jul 17, 2023. It is now read-only.

virtio-queue: update virtio-queue to v0.6.0 #248

Merged
merged 7 commits into openanolis:main from chao/update_virtio_queue on Feb 1, 2023

Conversation

studychao
Member

Because of the vm-memory version conflict introduced by virtio-queue 0.4.0, we need to upgrade virtio-queue to the newest version. Meanwhile, virtio-queue changed many APIs between 0.4.0 and 0.5.0, so this PR is mainly about fixing all of the API changes.

Signed-off-by: Chao Wu chaowu@linux.alibaba.com

for queue in self.queues.iter_mut() {
-    queue.queue = Q::new(queue.queue.max_size());
+    let new_queue = Q::new(queue.queue.max_size());
Member Author

This part needs to be reviewed more carefully.
The other parts just change parameter names.

@@ -68,8 +68,9 @@ impl<Q: QueueStateT> VirtioQueueConfig<Q> {
pub fn create(queue_size: u16, index: u16) -> Result<Self> {
let eventfd = EventFd::new(EFD_NONBLOCK).map_err(Error::IOError)?;

let queue = Q::new(queue_size)?;
Member Author

This part needs to be reviewed more carefully.
The other parts just change parameter names.

@@ -32,7 +32,7 @@ serde_json = "1.0.9"
thiserror = "1"
threadpool = "1"
virtio-bindings = "0.1.0"
-virtio-queue = "0.4.0"
+virtio-queue = "0.6.0"
vmm-sys-util = "0.11.0"
vm-memory = { version = "0.9.0", features = [ "backend-mmap" ] }
Contributor

Does this need to be updated too?

Member Author

I think there is no need: virtio-queue 0.6.0 uses vm-memory 0.9.0, and moving to vm-memory 0.10.0 may cause a conflict.

let new_queue = Q::new(queue.queue.max_size());
if let Err(e) = new_queue {
    reset_suc_flag = false;
    warn!("reset device failed because new virtio-queue could not be created due to {:?}", e);
Contributor

If one of the queues fails to initialize, the device does not work.

Would it be better to panic directly? @jiangliu

Contributor

Or modify the reset function to return a Result and let the caller decide the next action.

Member Author
@studychao studychao Nov 12, 2022

A Result has been added to reset, and how the caller handles reset's Result is also updated (in mmio_v2.rs).
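
To make the discussed pattern concrete, here is a minimal, self-contained sketch (hypothetical Device/Queue/QueueError names, not the actual Dragonball code): reset propagates a Result when recreating its queues, and the caller, e.g. the MMIO transport code, decides how to handle a failure.

```rust
use std::fmt;

// Hypothetical error type standing in for the real virtio-queue error.
#[derive(Debug)]
struct QueueError(String);

impl fmt::Display for QueueError {
    fn fmt(&self, f: &mut fmt::Formatter<'_>) -> fmt::Result {
        write!(f, "queue error: {}", self.0)
    }
}

#[derive(Debug)]
struct Queue {
    max_size: u16,
}

impl Queue {
    // Stand-in for Q::new(); the real constructor validates max_size
    // (see the power-of-two sketch further down the thread).
    fn new(max_size: u16) -> Result<Queue, QueueError> {
        if max_size == 0 {
            return Err(QueueError("max size must be non-zero".to_string()));
        }
        Ok(Queue { max_size })
    }
}

struct Device {
    queues: Vec<Queue>,
}

impl Device {
    // reset() now reports failure instead of silently swallowing it.
    fn reset(&mut self) -> Result<(), QueueError> {
        for queue in self.queues.iter_mut() {
            *queue = Queue::new(queue.max_size)?;
        }
        Ok(())
    }
}

fn main() {
    let mut dev = Device {
        queues: vec![Queue::new(256).unwrap()],
    };
    // The caller (e.g. the MMIO transport code) decides how to react.
    if let Err(e) = dev.reset() {
        eprintln!("device reset failed: {}", e);
    }
}
```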

@studychao studychao force-pushed the chao/update_virtio_queue branch 3 times, most recently from 0259da7 to 8151514 Compare November 12, 2022 16:17
Because of the vm-memory version conflict introduced by virtio-queue 0.4.0, we need to upgrade virtio-queue to the newest version. Meanwhile, virtio-queue changed many APIs between 0.4.0 and 0.5.0, so this PR is mainly about fixing all of the API changes.

fixes: openanolis#249
Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
In order to solve the virtio-queue dependency problem, we need to update the nydus crates.

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
Update the unit tests that are related to virtio-queue upgrade changes.

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
@studychao studychao force-pushed the chao/update_virtio_queue branch 2 times, most recently from 09a5289 to 5179716 Compare January 28, 2023 11:26
@codecov

codecov bot commented Jan 28, 2023

Codecov Report

Merging #248 (8e1c81e) into main (fff7509) will decrease coverage by 0.01%.
The diff coverage is 85.25%.

@@            Coverage Diff             @@
##             main     #248      +/-   ##
==========================================
- Coverage   91.57%   91.57%   -0.01%     
==========================================
  Files          80       80              
  Lines       22429    22369      -60     
==========================================
- Hits        20540    20484      -56     
+ Misses       1889     1885       -4     
| Flag | Coverage Δ |
|------|------------|
| dbs-address-space | 95.06% <93.33%> (-0.02%) ⬇️ |
| dbs-allocator | 94.98% <ø> (ø) |
| dbs-arch | 96.30% <100.00%> (-0.01%) ⬇️ |
| dbs-boot | 94.90% <83.33%> (ø) |
| dbs-device | 92.95% <100.00%> (ø) |
| dbs-interrupt | 90.40% <100.00%> (ø) |
| dbs-legacy-devices | 92.77% <0.00%> (ø) |
| dbs-miniball | ∅ <ø> (∅) |
| dbs-upcall | 94.25% <75.00%> (-0.02%) ⬇️ |
| dbs-utils | 91.25% <80.00%> (ø) |
| dbs-virtio-devices | 89.18% <84.07%> (+<0.01%) ⬆️ |

Flags with carried forward coverage won't be shown.

| Impacted Files | Coverage Δ |
|----------------|------------|
| crates/dbs-legacy-devices/src/serial.rs | 88.63% <0.00%> (ø) |
| crates/dbs-upcall/src/dev_mgr_service.rs | 85.15% <0.00%> (ø) |
| crates/dbs-utils/src/time.rs | 85.90% <0.00%> (ø) |
| crates/dbs-virtio-devices/src/block/handler.rs | 78.33% <ø> (ø) |
| ...ates/dbs-virtio-devices/src/vsock/backend/inner.rs | 93.52% <0.00%> (ø) |
| ...ates/dbs-virtio-devices/src/vsock/epoll_handler.rs | 89.44% <ø> (ø) |
| crates/dbs-virtio-devices/src/fs/device.rs | 69.84% <44.68%> (+0.54%) ⬆️ |
| crates/dbs-address-space/src/region.rs | 89.91% <50.00%> (+0.22%) ⬆️ |
| crates/dbs-virtio-devices/src/mmio/mmio_state.rs | 84.54% <75.00%> (-0.40%) ⬇️ |
| crates/dbs-boot/src/x86_64/mod.rs | 90.42% <83.33%> (ø) |

... and 30 more


@@ -842,8 +842,8 @@ pub(crate) mod tests {
let gm = GuestMemoryAtomic::<GuestMemoryMmap>::new(gmm);

let queues = vec![
VirtioQueueConfig::create(0, 0).unwrap(),
VirtioQueueConfig::create(0, 0).unwrap(),
Contributor

What error will be reported if it is not modified?

Member Author
@studychao studychao Jan 31, 2023

In the new virtio-queue, creating a queue checks that the max size is a power of two, because valid queue sizes must be powers of two per the specification.

So here we could not successfully create a virtio queue with a max size of 0.
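
As a rough illustration of that check (a standalone sketch with hypothetical Queue and QueueError types, not the actual virtio-queue implementation):

```rust
// Minimal sketch of the validation described above: a queue max size must
// be a non-zero power of two, so a size of 0 can never produce a valid queue.
#[derive(Debug, PartialEq)]
enum QueueError {
    InvalidMaxSize(u16),
}

#[derive(Debug)]
struct Queue {
    max_size: u16,
}

impl Queue {
    fn new(max_size: u16) -> Result<Queue, QueueError> {
        // u16::is_power_of_two() returns false for 0, so one check
        // rejects both zero and non-power-of-two sizes.
        if !max_size.is_power_of_two() {
            return Err(QueueError::InvalidMaxSize(max_size));
        }
        Ok(Queue { max_size })
    }
}

fn main() {
    // This is why the old unit test's queue size of 0 can no longer
    // succeed and the test sizes had to change.
    assert_eq!(Queue::new(0).unwrap_err(), QueueError::InvalidMaxSize(0));
    assert!(Queue::new(3).is_err());
    assert_eq!(Queue::new(256).unwrap().max_size, 256);
}
```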

@@ -62,12 +61,6 @@ pub(crate) const PASSTHROUGHFS: &str = "passthroughfs";
pub(crate) const BLOBFS: &str = "blobfs";
pub(crate) const RAFS: &str = "rafs";

#[derive(Clone, Deserialize)]
struct BlobCacheConfig {
Contributor

Is this useless?

Member Author

Yes. The new nydus-api introduces ConfigV2, which can be created from a RafsConfig string. blob_config is generated directly when the ConfigV2 is created, so there is no need for dbs-virtio-devices to define and create BlobCacheConfig for Nydus.

@@ -1,4 +1,5 @@
// Copyright 2022 Alibaba Cloud. All Rights Reserved.
// Copyright 2022 Alibaba Cloud. All Rights Reserved.
Contributor

Repeated

Member Author

fixed

@@ -295,10 +289,10 @@ impl<AS: GuestAddressSpace> VirtioFs<AS> {
"bootstrap_path": "{}",
"blob_cache_dir": "{}"
}}"#,
-cfg, source, &blob_config.work_dir
+cfg, source, blob_config.as_ref().unwrap().work_dir
Contributor

Should this return an Error here? This is runtime behavior.

Contributor

FsError::InvalidData

Member Author

Fixed: replaced all of the unwrap() calls with ok_or() so a missing value produces an FsError.
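
A minimal sketch of that change (hypothetical FsError and BlobCacheConfig stand-ins, not the actual dbs-virtio-devices code): ok_or() turns the Option into a Result carrying a typed error instead of panicking on unwrap().

```rust
// Minimal sketch of the unwrap() -> ok_or() change described above.
#[derive(Debug, PartialEq)]
enum FsError {
    InvalidData,
}

struct BlobCacheConfig {
    work_dir: String,
}

// Instead of blob_config.as_ref().unwrap(), which panics when the config
// is missing, ok_or() converts the Option into a Result with an FsError.
fn blob_work_dir(blob_config: &Option<BlobCacheConfig>) -> Result<&str, FsError> {
    let cfg = blob_config.as_ref().ok_or(FsError::InvalidData)?;
    Ok(cfg.work_dir.as_str())
}

fn main() {
    let cfg = Some(BlobCacheConfig {
        work_dir: "/tmp/blobcache".to_string(),
    });
    assert_eq!(blob_work_dir(&cfg).unwrap(), "/tmp/blobcache");
    assert_eq!(blob_work_dir(&None), Err(FsError::InvalidData));
}
```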

@studychao studychao force-pushed the chao/update_virtio_queue branch 2 times, most recently from 291bb4a to 4216a65 Compare January 31, 2023 09:51

(
-Rafs::new(rafs_conf, mountpoint, &mut file)
+Rafs::new(&rafs_conf, mountpoint, &mut file)
Contributor
@wllenyj wllenyj Jan 31, 2023

Can it work without the reference?

Member Author

Nope, the Rafs API has changed in the new Nydus.

@wllenyj
Contributor

wllenyj commented Jan 31, 2023

It seems that there are still a few clippy warnings that have not been fixed.

fix several clippy errors introduced by the latest Rust.

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
Adjust some queue sizes in the unit tests to fix queue size errors.

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
Update nydus-api, nydus-blobfs and nydus-rafs to match the new virtio-queue version.

Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
@studychao studychao force-pushed the chao/update_virtio_queue branch 2 times, most recently from 5d0b1b6 to bfc1831 Compare January 31, 2023 12:56
Signed-off-by: Chao Wu <chaowu@linux.alibaba.com>
@studychao studychao merged commit aeb030c into openanolis:main Feb 1, 2023