@@ -265,35 +265,138 @@ virtqueue through the user-level vring service API helpers.
Kernel-Land Virtio Framework
============================

- The architecture of ACRN kernel-land virtio framework (VBS-K) is shown
- in :numref:`virtio-kernelland`.
-
- VBS-K provides acceleration for performance critical devices emulated by
- VBS-U modules by handling the "data plane" of the devices directly in
- the kernel. When VBS-K is enabled for certain device, the kernel-land
- vring service API helpers are used to access the virtqueues shared by
- the FE driver. Compared to VBS-U, this eliminates the overhead of
- copying data back-and-forth between user-land and kernel-land within the
- service OS, but pays with the extra implementation complexity of the BE
- drivers.
+ ACRN supports two kernel-land virtio frameworks: VBS-K, designed from
+ scratch for ACRN, and Vhost, which is compatible with the Linux Vhost
+ framework.
+
+ VBS-K framework
+ ---------------
+
+ The architecture of ACRN VBS-K is shown in
+ :numref:`kernel-virtio-framework` below.
+
+ Generally, VBS-K provides acceleration for performance-critical
+ devices emulated by VBS-U modules by handling the "data plane" of the
+ devices directly in the kernel. When VBS-K is enabled for certain
+ devices, the kernel-land vring service API helpers, instead of the
+ user-land helpers, are used to access the virtqueues shared by the FE
+ driver. Compared to VBS-U, this eliminates the overhead of copying data
+ back and forth between user-land and kernel-land within the Service OS,
+ but comes at the cost of extra implementation complexity in the BE
+ drivers.

Except for the differences mentioned above, VBS-K still relies on VBS-U
for feature negotiations between FE and BE drivers. This means the
"control plane" of the virtio device still remains in VBS-U. When
feature negotiation is done, which is determined by FE driver setting up
- an indicative flag, VBS-K module will be initialized by VBS-U, after
- which all request handling will be offloaded to the VBS-K in kernel.
+ an indicative flag, the VBS-K module will be initialized by VBS-U.
+ Afterwards, all request handling will be offloaded to VBS-K in the
+ kernel.

- The FE driver is not aware of how the BE driver is implemented, either
- in the VBS-U or VBS-K model. This saves engineering effort regarding FE
+ Finally, the FE driver is not aware of how the BE driver is implemented,
+ either in VBS-U or VBS-K. This saves engineering effort regarding FE
driver development.

- .. figure:: images/virtio-hld-image6.png
-    :width: 900px
+ .. figure:: images/virtio-hld-image54.png
+    :align: center
+    :name: kernel-virtio-framework
+
+    ACRN Kernel-Land Virtio Framework
+
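+ As an illustration of the handoff described above, below is a minimal
+ user-land sketch of how a VBS-U device might push its negotiated device
+ and virtqueue state down to its VBS-K counterpart once the FE driver
+ flags that feature negotiation is complete. The device node name, ioctl
+ numbers, and structure layouts are illustrative assumptions, not the
+ exact ACRN interface:
+
+ .. code-block:: c
+
+    /*
+     * Hypothetical sketch only: "/dev/vbs_k_dev", the ioctl numbers,
+     * and the two structures are stand-ins for the real VBS-K interface.
+     */
+    #include <fcntl.h>
+    #include <sys/ioctl.h>
+    #include <unistd.h>
+
+    #define VBS_K_SET_DEV 0x5601   /* assumed ioctl: push device info    */
+    #define VBS_K_SET_VQ  0x5602   /* assumed ioctl: push virtqueue info */
+
+    struct vbs_dev_info { int devid; };   /* simplified stand-in */
+    struct vbs_vqs_info { int nvq; };     /* simplified stand-in */
+
+    /* Called by the VBS-U device after the FE driver sets the indicative
+     * flag; from this point on the kernel handles the data plane. */
+    static int vbs_k_offload(struct vbs_dev_info *dev,
+                             struct vbs_vqs_info *vqs)
+    {
+        int fd = open("/dev/vbs_k_dev", O_RDWR);
+
+        if (fd < 0)
+            return -1;
+
+        if (ioctl(fd, VBS_K_SET_DEV, dev) < 0 ||
+            ioctl(fd, VBS_K_SET_VQ, vqs) < 0) {
+            close(fd);
+            return -1;
+        }
+        return fd;   /* keep the fd open for the lifetime of the device */
+    }
+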
+ Vhost framework
+ ---------------
+
+ Vhost is similar to VBS-K. It is a common solution that has been
+ upstreamed into the Linux kernel, with several kernel mediators based
+ on it.
+
+ Architecture
+ ~~~~~~~~~~~~
+
+ Vhost/virtio is a semi-virtualized device abstraction interface
+ specification that has been widely applied in various virtualization
+ solutions. Vhost is a specific kind of virtio in which the data plane
+ is put into host kernel space to reduce context switches while
+ processing I/O requests. It is usually called "virtio" when used as a
+ front-end driver in a guest operating system, or "vhost" when used as a
+ back-end driver in a host. Compared with a pure virtio solution on a
+ host, vhost uses the same front-end driver as the virtio solution and
+ can achieve better performance. :numref:`vhost-arch` shows the vhost
+ architecture on ACRN.
+
+ .. figure:: images/virtio-hld-image71.png
+    :align: center
+    :name: vhost-arch
+
+    Vhost Architecture on ACRN
+
+ Compared with a userspace virtio solution, vhost moves the data plane
+ from user space into kernel space. The general vhost data plane
+ workflow can be described as follows (a sketch of the proxy-side setup
+ follows the list):
+
+ 1. The vhost proxy creates two eventfds per virtqueue, one for kick
+    (an ioeventfd) and the other for call (an irqfd).
+ 2. The vhost proxy registers the two eventfds to VHM through the VHM
+    character device:
+
+    a) The ioeventfd is bound to a PIO/MMIO range. If it is a PIO, it is
+       registered with (fd, port, len, value). If it is an MMIO, it is
+       registered with (fd, addr, len).
+    b) The irqfd is registered with an MSI vector.
+
+ 3. The vhost proxy sets the two fds in the vhost kernel driver through
+    ioctls on the vhost device.
+ 4. The vhost kernel driver starts polling the kick fd and wakes up when
+    the guest kicks a virtqueue, which results in an event signal on the
+    kick fd by the VHM ioeventfd.
+ 5. The vhost device in the kernel signals the irqfd to notify the
+    guest.
+
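+ The proxy-side setup in steps 1 through 3 can be sketched as follows.
+ The eventfd and vhost vring ioctls are the standard Linux interfaces;
+ the VHM registration in step 2 is only indicated by a comment because
+ its exact ioctl interface is ACRN-specific:
+
+ .. code-block:: c
+
+    /* Sketch of vhost proxy setup for one virtqueue. vhost_fd is an
+     * open handle to the vhost kernel device and vhm_fd to the VHM
+     * character device; opening them is omitted here.
+     */
+    #include <sys/eventfd.h>
+    #include <sys/ioctl.h>
+    #include <linux/vhost.h>
+
+    static int setup_vq_notification(int vhost_fd, int vhm_fd,
+                                     unsigned int qidx)
+    {
+        struct vhost_vring_file kick, call;
+
+        /* Step 1: one eventfd for kick (ioeventfd), one for call
+         * (irqfd). */
+        kick.index = qidx;
+        kick.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+        call.index = qidx;
+        call.fd = eventfd(0, EFD_NONBLOCK | EFD_CLOEXEC);
+        if (kick.fd < 0 || call.fd < 0)
+            return -1;
+
+        /* Step 2: register the kick fd with VHM bound to the
+         * virtqueue's PIO/MMIO notify range, and the call fd bound to
+         * an MSI vector. The ACRN-specific VHM ioctls are omitted from
+         * this sketch. */
+        (void)vhm_fd;
+
+        /* Step 3: hand both fds to the vhost kernel driver. */
+        if (ioctl(vhost_fd, VHOST_SET_VRING_KICK, &kick) < 0 ||
+            ioctl(vhost_fd, VHOST_SET_VRING_CALL, &call) < 0)
+            return -1;
+
+        return 0;
+    }
+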
+ Ioeventfd implementation
+ ~~~~~~~~~~~~~~~~~~~~~~~~
+
+ The ioeventfd module is implemented in VHM and enhances a registered
+ eventfd to listen for I/O requests (PIO/MMIO) from the VHM ioreq module
+ and signal the eventfd when needed. :numref:`ioeventfd-workflow` shows
+ the general workflow of ioeventfd.
+
+ .. figure:: images/virtio-hld-image58.png
+    :align: center
+    :name: ioeventfd-workflow
+
+    Ioeventfd general workflow
+
+ The workflow can be summarized as follows (a sketch of the VHM-side
+ dispatch appears after the list):
+
+ 1. The vhost device initializes. The vhost proxy creates two eventfds,
+    one for ioeventfd and one for irqfd.
+ 2. The vhost proxy passes the ioeventfd to the vhost kernel driver.
+ 3. The vhost proxy passes the ioeventfd to the VHM driver.
+ 4. The UOS FE driver triggers an ioreq, which is forwarded to the SOS
+    by the hypervisor.
+ 5. The ioreq is dispatched by the VHM driver to the related VHM client.
+ 6. The ioeventfd VHM client traverses the io_range list and finds the
+    corresponding eventfd.
+ 7. The VHM client signals the related eventfd.
+
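+ Steps 6 and 7 inside the ioeventfd VHM client can be sketched as the
+ following kernel-side code. The per-registration structure and the list
+ name are illustrative assumptions; eventfd_signal() is the regular
+ kernel eventfd API:
+
+ .. code-block:: c
+
+    /* Illustrative sketch: look up the registered range matching an
+     * ioreq and signal its eventfd (locking omitted for brevity). */
+    #include <linux/eventfd.h>
+    #include <linux/list.h>
+    #include <linux/types.h>
+
+    struct acrn_ioeventfd {           /* assumed per-registration entry */
+        struct list_head list;
+        u64 addr;                     /* PIO port or MMIO address */
+        u32 len;
+        struct eventfd_ctx *eventfd;
+    };
+
+    static void ioeventfd_dispatch(struct list_head *io_range_list,
+                                   u64 addr, u32 len)
+    {
+        struct acrn_ioeventfd *p;
+
+        /* Step 6: traverse the io_range list for a matching range. */
+        list_for_each_entry(p, io_range_list, list) {
+            if (addr >= p->addr && addr + len <= p->addr + p->len) {
+                /* Step 7: signal the eventfd; vhost wakes up and
+                 * processes the virtqueue. */
+                eventfd_signal(p->eventfd, 1);
+                break;
+            }
+        }
+    }
+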
+ Irqfd implementation
+ ~~~~~~~~~~~~~~~~~~~~
+
+ The irqfd module is implemented in VHM and enhances a registered
+ eventfd to inject an interrupt into a guest OS when the eventfd gets
+ signalled. :numref:`irqfd-workflow` shows the general flow for irqfd.
+
+ .. figure:: images/virtio-hld-image60.png
   :align: center
-    :name: virtio-kernelland
+    :name: irqfd-workflow
+
+    Irqfd general flow
+
+ The workflow can be summarized as follows (a sketch of the signal path
+ appears after the list):

-    ACRN Kernel-Land Virtio Framework
+ 1. The vhost device initializes. The vhost proxy creates two eventfds,
+    one for ioeventfd and one for irqfd.
+ 2. The vhost proxy passes the irqfd to the vhost kernel driver.
+ 3. The vhost proxy passes the irqfd to the VHM driver.
+ 4. The vhost device driver signals the irq eventfd once the related
+    native transfer is completed.
+ 5. The irqfd logic traverses the irqfd list to retrieve the related irq
+    information.
+ 6. The irqfd logic injects an interrupt through the VHM interrupt API.
+ 7. The interrupt is delivered to the UOS FE driver through the
+    hypervisor.
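+
+ The signal path in steps 4 through 6 can be sketched as a wait-queue
+ callback on the irqfd, in the style used by other kernel irqfd users;
+ in this simplified sketch the per-irqfd information is recovered
+ directly from the wait-queue entry rather than by traversing the irqfd
+ list. The acrn_irqfd structure and the vhm_inject_msi() helper are
+ assumptions standing in for the VHM interrupt API:
+
+ .. code-block:: c
+
+    /* Illustrative sketch: when vhost signals the irqfd, this callback
+     * runs and injects the registered MSI into the guest. */
+    #include <linux/eventfd.h>
+    #include <linux/kernel.h>
+    #include <linux/poll.h>
+    #include <linux/types.h>
+    #include <linux/wait.h>
+
+    struct acrn_irqfd {               /* assumed per-irqfd bookkeeping */
+        struct eventfd_ctx *eventfd;
+        wait_queue_entry_t wait;
+        u64 msi_addr;
+        u32 msi_data;
+    };
+
+    /* Assumed wrapper around the VHM interrupt API (step 6). */
+    void vhm_inject_msi(u64 msi_addr, u32 msi_data);
+
+    static int acrn_irqfd_wakeup(wait_queue_entry_t *wait,
+                                 unsigned int mode, int sync, void *key)
+    {
+        struct acrn_irqfd *irqfd =
+            container_of(wait, struct acrn_irqfd, wait);
+
+        if (key_to_poll(key) & EPOLLIN) {
+            /* Step 6: inject the MSI; the hypervisor then delivers the
+             * interrupt to the UOS FE driver (step 7). */
+            vhm_inject_msi(irqfd->msi_addr, irqfd->msi_data);
+        }
+
+        return 0;
+    }
+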
Virtio APIs
***********
@@ -664,5 +767,6 @@ supported in ACRN.

virtio-blk
virtio-net
+ virtio-input
virtio-console
virtio-rnd