Commit 8c3cfe6

doc: add VBSK overhead analysis doc
Add a new developer guide describing VBSK overhead analysis.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>
.. _vbsk-overhead:

VBS-K Framework Virtualization Overhead Analysis
################################################

Introduction
************

The ACRN Hypervisor follows the Virtual I/O Device (virtio)
specification to realize I/O virtualization for many
performance-critical devices supported in the ACRN project. The
hypervisor provides the virtio backend service (VBS) APIs, which make
it very straightforward to implement a virtio device in the hypervisor.
We can evaluate the overhead of the virtio backend service in
kernel-land (VBS-K) framework through a test virtual device called
virtio-echo. The total overhead of a frontend-backend application based
on VBS-K consists of the VBS-K framework overhead and the
application-specific overhead. The application-specific overhead
depends on the particular frontend-backend design and can range from
microseconds to seconds. On our test hardware, the overall VBS-K
framework overhead is at the microsecond level, sufficient to meet the
needs of most applications.

Architecture of VIRTIO-ECHO
***************************

virtio-echo is a virtual device based on virtio, designed for testing
the ACRN virtio backend service in kernel-land (VBS-K) framework. It
includes a virtio-echo frontend driver, a virtio-echo driver in the
ACRN device model (DM) for initialization, and a virtio-echo driver
based on VBS-K for data reception and transmission. For more background
on virtualization, refer to:

* :ref:`introduction`
* :ref:`virtio-hld`

virtio-echo is implemented as a virtio legacy device in the ACRN device
model (DM) and is registered as a PCI virtio device to the guest OS
(UOS). The virtio-echo software has three parts:

- **virtio-echo Frontend Driver**: This driver runs in the UOS. When
  the UOS starts, it prepares the RXQ and notifies the backend to begin
  receiving incoming data. It then copies the received data from the
  RXQ to the TXQ and sends it to the backend. After receiving the
  message that the transmission is complete, it starts another round of
  reception and transmission, and keeps running until a specified
  number of cycles is reached.
- **virtio-echo Driver in DM**: This driver handles initialization and
  configuration. It emulates a virtual PCI device for the frontend
  driver to use, and passes the necessary information, such as the
  device configuration and virtqueue information, to the VBS-K. After
  initialization, all data exchange is taken over by the VBS-K vbs-echo
  driver.
- **vbs-echo Backend Driver**: This driver sets all frontend RX buffers
  to a specific value and sends the data to the frontend driver. After
  receiving the data in the RXQ, the frontend driver copies the data to
  the TXQ and then sends it back to the backend. The backend driver
  then notifies the frontend driver that the data in the TXQ has been
  successfully received. In virtio-echo, the backend driver doesn't
  process or use the received data.
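The round trip the three drivers perform can be sketched as a toy
Python model. This is purely illustrative: the fill value, queue depth,
and function names are made up here and are not the actual ACRN driver
API.

```python
# Conceptual model (plain Python, not ACRN code) of virtio-echo
# round trips: the vbs-echo backend fills the frontend's RX buffers
# with a fixed value, the frontend copies RXQ -> TXQ and sends the
# data back, and the backend confirms completion of each cycle.

FILL_VALUE = 0xA5   # made-up value the backend writes into RX buffers
QUEUE_DEPTH = 4     # made-up virtqueue depth

def backend_fill_rxq():
    """Backend: set every frontend RX buffer to a specific value."""
    return [FILL_VALUE] * QUEUE_DEPTH

def frontend_echo(rxq):
    """Frontend: copy the received data from the RXQ to the TXQ."""
    return list(rxq)

def run_cycles(n):
    """Run n reception/transmission rounds and count completed echoes."""
    completed = 0
    for _ in range(n):
        rxq = backend_fill_rxq()   # backend -> frontend (RX path)
        txq = frontend_echo(rxq)   # frontend copies RX to TX
        if txq == rxq:             # backend checks the echoed data
            completed += 1         # and reports the cycle complete
    return completed

print(run_cycles(3))  # prints 3
```

The real drivers exchange buffers through shared virtqueues and
kick/notify signaling rather than function calls, but the data flow per
cycle is the same.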

:numref:`vbsk-virtio-echo-arch` shows the whole architecture of virtio-echo.

.. figure:: images/vbsk-image2.png
   :width: 900px
   :align: center
   :name: vbsk-virtio-echo-arch

   virtio-echo Architecture

Virtualization Overhead Analysis
********************************

Let's analyze the overhead of the VBS-K framework. The VBS-K handles
notifications in the SOS kernel instead of in the SOS user-space DM,
which avoids the overhead of switching between kernel space and user
space. Virtqueues are allocated by the UOS, and the virtqueue
information is passed to the VBS-K backend by the virtio-echo driver in
the DM, so virtqueues can be shared between the UOS and the SOS. There
is therefore no copy overhead. The overhead of the VBS-K framework
consists mainly of two parts: kick overhead and notify overhead.

- **Kick Overhead**: When the UOS executes a sensitive instruction, it
  is trapped and the hypervisor is notified first. The notification is
  assembled into an IOREQ, saved in a shared IO page, and then
  forwarded to the VHM module by the hypervisor. The VHM notifies its
  client for this IOREQ; in this case, the client is the vbs-echo
  backend driver. Kick overhead is defined as the interval from the
  beginning of the UOS trap to when a specific VBS-K driver, e.g.
  virtio-echo, gets notified.
- **Notify Overhead**: After the data in the virtqueue has been
  processed by the backend driver, vbs-echo calls the VHM module to
  inject an interrupt into the frontend. The VHM then uses a hypercall
  provided by the hypervisor, which causes a UOS VMEXIT. The hypervisor
  finally injects an interrupt into the vLAPIC of the UOS and resumes
  it, so the UOS receives the interrupt notification. Notify overhead
  is defined as the interval from the beginning of the interrupt
  injection to when the UOS starts interrupt processing.
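The two interval definitions above amount to simple timestamp
differences. The sketch below is only a model of those definitions: the
helper names and the nanosecond values are hypothetical, and in
practice the timestamps are captured at hypervisor and kernel trace
points, not in application code.

```python
# Sketch of the two overhead definitions (hypothetical helpers; the
# real timestamps come from hypervisor/kernel instrumentation).

def kick_overhead_us(t_trap_begin_ns, t_client_notified_ns):
    """Kick overhead: UOS trap begins -> VBS-K client (vbs-echo) notified."""
    return (t_client_notified_ns - t_trap_begin_ns) / 1000.0

def notify_overhead_us(t_inject_begin_ns, t_uos_isr_begin_ns):
    """Notify overhead: interrupt injection begins -> UOS starts its ISR."""
    return (t_uos_isr_begin_ns - t_inject_begin_ns) / 1000.0

# Made-up timestamps, in nanoseconds, purely for illustration:
print(kick_overhead_us(1_000_000, 1_004_500))    # prints 4.5
print(notify_overhead_us(2_000_000, 2_003_000))  # prints 3.0
```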

The overhead of a specific application based on VBS-K includes two
parts: the VBS-K framework overhead and the application-specific
overhead.

- **VBS-K Framework Overhead**: As defined above, the VBS-K framework
  overhead refers to the kick overhead and the notify overhead.
- **Application-Specific Overhead**: A specific virtual device has its
  own frontend driver and backend driver. The application-specific
  overhead depends on its own design.
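This breakdown is a simple sum per operation. A minimal sketch, with
made-up microsecond values used only to show the accounting:

```python
def total_overhead_us(kick_us, notify_us, app_specific_us):
    """Total per-operation overhead = VBS-K framework part + app part."""
    framework_us = kick_us + notify_us       # VBS-K framework overhead
    return framework_us + app_specific_us    # plus application-specific part

# Hypothetical numbers for illustration only:
print(total_overhead_us(4.5, 3.0, 10.0))  # prints 17.5
```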

:numref:`vbsk-virtio-echo-e2e` shows the overhead of one end-to-end
operation in virtio-echo. The overhead of the steps marked in red is
caused by the virtualization scheme based on the VBS-K framework. The
costs of one "kick" operation and one "notify" operation are both at
the microsecond level. The overhead of the steps marked in blue depends
on the specific frontend and backend virtual device drivers. For
virtio-echo, the whole end-to-end process (from step 1 to step 9) takes
about four dozen microseconds. That's because virtio-echo does very
little work in its frontend and backend drivers, which exist only for
testing, so there is very little processing overhead.

.. figure:: images/vbsk-image1.png
   :width: 600px
   :align: center
   :name: vbsk-virtio-echo-e2e

   End to End Overhead of virtio-echo

:numref:`vbsk-virtio-echo-path` details the paths of the kick and
notify operations shown in :numref:`vbsk-virtio-echo-e2e`. The VBS-K
framework overhead is caused by operations along these paths. As we can
see, all these operations are processed in kernel mode, which avoids
the extra overhead of passing the IOREQ to user space for processing.

.. figure:: images/vbsk-image3.png
   :width: 900px
   :align: center
   :name: vbsk-virtio-echo-path

   Path of VBS-K Framework Overhead

Conclusion
**********

Unlike VBS-U, which processes requests in user mode, VBS-K moves the
processing into kernel mode and can be used to accelerate it. A virtual
device, virtio-echo, based on the VBS-K framework, was used to evaluate
the VBS-K framework overhead. In our test, the VBS-K framework overhead
(one kick operation plus one notify operation) is at the microsecond
level, which can meet the needs of most applications.

doc/developer-guides/index.rst

Lines changed: 1 addition & 0 deletions

@@ -12,6 +12,7 @@ Developer Guides
     GVT-g-kernel-options
     trusty
     l1tf
+    VBSK-analysis
     modularity
     ../api/index
     ../reference/kconfig/index