
Commit 9871b34

doc: update I/O emulation section

Transcode, edit, and upload HLD 0.7 section 3.4 (I/O emulation).
Add anchor targets to other docs referenced in this section.
Update .known-issues filter for "known" doxygen/breathe errors.

Tracked-on: #1592
Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

1 parent 6dffef1 commit 9871b34

File tree

11 files changed (+383, -14 lines)


doc/.known-issues/doc/dupdecl.conf

Lines changed: 5 additions & 0 deletions
@@ -0,0 +1,5 @@
#
# Emulated devices
#
#
^(?P<filename>[-._/\w]+/hld/hld-io-emulation.rst):(?P<lineno>[0-9]+): WARNING: Duplicate declaration.

doc/developer-guides/hld/hld-emulated-devices.rst

Lines changed: 0 additions & 11 deletions
This file was deleted.
doc/developer-guides/hld/hld-io-emulation.rst

Lines changed: 371 additions & 0 deletions
@@ -0,0 +1,371 @@
.. _hld-io-emulation:

I/O Emulation high-level design
###############################

As discussed in :ref:`intro-io-emulation`, there are multiple ways and
places to handle I/O emulation, including HV, SOS kernel VHM, and the SOS
user-land device model (acrn-dm).

I/O emulation in the hypervisor provides these functionalities:

- Maintain lists of port I/O and MMIO handlers in the hypervisor for
  emulating trapped I/O accesses in a certain range.

- Forward I/O accesses to SOS when they cannot be handled by any
  registered handler in the hypervisor.

:numref:`io-control-flow` illustrates the main control flow steps of I/O
emulation inside the hypervisor:

1. Trap I/O accesses by VM exits, and decode the access from the exit
   qualification or by invoking the instruction decoder.

2. If the range of the I/O access overlaps the range of a registered
   handler, call that handler if it completely covers the range of the
   access, or ignore the access if the access crosses the handler's
   boundary.

3. If the range of the I/O access does not overlap the range of any I/O
   handler, deliver an I/O request to SOS.

.. figure:: images/ioem-image101.png
   :align: center
   :name: io-control-flow

   Control flow of I/O emulation in the hypervisor

:option:`CONFIG_PARTITION_MODE` is the only configuration option that
affects the functionality of I/O emulation. With
:option:`CONFIG_PARTITION_MODE` enabled, the hypervisor never sends I/O
requests to any VM. Unhandled I/O reads get all 1's and writes are
silently dropped.

I/O emulation does not rely on any calibration data.

Trap Path
*********

Port I/O accesses are trapped by VM exits with the basic exit reason
"I/O instruction". The port address to be accessed, the access size, and
the direction (read or write) are fetched from the VM exit
qualification. For writes, the value to be written to the I/O port is
fetched from the guest register al, ax, or eax, depending on the access
size.
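Decoding the qualification can be sketched with a small helper. The bit
layout below follows the Intel SDM encoding of the "I/O instruction"
exit qualification (bits 2:0 hold the access size minus one, bit 3 the
direction, bits 31:16 the port number); the struct and function names
here are illustrative, not ACRN's actual identifiers.

```c
#include <stdint.h>

/* Decoded fields of a trapped port I/O access (illustrative names). */
struct pio_access {
    uint16_t port;      /* bits 31:16 of the exit qualification */
    uint32_t size;      /* bits 2:0 hold size - 1 (1, 2, or 4 bytes) */
    int      is_read;   /* bit 3: 1 = IN (read), 0 = OUT (write) */
};

static struct pio_access decode_pio_exit_qual(uint64_t qual)
{
    struct pio_access acc;

    acc.size    = (uint32_t)(qual & 0x7U) + 1U;
    acc.is_read = (int)((qual >> 3) & 0x1U);
    acc.port    = (uint16_t)(qual >> 16);
    return acc;
}
```

For example, a one-byte `in al, 0x60` produces a qualification with bit 3
set and 0x60 in the port field, which the helper decodes back into the
three fields the emulation path needs.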
MMIO accesses are trapped by VM exits with the basic exit reason "EPT
violation". The instruction emulator is invoked to decode the
instruction that triggered the VM exit to get the memory address being
accessed, the access size, the direction (read or write), and the
involved register.

The I/O bitmaps and EPT are used to configure the addresses that will
trigger VM exits when accessed by a VM. Refer to
:ref:`io-mmio-emulation` for details.

I/O Emulation in the Hypervisor
*******************************

When a port I/O or MMIO access is trapped, the hypervisor first checks
whether the to-be-accessed address falls in the range of any registered
handler, and calls that handler if one exists.

Handler Management
==================

Each VM has two lists of I/O handlers, one for port I/O and the other
for MMIO. Each element of a list contains a memory range and a pointer
to the handler that emulates accesses falling in that range. See
:ref:`io-handler-init` for descriptions of the related data structures.

The I/O handlers are registered on VM creation and never changed until
the destruction of that VM, when the handlers are unregistered. If
multiple handlers are registered for the same address, the one
registered later wins. See :ref:`io-handler-init` for the interfaces
used to register and unregister I/O handlers.

I/O Dispatching
===============

When a port I/O or MMIO access is trapped, the hypervisor walks through
the corresponding I/O handler list in the reverse order of registration,
looking for a proper handler to emulate the access. The following cases
exist:

- If a handler whose range overlaps the range of the I/O access is
  found:

  - If the range of the I/O access falls completely within the range the
    handler can emulate, that handler is called.

  - Otherwise it is implied that the access crosses the boundary of
    multiple devices which the hypervisor does not emulate. Thus
    no handler is called and no I/O request is delivered to
    SOS. I/O reads get all 1's and I/O writes are dropped.

- If the range of the I/O access does not overlap the range of any
  handler, the I/O access is delivered to SOS as an I/O request
  for further processing.
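The three dispatching cases above can be sketched as a single lookup
routine. This is an illustrative model only: the array-based handler
table and all names are assumptions, not ACRN's real data structures.
Walking the array from the highest index downward models the
reverse-order-of-registration lookup, and the full-coverage check
distinguishes the handled case from the cross-boundary case.

```c
#include <stddef.h>
#include <stdint.h>

/* Outcome of a handler lookup, mirroring the three cases above. */
enum io_dispatch {
    DISPATCH_HANDLED,   /* a handler fully covers the access */
    DISPATCH_IGNORED,   /* overlap without full coverage: drop */
    DISPATCH_TO_SOS     /* no overlap: deliver an I/O request */
};

struct io_range {
    uint64_t start;     /* inclusive */
    uint64_t end;       /* exclusive */
};

/* Walk handlers newest-first (reverse order of registration) and
 * classify the access at [addr, addr + size). */
static enum io_dispatch dispatch_io(const struct io_range *handlers,
                                    size_t n, uint64_t addr, uint64_t size)
{
    uint64_t access_end = addr + size;
    size_t i;

    for (i = n; i > 0U; i--) {
        const struct io_range *r = &handlers[i - 1U];

        if (addr < r->end && access_end > r->start) {
            /* Overlap found: handled only on full coverage. */
            return (addr >= r->start && access_end <= r->end) ?
                   DISPATCH_HANDLED : DISPATCH_IGNORED;
        }
    }
    return DISPATCH_TO_SOS;
}
```

With a single handler covering ports 0x60..0x64, a one-byte access at
0x60 is handled, a four-byte access at 0x64 crosses the boundary and is
ignored, and an access at 0x80 is forwarded to SOS.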
I/O Requests
************

An I/O request is delivered to SOS vCPU 0 if the hypervisor does not
find any handler that overlaps the range of a trapped I/O access. This
section describes the initialization of the I/O request mechanism and
how an I/O access is emulated via I/O requests in the hypervisor.

Initialization
==============

For each UOS the hypervisor shares a page with SOS to exchange I/O
requests. This 4-KByte page consists of 16 256-Byte slots, indexed by
vCPU ID. The DM is required to allocate and set up the request buffer on
VM creation; otherwise I/O accesses from UOS cannot be emulated by SOS,
and all I/O accesses not handled by the I/O handlers in the hypervisor
are dropped (reads get all 1's).

Refer to Section 4.4.1 for the details of I/O requests and the
initialization of the I/O request buffer.
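The slot geometry described above can be captured in a few constants.
This is a hypothetical sketch of the layout only; the real buffer
definitions live in ACRN's public headers, and the helper name is
invented for illustration.

```c
#include <stdint.h>

/* Geometry of the shared I/O request page: one 4-KByte page holding
 * 16 slots of 256 bytes each, indexed by vCPU ID. */
#define IOREQ_PAGE_SIZE   4096U
#define IOREQ_SLOT_SIZE   256U
#define IOREQ_SLOT_COUNT  (IOREQ_PAGE_SIZE / IOREQ_SLOT_SIZE)  /* 16 */

/* Byte offset of the request slot for a given vCPU (hypothetical
 * helper). With one slot per vCPU, a UOS is limited to 16 vCPUs. */
static uint32_t ioreq_slot_offset(uint16_t vcpu_id)
{
    return (uint32_t)vcpu_id * IOREQ_SLOT_SIZE;
}
```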
Types of I/O Requests
=====================

There are four types of I/O requests:

.. list-table::
   :widths: 50 50
   :header-rows: 1

   * - I/O Request Type
     - Description

   * - PIO
     - A port I/O access.

   * - MMIO
     - An MMIO access to a GPA with no mapping in EPT.

   * - PCI
     - A PCI configuration space access.

   * - WP
     - An MMIO access to a GPA with a read-only mapping in EPT.


For port I/O accesses, the hypervisor always delivers an I/O request of
type PIO to SOS. For MMIO accesses, the hypervisor delivers an I/O
request of either type MMIO or WP, depending on the mapping of the
accessed address (in GPA) in the EPT of the vCPU. The hypervisor never
delivers any I/O request of type PCI, but handles such I/O requests in
the same way as port I/O accesses on their completion.

Refer to :ref:`io-structs-interfaces` for a detailed description of the
data held by each type of I/O request.
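The type-selection rule for trapped MMIO accesses can be expressed as a
small helper: WP for a GPA with a read-only EPT mapping, plain MMIO
otherwise. This is a hypothetical sketch; both enums use invented names,
and ACRN's real request-type constants are defined in its headers.

```c
/* Illustrative request types mirroring the table above. */
enum vhm_req_type { REQ_PIO, REQ_MMIO, REQ_PCI, REQ_WP };

/* Simplified view of how a GPA is mapped in the vCPU's EPT. */
enum ept_mapping { EPT_UNMAPPED, EPT_READ_ONLY, EPT_READ_WRITE };

/* Pick the request type for a trapped MMIO access based on the EPT
 * mapping of the accessed GPA. */
static enum vhm_req_type mmio_request_type(enum ept_mapping m)
{
    return (m == EPT_READ_ONLY) ? REQ_WP : REQ_MMIO;
}
```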
I/O Request State Transitions
=============================

Each slot in the I/O request buffer is managed by a finite state machine
with four states. The following figure illustrates the state transitions
and the events that trigger them.

.. figure:: images/ioem-image92.png
   :align: center

   State Transition of I/O Requests

The four states are:

FREE
   The I/O request slot is not used and new I/O requests can be
   delivered. This is the initial state on UOS creation.

PENDING
   The I/O request slot is occupied with an I/O request pending
   to be processed by SOS.

PROCESSING
   The I/O request has been dispatched to a client but the
   client has not finished handling it yet.

COMPLETE
   The client has completed the I/O request but the hypervisor
   has not consumed the results yet.

The contents of an I/O request slot are owned by the hypervisor when the
state of the slot is FREE or COMPLETE; in these cases SOS can only
access the state of that slot. Similarly, the contents are owned by SOS
when the state is PENDING or PROCESSING, when the hypervisor can only
access the state of that slot.

The states transition as follows:

1. To deliver an I/O request, the hypervisor takes the slot
   corresponding to the vCPU triggering the I/O access, fills in the
   contents, changes the state to PENDING, and notifies SOS via an
   upcall.

2. On an upcall, SOS dispatches each I/O request in the PENDING state to
   a client and changes the state to PROCESSING.

3. The client assigned an I/O request changes the state to COMPLETE
   after it completes the emulation of the I/O request. A hypercall
   is made to notify the hypervisor of the I/O request completion after
   the state change.

4. The hypervisor finishes the post-work of an I/O request after it is
   notified of its completion, and changes the state back to FREE.

States are accessed using atomic operations to avoid one core observing
an unexpected state while the state is being written on another.

Note that there is no state to represent a "failed" I/O request. SOS
should return all 1's for reads and ignore writes whenever it cannot
handle the I/O request, and change the state of the request to COMPLETE.
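The four numbered steps above give each state exactly one legal
successor, which makes the machine easy to check. A minimal sketch with
illustrative names (ACRN's real state constants differ):

```c
/* Slot states mirroring the FREE -> PENDING -> PROCESSING -> COMPLETE
 * -> FREE cycle described above. */
enum req_state { REQ_FREE, REQ_PENDING, REQ_PROCESSING, REQ_COMPLETE };

/* Return nonzero iff the transition matches one of steps 1-4 above. */
static int transition_is_legal(enum req_state from, enum req_state to)
{
    switch (from) {
    case REQ_FREE:       return to == REQ_PENDING;    /* 1: HV delivers */
    case REQ_PENDING:    return to == REQ_PROCESSING; /* 2: SOS dispatches */
    case REQ_PROCESSING: return to == REQ_COMPLETE;   /* 3: client finishes */
    case REQ_COMPLETE:   return to == REQ_FREE;       /* 4: HV post-work */
    default:             return 0;
    }
}
```

In the real implementation the store that performs each transition is
atomic, since the owner of the slot contents changes hands with the
state.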
Post-work
=========

After an I/O request completes, some more work needs to be done for I/O
reads to update guest registers accordingly. Currently the hypervisor
re-enters the vCPU thread every time a vCPU is scheduled back in, rather
than resuming from where the vCPU was scheduled out. As a result,
post-work is introduced for this purpose.

The hypervisor pauses a vCPU before an I/O request is delivered to SOS.
Once the I/O request emulation is completed, a client notifies the
hypervisor with a hypercall. The hypervisor picks up that request, does
the post-work, and resumes the guest vCPU. The post-work takes care of
updating the vCPU guest state to reflect the effect of the I/O reads.

.. figure:: images/ioem-image100.png
   :align: center

   Workflow of MMIO I/O request completion

The figure above illustrates the workflow to complete an I/O request for
MMIO. Once the I/O request is completed, SOS makes a hypercall to notify
the hypervisor, which resumes the UOS vCPU that triggered the access
after requesting post-work on that vCPU. After the UOS vCPU resumes, it
first does the post-work to update the guest registers if the access was
a read, changes the state of the corresponding I/O request slot to FREE,
and continues execution of the vCPU.

.. figure:: images/ioem-image106.png
   :align: center
   :name: port-io-completion

   Workflow of port I/O request completion

Completion of a port I/O request (shown in :numref:`port-io-completion`
above) is similar to the MMIO case, except that the post-work is done
before resuming the vCPU. This is because the post-work for port I/O
reads only needs to update the general-purpose register eax of the vCPU,
while the post-work for MMIO reads needs further emulation of the
trapped instruction. The latter is much more complex and may impact the
performance of SOS.
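For port I/O reads, the post-work merges the value returned by SOS into
the guest's (r)ax according to the access width: byte and word reads
only update al/ax, while a doubleword read replaces eax (zero-extending
into rax, per the usual x86 register-write semantics). A hypothetical
sketch with an invented helper name:

```c
#include <stdint.h>

/* Merge a completed port I/O read of the given width into the guest's
 * rax value, preserving the untouched upper bits for 1- and 2-byte
 * reads and zero-extending for 4-byte reads. */
static uint64_t merge_pio_read(uint64_t rax, uint32_t value, uint32_t bytes)
{
    switch (bytes) {
    case 1U:
        return (rax & ~0xFFULL)   | (value & 0xFFU);    /* al only */
    case 2U:
        return (rax & ~0xFFFFULL) | (value & 0xFFFFU);  /* ax only */
    default:
        return value;  /* eax write zero-extends into rax */
    }
}
```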
.. _io-structs-interfaces:

Data Structures and Interfaces
******************************

External Interfaces
===================

The following structures represent an I/O request. *struct vhm_request*
is the main structure and the others are detailed representations of I/O
requests of different kinds. Refer to Section 4.4.4 for the usage of
*struct pci_request*.

.. doxygenstruct:: mmio_request
   :project: Project ACRN

.. doxygenstruct:: pio_request
   :project: Project ACRN

.. doxygenstruct:: pci_request
   :project: Project ACRN

.. doxygenunion:: vhm_io_request
   :project: Project ACRN

.. doxygenstruct:: vhm_request
   :project: Project ACRN

For hypercalls related to I/O emulation, refer to Section 3.11.4.

.. _io-handler-init:

Initialization and Deinitialization
===================================

The following structure represents a port I/O handler:

.. note:: add reference to vm_io_handler_desc definition in ioreq.h

The following structure represents an MMIO handler:

.. note:: add reference to mem_io_node definition in ioreq.h


The following APIs are provided to initialize, deinitialize, or
configure I/O bitmaps and to register or unregister I/O handlers:

.. code-block:: c

   /* Initialize the I/O bitmap for vm. */
   void setup_io_bitmap(struct vm *vm)

   /* Allow a VM to access a port I/O range.
    * This API enables direct access from the given vm to the port I/O space
    * starting from address_arg to address_arg + nbytes - 1.
    */
   void allow_guest_io_access(struct vm *vm, uint32_t address_arg, uint32_t nbytes)

   /* Free I/O bitmaps and port I/O handlers of vm. */
   void free_io_emulation_resource(struct vm *vm)

   /* Register a port I/O handler. */
   void register_io_emulation_handler(struct vm *vm, struct vm_io_range *range,
        io_read_fn_t io_read_fn_ptr, io_write_fn_t io_write_fn_ptr)

   /* Register an MMIO handler. */
   int register_mmio_emulation_handler(struct vm *vm, hv_mem_io_handler_t read_write,
        uint64_t start, uint64_t end, void *handler_private_data)

   /* Unregister an MMIO handler. */
   void unregister_mmio_emulation_handler(struct vm *vm, uint64_t start, uint64_t end)

.. note:: change these to reference API material from ioreq.h

I/O Emulation
=============

The following APIs are provided for I/O emulation at runtime:

.. code-block:: c

   /* Emulate the given I/O access for vcpu. */
   int32_t emulate_io(struct vcpu *vcpu, struct io_request *io_req)

   /* Deliver io_req to SOS and suspend vcpu till its completion. */
   int32_t acrn_insert_request_wait(struct vcpu *vcpu, struct io_request *io_req)

   /* General post-work for port I/O emulation. */
   void emulate_io_post(struct vcpu *vcpu)

   /* General post-work for MMIO emulation. */
   void emulate_mmio_post(struct vcpu *vcpu, struct io_request *io_req)

   /* Post-work of I/O requests for MMIO. */
   void dm_emulate_mmio_post(struct vcpu *vcpu)

   /* The handler of VM exits on I/O instructions. */
   int32_t pio_instr_vmexit_handler(struct vcpu *vcpu)

.. note:: change these to reference API material from ioreq.h

.. toctree::
   :maxdepth: 1

   GVT-g GPU Virtualization <hld-APL_GVT-g>
   UART virtualization <uart-virt-hld>
   Watchdog virtualization <watchdog-hld>

doc/developer-guides/hld/hld-overview.rst

Lines changed: 2 additions & 0 deletions
@@ -94,6 +94,8 @@ start/stop/pause, virtual CPU pause/resume, etc.

 ACRN Architecture

+.. _intro-io-emulation:
+
 Device Emulation
 ================
