
Commit 1707fc3

doc: add memory management HLD

Transcribe and publish the reviewed memory management HLD into the ACRN
doc set as a developer guide.

Signed-off-by: David B. Kinder <david.b.kinder@intel.com>

1 parent b369098

File tree

9 files changed: +249 lines, -0 lines


doc/developer-guides/index.rst

Lines changed: 1 addition & 0 deletions

@@ -7,6 +7,7 @@ Developer Guides
     :maxdepth: 1

     primer.rst
+    memmgt-hld.rst
     virtio-hld.rst
     ../api/index.rst
     ../reference/kconfig/index.rst
doc/developer-guides/memmgt-hld.rst

Lines changed: 248 additions & 0 deletions
@@ -0,0 +1,248 @@
.. _memmgt-hld:

Memory Management High-Level Design
###################################

This document describes memory management for the ACRN hypervisor.

Overview
********

In the ACRN hypervisor system, there are a few different memory spaces
to consider. From the hypervisor's point of view there are:

- Host Physical Address (HPA): the native physical address space, and
- Host Virtual Address (HVA): the native virtual address space based on
  an MMU. A page table is used to translate between the HPA and HVA
  spaces.

And from the Guest OS running on a hypervisor there are:

- Guest Physical Address (GPA): the guest physical address space within
  a virtual machine. GPA to HPA translation is usually done by an
  MMU-like hardware module (EPT on x86) with an associated page table.
- Guest Virtual Address (GVA): the guest virtual address space within a
  virtual machine, based on a vMMU.
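The two translation stages just listed compose: a GVA reaches host physical memory only through GVA-to-GPA (vMMU) and then GPA-to-HPA (EPT). A minimal C sketch of that composition follows; the `demo_*` translation functions are toy stand-ins (a fixed offset each), not ACRN code, since real translations walk multi-level page tables.

```c
#include <stdint.h>

/* Each translation stage reduced to a function pointer; the real
 * stages walk a guest page table (vMMU) or EPT respectively. */
typedef uint64_t (*xlate_fn)(uint64_t);

/* A guest virtual address reaches host physical memory only through
 * both stages, applied in order. */
static uint64_t gva2hpa(uint64_t gva, xlate_fn gva2gpa, xlate_fn gpa2hpa)
{
    return gpa2hpa(gva2gpa(gva));
}

/* Toy stages for demonstration: each just adds a fixed offset. */
static uint64_t demo_gva2gpa(uint64_t gva) { return gva + 0x1000ULL; }
static uint64_t demo_gpa2hpa(uint64_t gpa) { return gpa + 0x100000ULL; }
```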
.. figure:: images/mem-image2.png
   :align: center
   :width: 900px
   :name: mem-overview

   ACRN Memory Mapping Overview

:numref:`mem-overview` provides an overview of the ACRN system memory
mapping, showing:

- GVA to GPA mapping based on a vMMU on a VCPU in a VM
- GPA to HPA mapping based on EPT for a VM in the hypervisor
- HVA to HPA mapping based on the MMU in the hypervisor

This document illustrates the memory management infrastructure for the
ACRN hypervisor and how it handles the different memory space views
inside the hypervisor and from a VM:

- How the ACRN hypervisor manages host memory (HPA/HVA)
- How the ACRN hypervisor manages SOS guest memory (HPA/GPA)
- How the ACRN hypervisor and SOS DM manage UOS guest memory (HPA/GPA)
Hypervisor Memory Management
****************************

The ACRN hypervisor is the primary owner of system memory management.
Typically, the boot firmware (e.g., EFI) passes the platform physical
memory layout (the E820 table) to the hypervisor, and the ACRN
hypervisor bases its memory management on this table.

Physical Memory Layout - E820
=============================

The boot firmware (e.g., EFI) passes the E820 table through the
multiboot protocol. This table contains the original memory layout of
the platform.

.. figure:: images/mem-image1.png
   :align: center
   :width: 900px
   :name: mem-layout

   Physical Memory Layout Example

:numref:`mem-layout` is an example of the physical memory layout based
on a simple platform E820 table. The following sections demonstrate
different memory space management by referencing it.
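The E820 handoff can be pictured with a short sketch. The entry layout below follows the BIOS E820 convention (base address, length, type); the `e820_entry` structure name and the sample table contents are illustrative only, not ACRN's actual definitions.

```c
#include <stdint.h>

/* BIOS E820 entry types (subset). */
#define E820_TYPE_RAM      1U
#define E820_TYPE_RESERVED 2U

/* Hypothetical entry layout, following the BIOS E820 field order. */
struct e820_entry {
    uint64_t baseaddr;  /* start of the region (HPA) */
    uint64_t length;    /* size of the region in bytes */
    uint32_t type;      /* E820_TYPE_* */
};

/* Example: a simple three-entry layout (illustrative values). */
static const struct e820_entry sample_e820[] = {
    { 0x0000000000000000ULL, 0x000A0000ULL, E820_TYPE_RAM },      /* 640KB low RAM */
    { 0x00000000000A0000ULL, 0x00060000ULL, E820_TYPE_RESERVED }, /* legacy hole */
    { 0x0000000000100000ULL, 0x7FF00000ULL, E820_TYPE_RAM },      /* RAM above 1MB */
};

/* Sum the usable RAM reported by the firmware-provided table. */
static uint64_t e820_usable_ram(const struct e820_entry *tab, int n)
{
    uint64_t total = 0;
    for (int i = 0; i < n; i++) {
        if (tab[i].type == E820_TYPE_RAM)
            total += tab[i].length;
    }
    return total;
}
```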
Physical to Virtual Mapping
===========================

The ACRN hypervisor runs in paging mode, so after receiving the
platform E820 table, it creates its MMU page tables based on that
table. This is done by the function ``init_paging()`` for all physical
CPUs.

The memory mapping policy here is:

- Identical mapping for each physical CPU (the ACRN hypervisor's memory
  could be relocatable in a future implementation)
- Map all memory regions with UNCACHED type
- Remap RAM regions to WRITE-BACK type
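The net effect of the last two policy rules is that an address ends up WRITE-BACK exactly when it falls in a RAM region of the E820 table, and UNCACHED everywhere else. A small sketch of that decision, under the assumption of a simplified region table (the `region` struct and sample values are illustrative, not ACRN's):

```c
#include <stdint.h>
#include <stdbool.h>

enum cache_type { MEM_UNCACHED, MEM_WRITE_BACK };

/* Simplified E820-like region record (illustrative). */
struct region { uint64_t base, len; bool is_ram; };

/* Sample layout matching the example above. */
static const struct region hv_e820[] = {
    { 0x00000000ULL, 0x000A0000ULL, true  },  /* low RAM */
    { 0x000A0000ULL, 0x00060000ULL, false },  /* legacy hole */
    { 0x00100000ULL, 0x7FF00000ULL, true  },  /* RAM above 1MB */
};

/* Policy from the text: everything defaults to UNCACHED, and regions
 * the E820 reports as RAM are remapped WRITE-BACK. */
static enum cache_type hv_cache_attr(const struct region *e820, int n,
                                     uint64_t addr)
{
    for (int i = 0; i < n; i++) {
        if (e820[i].is_ram &&
            addr >= e820[i].base && addr < e820[i].base + e820[i].len)
            return MEM_WRITE_BACK;
    }
    return MEM_UNCACHED; /* MMIO/PCI holes, reserved ranges, etc. */
}
```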
.. figure:: images/mem-image4.png
   :align: center
   :width: 900px
   :name: vm-layout

   Hypervisor Virtual Memory Layout

:numref:`vm-layout` shows:

- The hypervisor can access all of system memory
- The hypervisor has an UNCACHED MMIO/PCI hole reserved for devices,
  such as for LAPIC/IOAPIC access
- The hypervisor has its own memory with WRITE-BACK cache type for its
  code and data (the < 1MB part is for secondary CPU reset code)
Service OS Memory Management
****************************

After the ACRN hypervisor starts, it creates the Service OS as its
first VM. The Service OS runs all the native device drivers, manages
the hardware devices, and provides I/O mediation to guest VMs. The
Service OS is in charge of memory allocation for Guest VMs as well.

The ACRN hypervisor passes access to the whole of system memory (except
its own part) to the Service OS. The Service OS must be able to access
all of the system memory except the hypervisor part.
Guest Physical Memory Layout - E820
===================================

The ACRN hypervisor passes the original E820 table to the Service OS
after filtering out its own part. So from the Service OS's view, it
sees almost all of the system memory, as shown here:

.. figure:: images/mem-image3.png
   :align: center
   :width: 900px
   :name: sos-mem-layout

   SOS Physical Memory Layout
Host to Guest Mapping
=====================

The ACRN hypervisor creates the Service OS's host (HPA) to guest (GPA)
mapping (the EPT mapping) through the function
``prepare_vm0_memmap_and_e820()`` when it creates the SOS VM. It
follows these rules:

- Identical mapping
- Map all memory ranges with UNCACHED type
- Remap RAM entries in the (revised) E820 with WRITE-BACK type
- Unmap the ACRN hypervisor's memory range
- Unmap the ACRN hypervisor's emulated vLAPIC/vIOAPIC MMIO range

The host to guest mapping is static for the Service OS; it does not
change after the Service OS begins running. Each native device driver
can access its MMIO through this static mapping. EPT violations occur
only for the hypervisor's vLAPIC/vIOAPIC emulation for the Service OS
VM.
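The rules above can be sketched as a per-address decision: hypervisor memory and the emulated vLAPIC/vIOAPIC MMIO range are left unmapped (so SOS accesses trap), and everything else is identity-mapped with a cache type chosen by the E820. The range constants below are illustrative placeholders (only 0xFEE00000 is the architectural LAPIC MMIO base), and `sos_ept_attr` is not an ACRN function.

```c
#include <stdint.h>
#include <stdbool.h>

enum map_action { MAP_UC, MAP_WB, UNMAPPED };

/* Illustrative ranges only; the real ones come from the build layout. */
#define HV_START        0x20000000ULL   /* hypothetical hypervisor base */
#define HV_END          0x22000000ULL
#define LAPIC_MMIO      0xFEE00000ULL   /* architectural LAPIC base */
#define LAPIC_MMIO_END  0xFEE01000ULL

static bool in_range(uint64_t a, uint64_t s, uint64_t e)
{
    return a >= s && a < e;
}

/* Apply the rule list above to a single host physical address. */
static enum map_action sos_ept_attr(uint64_t hpa, bool is_ram)
{
    if (in_range(hpa, HV_START, HV_END))
        return UNMAPPED;             /* hypervisor memory is hidden */
    if (in_range(hpa, LAPIC_MMIO, LAPIC_MMIO_END))
        return UNMAPPED;             /* emulated vLAPIC MMIO is trapped */
    return is_ram ? MAP_WB : MAP_UC; /* identical mapping otherwise */
}
```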
User OS Memory Management
*************************

A User OS VM is created by the DM (Device Model) application running in
the Service OS. The DM is responsible for memory allocation for a User
OS (Guest OS) VM.
Guest Physical Memory Layout - E820
===================================

The DM creates the E820 table for a User OS VM based on these simple
rules:

- If the requested VM memory size is below the low memory limit
  (defined in the DM as 2GB), then low memory range = [0, requested VM
  memory size]
- If the requested VM memory size is above the low memory limit
  (defined in the DM as 2GB), then low memory range = [0, 2GB] and high
  memory range = [4GB, 4GB + requested VM memory size - 2GB]
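The two rules amount to a simple split around the 2GB limit, with the overflow relocated above the 4GB PCI hole. A minimal sketch of that arithmetic (the `uos_layout` struct and function name are illustrative, not the DM's actual code):

```c
#include <stdint.h>

#define UOS_LOWMEM_LIMIT (2ULL << 30)  /* 2GB, per the DM rule above */
#define GB4              (4ULL << 30)  /* high memory starts at 4GB */

struct uos_layout {
    uint64_t lowmem_end;    /* low range is [0, lowmem_end) */
    uint64_t highmem_start; /* high range is [highmem_start, highmem_end) */
    uint64_t highmem_end;   /* both 0 when no high memory is needed */
};

/* Split a requested VM memory size into the low/high ranges above. */
static struct uos_layout uos_e820_split(uint64_t request)
{
    struct uos_layout l = { 0, 0, 0 };

    if (request <= UOS_LOWMEM_LIMIT) {
        l.lowmem_end = request;
    } else {
        l.lowmem_end = UOS_LOWMEM_LIMIT;
        l.highmem_start = GB4;
        l.highmem_end = GB4 + (request - UOS_LOWMEM_LIMIT);
    }
    return l;
}
```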
.. figure:: images/mem-image6.png
   :align: center
   :width: 900px
   :name: uos-mem-layout

   UOS Physical Memory Layout

The DM does UOS memory allocation based on the hugeTLB mechanism by
default. The real memory mapping may be scattered in the SOS physical
memory space, as shown below:

.. figure:: images/mem-image5.png
   :align: center
   :width: 900px
   :name: uos-mem-layout-hugetlb

   UOS Physical Memory Layout Based on Hugetlb
Host to Guest Mapping
=====================

A User OS VM's memory is allocated by the Service OS DM application,
and may come from different huge pages in the Service OS, as shown in
:numref:`uos-mem-layout-hugetlb`.

As the Service OS has full information about these huge pages (size,
SOS-GPA, and UOS-GPA), it works with the hypervisor to complete the
UOS's host to guest mapping using this pseudo code:

.. code-block:: none

   for x in allocated huge pages do
       x.hpa = gpa2hpa_for_sos(x.sos_gpa)
       host2guest_map_for_uos(x.hpa, x.uos_gpa, x.size)
   end
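A C rendering of that loop might look as follows. The `huge_page` record and both function pointers are stand-ins: in a real DM the two operations would be hypercalls into the hypervisor, and the `demo_*` helpers exist only to exercise the loop.

```c
#include <stdint.h>
#include <stddef.h>

/* Hypothetical record the DM keeps per allocated huge page. */
struct huge_page {
    uint64_t sos_gpa;  /* where the page sits in the SOS address space */
    uint64_t uos_gpa;  /* where the UOS expects to see it */
    uint64_t size;     /* huge page size, e.g. 2MB */
    uint64_t hpa;      /* filled in by the loop below */
};

/* Stand-ins for the two operations in the pseudo code. */
typedef uint64_t (*gpa2hpa_fn)(uint64_t sos_gpa);
typedef void (*map_fn)(uint64_t hpa, uint64_t uos_gpa, uint64_t size);

/* Direct transcription of the pseudo code above. */
static void map_uos_pages(struct huge_page *pages, size_t n,
                          gpa2hpa_fn gpa2hpa_for_sos,
                          map_fn host2guest_map_for_uos)
{
    for (size_t i = 0; i < n; i++) {
        pages[i].hpa = gpa2hpa_for_sos(pages[i].sos_gpa);
        host2guest_map_for_uos(pages[i].hpa, pages[i].uos_gpa,
                               pages[i].size);
    }
}

/* Toy translation for demonstration: HPA = SOS GPA + fixed offset. */
static uint64_t demo_gpa2hpa(uint64_t sos_gpa)
{
    return sos_gpa + 0x1000000ULL;
}

static uint64_t demo_mapped_bytes;
static void demo_map(uint64_t hpa, uint64_t uos_gpa, uint64_t size)
{
    (void)hpa; (void)uos_gpa;
    demo_mapped_bytes += size;  /* just count what got mapped */
}

/* Map two 2MB pages and return the first page's computed HPA. */
static uint64_t demo_run(void)
{
    struct huge_page pg[2] = {
        { 0x200000ULL, 0x000000ULL, 0x200000ULL, 0 },
        { 0x600000ULL, 0x200000ULL, 0x200000ULL, 0 },
    };
    map_uos_pages(pg, 2, demo_gpa2hpa, demo_map);
    return pg[0].hpa;
}
```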
Trusty
======

For an Android User OS, there is a secure world called "trusty world
support," whose memory needs are taken care of by the ACRN hypervisor
for security reasons. From the memory management view, the trusty
memory space should not be accessible by the SOS or by the UOS normal
world.

.. figure:: images/mem-image7.png
   :align: center
   :width: 900px
   :name: uos-mem-layout-trusty

   UOS Physical Memory Layout with Trusty
Memory Interaction
******************

Previous sections described management of the different memory spaces
in the ACRN hypervisor, Service OS, and User OS. Among these memory
spaces there are different kinds of interaction: for example, a VM may
do a hypercall to the hypervisor that includes a data transfer, or an
instruction emulation in the hypervisor may need to access the Guest
instruction pointer register to fetch instruction data.
Access GPA from Hypervisor
==========================

When the hypervisor needs to access a GPA range for a data transfer,
the caller from the Guest must make sure the memory range's GPA is
address-continuous. The corresponding HPA range in the hypervisor,
however, could be address-discontinuous (especially for a UOS under the
hugetlb allocation mechanism). For example, a 4MB GPA range may map to
two different 2MB huge pages. The ACRN hypervisor handles this kind of
data transfer by doing EPT page walking based on its HPA.
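In practice this means the copy must be chunked so that no single ``memcpy`` crosses a boundary where the backing HPA could change. A minimal sketch, assuming a per-page GPA-to-HVA translation callback standing in for the EPT walk (the names here are illustrative, not ACRN's):

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SIZE 4096ULL

/* Hypothetical per-page translation, standing in for an EPT walk
 * followed by an HPA->HVA conversion. */
typedef void *(*gpa2hva_fn)(uint64_t gpa);

/* Copy from a GPA-continuous guest buffer one page at a time, because
 * the backing HPA may change at any page boundary (e.g. across
 * different 2MB huge pages). */
static void copy_from_gpa(void *dst, uint64_t gpa, uint64_t len,
                          gpa2hva_fn gpa2hva)
{
    uint8_t *out = dst;

    while (len > 0) {
        /* Bytes left in the current guest page. */
        uint64_t in_page = PAGE_SIZE - (gpa & (PAGE_SIZE - 1));
        uint64_t chunk = len < in_page ? len : in_page;

        memcpy(out, gpa2hva(gpa), chunk);
        gpa += chunk;
        out += chunk;
        len -= chunk;
    }
}

/* Demonstration: fake guest memory where GPA indexes a flat buffer. */
static uint8_t fake_guest[2 * PAGE_SIZE];
static void *fake_gpa2hva(uint64_t gpa) { return &fake_guest[gpa]; }

static int demo_copy(void)
{
    uint8_t dst[16];

    for (int i = 0; i < (int)(2 * PAGE_SIZE); i++)
        fake_guest[i] = (uint8_t)i;
    /* Copy 16 bytes straddling the first page boundary. */
    copy_from_gpa(dst, PAGE_SIZE - 8, 16, fake_gpa2hva);
    return dst[0] == (uint8_t)(PAGE_SIZE - 8) &&
           dst[15] == (uint8_t)(PAGE_SIZE + 7);
}
```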
Access GVA from Hypervisor
==========================

Similarly, when the hypervisor needs to access a GVA range for a data
transfer, both the GPA and the HPA could be address-discontinuous. The
ACRN hypervisor must pay attention to this kind of data transfer, and
handle it by doing page walking based on both its GPA and HPA.
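The GVA case extends the chunked copy with a second translation stage: both GVA-to-GPA and GPA-to-HPA mappings may change at every 4KB boundary, so both walks happen per page. As before, the callbacks and demo helpers below are illustrative stand-ins for the real page-table and EPT walks.

```c
#include <stdint.h>
#include <string.h>

#define PAGE_SZ 4096ULL

typedef uint64_t (*gva2gpa_fn)(uint64_t gva);  /* guest page table walk */
typedef void *(*gpa2hva_fn)(uint64_t gpa);     /* EPT walk + HPA->HVA */

/* Copy from a GVA-continuous guest buffer: both translations may
 * change at every 4KB boundary, so translate page by page. */
static void copy_from_gva(void *dst, uint64_t gva, uint64_t len,
                          gva2gpa_fn gva2gpa, gpa2hva_fn gpa2hva)
{
    uint8_t *out = dst;

    while (len > 0) {
        uint64_t in_page = PAGE_SZ - (gva & (PAGE_SZ - 1));
        uint64_t chunk = len < in_page ? len : in_page;

        memcpy(out, gpa2hva(gva2gpa(gva)), chunk);
        gva += chunk;
        out += chunk;
        len -= chunk;
    }
}

/* Demonstration: GVA->GPA shifts by one page; GPA indexes a buffer. */
static uint8_t gmem[3 * PAGE_SZ];
static uint64_t d_gva2gpa(uint64_t gva) { return gva + PAGE_SZ; }
static void *d_gpa2hva(uint64_t gpa) { return &gmem[gpa]; }

static int demo_gva_copy(void)
{
    uint8_t dst[8];

    for (unsigned i = 0; i < 3 * PAGE_SZ; i++)
        gmem[i] = (uint8_t)i;
    /* Copy 8 bytes straddling a GVA page boundary; the source bytes
     * come from GPA [2*PAGE_SZ-4, 2*PAGE_SZ+4). */
    copy_from_gva(dst, PAGE_SZ - 4, 8, d_gva2gpa, d_gpa2hva);
    return dst[0] == (uint8_t)(2 * PAGE_SZ - 4) &&
           dst[7] == (uint8_t)(2 * PAGE_SZ + 3);
}
```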
