
Commit ac5facd

ZideChen0 authored and dbkinder committed
doc: update CPU affinity related descriptions
- Changed term from "VCPU affinity" to "CPU affinity"
- Changed vcpu_affinity to cpu_affinity_bitmap in vm_config
- Fixed some errors

Signed-off-by: Zide Chen <zide.chen@intel.com>
1 parent 1436638 commit ac5facd

File tree: 3 files changed, +95 −58 lines

doc/developer-guides/hld/hv-cpu-virt.rst

Lines changed: 20 additions & 27 deletions
@@ -56,18 +56,22 @@ ACRN then forces a fixed 1:1 mapping between a VCPU and this physical CPU
 when creating a VCPU for the guest Operating System. This makes the VCPU
 management code much simpler.

-``vcpu_affinity`` in ``vm config`` help to decide which physical CPU a
-VCPU in a VM affine to, then finalize the fixed mapping.
+``cpu_affinity_bitmap`` in ``vm config`` helps to decide which physical CPU a
+VCPU in a VM is affined to, and so finalizes the fixed mapping. When launching
+a user VM, choose pCPUs from the VM's ``cpu_affinity_bitmap`` that are not
+used by any other VM.

 Flexible CPU Sharing
 ********************

-This is a TODO feature.
-To enable CPU sharing, the ACRN hypervisor could configure "round-robin
-scheduler" as the schedule policy for corresponding physical CPU.
+To enable CPU sharing, the ACRN hypervisor can configure the IORR
+(IO sensitive Round-Robin) or BVT (Borrowed Virtual Time) scheduler policy.

-``vcpu_affinity`` in ``vm config`` help to decide which physical CPU two
-or more VCPUs from different VMs are sharing.
+``cpu_affinity_bitmap`` in ``vm config`` helps to decide which physical CPU
+two or more vCPUs from different VMs are sharing. A pCPU can be shared between
+the Service VM and any user VM as long as local APIC passthrough is not
+enabled in that user VM.
+
+See :ref:`cpu_sharing` for more information.

 CPU management in the Service VM under static CPU partitioning
 ==============================================================
@@ -90,8 +94,8 @@ Here is an example flow of CPU allocation on a multi-core platform.

    CPU allocation on a multi-core platform

-CPU management in the Service VM under flexing CPU sharing
-==========================================================
+CPU management in the Service VM under flexible CPU sharing
+===========================================================

 As all Service VM CPUs could share with different UOSs, ACRN can still pass-thru
 MADT to Service VM, and the Service VM is still able to see all physical CPUs.
@@ -102,28 +106,17 @@ CPUs intended for UOS use.
 CPU management in UOS
 =====================

-From the UOS point of view, CPU management is very simple - when DM does
-hypercalls to create VMs, the hypervisor will create its virtual CPUs
-based on the configuration in this UOS VM's ``vm config``.
-
-As mentioned in previous description, ``vcpu_affinity`` in ``vm config``
-tells which physical CPUs a VM's VCPU will use, and the scheduler policy
-associated with corresponding physical CPU decide this VCPU will run in
-partition or sharing mode.
-
+``cpu_affinity_bitmap`` in ``vm config`` defines the set of pCPUs that a user
+VM is allowed to run on. acrn-dm can choose to launch the VM on only a subset
+of the pCPUs, or on all pCPUs listed in ``cpu_affinity_bitmap``, but it cannot
+assign any pCPU that is not included in it.

 CPU assignment management in HV
 ===============================

-The physical CPU assignment is pre-defined by ``vcpu_affinity`` in
-``vm config``, necessary sanitize check should be done to ensure
-
-- in one VM, each VCPU will have only one prefer physical CPU
-
-- in one VM, its VCPUs will not share same physical CPU
-
-- in one VM, if a VCPU is using "noop scheduler", corresponding
-  physical CPU will not be shared with any other VM's VCPU
+The physical CPU assignment is pre-defined by ``cpu_affinity_bitmap`` in
+``vm config``, while post-launched VMs can be launched on pCPUs that are
+a subset of it.

 Currently, the ACRN hypervisor does not support virtual CPU migration to
 different physical CPUs. This means no changes to the virtual CPU to
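The subset rule this commit introduces (a post-launched VM may only be placed on pCPUs from its configured bitmap) can be sketched as a simple bitmap check. This is illustrative C with a hypothetical helper name, not code from the ACRN tree:

```c
#include <stdint.h>

/* Hypothetical helper (not from the ACRN source): a set of pCPUs requested
 * for a post-launched VM is valid only if it is non-empty and every
 * requested bit is also set in the VM's static cpu_affinity_bitmap. */
static int assignment_is_valid(uint64_t cpu_affinity_bitmap,
                               uint64_t requested_pcpus)
{
    return (requested_pcpus != 0U) &&
           ((requested_pcpus & ~cpu_affinity_bitmap) == 0U);
}
```

Any requested pCPU bit outside the bitmap makes the AND-with-complement nonzero, so the request is rejected.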

doc/tutorials/acrn_configuration_tool.rst

Lines changed: 2 additions & 3 deletions
@@ -132,9 +132,8 @@ Additional scenario XML elements:
 The order of severity from high to low is:
 ``SEVERITY_SAFETY_VM``, ``SEVERITY_RTVM``, ``SEVERITY_SOS``, ``SEVERITY_STANDARD_VM``.

-``vcpu_affinity``:
-  vCPU affinity map. Each vCPU will be mapped to the selected pCPU ID. A different vCPU in the same VM cannot be mapped to the same pCPU.
-  If the pCPU is mapped by different VMs, ``cpu_sharing`` of the VM must be set to ``Enabled`` in the launch XML.
+``cpu_affinity``:
+  List of pCPUs: the guest VM is allowed to create vCPUs from all or a subset of this list.

 ``base`` (a child node of ``epc_section``):
   SGX EPC section base; must be page aligned.

doc/tutorials/cpu_sharing.rst

Lines changed: 73 additions & 28 deletions
@@ -6,28 +6,54 @@ ACRN CPU Sharing
 Introduction
 ************

-The goal of CPU Sharing is to fully utilize the physical CPU resource to support more virtual machines. Currently, ACRN only supports 1 to 1 mapping mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs). Because of the lack of CPU sharing ability, the number of VMs is limited. To support CPU Sharing, we have introduced a scheduling framework and implemented two simple small scheduling algorithms to satisfy embedded device requirements. Note that, CPU Sharing is not available for VMs with local APIC passthrough (``--lapic_pt`` option).
+The goal of CPU Sharing is to fully utilize the physical CPU resource to
+support more virtual machines. Currently, ACRN only supports a 1 to 1 mapping
+mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs). Because of the
+lack of CPU sharing ability, the number of VMs is limited. To support CPU
+Sharing, we have introduced a scheduling framework and implemented two simple
+scheduling algorithms to satisfy embedded device requirements. Note that
+CPU Sharing is not available for VMs with local APIC passthrough
+(``--lapic_pt`` option).

 Scheduling Framework
 ********************

-To satisfy the modularization design concept, the scheduling framework layer isolates the vCPU layer and scheduler algorithm. It does not have a vCPU concept so it is only aware of the thread object instance. The thread object state machine is maintained in the framework. The framework abstracts the scheduler algorithm object, so this architecture can easily extend to new scheduler algorithms.
+To satisfy the modularization design concept, the scheduling framework layer
+isolates the vCPU layer and the scheduler algorithm. It does not have a vCPU
+concept, so it is only aware of the thread object instance. The thread object
+state machine is maintained in the framework. The framework abstracts the
+scheduler algorithm object, so this architecture can easily extend to new
+scheduler algorithms.

 .. figure:: images/cpu_sharing_framework.png
    :align: center

-The below diagram shows that the vCPU layer invokes APIs provided by scheduling framework for vCPU scheduling. The scheduling framework also provides some APIs for schedulers. The scheduler mainly implements some callbacks in an ``acrn_scheduler`` instance for scheduling framework. Scheduling initialization is invoked in the hardware management layer.
+The diagram below shows that the vCPU layer invokes APIs provided by the
+scheduling framework for vCPU scheduling. The scheduling framework also
+provides some APIs for schedulers. The scheduler mainly implements some
+callbacks in an ``acrn_scheduler`` instance for the scheduling framework.
+Scheduling initialization is invoked in the hardware management layer.

 .. figure:: images/cpu_sharing_api.png
    :align: center

-vCPU affinity
+CPU affinity
 *************

-Currently, we do not support vCPU migration; the assignment of vCPU mapping to pCPU is statically configured in the VM configuration via a vcpu_affinity array. The item number of the array matches the vCPU number of this VM. Each item has one bit to indicate the assigned pCPU of the corresponding vCPU. Use these rules to configure the vCPU affinity:
-
-- Only one bit can be set for each affinity item of vCPU.
-- vCPUs in the same VM cannot be assigned to the same pCPU.
+Currently, we do not support vCPU migration; the assignment of vCPU mapping to
+pCPU is fixed at the time the VM is launched. The statically configured
+``cpu_affinity_bitmap`` in the VM configuration defines the superset of pCPUs
+that the VM is allowed to run on. Each set bit in this bitmap indicates that
+one pCPU can be assigned to this VM, and the bit position is the pCPU ID. A
+pre-launched VM is launched on exactly the pCPUs assigned in this bitmap, and
+the vCPU to pCPU mapping is implicit: vCPU0 maps to the pCPU with the lowest
+pCPU ID, vCPU1 maps to the second lowest pCPU ID, and so on.
+
+For post-launched VMs, acrn-dm can choose to launch the VM on a subset of the
+pCPUs defined in ``cpu_affinity_bitmap`` by specifying the assigned pCPUs
+(``--cpu_affinity`` option), but it cannot assign any pCPUs that are not
+included in the VM's ``cpu_affinity_bitmap``.
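The implicit vCPU-to-pCPU mapping for pre-launched VMs (vCPU0 gets the lowest set bit, vCPU1 the next, and so on) can be sketched as a bitmap scan. This is an illustrative model with a hypothetical function name, not the ACRN implementation:

```c
#include <stdint.h>

/* Illustrative sketch of the implicit mapping: vCPU i is assigned the
 * (i+1)-th lowest set bit of cpu_affinity_bitmap. Returns the pCPU ID,
 * or -1 if the bitmap has fewer set bits than vcpu_id + 1. */
static int vcpu_to_pcpu(uint64_t cpu_affinity_bitmap, uint32_t vcpu_id)
{
    for (int pcpu_id = 0; pcpu_id < 64; pcpu_id++) {
        if ((cpu_affinity_bitmap & (UINT64_C(1) << pcpu_id)) != 0U) {
            if (vcpu_id == 0U) {
                return pcpu_id;   /* found the matching set bit */
            }
            vcpu_id--;            /* skip this set bit, keep scanning */
        }
    }
    return -1;
}
```

For example, with a bitmap of ``0x0C`` (pCPU2 and pCPU3), vCPU0 maps to pCPU2 and vCPU1 maps to pCPU3.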

 Here is an example for affinity:

@@ -46,26 +72,47 @@ The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.
 .. figure:: images/cpu_sharing_state.png
    :align: center

-After a new vCPU is created, the corresponding thread object is initiated. The vCPU layer invokes a wakeup operation. After wakeup, the state for the new thread object is set to RUNNABLE, and then follows its algorithm to determine whether or not to preempt the current running thread object. If yes, it turns to the RUNNING state. In RUNNING state, the thread object may turn back to the RUNNABLE state when it runs out of its timeslice, or it might yield the pCPU by itself, or be preempted. The thread object under RUNNING state may trigger sleep to transfer to BLOCKED state.
+After a new vCPU is created, the corresponding thread object is initiated.
+The vCPU layer invokes a wakeup operation. After wakeup, the state of the
+new thread object is set to RUNNABLE, and the scheduler then follows its
+algorithm to determine whether or not to preempt the current running thread
+object. If yes, the new thread object turns to the RUNNING state. In the
+RUNNING state, the thread object may turn back to the RUNNABLE state when it
+runs out of its timeslice, or it might yield the pCPU by itself, or be
+preempted. The thread object in the RUNNING state may trigger sleep to
+transfer to the BLOCKED state.
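The three-state machine described above can be modeled with a few pure transition functions. This is an illustrative sketch of the state diagram, not the ACRN code:

```c
/* Illustrative model of the thread object state machine: BLOCKED,
 * RUNNABLE, and RUNNING, with the transitions named in the text.
 * Invalid transitions leave the state unchanged. */
enum thread_state { BLOCKED, RUNNABLE, RUNNING };

/* wakeup: BLOCKED -> RUNNABLE */
static enum thread_state wakeup(enum thread_state s)
{
    return (s == BLOCKED) ? RUNNABLE : s;
}

/* picked by the scheduler: RUNNABLE -> RUNNING */
static enum thread_state schedule_in(enum thread_state s)
{
    return (s == RUNNABLE) ? RUNNING : s;
}

/* timeslice expiry, yield, or preemption: RUNNING -> RUNNABLE */
static enum thread_state preempt(enum thread_state s)
{
    return (s == RUNNING) ? RUNNABLE : s;
}

/* sleep: RUNNING -> BLOCKED */
static enum thread_state sleep_thread(enum thread_state s)
{
    return (s == RUNNING) ? BLOCKED : s;
}
```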

 Scheduler
 *********

-The below block diagram shows the basic concept for the scheduler. There are two kinds of scheduler in the diagram: NOOP (No-Operation) scheduler and IORR (IO sensitive Round-Robin) scheduler.
+The block diagram below shows the basic concept for the scheduler. There are
+two kinds of scheduler in the diagram: the NOOP (No-Operation) scheduler and
+the IORR (IO sensitive Round-Robin) scheduler.

 - **No-Operation scheduler**:

-  The NOOP (No-operation) scheduler has the same policy as the original 1-1 mapping previously used; every pCPU can run only two thread objects: one is the idle thread, and another is the thread of the assigned vCPU. With this scheduler, vCPU works in Work-Conserving mode, which always try to keep resource busy, and will run once it is ready. Idle thread can run when the vCPU thread is blocked.
+  The NOOP (No-operation) scheduler has the same policy as the original 1-1
+  mapping previously used; every pCPU can run only two thread objects: one is
+  the idle thread, and the other is the thread of the assigned vCPU. With this
+  scheduler, the vCPU works in Work-Conserving mode, which always tries to
+  keep the resource busy, and will run once it is ready. The idle thread can
+  run when the vCPU thread is blocked.

 - **IO sensitive round-robin scheduler**:

-  The IORR (IO sensitive round-robin) scheduler is implemented with the per-pCPU runqueue and the per-pCPU tick timer; it supports more than one vCPU running on a pCPU. It basically schedules thread objects in a round-robin policy and supports preemption by timeslice counting.
+  The IORR (IO sensitive round-robin) scheduler is implemented with a
+  per-pCPU runqueue and a per-pCPU tick timer; it supports more than one vCPU
+  running on a pCPU. It basically schedules thread objects in a round-robin
+  policy and supports preemption by timeslice counting.

   - Every thread object has an initial timeslice (ex: 10ms)
-  - The timeslice is consumed with time and be counted in the context switch and tick handler
-  - If the timeslice is positive or zero, then switch out the current thread object and put it to tail of runqueue. Then, pick the next runnable one from runqueue to run.
-  - Threads with an IO request will preempt current running threads on the same pCPU.
+  - The timeslice is consumed over time and is accounted in the context
+    switch and tick handler.
+  - When the timeslice is exhausted, switch out the current thread object
+    and put it at the tail of the runqueue, then pick the next runnable one
+    from the runqueue to run.
+  - Threads with an IO request will preempt the currently running threads on
+    the same pCPU.
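The round-robin timeslice rules above can be sketched as a toy tick handler. This is an illustrative model only (it assumes refill-on-expiry and a small fixed-size runqueue, details the text does not specify), not the ACRN IORR code:

```c
#include <stdint.h>

#define TIMESLICE 10 /* ticks; the text's example timeslice is 10 ms */

/* Toy per-pCPU runqueue: left[] holds the remaining timeslice of each
 * thread object, cur indexes the currently running one. */
struct rr_queue {
    int left[8];
    int nthreads;
    int cur;
};

/* One tick: consume one unit of the running thread's timeslice. When it
 * is exhausted, refill it for the thread's next turn and advance to the
 * next thread in round-robin order. Returns the index that runs next. */
static int rr_tick(struct rr_queue *q)
{
    if (--q->left[q->cur] <= 0) {
        q->left[q->cur] = TIMESLICE;          /* refill for next turn */
        q->cur = (q->cur + 1) % q->nthreads;  /* rotate to the next thread */
    }
    return q->cur;
}
```

The IO-preemption rule would be an extra path that moves a thread with a pending IO request to the front of the queue; it is omitted here for brevity.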

 Scheduler configuration
 ***********************
@@ -79,19 +126,20 @@ Two places in the code decide the usage for the scheduler.
    :name: Kconfig for Scheduler
    :caption: Kconfig for Scheduler
    :linenos:
-   :lines: 40-58
+   :lines: 25-52
    :emphasize-lines: 3
    :language: c

-The default scheduler is **SCHED_NOOP**. To use the IORR, change it to **SCHED_IORR** in the **ACRN Scheduler**.
+The default scheduler is **SCHED_NOOP**. To use the IORR scheduler, change it
+to **SCHED_IORR** in the **ACRN Scheduler**.

-* The affinity for VMs are set in ``hypervisor/scenarios/<scenario_name>/vm_configurations.h``
+* The VM CPU affinities are defined in ``hypervisor/scenarios/<scenario_name>/vm_configurations.h``

 .. literalinclude:: ../../../..//hypervisor/scenarios/industry/vm_configurations.h
    :name: Affinity for VMs
    :caption: Affinity for VMs
    :linenos:
-   :lines: 31-32
+   :lines: 39-45
    :language: c

 * vCPU number corresponding to affinity is set in ``hypervisor/scenarios/<scenario_name>/vm_configurations.c`` by the **vcpu_num**
@@ -142,9 +190,9 @@ Change the following three files:
        "i915.enable_gvt=1 " \
        SOS_BOOTARGS_DIFF

-   #define VM1_CONFIG_VCPU_AFFINITY {AFFINITY_CPU(0U)}
-   #define VM2_CONFIG_VCPU_AFFINITY {AFFINITY_CPU(1U), AFFINITY_CPU(2U)}
-   #define VM3_CONFIG_VCPU_AFFINITY {AFFINITY_CPU(3U)}
+   #define VM1_CONFIG_CPU_AFFINITY (AFFINITY_CPU(0U))
+   #define VM2_CONFIG_CPU_AFFINITY (AFFINITY_CPU(1U) | AFFINITY_CPU(2U))
+   #define VM3_CONFIG_CPU_AFFINITY (AFFINITY_CPU(3U))
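Note how the affinity macros change from a per-vCPU array to an OR of per-pCPU bits. Assuming ``AFFINITY_CPU(n)`` expands to a mask with bit ``n`` set (an assumption for illustration; the actual ACRN macro may differ), the OR builds the VM's ``cpu_affinity_bitmap``:

```c
#include <stdint.h>

/* Assumed expansion of AFFINITY_CPU, for illustration only; check the
 * ACRN headers for the real definition. Each call yields a bitmap with
 * one pCPU bit set, so OR-ing calls yields the VM's affinity bitmap. */
#define AFFINITY_CPU(n) (UINT64_C(1) << (n))

/* VM2 from the example: allowed on pCPU1 and pCPU2 -> bits 1 and 2 set. */
#define VM2_CONFIG_CPU_AFFINITY (AFFINITY_CPU(1U) | AFFINITY_CPU(2U))
```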
3. ``hypervisor/scenarios/industry/vm_configurations.c``

@@ -187,8 +235,7 @@ Change the following three files:
    .load_order = POST_LAUNCHED_VM,
    .uuid = {0xd2U, 0x79U, 0x54U, 0x38U, 0x25U, 0xd6U, 0x11U, 0xe8U, \
             0x86U, 0x4eU, 0xcbU, 0x7aU, 0x18U, 0xb3U, 0x46U, 0x43U},
-   .vcpu_num = 1U,
-   .vcpu_affinity = VM1_CONFIG_VCPU_AFFINITY,
+   .cpu_affinity_bitmap = VM1_CONFIG_CPU_AFFINITY,
    .vuart[0] = {
        .type = VUART_LEGACY_PIO,
        .addr.port_base = COM1_BASE,
@@ -206,8 +253,7 @@ Change the following three files:
             0xafU, 0x76U, 0xd4U, 0xbcU, 0x5aU, 0x8eU, 0xc0U, 0xe5U},

    .guest_flags = GUEST_FLAG_HIGHEST_SEVERITY,
-   .vcpu_num = 2U,
-   .vcpu_affinity = VM2_CONFIG_VCPU_AFFINITY,
+   .cpu_affinity_bitmap = VM2_CONFIG_CPU_AFFINITY,
    .vuart[0] = {
        .type = VUART_LEGACY_PIO,
        .addr.port_base = COM1_BASE,
@@ -225,8 +271,7 @@ Change the following three files:
    .load_order = POST_LAUNCHED_VM,
    .uuid = {0x38U, 0x15U, 0x88U, 0x21U, 0x52U, 0x08U, 0x40U, 0x05U, \
             0xb7U, 0x2aU, 0x8aU, 0x60U, 0x9eU, 0x41U, 0x90U, 0xd0U},
-   .vcpu_num = 1U,
-   .vcpu_affinity = VM3_CONFIG_VCPU_AFFINITY,
+   .cpu_affinity_bitmap = VM3_CONFIG_CPU_AFFINITY,
    .vuart[0] = {
        .type = VUART_LEGACY_PIO,
        .addr.port_base = COM1_BASE,
