- Changed term from "VCPU affinity" to "CPU affinity"
- changed vcpu_affinity to cpu_affinity_bitmap in vm_config
- fixed some errors
Signed-off-by: Zide Chen <zide.chen@intel.com>
doc/tutorials/cpu_sharing.rst (73 additions & 28 deletions)
@@ -6,28 +6,54 @@ ACRN CPU Sharing
Introduction
************

The goal of CPU Sharing is to fully utilize the physical CPU resource to
support more virtual machines. Currently, ACRN only supports a 1-to-1 mapping
mode between virtual CPUs (vCPUs) and physical CPUs (pCPUs). Because of this
lack of CPU sharing ability, the number of VMs is limited. To support CPU
Sharing, we have introduced a scheduling framework and implemented two simple
scheduling algorithms to satisfy embedded device requirements. Note that CPU
Sharing is not available for VMs with local APIC passthrough (``--lapic_pt``
option).

Scheduling Framework
********************

To satisfy the modularization design concept, the scheduling framework layer
isolates the vCPU layer from the scheduler algorithm. It has no concept of a
vCPU and is only aware of thread object instances. The thread object state
machine is maintained in the framework. The framework abstracts the scheduler
algorithm object, so this architecture can easily be extended to new
scheduler algorithms.

.. figure:: images/cpu_sharing_framework.png
   :align: center

The diagram below shows that the vCPU layer invokes APIs provided by the
scheduling framework for vCPU scheduling. The scheduling framework also
provides some APIs for schedulers. A scheduler mainly implements a set of
callbacks in an ``acrn_scheduler`` instance for the scheduling framework.
Scheduling initialization is invoked in the hardware management layer.
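
To make the callback-based design more concrete, here is a simplified sketch of
the kind of callback table a scheduler could register with the framework. The
structure and member names below are illustrative assumptions and do not
necessarily match the actual ``acrn_scheduler`` definition in the ACRN source.

.. code-block:: c

   /* Illustrative sketch only -- not the actual ACRN definition. */
   struct sched_control;
   struct thread_object;

   struct acrn_scheduler_sketch {
       const char *name;

       /* called once per pCPU when scheduling is initialized */
       int (*init)(struct sched_control *ctl);

       /* called by the framework to select the next thread object to run */
       struct thread_object *(*pick_next)(struct sched_control *ctl);

       /* hooks invoked when a thread object blocks or becomes runnable */
       void (*sleep)(struct thread_object *obj);
       void (*wake)(struct thread_object *obj);
   };

The framework calls into such a table at well-defined points (initialization,
context switch, sleep, and wakeup), which is what allows new scheduling
algorithms to be plugged in without touching the vCPU layer.
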
.. figure:: images/cpu_sharing_api.png
   :align: center

CPU affinity
************

Currently, we do not support vCPU migration; the vCPU to pCPU assignment is
fixed at the time the VM is launched. The statically configured
cpu_affinity_bitmap in the VM configuration defines a superset of the pCPUs
that the VM is allowed to run on. Each set bit in this bitmap indicates that
the corresponding pCPU can be assigned to this VM, and the bit position is the
pCPU ID. A pre-launched VM is launched on exactly the pCPUs assigned in this
bitmap, and the vCPU to pCPU mapping is implicit: vCPU0 maps to the pCPU with
the lowest pCPU ID, vCPU1 maps to the second lowest pCPU ID, and so on.

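
As a minimal sketch of this mapping rule (assuming ``cpu_affinity_bitmap`` is a
plain 64-bit mask; the helper below is hypothetical and not part of ACRN), the
implicit vCPU-to-pCPU mapping can be derived by walking the set bits from the
lowest pCPU ID upward:

.. code-block:: c

   #include <stdint.h>
   #include <stdio.h>

   /* Hypothetical helper: print the implicit vCPU->pCPU mapping of a bitmap. */
   static void print_vcpu_mapping(uint64_t cpu_affinity_bitmap)
   {
       unsigned int vcpu_id = 0U;

       for (unsigned int pcpu_id = 0U; pcpu_id < 64U; pcpu_id++) {
           if ((cpu_affinity_bitmap & ((uint64_t)1U << pcpu_id)) != 0U) {
               printf("vCPU%u -> pCPU%u\n", vcpu_id, pcpu_id);
               vcpu_id++;
           }
       }
   }

   int main(void)
   {
       /* pCPU1 and pCPU3 set: vCPU0 maps to pCPU1, vCPU1 maps to pCPU3 */
       print_vcpu_mapping(((uint64_t)1U << 1) | ((uint64_t)1U << 3));
       return 0;
   }
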
For post-launched VMs, acrn-dm can choose to launch the VM on a subset of the
pCPUs defined in cpu_affinity_bitmap by specifying the assigned pCPUs
(``--cpu_affinity`` option). However, it cannot assign any pCPU that is not
included in the VM's cpu_affinity_bitmap.

Here is an example for affinity:
@@ -46,26 +72,47 @@ The thread object contains three states: RUNNING, RUNNABLE, and BLOCKED.
.. figure:: images/cpu_sharing_state.png
   :align: center

After a new vCPU is created, the corresponding thread object is initialized.
The vCPU layer invokes a wakeup operation. After wakeup, the state of the new
thread object is set to RUNNABLE, and the scheduling algorithm then determines
whether or not to preempt the currently running thread object. If it does, the
new thread object switches to the RUNNING state. In the RUNNING state, the
thread object may turn back to the RUNNABLE state when it runs out of its
timeslice, yields the pCPU by itself, or is preempted. A thread object in the
RUNNING state may trigger sleep to transfer to the BLOCKED state.

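
The same state machine can be summarized in code. This is a simplified sketch
for illustration; the enum values and transition notes are assumptions, not the
hypervisor's actual definitions.

.. code-block:: c

   /* Simplified thread-object states maintained by the scheduling framework. */
   enum thread_object_state_sketch {
       THREAD_STS_BLOCKED,   /* sleeping, e.g. waiting for an event */
       THREAD_STS_RUNNABLE,  /* woken up, waiting to be scheduled   */
       THREAD_STS_RUNNING    /* currently executing on a pCPU       */
   };

   /*
    * Typical transitions (illustrative):
    *   wakeup:                      BLOCKED  -> RUNNABLE
    *   schedule (picked to run):    RUNNABLE -> RUNNING
    *   timeslice used up / yield /
    *   preemption:                  RUNNING  -> RUNNABLE
    *   sleep:                       RUNNING  -> BLOCKED
    */
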
Scheduler
*********

The block diagram below shows the basic concept of the scheduler. There are two
kinds of scheduler in the diagram: the NOOP (No-Operation) scheduler and the
IORR (IO sensitive Round-Robin) scheduler.

- **No-Operation scheduler**:

The NOOP (No-operation) scheduler has the same policy as the original 1-to-1
mapping previously used; every pCPU can run only two thread objects: one is the
idle thread, and the other is the thread of the assigned vCPU. With this
scheduler, a vCPU works in Work-Conserving mode: it always tries to keep the
resource busy and runs as soon as it is ready. The idle thread can run when the
vCPU thread is blocked.

- **IO sensitive round-robin scheduler**:

The IORR (IO sensitive round-robin) scheduler is implemented with a per-pCPU
runqueue and a per-pCPU tick timer; it supports more than one vCPU running on a
pCPU. It basically schedules thread objects in a round-robin policy and
supports preemption by timeslice counting; a minimal sketch of the timeslice
bookkeeping follows the list below.

- Every thread object has an initial timeslice (for example, 10 ms).
- The timeslice is consumed over time and accounted in the context switch and
  in the tick handler.
- When the timeslice runs out, the current thread object is switched out and
  put at the tail of the runqueue; the next runnable one is then picked from
  the runqueue to run.
- Threads with an IO request preempt the currently running threads on the same
  pCPU.

Scheduler configuration
***********************
@@ -79,19 +126,20 @@ Two places in the code decide the usage for the scheduler.
   :name: Kconfig for Scheduler
   :caption: Kconfig for Scheduler
   :linenos:
   :lines: 25-52
   :emphasize-lines: 3
   :language: c

The default scheduler is **SCHED_NOOP**. To use the IORR scheduler, change it
to **SCHED_IORR** in the **ACRN Scheduler** Kconfig option.

* The VM CPU affinities are defined in
  ``hypervisor/scenarios/<scenario_name>/vm_configurations.h``.
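
  For illustration, the relevant part of such a configuration could look like
  the sketch below. The macro and structure names here are assumptions made
  for the example; check the scenario's actual ``vm_configurations.h`` and
  vm_config definitions for the exact fields.

  .. code-block:: c

     #include <stdint.h>

     /* Hypothetical excerpt -- names are illustrative only. */
     #define VM1_CPU_AFFINITY  ((1UL << 0U) | (1UL << 1U))  /* pCPU0 and pCPU1 */

     struct vm_config_sketch {
         uint64_t cpu_affinity_bitmap;  /* superset of pCPUs the VM may run on */
     };

     static const struct vm_config_sketch vm1_config = {
         .cpu_affinity_bitmap = VM1_CPU_AFFINITY,
     };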