Commit d0e1f05

lirui34 authored and wenlingz committed
doc: Align the updates of rt gsg with 1.3
Need to align the rt gsg updates with 1.3 branch.

Signed-off-by: lirui34 <ruix.li@intel.com>
1 parent b4a4d46 commit d0e1f05

1 file changed: +275 −56 lines

doc/getting-started/rt_industry.rst

Lines changed: 275 additions & 56 deletions
@@ -20,12 +20,58 @@ for the RTVM.
 
 - Intel Kaby Lake (aka KBL) NUC platform with two disks inside
   (refer to :ref:`the tables <hardware_setup>` for detailed information).
-- Clear Linux OS (Ver: 31080) installation onto both disks on the KBL NUC.
+- Follow the steps below to install Clear Linux OS (Ver: 31080) onto both disks on the KBL NUC:
 
-.. _installation guide:
+.. _Clear Linux OS Server image:
+   https://download.clearlinux.org/releases/31080/clear/clear-31080-live-server.iso.xz
+
+  #. Create a bootable USB drive on Linux*:
+
+     a. Download and decompress the `Clear Linux OS Server image`_::
+
+           $ unxz clear-31080-live-server.iso.xz
+
+     #. Plug in the USB drive.
+     #. Use the ``lsblk`` command to identify the USB drive:
+
+        .. code-block:: console
+           :emphasize-lines: 6,7
+
+           $ lsblk | grep sd*
+           sda 8:0 0 931.5G 0 disk
+           ├─sda1 8:1 0 512M 0 part /boot/efi
+           ├─sda2 8:2 0 930.1G 0 part /
+           └─sda3 8:3 0 977M 0 part [SWAP]
+           sdc 8:32 1 57.3G 0 disk
+           └─sdc1 8:33 1 57.3G 0 part
+
+     #. Unmount all the ``/dev/sdc`` partitions and burn the image onto the USB drive::
+
+           $ umount /dev/sdc* 2>/dev/null
+           $ sudo dd if=./clear-31080-live-server.iso of=/dev/sdc oflag=sync status=progress bs=4M
+
+  #. Plug the USB drive into the KBL NUC and boot from USB.
+  #. Launch the Clear Linux OS installer boot menu.
+  #. With Clear Linux OS highlighted, select **Enter**.
+  #. Log in with your root account and new password.
+  #. Run the installer using the following command::
+
+        # clr-installer
+
+  #. From the Main menu, select **Configure Installation Media** and set
+     **Destructive Installation** to your desired hard disk.
+  #. Select **Telemetry** and use :kbd:`Tab` to highlight your choice.
+  #. Press :kbd:`A` to show the **Advanced** options.
+  #. Select **Select additional bundles** and add the **network-basic** and
+     **user-basic** bundles.
+  #. Select **Install**.
+  #. Select **Confirm Install** in the **Confirm Installation** window to start the installation.
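
   Optionally, once the installation finishes and the NUC reboots into Clear Linux,
   you can sanity-check that the expected release was installed before moving on to
   the ACRN setup. This is an illustrative check rather than part of the official flow;
   it assumes the standard Clear Linux ``/usr/lib/os-release`` file and the version
   targeted by this guide::

      # grep VERSION_ID /usr/lib/os-release
      VERSION_ID=31080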
+
+.. _step-by-step instruction:
    https://docs.01.org/clearlinux/latest/get-started/bare-metal-install-server.html
 
-.. note:: Follow the `installation guide`_ to install a Clear Linux OS.
+.. note:: You may also refer to the `step-by-step instruction`_ for the detailed Clear Linux OS
+   installation guide.
 
 .. _hardware_setup:
 
@@ -66,61 +112,182 @@ Use the pre-installed industry ACRN hypervisor
 
 .. note:: Skip this section if you choose :ref:`Using the ACRN industry out-of-the-box image <use industry ootb image>`.
 
-Follow :ref:`ACRN quick setup guide <quick-setup-guide>` to set up the
-ACRN Service VM. The industry hypervisor image is installed in the ``/usr/lib/acrn/``
-directory once the Service VM boots. Follow the steps below to use
-``acrn.kbl-nuc-i7.industry.efi`` instead of the original SDC hypervisor:
+#. Boot Clear Linux from the SATA disk.
 
-.. code-block:: none
+#. Log in as root and download the ACRN quick setup script:
+
+   .. code-block:: none
+
+      # wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/getting-started/acrn_quick_setup.sh
+      # chmod +x acrn_quick_setup.sh
 
-   $ sudo mount /dev/sda1 /mnt
-   $ sudo mv /mnt/EFI/acrn/acrn.efi /mnt/EFI/acrn/acrn.efi.bak
-   $ sudo cp /usr/lib/acrn/acrn.kbl-nuc-i7.industry.efi /mnt/EFI/acrn/acrn.efi
-   $ sync && umount /mnt
-   $ sudo reboot
+#. Run the script to set up the Service VM:
+
+   .. code-block:: none
+
+      # ./acrn_quick_setup.sh -s 31080 -d -i
+
+   .. note:: The ``-i`` option means the industry hypervisor image,
+      ``acrn.kbl-nuc-i7.industry.efi``, will be used.
+
+   The following output shows that the script is running correctly and that the
+   industry hypervisor is installed:
+
+   .. code-block:: console
+      :emphasize-lines: 9
+
+      Upgrading Service VM...
+      Disable auto update...
+      Running systemctl to disable updates
+      Clear Linux version 31080 is already installed. Continuing to setup Service VM...
+      Adding the service-os and systemd-networkd-autostart bundles...
+      Loading required manifests...
+      2 bundles were already installed
+      Add /mnt/EFI/acrn folder
+      Copy /usr/lib/acrn/acrn.kbl-nuc-i7.industry.efi to /mnt/EFI/acrn/acrn.efi
+      Getting latest Service OS kernel version: org.clearlinux.iot-lts2018-sos.4.19.73-92
+      Add default (5 seconds) boot wait time.
+      New timeout value is: 5
+      Set org.clearlinux.iot-lts2018-sos.4.19.73-92 as default boot kernel.
+      Check ACRN efi boot event
+      Clean all ACRN efi boot event
+      Check linux bootloader event
+      Clean all Linux bootloader event
+      Add new ACRN efi boot event
+      Service OS setup done!
+
+#. Use the ``efibootmgr -v`` command to check the ACRN boot order:
+
+   .. code-block:: none
+      :emphasize-lines: 3,5
+
+      BootCurrent: 000C
+      Timeout: 1 seconds
+      BootOrder: 0001,0002,000C,000D,0008,000E,000B,0003,0000,0004,0007
+      Boot0000* Windows Boot Manager VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)WINDOWS.........x...B.C.D.O.B.J.E.C.T.=.{.9.d.e.a.8.6.2.c.-.5.c.d.d.-.4.e.7.0.-.a.c.c.1.-.f.3.2.b.3.4.4.d.4.7.9.5.}...o................
+      Boot0001* ACRN HD(1,GPT,c6715698-0f6e-4e27-bb1b-bf7779c1486d,0x800,0x47000)/File(\EFI\acrn\acrn.efi)
+      Boot0002* Linux bootloader HD(3,GPT,b537f16f-d70f-4f1b-83b4-0f11be83cd83,0xc1800,0xded3000)/File(\EFI\org.clearlinux\bootloaderx64.efi)
+      Boot0003* CentOS VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
+      Boot0004* CentOS Linux VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
+      Boot0007* Linux bootloader VenHw(99e275e7-75a0-4b37-a2e6-c5385e6c00cb)
+      Boot0008* UEFI : Built-in EFI Shell VenMedia(5023b95c-db26-429b-a648-bd47664c8012)..BO
+      Boot000B* LAN : IBA CL Slot 00FE v0110 BBS(Network,,0x0)..BO
+      Boot000C* SATA : PORT 0 : KINGSTON SUV500120G : PART 0 : Boot Drive BBS(HD,,0x0)..BO
+      Boot000D* INTEL SSDPEKKW256G8 : PART 0 : Boot Drive BBS(HD,,0x0)..BO
+      Boot000E* UEFI : INTEL SSDPEKKW256G8 : PART 0 : OS Bootloader PciRoot(0x0)/Pci(0x1d,0x0)/Pci(0x0,0x0)/NVMe(0x1,00-00-00-00-00-00-00-00)/HD(1,GPT,8aa992f8-8149-4f6b-8b64-503998c776c1,0x800,0x47000)..BO
+
+   .. note:: Ensure that ACRN is first in the boot order; if it is not, use the
+      ``efibootmgr -o 1`` command to move it to the first position.
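
   For example, using the entry numbers from the listing above (substitute the ones
   reported on your own system), the boot order can be rewritten with ACRN first and
   then verified; this is an illustrative sketch rather than part of the official
   setup flow::

      # efibootmgr -o 0001,0002,000C,000D
      # efibootmgr -v | grep BootOrder
      BootOrder: 0001,0002,000C,000D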
+
+#. Reboot the KBL NUC.
+
+#. Use the ``dmesg`` command to verify that the Service VM has booted:
+
+   .. code-block:: console
+      :emphasize-lines: 2
+
+      # dmesg | grep ACRN
+      [ 0.000000] Hypervisor detected: ACRN
+      [ 1.252840] ACRNTrace: Initialized acrn trace module with 4 cpu
+      [ 1.253291] ACRN HVLog: Failed to init last hvlog devs, errno -19
+      [ 1.253292] ACRN HVLog: Initialized hvlog module with 4
 
 .. _use industry ootb image:
 
 Use the ACRN industry out-of-the-box image
 ==========================================
 
-#. Download the
-   `sos-industry-31080.img.xz <https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/sos-industry-31080.img.xz>`_
-   to your development machine.
+.. note:: If you followed the section above to set up the Service VM, skip ahead to the next
+   :ref:`section <install_rtvm>`.
 
-#. Decompress the xz image:
+#. Boot Clear Linux from the NVMe disk.
 
-   .. code-block:: none
+#. Download the Service VM industry image::
+
+      # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/sos-industry-31080.img.xz
+
+#. Decompress the xz image::
+
+      # xz -d sos-industry-31080.img.xz
+
+#. Burn the Service VM image onto the SATA disk::
+
+      # dd if=sos-industry-31080.img of=/dev/sda bs=4M oflag=sync status=progress
+
+#. Configure the EFI firmware to boot the ACRN hypervisor by default::
 
-      $ xz -d sos-industry-31080.img.xz
+      # efibootmgr -c -l "\EFI\acrn\acrn.efi" -d /dev/sda -p 1 -L "ACRN"
 
-#. Follow the instructions at :ref:`Deploy the Service VM image <deploy_ootb_service_vm>`
-   to deploy the Service VM image on the SATA disk.
+#. Unplug the USB disk and reboot the test machine. After the Clear Linux OS boots,
+   log in as "root" for the first time.
+
+.. _install_rtvm:
 
 Install and launch the Preempt-RT VM
 ************************************
 
-#. Download
-   `preempt-rt-31080.img.xz <`https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/preempt-rt-31080.img.xz>`_ to your development machine.
+#. Log in to the Service VM as root.
+
+#. Download the Preempt-RT VM image::
+
+      # wget https://github.com/projectacrn/acrn-hypervisor/releases/download/acrn-2019w39.1-140000p/preempt-rt-31080.img.xz
+
+#. Decompress the xz image::
+
+      # xz -d preempt-rt-31080.img.xz
+
+#. Burn the Preempt-RT VM image onto the NVMe disk::
+
+      # dd if=preempt-rt-31080.img of=/dev/nvme0n1 bs=4M oflag=sync status=progress
 
-#. Decompress the xz image:
+#. Use the ``lspci`` command to ensure that the correct NVMe device IDs will
+   be used for the passthru before launching the script:
 
    .. code-block:: none
+      :emphasize-lines: 5
 
-      $ xz -d preempt-rt-31080.img.xz
+      # lspci -v | grep -iE 'nvm|ssd'
+      02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a6 (rev 03) (prog-if 02 [NVM Express])
 
-#. Follow the instructions at :ref:`Deploy the User VM Preempt-RT image <deploy_ootb_rtvm>`
-   to deploy the Preempt-RT vm image on the NVMe disk.
+      # lspci -nn | grep "Non-Volatile memory controller"
+      02:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a6] (rev 03)
 
-#. Upon deployment completion, launch the RTVM directly on your KBL NUC::
+#. Modify the script so that the ``passthru_vpid`` and ``passthru_bdf`` entries match
+   the NVMe vendor:device ID and bus/device/function reported by ``lspci`` above.
 
-      $ sudo /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+   .. code-block:: none
+      :emphasize-lines: 6,11
 
-.. note:: Use the ``lspci`` command to ensure that the correct NMVe device IDs will be used for the passthru before launching the script::
+      # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
 
-   $ sudo lspci -v | grep -iE 'nvm|ssd' 02:00.0 Non-Volatile memory controller: Intel Corporation Device f1a6 (rev 03) (prog-if 02 [NVM Express])
-   $ sudo lspci -nn | grep "Non-Volatile memory controller" 02:00.0 Non-Volatile memory controller [0108]: Intel Corporation Device [8086:f1a6] (rev 03)
+      passthru_vpid=(
+      ["eth"]="8086 156f"
+      ["sata"]="8086 9d03"
+      ["nvme"]="8086 f1a6"
+      )
+      passthru_bdf=(
+      ["eth"]="0000:00:1f.6"
+      ["sata"]="0000:00:17.0"
+      ["nvme"]="0000:02:00.0"
+      )
 
+   .. code-block:: none
+      :emphasize-lines: 6
+
+      /usr/bin/acrn-dm -A -m $mem_size -c $1 -s 0:0,hostbridge \
+        --lapic_pt \
+        --rtvm \
+        --virtio_poll 1000000 \
+        -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
+        -s 2,passthru,02/00/0 \
+        -s 3,virtio-console,@stdio:stdio_port \
+        $pm_channel $pm_by_vuart \
+        --ovmf /usr/share/acrn/bios/OVMF.fd \
+        hard_rtvm
+      }
+
+#. Upon deployment completion, launch the RTVM directly on your KBL NUC::
+
+      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
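
   Once the RTVM console appears on the virtio console, a quick sanity check is to
   confirm the guest is running the real-time kernel shipped in the image; the exact
   release string depends on the image, but it should carry an ``rt``/``preempt-rt``
   suffix::

      # uname -r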
 
 RT Performance Test
 *******************
@@ -174,6 +341,11 @@ Recommended BIOS settings
 Configure CAT
 -------------
 
+.. _Apollo Lake NUC:
+   https://www.intel.com/content/www/us/en/products/boards-kits/nuc/kits/nuc6cayh.html
+
+.. note:: CAT configuration is only supported on the `Apollo Lake NUC`_.
+
 With the ACRN Hypervisor shell, we can use ``cpuid`` and ``wrmsr``/``rdmsr`` debug
 commands to enumerate the CAT capability and set the CAT configuration without rebuilding binaries.
 Because ``lapic`` is a pass-through to the RTVM, the CAT configuration must be
@@ -238,37 +410,58 @@ In our recommended configuration, two cores are allocated to the RTVM:
 core 0 for housekeeping and core 1 for RT tasks. In order to achieve
 this, follow the steps below to allocate all housekeeping tasks to core 0:
 
-.. code-block:: bash
-
-  #!/bin/bash
-  # Move all IRQs to core 0.
-  for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//' `;
-  do
-    echo setting $i to affine for core zero
-    echo 1 > /proc/irq/$i/smp_affinity
-  done
+#. Modify the script to use two cores before launching the RTVM::
+
+      # sed -i "s/launch_hard_rt_vm 1/launch_hard_rt_vm 2/" /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
+#. Launch the RTVM::
+
+      # /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
+#. Log in to the RTVM as root and run the following script:
+
+   .. code-block:: bash
+
+      #!/bin/bash
+      # Move all IRQs to core 0.
+      for i in `cat /proc/interrupts | grep '^ *[0-9]*[0-9]:' | awk {'print $1'} | sed 's/:$//' `;
+      do
+        echo setting $i to affine for core zero
+        echo 1 > /proc/irq/$i/smp_affinity
+      done
+
+      # Move all rcu tasks to core 0.
+      for i in `pgrep rcu`; do taskset -pc 0 $i; done
+
+      # Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
+      for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done
+
+      # Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
+      for i in `pgrep /1`; do chrt -v -o -p 0 $i; done
+
+      # Change realtime attribute of all tasks to SCHED_OTHER and priority 0
+      for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done
+
+      echo disabling timer migration
+      echo 0 > /proc/sys/kernel/timer_migration
+
+   .. note:: You can ignore any error messages printed while the script runs.
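
   As a quick, optional check that the script pinned things where expected (assuming
   the guest kernel exposes the usual ``/proc/irq`` interface), list the per-IRQ
   affinities; after the script runs, the movable IRQs should report CPU ``0``::

      # grep . /proc/irq/*/smp_affinity_list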
 
-  # Move all rcu tasks to core 0.
-  for i in `pgrep rcu`; do taskset -pc 0 $i; done
-
-  # Change realtime attribute of all rcu tasks to SCHED_OTHER and priority 0
-  for i in `pgrep rcu`; do chrt -v -o -p 0 $i; done
+Run cyclictest
+==============
 
-  # Change realtime attribute of all tasks on core 1 to SCHED_OTHER and priority 0
-  for i in `pgrep /1`; do chrt -v -o -p 0 $i; done
+#. Refer to the :ref:`troubleshooting section <enabling the network on RTVM>` to enable the
+   network connection for the RTVM.
 
-  # Change realtime attribute of all tasks to SCHED_OTHER and priority 0
-  for i in `ps -A -o pid`; do chrt -v -o -p 0 $i; done
+#. Launch the RTVM and log in as root.
 
-  echo disabling timer migration
-  echo 0 > /proc/sys/kernel/timer_migration
+#. Install the ``cyclictest`` tool::
 
-Run cyclictest
-==============
+      # swupd bundle-add dev-utils
 
-Use the following command to start cyclictest::
+#. Use the following command to start cyclictest::
 
-   $ cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
+      # cyclictest -a 1 -p 80 -m -N -D 1h -q -H 30000 --histfile=test.log
 
 - Usage:
 
@@ -278,3 +471,29 @@ Use the following command to start cyclictest::
 :-D 1h: to run for 1 hour; you can change it to other values
 :-q: quiet mode; print a summary only on exit
 :-H 30000 --histfile=test.log: dump the latency histogram to a local file
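
The histogram file written by ``--histfile`` is plain text: data rows start with the
latency bucket (in microseconds) followed by the sample count for each measured thread,
and summary lines begin with ``#``. Assuming that layout, a small awk sketch can pull
the worst observed latency out of ``test.log``::

   # awk '/^[0-9]/ { if ($2 > 0) max = $1 } END { print "max latency:", max, "us" }' test.log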
+
+Troubleshooting
+***************
+
+.. _enabling the network on RTVM:
+
+**Enabling the network on RTVM**
+
+If you need to access the internet, you must add the following command line to the
+``launch_hard_rt_vm.sh`` script before launching it:
+
+.. code-block:: none
+   :emphasize-lines: 8
+
+   /usr/bin/acrn-dm -A -m $mem_size -c $1 -s 0:0,hostbridge \
+     --lapic_pt \
+     --rtvm \
+     --virtio_poll 1000000 \
+     -U 495ae2e5-2603-4d64-af76-d4bc5a8ec0e5 \
+     -s 2,passthru,02/0/0 \
+     -s 3,virtio-console,@stdio:stdio_port \
+     -s 8,virtio-net,tap0 \
+     $pm_channel $pm_by_vuart \
+     --ovmf /usr/share/acrn/bios/OVMF.fd \
+     hard_rtvm
+   }
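
The ``virtio-net,tap0`` slot assumes a ``tap0`` interface already exists in the Service VM
and is attached to a bridge with outside connectivity (the ACRN Service VM normally creates
a bridge for this, commonly named ``acrn-br0``). If ``tap0`` is missing, a minimal sketch to
create it by hand is shown below; the interface and bridge names are assumptions, so adjust
them to your setup::

   # ip tuntap add dev tap0 mode tap
   # ip link set dev tap0 master acrn-br0
   # ip link set dev tap0 up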
