Commit 011134d

Zhang, Yun authored and dbkinder committed
doc: Update Using PREEMPT_RT-Linux for real-time UOS
Signed-off-by: Zhang, Yun <yunx.zhang@intel.com>
1 parent 5533263 commit 011134d

File tree

1 file changed: +130 additions, -64 deletions

doc/tutorials/rt_linux.rst

Lines changed: 130 additions & 64 deletions
@@ -44,71 +44,136 @@ system on Intel KBL NUC with a SATA SSD as ``/dev/sda`` and an NVME SSD as
 1. Follow the :ref:`set-up-CL` instructions in the
    :ref:`getting-started-apl-nuc` to:

-   a. Install Clear Linux (version 26800 or higher) onto the NVMe
-   #. Install Clear Linux (version 26800 or higher) onto the SATA SSD
+   a. Install Clear Linux (version 29400 or higher) onto the NVMe
+   #. Install Clear Linux (version 29400 or higher) onto the SATA SSD

    #. Set up Clear Linux on the SATA SSD as the Service OS (SOS) following
       the :ref:`add-acrn-to-efi` instructions in the same guide.

-#. Patch and build the real-time kernel
-
-   a. Download the Linux kernel real-time patch::
-
-      $ wget https://mirrors.edge.kernel.org/pub/linux/kernel/projects/rt/4.19/patch-4.19.31-rt18.patch.xz
-
-   #. Sync the kernel code to acrn-2019w17.4-160000p::
-
-      $ git clone https://github.com/projectacrn/acrn-kernel.git
-      $ cd acrn-kernel
-      $ git checkout acrn-2019w17.4-160000p
-      $ wget https://raw.githubusercontent.com/projectacrn/acrn-hypervisor/master/doc/tutorials/rt_linux.patch
-      $ git apply rt_linux.patch
-      $ xzcat ../patch-4.19.31-rt18.patch.xz | patch -p1
-
-   #. Edit the ``kernel_config_uos`` config file: search for the keyword
-      "NVME Support", delete ``# CONFIG_BLK_DEV_NVME is not set``, and add
-      these two lines under "NVME Support" to enable the NVMe driver in the
-      RT kernel::
-
-      CONFIG_NVME_CORE=y
-      CONFIG_BLK_DEV_NVME=y
-
-   #. Build the RT kernel::
-
-      $ cp kernel_config_uos .config
-      $ make targz-pkg
-
-      Choose "Fully Preemptible Kernel (RT)" when prompted, and choose the
-      default for all other options.
-
-   #. Copy the generated package to the SOS::
-
-      $ scp linux-4.19.28-rt18-quilt-2e5dc0ac-dirty-x86.tar.gz <user name>@<SOS ip>:~/
-
-#. Configure the system on the SOS
-
-   a. Extract the kernel boot image and lib modules from the package::
-
-      $ cd ~/
-      $ tar xzvf linux-4.19.28-rt18-quilt-2e5dc0ac-dirty-x86.tar.gz
-
-   #. Copy the extracted lib modules to the NVMe SSD::
-
-      $ mount /dev/nvme0n1p3 /mnt
-      $ cp -r ~/lib/modules/4.19.28-rt18-quilt-2e5dc0ac-dirty /mnt/lib/modules
-
-   #. Edit and run the ``launch_hard_rt_vm.sh`` script to launch the UOS.
-      A sample ``launch_hard_rt_vm.sh`` is included in the Clear Linux
-      release, and is also available in the acrn-hypervisor/devicemodel
-      GitHub repo (in the samples folder).
-
-      You'll need to modify two places:
-
-      1. Replace ``/root/rt_uos_kernel`` with ``~/boot/vmlinuz-4.19.28-rt18-quilt-2e5dc0ac-dirty``
-      #. Replace ``root=/dev/sda3`` with ``root=/dev/nvme0n1p3``
-
-   #. Run the launch script::
-
-      $ sudo ./launch_hard_rt_vm.sh
+#. Set up and launch a real-time Linux guest
+
+   a. Add the kernel-lts2018-preempt-rt bundle (as root)::
+
+      # swupd bundle-add kernel-lts2018-preempt-rt
+
+   #. Copy the preempt-rt modules to the NVMe disk::
+
+      # mount /dev/nvme0n1p3 /mnt
+      # ls -l /usr/lib/modules/
+      4.19.31-6.iot-lts2018-preempt-rt/
+      4.19.36-48.iot-lts2018/
+      4.19.36-48.iot-lts2018-sos/
+      5.0.14-753.native/
+      # cp -r /usr/lib/modules/4.19.31-6.iot-lts2018-preempt-rt /mnt/lib/modules/
+      # cd ~ && umount /mnt && sync
+
+   #. Get your NVMe pass-through IDs (in our example they are ``[01:00.0]``
+      and ``[8086:f1a6]``)::
+
+      # lspci -nn | grep SSD
+      01:00.0 Non-Volatile memory controller [0108]: Intel Corporation SSD Pro 7600p/760p/E 6100p Series [8086:f1a6] (rev 03)
+
+   #. Modify the ``launch_hard_rt_vm.sh`` script::
+
+      # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
+      <Modify passthru_bdf and passthru_vpid with your NVMe pass-through IDs>
+
+      passthru_vpid=(
+      ["eth"]="8086 156f"
+      ["sata"]="8086 9d03"
+      )
+      passthru_bdf=(
+      ["eth"]="0000:00:1f.6"
+      ["sata"]="0000:00:17.0"
+      )
+
+      TO:
+      passthru_vpid=(
+      ["eth"]="8086 156f"
+      ["sata"]="8086 f1a6"
+      )
+      passthru_bdf=(
+      ["eth"]="0000:00:1f.6"
+      ["sata"]="0000:01:00.0"
+      )
+
+      <Modify the NVMe pass-through slot>
+
+      -s 2,passthru,0/17/0 \
+
+      TO:
+      -s 2,passthru,01/00/0 \
+
+      <Modify the rootfs to the NVMe partition>
+
+      -B "root=/dev/sda3 rw rootwait maxcpus=$1 nohpet console=hvc0 \
+
+      TO:
+      -B "root=/dev/nvme0n1p3 rw rootwait maxcpus=$1 nohpet console=hvc0 \
+
+   #. Get an IP address in the real-time VM if you need one (there is no
+      IP by default)
+
+      #. Method 1: ``virtio-net NIC``::
+
+         # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
+         <Add the line below to the acrn-dm boot args>
+
+         -s 4,virtio-net,tap0 \
+
+      #. Method 2: ``pass-through NIC``::
+
+         <Get your Ethernet IDs first (in our example they are ``[00:1f.6]`` and ``[8086:15e3]``)>
+
+         # lspci -nn | grep Eth
+         00:1f.6 Ethernet controller [0200]: Intel Corporation Ethernet Connection (5) I219-LM [8086:15e3]
+
+         # vim /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
+
+         <Modify passthru_bdf and passthru_vpid with your Ethernet IDs>
+
+         passthru_vpid=(
+         ["eth"]="8086 156f"
+         ["sata"]="8086 f1a6"
+         )
+         passthru_bdf=(
+         ["eth"]="0000:00:1f.6"
+         ["sata"]="0000:01:00.0"
+         )
+
+         TO:
+         passthru_vpid=(
+         ["eth"]="8086 15e3"
+         ["sata"]="8086 f1a6"
+         )
+         passthru_bdf=(
+         ["eth"]="0000:00:1f.6"
+         ["sata"]="0000:01:00.0"
+         )
+
+         <Uncomment the following three lines>
+
+         #echo ${passthru_vpid["eth"]} > /sys/bus/pci/drivers/pci-stub/new_id
+         #echo ${passthru_bdf["eth"]} > /sys/bus/pci/devices/${passthru_bdf["eth"]}/driver/unbind
+         #echo ${passthru_bdf["eth"]} > /sys/bus/pci/drivers/pci-stub/bind
+
+         TO:
+         echo ${passthru_vpid["eth"]} > /sys/bus/pci/drivers/pci-stub/new_id
+         echo ${passthru_bdf["eth"]} > /sys/bus/pci/devices/${passthru_bdf["eth"]}/driver/unbind
+         echo ${passthru_bdf["eth"]} > /sys/bus/pci/drivers/pci-stub/bind
+
+         <Add the line below to the acrn-dm boot args; the last argument is your Ethernet BDF>
+
+         -s 4,passthru,00/1f/6 \
+
+      .. note::
+
+         Method 1 gives both the Service VM and the User VM network
+         connectivity; Method 2 gives the User VM a network interface,
+         but the Service VM loses it.
+
+   #. Start the real-time Linux guest::
+
+      # sh /usr/share/acrn/samples/nuc/launch_hard_rt_vm.sh
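All of the NVMe pass-through edits above derive from a single ``lspci -nn`` line: the leading BDF feeds ``passthru_bdf`` and the ``-s 2,passthru,…`` slot argument, while the bracketed vendor:device pair feeds ``passthru_vpid``. The sketch below shows those conversions; the ``parse_lspci`` helper is hypothetical (not part of the script), and the sample line is the one from the step above:

```python
import re

def parse_lspci(line):
    """Extract the three values launch_hard_rt_vm.sh needs from one
    `lspci -nn` output line."""
    m = re.match(
        r"([0-9a-f]{2}:[0-9a-f]{2}\.[0-9a-f])\s.*"   # BDF, e.g. 01:00.0
        r"\[([0-9a-f]{4}):([0-9a-f]{4})\]",           # vendor:device, e.g. 8086:f1a6
        line)
    if m is None:
        raise ValueError("unrecognized lspci -nn line")
    bdf, vendor, device = m.groups()
    bus, slot_func = bdf.split(":")
    slot, func = slot_func.split(".")
    return {
        "passthru_bdf": "0000:" + bdf,              # "0000:01:00.0" (domain prepended)
        "passthru_vpid": vendor + " " + device,     # "8086 f1a6"
        "dm_slot": "{}/{}/{}".format(bus, slot, func),  # "01/00/0" for -s 2,passthru,...
    }

line = ("01:00.0 Non-Volatile memory controller [0108]: Intel Corporation "
        "SSD Pro 7600p/760p/E 6100p Series [8086:f1a6] (rev 03)")
print(parse_lspci(line))
```

The greedy ``.*`` deliberately skips the class code (``[0108]``), which has no colon, and anchors on the last ``[vvvv:dddd]`` group in the line.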

 #. At this point, you've successfully launched the real-time VM and
    Guest OS. You can verify a preemptible kernel was loaded using
@@ -117,11 +182,12 @@ system on Intel KBL NUC with a SATA SSD as ``/dev/sda`` and an NVME SSD as
    .. code-block:: console

       root@rtvm-02 ~ # uname -a
-      Linux rtvm-02 4.19.8-rt6+ #1 SMP PREEMPT RT Tue Jan 22 04:17:40 UTC 2019 x86_64 GNU/Linux
+      Linux clr-de362ed3fd444586b99968b5ceb22275 4.19.31-6.iot-lts2018-preempt-rt #1 SMP PREEMPT Mon May 20 16:00:51 UTC 2019 x86_64 GNU/Linux

 #. Now you can run all kinds of performance tools to experience real-time
    performance. One popular tool is ``cyclictest``. You can install this
    tool and run it with::

       swupd bundle-add dev-utils
-      cyclictest -N -p80 -D300
+      cyclictest -N -p80 -D30 -M > log.txt
+      cat log.txt
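Because ``-N`` makes ``cyclictest`` report latencies in nanoseconds, the redirected ``log.txt`` is easier to judge after converting to microseconds. A minimal sketch, assuming the usual ``cyclictest`` status-line format; the ``latency_summary`` helper and the sample line are illustrative, not captured from this setup:

```python
import re

def latency_summary(log_text):
    """Return (min, avg, max) latency in microseconds from the last
    cyclictest status line; with -N the reported values are nanoseconds."""
    summaries = re.findall(
        r"Min:\s*(\d+)\s+Act:\s*\d+\s+Avg:\s*(\d+)\s+Max:\s*(\d+)", log_text)
    if not summaries:
        raise ValueError("no cyclictest status line found in log")
    lo, avg, hi = summaries[-1]
    # Convert ns -> us for the conventional real-time latency units.
    return int(lo) / 1000, int(avg) / 1000, int(hi) / 1000

sample = ("T: 0 ( 1236) P:80 I:1000 C:  29990 "
          "Min:   6651 Act:   8523 Avg:   8314 Max:  63919")
print(latency_summary(sample))
```

Taking the last match matters because, when its progress output is redirected to a file, ``cyclictest`` writes many intermediate status lines and only the final one reflects the whole run.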
