docs: Document Intel Discrete GPUs usage with Kata #9084
Conversation
These include the Intel® Data Center GPU Max Series and Intel® Data Center GPU Flex Series.
For integrated GPUs please refer to [Integrate-Intel-GPUs-with-Kata](../Intel-GPU-passthrough-and-Kata.md)

An Intel Discrate GPU can be passed to a Kata Containers Kata using GPU passthrough
s/Containers Kata/Container
You've used "a", so singular and Kata repeated
How about " .. using one of GPU or SRIOV passthrough"
Shorter, clearly indicates only one of the technologies can be leveraged at any time.
Fixed
the VM to which it is assigned. The GPU is not shared among VMs.

With SRIOV mode, it is possible to pass a Virtual GPU instance to a virtual machine.
With this, multiple Virtual GPU instances carved out of a single GPU can be passed
- With this, multiple Virtual GPU instances carved out of a single GPU can be passed to multiple VMs at the same time allowing the GPU to be shared among them.
+ With this, multiple Virtual GPU instances can be carved out of a single physical GPU and be passed to different VMs, allowing the GPU to be shared.

Some questions: does the VM have to be located on the same host where the GPU is attached? Any limit on the number of vGPUs one can carve out of a physical GPU? How does one slice a GPU, partition the GPU cores? Allocate all? Good to add a link to an SR-IOV GPU resource.
| Technology | Description | Behaviour | Detail |
|-|-|-|-|
| Intel VT-d | GPU passthrough | Physical GPU assigned to a single VM | Direct GPU assignment to VM without limitation |
| SRIOV | GPU sharing | Physical GPU shared by multiple VMs | SRIOV passthrough |
Would this be more consistent with the previous row?
| SRIOV | SRIOV passthrough | GPU sharing | Physical GPU shared by multiple VMs |
## Host BIOS requirements

Hardware such as Intel Max and Flex series, require larger PCI BARs.
Do not need the ",".
Would be good to state what BAR is or are we assuming the user knows all this?
https://www.intel.com/content/www/us/en/support/articles/000090831/graphics.html
Hardware such as Intel Max and Flex series, require larger PCI BARs.

For large BARs devices, MMIO mapping above 4G address space should be enabled in the PCI configuration of the BIOS.
grammar/reads funny "large BARs devices"
Corrected
1. Run the following to change the kernel command line using grub
```
sudo vim /etc/default/grub
Is it the same file on Ubuntu and other distributions?
```

Run the previous command to determine the BDF for the GPU device on host.<br/>
From the previous output, PCI address `0000:29:00.0` is assigned to the hardware GPU device.<br/>
What is special about 29 versus 3a, 9a, or ca other than it is the lowest/listed first.
It is not. I just chose the first one. I have clarified this now.
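To illustrate the point: any of the enumerated display functions is a valid choice. A small sketch for extracting candidate BDFs — the `lspci` sample below is hypothetical; on a real host, pipe the live `lspci -D` output through the same filter instead:

```shell
# Hypothetical sample of `lspci -D | grep -i display` output; on a real
# host, replace the here-string with the live lspci command.
lspci_sample='0000:29:00.0 Display controller: Intel Corporation Device 0bda
0000:3a:00.0 Display controller: Intel Corporation Device 0bda'

# The first whitespace-separated field of each line is the BDF;
# any one of them can be used for passthrough.
printf '%s\n' "$lspci_sample" | awk '{print $1}'
```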
4. Start Kata container with GPU device enabled:

```
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/437 --rm -t "docker.io/library/archlinux:latest" arch uname -r
In a Kubernetes environment is there some resource tracker that is watching out for which device nodes are available/free?
What happens if we are running a TDVM and try to connect a GPU?
An error? Or allows it with a warning say not-end-to-end-secure?
In a Kubernetes environment is there some resource tracker that is watching out for which device nodes are available/free?

A device plugin serves this purpose. I have demonstrated this with QAT but our GPU device plugin does not support GPU VF resources.
Use the following steps to pass an Intel discrete GPU with Kata:

1. Find the Bus-Device-Function (BDF) for GPU device:
How is the VF enablement done?
@mythi the command mentioned earlier `echo 4 | sudo tee /sys/bus/pci/devices/0000\:3a\:00.0/sriov_numvfs` creates the VFs. The out-of-tree kernel driver along with the kernel command line mentioned enables the VFs to be created.
Are the VFs automatically bound to `vfio-pci`?
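One way to answer this on a real host is to read the `driver` symlink under sysfs for the VF's BDF. A sketch of that check follows; a throwaway directory stands in for the real `/sys/bus/pci/devices/<VF BDF>` path so the logic can run anywhere:

```shell
# Simulate /sys/bus/pci/devices/<BDF> with a temp directory; on a real host:
#   basename "$(readlink /sys/bus/pci/devices/$BDF/driver)"
fake_sysfs=$(mktemp -d)
mkdir -p "$fake_sysfs/drivers/vfio-pci"
ln -s "$fake_sysfs/drivers/vfio-pci" "$fake_sysfs/driver"

# The basename of the driver symlink target names the bound driver.
basename "$(readlink "$fake_sysfs/driver")"
```

If the output is not `vfio-pci`, the VF still needs to be unbound from its current driver and bound to `vfio-pci` before passthrough.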
Force-pushed from 0264b5d to ec3b547
Thanks, @amshinde - A few initial (mostly formatting) comments. I'll try to take another look tomorrow...
These include the Intel® Data Center GPU Max Series and Intel® Data Center GPU Flex Series.
For integrated GPUs please refer to [Integrate-Intel-GPUs-with-Kata](../Intel-GPU-passthrough-and-Kata.md)

An Intel Discrate GPU can be passed to a Kata Container using GPU passthrough.
- An Intel Discrate GPU can be passed to a Kata Container using GPU passthrough.
+ An Intel Discrete GPU can be passed to a Kata Container using GPU passthrough.
This typo is still present.
Fixed
For ubuntu:
```
sudo update-grub
- sudo update-grub
+ $ sudo update-grub

For Centos/RHEL:
```
sudo grub2-mkconfig -o /boot/grub2/grub.cfg

- sudo grub2-mkconfig -o /boot/grub2/grub.cfg
+ $ sudo grub2-mkconfig -o /boot/grub2/grub.cfg
4. Reboot the system
```
sudo reboot

- sudo reboot
+ $ sudo reboot
configuration in the Kata `configuration.toml` file as shown below.

```
$ sudo sed -i -e 's/^# *\(hotplug_vfio_on_root_bus\).*=.*$/\1 = true/g' /usr/share/defaults/kata-containers/configuration.toml

The config may well be below `/opt/kata/` if Kata is installed using kata-deploy / kata-manager.
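Building on that observation, a hedged sketch that probes both candidate locations instead of hard-coding one; `find_kata_config` is a hypothetical helper written for this comment, not part of Kata:

```shell
# Return the first existing configuration.toml from a candidate list.
find_kata_config() {
    for cfg in "$@"; do
        if [ -f "$cfg" ]; then
            printf '%s\n' "$cfg"
            return 0
        fi
    done
    return 1
}

# Check the kata-deploy / kata-manager location first, then the default one.
find_kata_config \
    /opt/kata/share/defaults/kata-containers/configuration.toml \
    /usr/share/defaults/kata-containers/configuration.toml
```

The `sed` edit above could then target `"$(find_kata_config …)"` rather than a fixed path.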
Create SR-IOV interfaces for the GPU:
```
$ echo 4 | sudo tee /sys/bus/pci/devices/0000\:3a\:00.0/sriov_numvfs
```
$ BDF="0000:3a:00:1"
$ readlink -e /sys/bus/pci/devices/$BDF/iommu_group

- $ readlink -e /sys/bus/pci/devices/$BDF/iommu_group
+ $ readlink -e "/sys/bus/pci/devices/$BDF/iommu_group"
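For context, the `readlink` output ends in the IOMMU group number, and that number names the `/dev/vfio/<group>` node handed to the container runtime. A sketch with a hypothetical group path:

```shell
# Hypothetical result of the readlink command above; on a real host use:
#   readlink -e "/sys/bus/pci/devices/$BDF/iommu_group"
group_path="/sys/kernel/iommu_groups/437"

# The trailing path component is the IOMMU group number, which in turn
# names the VFIO device node to pass to the runtime.
group=$(basename "$group_path")
echo "/dev/vfio/$group"
```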
Now you can use the device node `/dev/vfio/437` in docker command line to pass
the VGPU to a Kata Container.

4. Start Kata container with GPU device enabled:

- 4. Start Kata container with GPU device enabled:
+ 4. Start a Kata Containers container with the GPU device enabled:
5. Start a Kata container with GPU device:

```
$ sudo ctr --debug run --runtime "io.containerd.kata.v2" --device /dev/vfio/27 --rm -t "docker.io/library/archlinux:latest" arch uname -r

Is it worth switching to quay.io now that docker.io has limits on it?
Nit: Extraneous blank line.
Force-pushed from f4e9260 to 3daae0c

@jodh-intel I have addressed your comments. Can you take another look?
Thanks, @amshinde - A few more comments...
These include the Intel® Data Center GPU Max Series and Intel® Data Center GPU Flex Series.
For integrated GPUs please refer to [Integrate-Intel-GPUs-with-Kata](../Intel-GPU-passthrough-and-Kata.md)

An Intel Discrate GPU can be passed to a Kata Container using GPU passthrough.
This typo is still present.
$ sudo apt install linux-headers-"$(UBUNTU_22.04_SERVER_KERNEL_VERSION) linux-image-unsigned-"$(UBUNTU_22.04_SERVER_KERNEL_VERSION)"
$ make i915dkmsdeb-pkg
```
The above make command will create debain package in parent folder: intel-i915-dkms_<release version>.<kernel-version>.deb
- The above make command will create debain package in parent folder: intel-i915-dkms_<release version>.<kernel-version>.deb
+ The above `make` command will create a Debian package in the parent folder named: `intel-i915-dkms_<release version>.<kernel-version>.deb`.
Below are the steps for installing the driver from source:
```bash
$ export I915_BRANCH="backport/main"
$ git clone -b ${I915_BRANCH} --depth 1 https://github.com/intel-gpu/intel-gpu-i915-backports.git && cd intel-gpu-i915-backports/
I would suggest one command per line for clarity:

- $ git clone -b ${I915_BRANCH} --depth 1 https://github.com/intel-gpu/intel-gpu-i915-backports.git && cd intel-gpu-i915-backports/
+ $ git clone -b ${I915_BRANCH} --depth 1 https://github.com/intel-gpu/intel-gpu-i915-backports.git
+ $ cd intel-gpu-i915-backports/
Fixed.
The above make command will create debain package in parent folder: intel-i915-dkms_<release version>.<kernel-version>.deb
Install the package as:
```bash
$ sudo dpkg -i intel-i915-dkms_<release version>.<kernel-version>.deb
Nit: Double space.
I also wonder if we should specify using `apt-get` or even just `apt` rather than the lower-level `dpkg` here.
Fixed the extra space. We are using dpkg for installing the local deb package that is generated.
## Install and configure Kata Containers

To use this feature, you need Kata version 1.3.0 or above.
It might be worth stating how you check the version you are running, so one of:

- `kata-runtime --version`
- `kata-ctl version`
- `containerd-shim-kata-v2 --version`
- `bash -c "$(curl -fsSL https://raw.githubusercontent.com/kata-containers/kata-containers/main/utils/kata-manager.sh) -l`
In fact, for bonus points, it might be worth adding a doc or doc section somewhere else with these details and just referencing that here.
```bash
$ sudo apt update
$ sudo apt install -y gpg-agent wget
$ wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
Can you use `curl` rather than `wget`?
I think use of `wget` should be ok here. I would also rather use `wget` here to keep in line with the installation instructions on the Intel site.
3. Update grub as per OS distribution:

For ubuntu:
- For ubuntu:
+ For Ubuntu:
Fixed.
$ sudo update-grub
```

For Centos/RHEL:
- For Centos/RHEL:
+ For CentOS/RHEL:
Fixed.
$ sudo apt install -y gpg-agent wget
$ wget -qO - https://repositories.intel.com/gpu/intel-graphics.key | \
sudo gpg --dearmor --output /usr/share/keyrings/intel-graphics.gpg
$ echo "deb [arch=amd64 signed-by=/usr/share/keyrings/intel-graphics.gpg] https://repositories.intel.com/gpu/ubuntu ${VERSION_CODENAME}/lts/2350 unified" | \
It may make sense to add a note at the very top of the document stating that your system must have an Intel x86_64 CPU. I know, I know! ... But I suspect someone will still try it! 😄
Done
$ export I915_BRANCH="backport/main"
$ git clone -b ${I915_BRANCH} --depth 1 https://github.com/intel-gpu/intel-gpu-i915-backports.git && cd intel-gpu-i915-backports/
$ sudo apt install dkms make debhelper devscripts build-essential flex bison mawk
$ sudo apt install linux-headers-"$(UBUNTU_22.04_SERVER_KERNEL_VERSION) linux-image-unsigned-"$(UBUNTU_22.04_SERVER_KERNEL_VERSION)"
These variable names are not valid. I understand why we're doing this, but I still think it would be clearer to:

- State at the top of the doc that we're assuming you are running on a Ubuntu 22.04 LTS system.
- Change this code to reference the output of `uname -r`.
- State here in a note that if you are not running on that version of Ubuntu, you will need to manually determine the latest 22.04 kernel version.
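The `uname -r` suggestion above can be sketched like this; note it only matches the intent if the currently running kernel is the target kernel:

```shell
# Derive the header/image package names from the running kernel
# instead of a hard-coded version placeholder.
kver=$(uname -r)
echo "linux-headers-$kver"
echo "linux-image-unsigned-$kver"
```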
In Intel GPU pass-through mode, an entire physical GPU is directly assigned to one VM.
In this mode of operation, the GPU is accessed exclusively by the Intel driver running in
the VM to which it is assigned. The GPU is not shared among VMs.
Is this a likely setup that is worth documenting?
We have covered this case in the past for Intel Integrated and Nvidia GPUs. Worth mentioning for completeness. I also feel this case will become more important in the Confidential Containers scenario.
Force-pushed from 3daae0c to c0fbc6e

@jodh-intel I have addressed your comments. Please take a look.
Thanks, @amshinde - I've found a few more points (mostly nits), but please take another look.
An Intel Discrete GPU can be passed to a Kata Container using GPU passthrough.
as well as SRIOV passthrough.
Detached sentence. Do we need something like:

- An Intel Discrete GPU can be passed to a Kata Container using GPU passthrough.
- as well as SRIOV passthrough.
+ An Intel Discrete GPU can be passed to a Kata Container using GPU passthrough, or SRIOV passthrough.
done
| Technology | Description |
|-|-|
| GPU passthrough | Physical GPU assigned to a single VM |
| SR-IOV passthrough | Physical GPU shared by multiple VMs |
This doc contains a mixture of "SRIOV" and "SR-IOV". Can you select one term and use it consistently.
Done
With this, multiple Virtual GPU instances can be carved out of a single physical GPU
and be passed to different VMs, allowing the GPU to be shared.

| Technology | Description |
It might be worth adding some extra columns showing the proportion of the GPU that is available in the different contexts. Something like:

| Technology | Proportion of GPU shared to VM | Proportion of GPU accessible to host | Description |
|---|---|---|---|
| GPU passthrough | 100% | 0% | Physical GPU assigned to a single VM |
| SR-IOV passthrough | varies | varies | Physical GPU shared by multiple VMs |

If so, it does raise the question, "What is the minimum GPU % I can pass to a single VM when using SR-IOV?" so it might be worth stating that as a note.
I think adding the extra column is confusing in this case. Since we are dealing with passthrough, we should be concerned about the proportion shared with the VM.
- Intel® Data Center GPU Max Series (`Ponte Vecchio`)
- Intel® Data Center GPU Flex Series (`Arctic Sound-M`)
- Intel® Data Center GPU Arc Series
Is it worth adding a note here explaining how the user can query their system to determine if they have a recommended Intel GPU?
- Intel® Data Center GPU Flex Series (`Arctic Sound-M`)
- Intel® Data Center GPU Arc Series

The following steps outline the workflow for using an Intel Graphics device with Kata.
Nit:

- The following steps outline the workflow for using an Intel Graphics device with Kata.
+ The following steps outline the workflow for using an Intel Graphics device with Kata Containers.
done
For support on other distributions, please refer to [DGPU-docs](https://dgpu-docs.intel.com/driver/installation.html)

You can also install the driver from source which is maintained at [intel-gpu-i915-backports](https://github.com/intel-gpu/intel-gpu-i915-backports)
Detailed instructions for reference can be found at: https://github.com/intel-gpu/intel-gpu-i915-backports/blob/backport/main/docs/README_ubuntu.md
Nit:

- Detailed instructions for reference can be found at: https://github.com/intel-gpu/intel-gpu-i915-backports/blob/backport/main/docs/README_ubuntu.md
+ Detailed instructions for reference can be found at: https://github.com/intel-gpu/intel-gpu-i915-backports/blob/backport/main/docs/README_ubuntu.md.
done
$ sudo apt install dkms make debhelper devscripts build-essential flex bison mawk
$ sudo apt install linux-headers-"$(uname -r) linux-image-unsigned-"$(uname -r)"
Nit: For consistency with previous commands:

- $ sudo apt install dkms make debhelper devscripts build-essential flex bison mawk
- $ sudo apt install linux-headers-"$(uname -r) linux-image-unsigned-"$(uname -r)"
+ $ sudo apt install -y dkms make debhelper devscripts build-essential flex bison mawk
+ $ sudo apt install -y linux-headers-"$(uname -r) linux-image-unsigned-"$(uname -r)"
done
The above make command will create debain package in parent folder: intel-i915-dkms_<release version>.<kernel-version>.deb
Install the package as:
```bash
$ sudo dpkg -i intel-i915-dkms_<release version>.<kernel-version>.deb
We should add a warning pointing out that this package won't be automatically updated since it's been manually installed.
I don't think we really need this here; we are installing a local package using dpkg, so the user should be well aware of this.
$ sudo reboot
```
Additionally, verify that the following kernel configs are enabled on your host kernel:
Nit:

- Additionally, verify that the following kernel configs are enabled on your host kernel:
+ Additionally, verify that the following kernel configs are enabled for your host kernel:
done
1. Run the following to change the kernel command line using grub
```bash
sudo vim /etc/default/grub
Nit: Consistency: missing prompt:

- sudo vim /etc/default/grub
+ $ sudo vim /etc/default/grub
done
Force-pushed from c0fbc6e to bbdf21b
Thanks, @amshinde.
lgtm
Force-pushed from bbdf21b to c49df68
Thank you @amshinde, just a few comments:
Your host kernel needs to be booted with `intel_iommu=on` and `i915.enable_iaf=0` on the kernel command
line.
1. Run the following to change the kernel command line using grub
- 1. Run the following to change the kernel command line using grub
+ 1. Run the following to change the kernel command line using grub:
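For illustration, that kernel-command-line edit can also be applied non-interactively. A sketch against a scratch copy of the grub defaults file follows; on a real host you would edit `/etc/default/grub` itself (and then regenerate the grub config and reboot), and the existing `GRUB_CMDLINE_LINUX` contents here are assumed:

```shell
# Scratch copy standing in for /etc/default/grub; the "quiet" value is
# an assumed placeholder for whatever is already configured.
grub_file=$(mktemp)
echo 'GRUB_CMDLINE_LINUX="quiet"' > "$grub_file"

# Append the parameters required above to GRUB_CMDLINE_LINUX.
sed -i 's/^GRUB_CMDLINE_LINUX="\([^"]*\)"/GRUB_CMDLINE_LINUX="\1 intel_iommu=on i915.enable_iaf=0"/' "$grub_file"
cat "$grub_file"
```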
Create SR-IOV interfaces for the GPU:
```bash
$ echo 4 | sudo tee /sys/bus/pci/devices/0000\:3a\:00.0/sriov_numvfs
- $ echo 4 | sudo tee /sys/bus/pci/devices/0000\:3a\:00.0/sriov_numvfs
+ $ echo 4 | sudo tee /sys/bus/pci/devices/$BDF/sriov_numvfs
Force-pushed from c49df68 to 52721d3
lgtm, thank you @amshinde
/test
Force-pushed from 52721d3 to 25fe017
/test
Force-pushed from 11084ae to b5016e5
Document describes the steps needed to pass an entire Intel Discrete GPU as well as a GPU SR-IOV interface to a Kata Container. Fixes: kata-containers#9083 Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Configuration file for qemu with runtime-rs was recently renamed. Doc contains name for old file. This was somehow not caught in the CI earlier. Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Add missing words to spell-check dictionaries Signed-off-by: Archana Shinde <archana.m.shinde@intel.com>
Force-pushed from b5016e5 to 973a153
/test
Document describes the steps needed to pass an entire Intel Discrete GPU as well as a GPU SR-IOV interface to a Kata Container.
Fixes: #9083