Ensure that qemu has attach_queue permissions on tun_socket #2941

Merged
merged 1 commit on Jan 3, 2020
Ensure that qemu has attach_queue permissions on tun_socket
Normally libvirt labels tun devices so that qemu can do the network
multiqueue setup itself. In KubeVirt, however, libvirt cannot do that
relabeling.

In order not to get SELinux denials like this:

```
type=AVC msg=audit(1576662638.540:8152): avc:  denied  { attach_queue } for  pid=795330 comm=43505520312F4B564D scontext=system_u:system_r:virt_launcher.process:s0:c135,c769 tcontext=system_u:system_r:virt_launcher.process:s0:c135,c769 tclass=tun_socket permissive=0
```

give qemu the explicit permission to attach queues.

Signed-off-by: Roman Mohr <rmohr@redhat.com>
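
For context, here is a minimal, hedged sketch of the VMI setting that exercises this permission: enabling `NetworkInterfaceMultiQueue` makes qemu attach extra queues to the tun device, which is the `attach_queue` operation the policy line below allows. The field names are taken from the test diff in this PR; the package name and import path are assumptions and may differ between KubeVirt releases.

```go
// Sketch only (not part of this PR): enable network multiqueue on a VMI spec.
// With this flag set, qemu attaches additional queues (typically one per vCPU)
// to the tun device, which needs the attach_queue permission granted here.
// Assumption: the v1 import path below; the field names match the test diff.
package example

import (
	v1 "kubevirt.io/client-go/api/v1"
)

func enableMultiQueue(vmi *v1.VirtualMachineInstance) {
	multiQueue := true
	vmi.Spec.Domain.Devices.NetworkInterfaceMultiQueue = &multiQueue
}
```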
rmohr committed Dec 19, 2019
commit bc55cb916003c54f6cbf329112a4e36d0d874836
1 change: 1 addition & 0 deletions cmd/virt-handler/virt_launcher.cil
@@ -5,6 +5,7 @@
(allow process mtrr_device_t (file (write)))
(allow process self (tun_socket (relabelfrom)))
(allow process self (tun_socket (relabelto)))
(allow process self (tun_socket (attach_queue)))
(allow process sysfs_t (file (write)))
(allow process tmp_t (dir (write add_name open getattr setattr read link search remove_name reparent lock ioctl)))
(allow process tmp_t (file (setattr open read write create getattr append ioctl lock)))
28 changes: 24 additions & 4 deletions tests/vmi_multiqueue_test.go
@@ -21,7 +21,6 @@ package tests_test

import (
"encoding/xml"

"fmt"

. "github.com/onsi/ginkgo"
@@ -43,18 +42,39 @@ var _ = Describe("MultiQueue", func() {
	virtClient, err := kubecli.GetKubevirtClient()
	tests.PanicOnError(err)

	var vmi *v1.VirtualMachineInstance

	BeforeEach(func() {
		tests.BeforeTestCleanup()
		vmi = tests.NewRandomVMIWithEphemeralDisk(tests.ContainerDiskFor(tests.ContainerDiskAlpine))
	})

	Context("MultiQueue Behavior", func() {

		availableCPUs := tests.GetHighestCPUNumberAmongNodes(virtClient)

It("should be able to successfully boot fedora to the login prompt with networking mutiqueues enabled without being blocked by selinux", func() {
vmi := tests.NewRandomFedoraVMIWitGuestAgent()
numCpus := 3
Expect(numCpus).To(BeNumerically("<=", availableCPUs),
fmt.Sprintf("Testing environment only has nodes with %d CPUs available, but required are %d CPUs", availableCPUs, numCpus),
)
cpuReq := resource.MustParse(fmt.Sprintf("%d", numCpus))
vmi.Spec.Domain.Resources.Requests[k8sv1.ResourceCPU] = cpuReq
multiQueue := true
vmi.Spec.Domain.Devices.NetworkInterfaceMultiQueue = &multiQueue
vmi.Spec.Domain.Devices.Rng = &v1.Rng{}

By("Creating and starting the VMI")
vmi, err := virtClient.VirtualMachineInstance(tests.NamespaceTestDefault).Create(vmi)
Expect(err).ToNot(HaveOccurred())
tests.WaitForSuccessfulVMIStartWithTimeout(vmi, 360)

By("Checking if we can login")
e, err := tests.LoggedInFedoraExpecter(vmi)
Expect(err).ToNot(HaveOccurred())
e.Close()
})

It("[test_id:959][rfe_id:2065] Should honor multiQueue requests", func() {
vmi := tests.NewRandomVMIWithEphemeralDisk(tests.ContainerDiskFor(tests.ContainerDiskAlpine))
numCpus := 3
Expect(numCpus).To(BeNumerically("<=", availableCPUs),
fmt.Sprintf("Testing environment only has nodes with %d CPUs available, but required are %d CPUs", availableCPUs, numCpus),