[tempest] Enable vTPM testing #589

Closed · wants to merge 3 commits

Conversation

gibizer (Contributor) commented Dec 6, 2023

This ensures that the Barbican - Nova integration works.

Implements: https://issues.redhat.com/browse/OSPRH-2449
Implements: https://issues.redhat.com/browse/OSPRH-2451
Depends-On: openstack-k8s-operators/edpm-ansible#537

openshift-ci bot commented Dec 6, 2023

[APPROVALNOTIFIER] This PR is NOT APPROVED

This pull-request has been approved by: gibizer
Once this PR has been reviewed and has the lgtm label, please assign frenzyfriday for approval. For more information see the Kubernetes Code Review Process.

The full list of commands accepted by this bot can be found here.

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/c4861d8bd90043e7b52baade85ccc876

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 30m 47s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 3s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 12m 04s
❌ openstack-operator-tempest-multinode FAILURE in 2m 46s

gibizer (Contributor, Author) commented Dec 7, 2023

recheck

the tempest job produced zero logs :/


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/0d878f286769409a82a4428e41ae6d38

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 26m 32s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 11m 42s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 09m 20s
❌ openstack-operator-tempest-multinode FAILURE in 1h 07m 16s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/99bbf39048894688831329f6cd6dc486

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 25m 19s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 09m 00s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 07m 24s
❌ openstack-operator-tempest-multinode FAILURE in 1h 05m 32s


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/ce3cc15c49224a6e8176960ec808f621

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 57m 02s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 08m 01s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 35m 00s
✔️ openstack-operator-tempest-multinode SUCCESS in 1h 27m 15s

gibizer (Contributor, Author) commented Dec 8, 2023

Somebody is not telling me the full truth:

https://review.rdoproject.org/zuul/build/710edf6bab7e48fdb7f8d8195611eb3f/log/controller/ci-framework-data/tests/tempest/podman_tempest.log#46

2023-12-07 14:57:45.770 8 DEBUG config_tempest.constants [-] Setting [compute_feature_enabled] vtpm_device_supported = True set /usr/lib/python3.9/site-packages/config_tempest/tempest_conf.py:105

https://review.rdoproject.org/zuul/build/710edf6bab7e48fdb7f8d8195611eb3f/log/controller/ci-framework-data/tests/tempest/podman_tempest.log#206

{2} setUpClass (whitebox_tempest_plugin.api.compute.test_vtpm.VTPMTest) ... SKIPPED: CONF.compute_feature_enabled.vtpm_device_supported must be set.
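
One sanity check when the generated value and the skip message disagree is to grep the config file the run actually reads; the path below is an assumption (it is the one that shows up in later logs), so this is only a sketch:

# Sketch: confirm the option python-tempestconf claims to have set really ends
# up in the tempest.conf consumed by the test run (path assumed from later logs).
grep -n 'vtpm_device_supported' /etc/tempest/tempest.conf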


Merge Failed.

This change or one of its cross-repo dependencies was unable to be automatically merged with the current state of its repository. Please rebase the change and upload a new patchset.
Warning:
Error merging github.com/openstack-k8s-operators/openstack-operator for 589,bbea6cc23d1638f8b9f28132407f98b1cf52c31d


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/557373e3c00e4b27b10e1c1a3227b81f

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 44m 09s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 13m 06s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 10m 33s
❌ openstack-operator-tempest-multinode FAILURE in 1h 28m 42s

gibizer (Contributor, Author) commented Dec 9, 2023

ft1.1: setUpClass (whitebox_tempest_plugin.api.compute.test_vtpm.VTPMTest)
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 206, in setUpClass
    raise value.with_traceback(trace)
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 196, in setUpClass
    cls.setup_clients()
  File "/usr/local/whitebox-tempest-plugin/whitebox_tempest_plugin/api/compute/test_vtpm.py", line 50, in setup_clients
    cls.os_primary.secrets_client = service_clients.secret_v1.SecretClient(
AttributeError: 'ServiceClients' object has no attribute 'secret_v1'

gibizer (Contributor, Author) commented Dec 13, 2023

ft1.1: setUpClass (whitebox_tempest_plugin.api.compute.test_vtpm.VTPMTest)
testtools.testresult.real._StringException: Traceback (most recent call last):
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 206, in setUpClass
    raise value.with_traceback(trace)
  File "/usr/lib/python3.9/site-packages/tempest/test.py", line 196, in setUpClass
    cls.setup_clients()
  File "/usr/local/whitebox-tempest-plugin/whitebox_tempest_plugin/api/compute/test_vtpm.py", line 50, in setup_clients
    cls.os_primary.secrets_client = service_clients.secret_v1.SecretClient(
AttributeError: 'ServiceClients' object has no attribute 'secret_v1'

In a working job we have both the whitebox and the barbican tempest plugins loaded:

2023-12-08 19:11:33.839 101156 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: barbican_tests
2023-12-08 19:11:33.840 101156 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: whitebox-tempest-plugin
2023-12-08 19:11:33.840 101156 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: barbican_tests
2023-12-08 19:11:33.840 101156 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin

But in this job we only have whitebox:

2023-12-08 17:54:59.325 7 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2023-12-08 17:54:59.328 7 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: whitebox-tempest-plugin
2023-12-08 17:54:59.328 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
2023-12-08 17:54:59.340 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
~/openshift ~ /
2023-12-08 17:55:00.014 8 INFO tempest [-] Using tempest config file /var/lib/tempest/openshift/etc/tempest.conf

This is why the secret client is not registered.
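
A quick way to confirm which plugins a given tempest container image actually ships is to list them inside the image; the image reference below is a placeholder for illustration, not taken from this job:

# Sketch: list the tempest plugins registered in the image; barbican_tests should
# show up next to whitebox-tempest-plugin. <tempest-image> is a placeholder.
podman run --rm <tempest-image> tempest list-plugins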

gibizer (Contributor, Author) commented Dec 13, 2023

Adding the missing dependency to the image: openstack-k8s-operators/tcib#113

gibizer (Contributor, Author) commented Dec 15, 2023

Since openstack-k8s-operators/tcib#112 merged we expect to have an openstack-tempest-all container image with all the plugins installed.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/8944088499d04b4ca6c77e27964dcede

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 37m 59s
❌ podified-multinode-edpm-deployment-crc FAILURE in 43m 38s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 15m 11s
✔️ openstack-operator-tempest-multinode SUCCESS in 1h 22m 01s

gibizer (Contributor, Author) commented Dec 15, 2023

Either we need

gibizer (Contributor, Author) commented Dec 19, 2023

recheck
openstack-k8s-operators/tcib#113 merged and the container image has been built and pushed to quay.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/2241c6cbf24d44afa75228bf5d0fe47d

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 38m 43s
❌ podified-multinode-edpm-deployment-crc POST_FAILURE in 1h 09m 57s
❌ cifmw-crc-podified-edpm-baremetal FAILURE in 20m 56s
❌ openstack-operator-tempest-multinode FAILURE in 1h 22m 05s

gibizer (Contributor, Author) commented Dec 19, 2023

Hm, it seems that even though the last image build happened after the tcib commit merged, the image does not contain the tcib change.

2023-12-19 09:21:13.238 7 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2023-12-19 09:21:13.244 7 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.245 7 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2023-12-19 09:21:13.246 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.246 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2023-12-19 09:21:13.258 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.259 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests

gibizer (Contributor, Author) commented Dec 19, 2023

Hm, it seems that even though the last image build happened after the tcib commit merged, the image does not contain the tcib change.

2023-12-19 09:21:13.238 7 INFO tempest [-] Using tempest config file /etc/tempest/tempest.conf
2023-12-19 09:21:13.244 7 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.245 7 INFO tempest.test_discover.plugins [-] Register additional config options from Tempest plugin: manila_tests
2023-12-19 09:21:13.246 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.246 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests
2023-12-19 09:21:13.258 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: whitebox-tempest-plugin
2023-12-19 09:21:13.259 7 INFO tempest.test_discover.plugins [-] List additional config options registered by Tempest plugin: manila_tests

The image points to https://trunk.rdoproject.org/centos9-antelope/current-podified/8f/cc/8fcc848d6c766b48142f0ffef9e34937/delorean.repo, which points to tcib commit openstack-k8s-operators/tcib@7bf4dba, so tcib#113 hasn't been built into it yet.
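
For reference, one way to double-check which tcib commit a given DLRN hash contains is a sketch like the one below, assuming the usual DLRN layout where a versions.csv sits next to delorean.repo:

# Sketch, assuming the standard DLRN layout: versions.csv in the hash directory
# lists the source commit each package in that repo was built from.
curl -s https://trunk.rdoproject.org/centos9-antelope/current-podified/8f/cc/8fcc848d6c766b48142f0ffef9e34937/versions.csv | grep -i tcib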

dprince (Contributor) commented Jan 4, 2024

Just rebased this, let's see if it passes now.


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/aed8760957f94a0abfd504f00c9b9be0

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 36m 16s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 09m 29s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 20m 26s
❌ openstack-operator-tempest-multinode FAILURE in 1h 19m 22s

gibizer (Contributor, Author) commented Jan 15, 2024

recheck

let's see if we have a fresh tempest extras image with the barbican addition


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/78a1235f6ff743f8a1f41ef08adb6716

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 41m 07s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 08m 56s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 14m 21s
❌ openstack-operator-tempest-multinode FAILURE in 1h 25m 48s

gibizer (Contributor, Author) commented Jan 15, 2024

Finally we have the vTPM tests and barbican available in the same env, so the tests start. They still fail, but now on the compute node with an swtpm error:

: libvirt.libvirtError: operation failed: swtpm died and reported:
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest Traceback (most recent call last):
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/nova/virt/libvirt/guest.py", line 165, in launch
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     return self._domain.createWithFlags(flags)
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 193, in doit
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     result = proxy_call(self._autowrap, f, *args, **kwargs)
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 151, in proxy_call
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     rv = execute(f, *args, **kwargs)
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 132, in execute
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     six.reraise(c, e, tb)
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/six.py", line 709, in reraise
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     raise value
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib/python3.9/site-packages/eventlet/tpool.py", line 86, in tworker
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     rv = meth(*args, **kwargs)
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest   File "/usr/lib64/python3.9/site-packages/libvirt.py", line 1409, in createWithFlags
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest     raise libvirtError('virDomainCreateWithFlags() failed')
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest libvirt.libvirtError: operation failed: swtpm died and reported: 
2024-01-15 12:14:32.411 2 ERROR nova.virt.libvirt.guest 
2024-01-15 12:14:32.416 2 ERROR nova.virt.libvirt.driver [None req-acf52b90-9ec7-4da7-8a30-2721e89bf1bc 3660a18f0a42438fb5bc85b678006c16 d33f2a21efe34ebda79fc27eef8db806 - - default default] [instance: b5d96241-d569-4566-acfe-4edec9413a2f] Failed to start libvirt guest: libvirt.libvirtError: operation failed: swtpm died and reported:

From the swtpm logs:

  The TPM's state will be encrypted using a key derived from a passphrase (fd).
Starting vTPM manufacturing as tss:tss @ Mon 15 Jan 2024 12:14:31 PM UTC
Successfully created RSA 2048 EK with handle 0x81010001.
  Invoking /usr/bin/swtpm_localca --type ek --ek b3bddc35bd367a69dfc94cf81cc5e9a9892ca63057d123ff7196a5e9ffd6ce24afc6d2739e04445b581194fc49fa5840b7f07a0540c1ab9f335281b92c46b44a2c9267b724df11915cc0b496b7d20193279d75e67e7f9ecc3bd42a32237d3c689cc0bfc7ca2e05ff684e28313903ed326466f2dac9e659030124eeca9075ecd45d42222999c749baf4aaf5f848002709fc51fdab09cef55a9d954422782c91bfdfdcc6c3016d53c611695791d52e8ba792178b7183b60c6af35b8efdb02f4a71e67200b5caad021165e8779b09ae63e8fe5373ed7bcfd8129ede91a7212c8b96dcfb26b6c528eda4327cebef026481fb83d3ebba28b89178fcef6851d5cd7157 --dir /tmp/swtpm_setup.certs.OJCXH2 --logfile /var/log/swtpm/libvirt/qemu/instance-0000003a-swtpm.log --vmid instance-0000003a:b5d96241-d569-4566-acfe-4edec9413a2f --tpm-spec-family 2.0 --tpm-spec-level 0 --tpm-spec-revision 164 --tpm-manufacturer id:00001014 --tpm-model swtpm --tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf --optsfile /etc/swtpm-localca.options
Creating root CA and a local CA's signing key and issuer cert.
Successfully created EK certificate locally.
  Invoking /usr/bin/swtpm_localca --type platform --ek b3bddc35bd367a69dfc94cf81cc5e9a9892ca63057d123ff7196a5e9ffd6ce24afc6d2739e04445b581194fc49fa5840b7f07a0540c1ab9f335281b92c46b44a2c9267b724df11915cc0b496b7d20193279d75e67e7f9ecc3bd42a32237d3c689cc0bfc7ca2e05ff684e28313903ed326466f2dac9e659030124eeca9075ecd45d42222999c749baf4aaf5f848002709fc51fdab09cef55a9d954422782c91bfdfdcc6c3016d53c611695791d52e8ba792178b7183b60c6af35b8efdb02f4a71e67200b5caad021165e8779b09ae63e8fe5373ed7bcfd8129ede91a7212c8b96dcfb26b6c528eda4327cebef026481fb83d3ebba28b89178fcef6851d5cd7157 --dir /tmp/swtpm_setup.certs.OJCXH2 --logfile /var/log/swtpm/libvirt/qemu/instance-0000003a-swtpm.log --vmid instance-0000003a:b5d96241-d569-4566-acfe-4edec9413a2f --tpm-spec-family 2.0 --tpm-spec-level 0 --tpm-spec-revision 164 --tpm-manufacturer id:00001014 --tpm-model swtpm --tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf --optsfile /etc/swtpm-localca.options
Successfully created platform certificate locally.
Successfully created NVRAM area 0x1c00002 for RSA 2048 EK certificate.
Successfully created NVRAM area 0x1c08000 for platform certificate.
Successfully created ECC EK with handle 0x81010016.
  Invoking /usr/bin/swtpm_localca --type ek --ek x=2fc91da22562f6fe75ec547e90ac8720c4c7aad0d5ee2814514c07493e86a47fbcc2dce876e9587d1d36a04316d39dbe,y=1d592ab56ddd30bad059938bc57ab0c4c0f64521488a470132132ed8d30194657298851bdf520b60838fc7d8699b55c4,id=secp384r1 --dir /tmp/swtpm_setup.certs.OJCXH2 --logfile /var/log/swtpm/libvirt/qemu/instance-0000003a-swtpm.log --vmid instance-0000003a:b5d96241-d569-4566-acfe-4edec9413a2f --tpm-spec-family 2.0 --tpm-spec-level 0 --tpm-spec-revision 164 --tpm-manufacturer id:00001014 --tpm-model swtpm --tpm-version id:20191023 --tpm2 --configfile /etc/swtpm-localca.conf --optsfile /etc/swtpm-localca.options
Successfully created EK certificate locally.
Successfully created NVRAM area 0x1c00016 for ECC EK certificate.
Successfully activated PCR banks sha256 among sha1,sha256,sha384,sha512.
Successfully authored TPM state.
Ending vTPM manufacturing @ Mon 15 Jan 2024 12:14:32 PM UTC
Could not open UnixIO socket: Permission denied

And from the audit.log:

type=AVC msg=audit(1705320872.368:23894): avc:  denied  { create } for  pid=124248 comm="swtpm" name="24-instance-0000003a-swtpm.sock" scontext=system_u:system_r:svirt_t:s0:c383,c951 tcontext=system_u:object_r:container_file_t:s0 tclass=sock_file permissive=0

gibizer (Contributor, Author) commented Jan 16, 2024

I can reproduce it locally with the same result. If I run setenforce 0 on the compute node, then the VM boots with vTPM, so it is an SELinux issue. Setting swtpm_user and swtpm_group to root in qemu.conf does not help.
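
A minimal sketch of that check on the compute node (standard SELinux tooling, nothing specific to this deployment):

# Temporarily go permissive to confirm SELinux is what kills swtpm, then switch
# back to enforcing and inspect the recorded AVC denials.
sudo setenforce 0                          # VM with vTPM boots
sudo setenforce 1                          # restore enforcing
sudo ausearch -m avc -c swtpm -ts recent   # show the swtpm denials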

gibizer (Contributor, Author) commented Jan 16, 2024

Jan 16 07:35:18 edpm-compute-1 setroubleshoot[104300]: SELinux is preventing /usr/bin/swtpm from create access on the sock_file /(null). For complete SELinux messages run: sealert -l d76bbfba-9061-4608-a72d-5c1a8c8963e2
Jan 16 07:35:18 edpm-compute-1 setroubleshoot[104300]: SELinux is preventing /usr/bin/swtpm from create access on the sock_file /(null).#012#012*****  Plugin catchall_boolean (89.3 confidence) suggests   ******************#012#012If you want to allow os to enable vtpm#012Then you must tell SELinux about this by enabling the 'os_enable_vtpm' boolean.#012#012Do#012setsebool -P os_enable_vtpm 1#012#012*****  Plugin catchall (11.6 confidence) suggests   **************************#012#012If you believe that swtpm should be allowed create access on the (null) sock_file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'swtpm' --raw | audit2allow -M my-swtpm#012# semodule -X 300 -i my-swtpm.pp#012
Jan 16 07:35:18 edpm-compute-1 setroubleshoot[104300]: SELinux is preventing /usr/bin/swtpm from setattr access on the sock_file 2-instance-0000000c-swtpm.sock. For complete SELinux messages run: sealert -l e02741d9-d484-4cc5-9a14-8aff3bd1a887
Jan 16 07:35:18 edpm-compute-1 setroubleshoot[104300]: SELinux is preventing /usr/bin/swtpm from setattr access on the sock_file 2-instance-0000000c-swtpm.sock.#012#012*****  Plugin catchall_boolean (89.3 confidence) suggests   ******************#012#012If you want to allow os to enable vtpm#012Then you must tell SELinux about this by enabling the 'os_enable_vtpm' boolean.#012#012Do#012setsebool -P os_enable_vtpm 1#012#012*****  Plugin catchall (11.6 confidence) suggests   **************************#012#012If you believe that swtpm should be allowed setattr access on the 2-instance-0000000c-swtpm.sock sock_file by default.#012Then you should report this as a bug.#012You can generate a local policy module to allow this access.#012Do#012allow this access for now by executing:#012# ausearch -c 'swtpm' --raw | audit2allow -M my-swtpm#012# semodule -X 300 -i my-swtpm.pp#012
[root@edpm-compute-1 ~]# sealert -l d76bbfba-9061-4608-a72d-5c1a8c8963e2
SELinux is preventing /usr/bin/swtpm from create access on the sock_file /(null).

*****  Plugin catchall_boolean (89.3 confidence) suggests   ******************

If you want to allow os to enable vtpm
Then you must tell SELinux about this by enabling the 'os_enable_vtpm' boolean.

Do
setsebool -P os_enable_vtpm 1

*****  Plugin catchall (11.6 confidence) suggests   **************************

If you believe that swtpm should be allowed create access on the (null) sock_file by default.
Then you should report this as a bug.
You can generate a local policy module to allow this access.
Do
allow this access for now by executing:
# ausearch -c 'swtpm' --raw | audit2allow -M my-swtpm
# semodule -X 300 -i my-swtpm.pp


Additional Information:
Source Context                system_u:system_r:svirt_t:s0:c558,c605
Target Context                system_u:object_r:container_file_t:s0
Target Objects                /(null) [ sock_file ]
Source                        swtpm
Source Path                   /usr/bin/swtpm
Port                          <Unknown>
Host                          edpm-compute-1
Source RPM Packages           swtpm-0.8.0-1.el9.x86_64
Target RPM Packages           
SELinux Policy RPM            selinux-policy-targeted-38.1.29-1.el9.noarch
Local Policy RPM              selinux-policy-targeted-38.1.29-1.el9.noarch
Selinux Enabled               True
Policy Type                   targeted
Enforcing Mode                Permissive
Host Name                     edpm-compute-1
Platform                      Linux edpm-compute-1 5.14.0-407.el9.x86_64 #1 SMP
                              PREEMPT_DYNAMIC Wed Jan 10 23:24:51 UTC 2024
                              x86_64 x86_64
Alert Count                   1
First Seen                    2024-01-16 12:35:17 UTC
Last Seen                     2024-01-16 12:35:17 UTC
Local ID                      d76bbfba-9061-4608-a72d-5c1a8c8963e2

Raw Audit Messages
type=AVC msg=audit(1705408517.594:26100): avc:  denied  { create } for  pid=104299 comm="swtpm" name="2-instance-0000000c-swtpm.sock" scontext=system_u:system_r:svirt_t:s0:c558,c605 tcontext=system_u:object_r:container_file_t:s0 tclass=sock_file permissive=1


type=SYSCALL msg=audit(1705408517.594:26100): arch=x86_64 syscall=bind success=yes exit=0 a0=4 a1=7ffc4f3269a0 a2=39 a3=7f56fe5b13e0 items=2 ppid=1 pid=104299 auid=4294967295 uid=59 gid=59 euid=59 suid=59 fsuid=59 egid=59 sgid=59 fsgid=59 tty=(none) ses=4294967295 comm=swtpm exe=/usr/bin/swtpm subj=system_u:system_r:svirt_t:s0:c558,c605 key=(null)

type=CWD msg=audit(1705408517.594:26100): cwd=/

type=PATH msg=audit(1705408517.594:26100): item=0 name=(null) inode=3517 dev=00:18 mode=040770 ouid=107 ogid=59 rdev=00:00 obj=system_u:object_r:container_file_t:s0 nametype=PARENT cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0

type=PATH msg=audit(1705408517.594:26100): item=1 name=(null) inode=3711 dev=00:18 mode=0140755 ouid=59 ogid=59 rdev=00:00 obj=system_u:object_r:container_file_t:s0 nametype=CREATE cap_fp=0 cap_fi=0 cap_fe=0 cap_fver=0 cap_frootid=0

Hash: swtpm,svirt_t,container_file_t,sock_file,create

gibizer (Contributor, Author) commented Jan 16, 2024

I confirmed that executing setsebool -P os_enable_vtpm 1 is enough to allow the VM to boot with vTPM while SELinux is in enforcing mode.
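
For reference, checking the current value and enabling it persistently on a compute node (standard SELinux tooling):

# Inspect the current value, then enable the boolean persistently (-P survives reboots).
getsebool os_enable_vtpm
sudo setsebool -P os_enable_vtpm 1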

gibizer (Contributor, Author) commented Jan 16, 2024

The fix is proposed to edpm-ansible: openstack-k8s-operators/edpm-ansible#537

gibizer (Contributor, Author) commented Jan 17, 2024

recheck

The SELinux config change in openstack-k8s-operators/edpm-ansible#537 merged and a new runner image has been built.

gibizer (Contributor, Author) commented Jan 17, 2024

recheck

it seems the last recheck was ignored


Build failed (check pipeline). Post recheck (without leading slash)
to rerun all jobs. Make sure the failure cause has been resolved before
you rerun jobs.

https://review.rdoproject.org/zuul/buildset/bb34460225594ac1b551a735780110a0

✔️ openstack-k8s-operators-content-provider SUCCESS in 1h 46m 12s
✔️ podified-multinode-edpm-deployment-crc SUCCESS in 1h 10m 23s
✔️ cifmw-crc-podified-edpm-baremetal SUCCESS in 1h 13m 16s
❌ openstack-operator-tempest-multinode FAILURE in 1h 30m 32s

gibizer (Contributor, Author) commented Jan 17, 2024

We discussed this on the #rhos-compute channel: there is already a downstream effort to enable the whitebox tests, but the compute_nodes.yaml configuration for whitebox is not ready yet. So let's close this for now to avoid duplicating the effort in parallel; later we can revisit moving some of the downstream testing upstream.

gibizer closed this on Jan 17, 2024