OCPBUGS-3690: profilerecorder: gracefully skip containers that did not record anything #20
Conversation
@jhrozek: This pull request references Jira Issue OCPBUGS-3690, which is valid. 3 validation(s) were run on this bug. Requesting review from QA contact. The bug has been updated to refer to the pull request using the external bug tracker.
Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes/test-infra repository.
@xiaojiey to test, you can deploy with …
flaky tests are flaky, sometimes
/test e2e-flaky
OK, turns out that the flaky tests are failing in the upstream PR as well.
Force-pushed from 3e19d23 to 7d9fe55
This mostly concerns SELinux, because even containers that seemingly do nothing still record at least some syscalls, e.g. exec. With SELinux, though, if a container only performs actions that are already allowed by the container policy in the first place, no AVCs are generated. In that case, the call to get AVCs would return a rather generic error, which the profilerecording reconciler would then retry. As a consequence, if a pod had multiple containers and the one with no AVCs was collected before the others, the profilerecorder would never reach the remaining containers. Instead, let's return an error matching the situation and handle the absence of AVCs gracefully, by emptying whatever there might be in the gRPC cache and moving on to the next container.
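To illustrate the idea, here is a minimal, self-contained Go sketch of the "sentinel error plus skip" pattern described above. All names in it (ErrNoAVCsRecorded, Recorder, collectAVCs, recordContainer) are hypothetical stand-ins, not the actual security-profiles-operator API:

```go
package main

import (
	"errors"
	"fmt"
)

// ErrNoAVCsRecorded signals that a container only performed actions already
// allowed by the base container policy, so no AVCs were generated.
// (Hypothetical name; the real error value differs.)
var ErrNoAVCsRecorded = errors.New("no AVCs recorded")

// Recorder is a stand-in for the profilerecorder's per-container state.
type Recorder struct {
	cache map[string][]string // stand-in for the gRPC-side AVC cache
}

// collectAVCs is a stand-in for the call that fetches recorded AVCs.
// It returns the sentinel error instead of a generic one when empty.
func (r *Recorder) collectAVCs(container string) ([]string, error) {
	avcs, ok := r.cache[container]
	if !ok || len(avcs) == 0 {
		return nil, ErrNoAVCsRecorded
	}
	return avcs, nil
}

// recordContainer skips a container that recorded nothing instead of
// retrying it forever, so the pod's other containers still get processed.
func (r *Recorder) recordContainer(container string) error {
	avcs, err := r.collectAVCs(container)
	if errors.Is(err, ErrNoAVCsRecorded) {
		delete(r.cache, container) // empty whatever might be in the cache
		fmt.Printf("skipping %s: nothing was recorded\n", container)
		return nil // move on to the next container
	}
	if err != nil {
		// A genuine failure: surface it so the reconciler can retry.
		return fmt.Errorf("collecting AVCs for %s: %w", container, err)
	}
	fmt.Printf("recorded %d AVC(s) for %s\n", len(avcs), container)
	return nil
}

func main() {
	r := &Recorder{cache: map[string][]string{
		"busy": {"avc: denied { read }"},
	}}
	// "quiet" recorded nothing; previously its error would block "busy".
	for _, c := range []string{"quiet", "busy"} {
		if err := r.recordContainer(c); err != nil {
			fmt.Println("error:", err)
		}
	}
}
```

The key design point is distinguishing "nothing to record" from a real failure: only the latter should be returned to the reconciler for retry.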
In the flaky tests, I've sometimes seen failures like:

```
Warning  Failed  1s  kubelet  Error: setup seccomp: from field: unable to load local profile "/var/lib/kubelet/seccomp/operator/spo-binding-enabled/profile-allow-unsafe.json": open /var/lib/kubelet/seccomp/operator/spo-binding-enabled/profile-allow-unsafe.json: no such file or directory
```

I think they are caused by us creating the profiles and not checking whether they actually exist before using them.
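One way a test could guard against this race is to poll until the profile file appears before scheduling the pod that binds it. The following is a rough sketch under that assumption; the helper name and timeout are illustrative, not the actual test-suite API:

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// waitForProfile polls for the seccomp profile file instead of assuming
// the operator has already written it to the kubelet seccomp directory.
func waitForProfile(path string, timeout time.Duration) error {
	deadline := time.Now().Add(timeout)
	for time.Now().Before(deadline) {
		if _, err := os.Stat(path); err == nil {
			return nil // the profile is on disk; safe to use it
		}
		time.Sleep(time.Second)
	}
	return fmt.Errorf("profile %s did not appear within %s", path, timeout)
}

func main() {
	const profile = "/var/lib/kubelet/seccomp/operator/spo-binding-enabled/profile-allow-unsafe.json"
	if err := waitForProfile(profile, 30*time.Second); err != nil {
		fmt.Println(err)
		os.Exit(1)
	}
	fmt.Println("profile present; pod can be created")
}
```

Note that in a real e2e suite this check would have to run on the node itself (for example via a privileged pod), since the kubelet seccomp directory lives on the host filesystem.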
Force-pushed from 7d9fe55 to 4adf95a
@jhrozek: all tests passed! Full PR test history. Your PR dashboard.
With payload 4.13.0-0.nightly-2022-12-12-210406 and the code in this PR, I tried several times and could not reproduce the bug.
/qe-approved
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED. This pull request has been approved by: jhrozek, Vincent056. The full list of commands accepted by this bot can be found here. The pull request process is described here.
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing …
Adding no-FF labels as QE tested the fix and there is no impact on docs or PX.
/hold cancel
@jhrozek: All pull requests linked via external trackers have merged: Jira Issue OCPBUGS-3690 has been moved to the MODIFIED state.
Backports kubernetes-sigs#1378, which fixes https://issues.redhat.com/browse/OCPBUGS-3690
It's unusual to send a backport before the upstream patch is merged, but I'm doing that because we can /hold the PR until the original upstream PR is reviewed and merged.