Conversation
Note: Reviews paused. It looks like this branch is under active development. To avoid overwhelming you with review comments due to an influx of new commits, CodeRabbit has automatically paused this review. You can configure this behavior in your CodeRabbit configuration. Use the following commands to manage reviews:
Use the checkboxes below for quick actions:
Walkthrough: Adds a RapiDAST security scanning integration test targeting the …
Changes
Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
✨ Finishing Touches: 🧪 Generate unit tests (beta)
Thanks for using CodeRabbit! It's free for OSS, and your support helps us grow. If you like it, consider giving us a shout-out.
[APPROVALNOTIFIER] This PR is NOT APPROVED. This pull-request has been approved by: mmakwana30. The full list of commands accepted by this bot can be found here.
Details: Needs approval from an approver in each of these files. Approvers can indicate their approval by writing …
Actionable comments posted: 5
🧹 Nitpick comments (2)
test/integration/qe_tests/rapidast.go (1)
332-333: Remove unused import workaround. If `batchv1` isn't used, remove the import instead of this blank assignment.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/rapidast.go` around lines 332 - 333, Remove the unused import workaround by deleting the blank assignment "var _ = batchv1.Job{}" and, if the package "batchv1" is not referenced anywhere else, remove the "batchv1" import from the file; ensure there are no remaining references to the symbol batchv1.Job in rapidast.go before removing the import to avoid compile errors.
test/integration/qe_tests/rapidast/job_rapidast.yaml (1)
37-37: Using the `:latest` tag reduces reproducibility. Pin to a specific version tag to ensure consistent behavior across test runs and avoid unexpected breakages from upstream changes.
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/rapidast/job_rapidast.yaml` at line 37, The container image in job_rapidast.yaml currently uses the floating tag "quay.io/redhatproductsecurity/rapidast:latest" which harms reproducibility; update the image reference in the job spec (the image field) to a fixed version tag or an immutable digest (for example "quay.io/redhatproductsecurity/rapidast:vX.Y.Z" or "quay.io/redhatproductsecurity/rapidast@sha256:...") so test runs use a known, immutable image.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 122-126: The temp config file tmpConfigFile is created with
world-readable mode 0644 when writing updatedConfig; change the file permissions
to 0600 to restrict access by replacing the os.WriteFile call to use 0600 (e.g.,
os.WriteFile(tmpConfigFile, []byte(updatedConfig), 0600)) so the bearer token is
only readable/writable by the owner while leaving the defer
os.Remove(tmpConfigFile) unchanged.
- Around line 254-258: The call to addClusterRoleToSA (after g.By("Adding
cluster-admin role to default SA")) swallows errors by only logging them,
causing later confusing failures; change the flow to fail fast by propagating or
turning the error into a test failure immediately (e.g., return the error from
the surrounding function or call the test framework's fatal/fail method) instead
of continuing, while still ensuring cleanup via removeClusterRoleFromSA in a
defer; update the block that checks err from addClusterRoleToSA to abort the
test on error rather than just log it.
- Around line 165-167: The error from the pod listing is being swallowed (if err
!= nil { return false, nil }) which hides failures; update the error handling in
the function containing that snippet (the block checking err from the pod list
call) to propagate the error instead of returning nil—return the boolean and the
actual err (or otherwise pass the err up) so callers can handle/log it; ensure
you adjust any callers/tests expecting the old signature if necessary.
In `@test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml`:
- Around line 21-22: The active scan policy name is wrong: update the
activeScan.policy value from "Operator-scan" to the custom policy name used by
the job (e.g. "custom-scan") so the copied policy file (custom-scan.policy) is
actually referenced; then open the customscan.policy file and ensure its line 3
contains <policy>custom-scan</policy> to match the policy name. Ensure
references to "custom-scan" are consistent between activeScan.policy, the job
that copies to custom-scan.policy, and the policy file content.
In `@test/integration/qe_tests/rapidast/job_rapidast.yaml`:
- Line 47: The resources block, currently empty (resources: {}), must include
explicit CPU and memory requests and limits for the container in the Job spec to
avoid resource exhaustion; edit the container spec that contains resources: {}
and replace it with appropriate request and limit values (e.g., requests.cpu,
requests.memory, limits.cpu, limits.memory) tailored to the test workload so
Kubernetes can schedule and throttle the pod predictably—update the resources
key in the job_rapidast.yaml container definition to include both requests and
limits.
---
Nitpick comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 332-333: Remove the unused import workaround by deleting the blank
assignment "var _ = batchv1.Job{}" and, if the package "batchv1" is not
referenced anywhere else, remove the "batchv1" import from the file; ensure
there are no remaining references to the symbol batchv1.Job in rapidast.go
before removing the import to avoid compile errors.
In `@test/integration/qe_tests/rapidast/job_rapidast.yaml`:
- Line 37: The container image in job_rapidast.yaml currently uses the floating
tag "quay.io/redhatproductsecurity/rapidast:latest" which harms reproducibility;
update the image reference in the job spec (the image field) to a fixed version
tag or an immutable digest (for example
"quay.io/redhatproductsecurity/rapidast:vX.Y.Z" or
"quay.io/redhatproductsecurity/rapidast@sha256:...") so test runs use a known,
immutable image.
🪄 Autofix (Beta)
Fix all unresolved CodeRabbit comments on this PR:
- Push a commit to this branch (recommended)
- Create a new PR with the fixes
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 63608fbd-4f29-46ef-aab6-b1a05c4ebc36
📒 Files selected for processing (4)
- test/integration/qe_tests/rapidast.go
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml (outdated review comment, resolved)
Codecov Report: ✅ All modified and coverable lines are covered by tests.
Additional details and impacted files:
@@ Coverage Diff @@
## main #2178 +/- ##
=======================================
Coverage 52.07% 52.07%
=======================================
Files 52 52
Lines 3896 3896
=======================================
Hits 2029 2029
Misses 1701 1701
Partials 166 166
Actionable comments posted: 3
♻️ Duplicate comments (1)
test/integration/qe_tests/rapidast.go (1)
253-257: ⚠️ Potential issue | 🟠 Major: Fail fast if the RBAC setup step fails.
Line 254 is still logged and ignored, so the scan continues with the wrong auth setup and later failures become misleading. Also register the cleanup defer only after the add succeeds.
Possible fix
 g.By("Adding cluster-admin role to default SA")
 if err := addClusterRoleToSA(ns, "default"); err != nil {
-   logf("Failed to add cluster-admin role: %v", err)
+   return false, fmt.Errorf("failed to add cluster-admin role: %w", err)
 }
 defer removeClusterRoleFromSA(ns, "default")
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/rapidast.go` around lines 253 - 257, The test currently logs and ignores errors from addClusterRoleToSA(ns, "default") and registers the cleanup defer regardless; change it to fail fast if addClusterRoleToSA returns an error (e.g., return or call the test failure helper) and only register defer removeClusterRoleFromSA(ns, "default") after addClusterRoleToSA succeeds so the cleanup is only scheduled on success; locate the calls to addClusterRoleToSA and removeClusterRoleFromSA in rapidast.go and update the error handling accordingly.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 111-133: The test is embedding a live cluster-admin SA bearer
token into a ConfigMap (see use of getSAToken(ns, "default"), tmpConfigFile and
the rapidastTC.AsAdmin().WithoutNamespace().Run("create") call); instead stop
writing the token into a ConfigMap and either (a) use a Secret (oc create secret
generic ...) with strict RBAC and reference that in the pod/config, or (b)
prefer a projected serviceAccountToken volume (serviceAccountToken projection)
so the token is mounted at runtime rather than stored in etcd; remove the code
path that replaces "Bearer sha256~xxxxxxxx" with the live token and the
subsequent ConfigMap creation, and update the test to create/use a Secret or
configure the pod spec to mount the projected token (and avoid elevating the
default SA to cluster-admin in getSAToken usage).
- Around line 50-52: The test creates the namespace but immediately calls
getSAToken() (indirectly via createRapidastConfigMap()), which can fail because
the default ServiceAccount is created asynchronously; modify getSAToken() to
poll/retry until the ServiceAccount exists and its secret/token is available (or
add a wait loop before calling getSAToken()), using exponential backoff or k8s
polling (e.g., retry every 500ms up to a timeout) and return a clear error if
the SA/token never appears; update references to getSAToken() and
createRapidastConfigMap() so callers handle the retrying behavior or wait
result.
- Around line 157-183: waitForRapidastJobCompletion currently treats
corev1.PodFailed as a successful terminal state; change that so a failed pod
stops polling but returns an error instead. In the switch on
pods.Items[0].Status.Phase inside waitForRapidastJobCompletion, replace the
PodFailed branch to return true with a descriptive error (include jobName/pod
name and pod.Status.Reason or pod.Status.Message from pods.Items[0]) so
PollUntilContextTimeout surfaces a failure. Ensure you reference
pods.Items[0].Status.Phase and the pod metadata in the error message so callers
(and logs) can identify the failure.
---
Duplicate comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 253-257: The test currently logs and ignores errors from
addClusterRoleToSA(ns, "default") and registers the cleanup defer regardless;
change it to fail fast if addClusterRoleToSA returns an error (e.g., return or
call the test failure helper) and only register defer
removeClusterRoleFromSA(ns, "default") after addClusterRoleToSA succeeds so the
cleanup is only scheduled on success; locate the calls to addClusterRoleToSA and
removeClusterRoleFromSA in rapidast.go and update the error handling
accordingly.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 4d7886d3-bb60-4836-bd6f-be59e3403945
📒 Files selected for processing (4)
- test/integration/qe_tests/rapidast.go
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
✅ Files skipped from review due to trivial changes (3)
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
Actionable comments posted: 1
♻️ Duplicate comments (2)
test/integration/qe_tests/rapidast.go (2)
224-228: ⚠️ Potential issue | 🟠 Major: Avoid granting cluster-admin to the namespace default ServiceAccount.
On Line 225, this introduces broad cluster-wide privileges to a shared SA. Use a dedicated SA for the RapiDAST job and bind only required permissions.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/rapidast.go` around lines 224 - 228, Do not grant cluster-admin to the namespace default ServiceAccount; instead create a dedicated ServiceAccount (e.g., "rapidast-sa") in the test namespace and bind only the minimal permissions the job needs (prefer a Role + RoleBinding scoped to ns or a narrowly-scoped ClusterRole if truly required). Update the test to create the SA (refer to ns and the new SA name), replace calls that target "default" (addClusterRoleToSA and removeClusterRoleFromSA) to instead create a Role/RoleBinding for required verbs/resources or use a helper that binds a narrowly-scoped ClusterRole to "rapidast-sa", and ensure cleanup removes that SA and its binding rather than modifying the default SA.
251-254: ⚠️ Potential issue | 🔴 Critical: Propagate job-completion errors instead of continuing the scan flow.
On Line 252, waitForRapidastJobCompletion errors are logged but ignored, so the test continues to parse logs from a failed/timed-out run and can report a misleading pass.
Proposed fix
 g.By("Waiting for RapiDAST Job to complete")
 if err := waitForRapidastJobCompletion(ns, jobName, RapidastJobTimeout); err != nil {
-   logf("rapidastScan timeout waiting for job completion: %v", err)
+   logf("rapidastScan abort! wait for job completion failed: %v", err)
+   logf("rapidast result: riskHigh=unknown riskMedium=unknown")
+   return false, err
 }
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/rapidast.go` around lines 251 - 254, The code logs but ignores errors from waitForRapidastJobCompletion (called with RapidastJobTimeout) which lets the test continue after failure; replace the logf-only handling with error propagation: if the enclosing function returns an error, return the err from waitForRapidastJobCompletion instead of calling logf, otherwise use the test framework's fail/expect mechanism (e.g., g.Expect(err).ToNot(HaveOccurred()) or g.Fail/FailNow) to stop the test immediately—locate the call site around the g.By("Waiting for RapiDAST Job to complete") block and update the handling of err from waitForRapidastJobCompletion accordingly (remove the logf-only branch).
🧹 Nitpick comments (1)
test/integration/qe_tests/lvms.go (1)
7113-7115: Use the suite's standard toolbox image for the restore pod.
This pod is the only one pulling a different image (quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623) from a different registry, while the rest of the file standardizes on registry.redhat.io/rhel8/support-tools:latest. For consistency and to avoid unnecessary registry/image variables, align with the suite's standard toolbox.
Proposed fix
- Image: "quay.io/openshifttest/hello-openshift@sha256:b1aabe8c8272f750ce757b6c4263a2712796297511e0c6df79144ee188933623",
+ Image: "registry.redhat.io/rhel8/support-tools:latest",
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/lvms.go` around lines 7113 - 7115, The restore pod is using a nonstandard image; update the pod spec for the container with Name "dynamic" by replacing Image "quay.io/openshifttest/hello-openshift@sha256:..." with the suite-standard toolbox image "registry.redhat.io/rhel8/support-tools:latest" so the restore pod uses the same standardized toolbox image as the rest of the suite (modify the Image field in the container spec where Name == "dynamic").
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/integration/qe_tests/lvms.go`:
- Around line 7168-7175: The test currently reads pvc.status.capacity once via
tc.Clientset.CoreV1().PersistentVolumeClaims(...).Get and compares
restoredCapacity to expectedCapacity, which can flake; replace that single read
with a wait/poll loop that repeatedly calls PersistentVolumeClaims(...).Get (or
uses a helper like o.Eventually) until
pvc.Status.Capacity[corev1.ResourceStorage].Cmp(resource.MustParse(restoredPvcSize))
>= 0 or a timeout elapses, then assert; update the check around pvcRestoreObj /
restoredCapacity / expectedCapacity to use the polled value and fail only after
the timeout.
---
Duplicate comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 224-228: Do not grant cluster-admin to the namespace default
ServiceAccount; instead create a dedicated ServiceAccount (e.g., "rapidast-sa")
in the test namespace and bind only the minimal permissions the job needs
(prefer a Role + RoleBinding scoped to ns or a narrowly-scoped ClusterRole if
truly required). Update the test to create the SA (refer to ns and the new SA
name), replace calls that target "default" (addClusterRoleToSA and
removeClusterRoleFromSA) to instead create a Role/RoleBinding for required
verbs/resources or use a helper that binds a narrowly-scoped ClusterRole to
"rapidast-sa", and ensure cleanup removes that SA and its binding rather than
modifying the default SA.
- Around line 251-254: The code logs but ignores errors from
waitForRapidastJobCompletion (called with RapidastJobTimeout) which lets the
test continue after failure; replace the logf-only handling with error
propagation: if the enclosing function returns an error, return the err from
waitForRapidastJobCompletion instead of calling logf, otherwise use the test
framework's fail/expect mechanism (e.g., g.Expect(err).ToNot(HaveOccurred()) or
g.Fail/FailNow) to stop the test immediately—locate the call site around the
g.By("Waiting for RapiDAST Job to complete") block and update the handling of
err from waitForRapidastJobCompletion accordingly (remove the logf-only branch).
---
Nitpick comments:
In `@test/integration/qe_tests/lvms.go`:
- Around line 7113-7115: The restore pod is using a nonstandard image; update
the pod spec for the container with Name "dynamic" by replacing Image
"quay.io/openshifttest/hello-openshift@sha256:..." with the suite-standard
toolbox image "registry.redhat.io/rhel8/support-tools:latest" so the restore pod
uses the same standardized toolbox image as the rest of the suite (modify the
Image field in the container spec where Name == "dynamic").
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: c5b8166b-ce5a-40b5-9168-4213458dd28e
📒 Files selected for processing (5)
- test/integration/qe_tests/lvms.go
- test/integration/qe_tests/rapidast.go
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
✅ Files skipped from review due to trivial changes (3)
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
Actionable comments posted: 1
🧹 Nitpick comments (1)
test/integration/qe_tests/lvms.go (1)
6929-6957: Reuse the namespace that setupTest() already creates.
This spec never uses the extra namespace lifecycle that it adds here. setupTest() + cleanupTest() already gives you a privileged namespace and snapshot-aware teardown, so creating testNs again just adds more namespace churn and a second cleanup path to maintain.
As per coding guidelines, "Focus on major issues impacting performance, readability, maintainability and security. Avoid nitpicks and avoid verbosity."
♻️ Proposed simplification
 setupTest()
 g.DeferCleanup(cleanupTest)
@@
- g.By("#. Create new namespace for the test scenario")
- testNs := fmt.Sprintf("lvms-test-68707-%d", time.Now().UnixNano())
- ns := &corev1.Namespace{
-   ObjectMeta: metav1.ObjectMeta{
-     Name: testNs,
-     Labels: map[string]string{
-       "pod-security.kubernetes.io/enforce": "privileged",
-       "pod-security.kubernetes.io/audit": "privileged",
-       "pod-security.kubernetes.io/warn": "privileged",
-     },
-   },
- }
- _, err := tc.Clientset.CoreV1().Namespaces().Create(context.TODO(), ns, metav1.CreateOptions{})
- o.Expect(err).NotTo(o.HaveOccurred())
- defer func() {
-   exec.Command("oc", "delete", "volumesnapshot", "--all", "-n", testNs, "--ignore-not-found").Run()
-   tc.Clientset.CoreV1().Namespaces().Delete(context.TODO(), testNs, metav1.DeleteOptions{})
- }()
+ testNs := testNamespace
+ var err error
🤖 Prompt for AI Agents
Verify each finding against the current code and only fix it if needed. In `@test/integration/qe_tests/lvms.go` around lines 6929 - 6957, This test creates and deletes an extra namespace (variables testNs, ns and the deferred exec.Command + Namespaces().Delete call) even though setupTest() already provisions a privileged namespace and cleanupTest() handles teardown; remove the manual namespace creation and its deferred deletion so the test uses the namespace provided by setupTest(), deleting the block that defines testNs/ns and the defer cleanup lines and update any references to use the existing namespace from setupTest()/cleanupTest() instead.
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/integration/qe_tests/lvms.go`:
- Around line 7034-7047: The test currently only checks PVC Status.Capacity and
writes a tiny file, which doesn’t verify the in-pod filesystem was expanded;
update the test after writePodData/execCommandInPod (near podOri.Name, testNs,
"test-container", mountPath) to run a command inside the pod (e.g., run
statfs/df on mountPath via execCommandInPod) and assert the reported filesystem
size (or available bytes) is >= 2Gi; apply the same in the other block around
lines 7168-7194 so both restored-volume checks validate in-pod filesystem
expansion rather than just PVC Capacity.
---
Nitpick comments:
In `@test/integration/qe_tests/lvms.go`:
- Around line 6929-6957: This test creates and deletes an extra namespace
(variables testNs, ns and the deferred exec.Command + Namespaces().Delete call)
even though setupTest() already provisions a privileged namespace and
cleanupTest() handles teardown; remove the manual namespace creation and its
deferred deletion so the test uses the namespace provided by setupTest(),
deleting the block that defines testNs/ns and the defer cleanup lines and update
any references to use the existing namespace from setupTest()/cleanupTest()
instead.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: ad57b7a8-1624-4f39-9d9e-663bfdc4d913
📒 Files selected for processing (5)
- test/integration/qe_tests/lvms.go
- test/integration/qe_tests/rapidast.go
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
✅ Files skipped from review due to trivial changes (3)
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
Force-pushed from 0682e5b to ca2b341 (Compare)
Actionable comments posted: 2
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@test/integration/qe_tests/rapidast.go`:
- Around line 137-139: pods.Items[0] is non-deterministic and can pick the wrong
pod; update the pod selection logic where you currently use pods.Items and the
local pod variable to deterministically pick the pod that belongs to the Job
under test (e.g., iterate pods.Items and choose the pod whose OwnerReferences
contain the Job UID or whose label "job-name" equals job.Name), and if multiple
match pick the most recent by CreationTimestamp or prefer a non-terminated
Phase; replace direct pods.Items[0] accesses with this selector so the
status/log checks use the correct Job pod.
- Around line 224-229: The deferred call to removeClusterRoleFromSA is ignoring
errors which can leave elevated RBAC in place; change the defer to capture and
handle the returned error (e.g., defer func() { if err :=
removeClusterRoleFromSA(ns, "default"); err != nil { /* fail the test or log and
mark cleanup failure */ } }()) so that removeClusterRoleFromSA's error is
checked and either causes the test to fail or is logged/reported clearly;
reference the addClusterRoleToSA and removeClusterRoleFromSA calls and ensure
the cleanup error is not silently discarded.
ℹ️ Review info
⚙️ Run configuration
Configuration used: Repository YAML (base), Organization UI (inherited)
Review profile: CHILL
Plan: Pro
Run ID: 3a68a7cb-53a9-4b36-a2d9-0d6e51b92f86
📒 Files selected for processing (4)
- test/integration/qe_tests/rapidast.go
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
- test/integration/qe_tests/rapidast/job_rapidast.yaml
✅ Files skipped from review due to trivial changes (2)
- test/integration/qe_tests/rapidast/customscan.policy
- test/integration/qe_tests/rapidast/data_rapidastconfig_lvm_v1alpha1.yaml
🚧 Files skipped from review as they are similar to previous changes (1)
- test/integration/qe_tests/rapidast/job_rapidast.yaml
@mmakwana30: The following tests failed.
Full PR test history. Your PR dashboard.
Details: Instructions for interacting with me using PR comments are available here. If you have questions or suggestions related to my behavior, please file an issue against the kubernetes-sigs/prow repository. I understand the commands that are listed here.
Logs: https://privatebin.corp.redhat.com/?ec14d99fea804fe2#33uZTzW3iPzDPLTJyai3rHaSSQd1iLFycCy6iN2UwGcc