[MTC-294] Add e2e test for CronJob migration with attached PVC as non-admin user #268
midays wants to merge 2 commits into migtools:main
Conversation
[MTC-294] This commit introduces a new test case that validates the migration of a CronJob along with its attached Persistent Volume Claim (PVC) when executed by a non-admin user. The test ensures that the necessary permissions are granted, the source application is prepared correctly, and the PVC is transferred and validated on the target cluster. It also checks that the CronJob is scheduled correctly on the target after migration.

Signed-off-by: midays@redhat.com <midays@redhat.com>
📝 Walkthrough
A new e2e test is added to verify CronJob-with-attached-PVC migration for non-admin users, exercising the complete migration pipeline, including PVC content-integrity validation across the source and target clusters.

Estimated code review effort: 🎯 3 (Moderate) | ⏱️ ~25 minutes
🚥 Pre-merge checks: ✅ 3 passed
Test Coverage Report
Total: 33.0%
Posted by CI
Actionable comments posted: 3
🤖 Prompt for all review comments with AI agents
Verify each finding against the current code and only fix it if needed.
Inline comments:
In `@e2e-tests/tests/mtc_294_cronJob_PVC_test.go`:
- Around line 53-63: The test is switching back to scenario.KubectlTgt for
target-side mutations which bypasses the non-admin path; instead, use the
namespace-admin runner returned by SetupNamespaceAdminUsersForScenario
(kubectlSrcNonAdmin) for all target-side namespace operations (e.g., creating
the pvc-reader ServiceAccount and unsuspending the CronJob). Update the places
that currently call scenario.KubectlTgt (including the block around the initial
setup and the similar block at lines 199-246) to execute those mutations via
kubectlSrcNonAdmin so the non-admin scenario is exercised end-to-end, and keep
existing DeferCleanup calls unchanged.
- Around line 174-178: The test currently only checks that
ListPVCs(tgtApp.Namespace, "", tgtApp.Context) returns a non-empty slice
(tgtPVCs); update the assertion to verify each expected migrated PVC name exists
in tgtPVCs and that its Status.Phase equals "Bound"
(corev1.PersistentVolumeClaimPhase/ corev1.ClaimBound). Locate where the source
PVC names are gathered (e.g., srcPVCs or the setup that created the PVCs) and
for each expected name assert a matching entry in tgtPVCs by .Name and assert
its .Status.Phase == corev1.ClaimBound using Expect; keep using ListPVCs,
tgtApp.Namespace and tgtApp.Context to fetch current target PVCs before
validating names/phases.
- Around line 79-111: The current waitForLatestCronPod only waits for the latest
pod to appear and then checks logs, which can race with an active writer; change
it to wait until the pod has completed (phase Succeeded) before returning and
before any suspend/export/validation steps. Locate waitForLatestCronPod and
replace the pod-existence check with a loop or kubectl invocation that queries
the pod's .status.phase (or uses kubectl wait for the pod to reach Succeeded)
for the resolved podName, and only then return podName; also ensure
assertPodLogsContain is used after that completion check so log-based completion
assertions happen on a finished pod. Ensure you reference the pod via the same
label selection logic already in waitForLatestCronPod (cronowner= + appName) so
the correct latest pod is checked.
ℹ️ Review info
⚙️ Run configuration
Configuration used: defaults
Review profile: CHILL
Plan: Pro
Run ID: bc6d7fd6-b818-467f-a3c5-309b9fa958e6
📒 Files selected for processing (1)
e2e-tests/tests/mtc_294_cronJob_PVC_test.go
```go
kubectlSrcNonAdmin, _, cleanup, err := SetupNamespaceAdminUsersForScenario(scenario, namespace)
Expect(err).NotTo(HaveOccurred())
DeferCleanup(func() {
	By("Delete test namespace on source and target (best effort)")
	for _, k := range []KubectlRunner{scenario.KubectlSrc, scenario.KubectlTgt} {
		if _, err := k.Run("delete", "namespace", namespace, "--ignore-not-found=true", "--wait=false"); err != nil {
			log.Printf("cleanup: failed to delete namespace %q on context %q: %v", namespace, k.Context, err)
		}
	}
})
DeferCleanup(cleanup)
```
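Ginkgo's `DeferCleanup` runs registered callbacks in last-in-first-out order, so the `cleanup` returned by `SetupNamespaceAdminUsersForScenario` (registered last above) runs before the namespace deletion. A minimal standalone sketch of that ordering, using an illustrative slice-backed stack rather than Ginkgo itself:

```go
package main

import "fmt"

// cleanups models Ginkgo's DeferCleanup semantics: callbacks are stored in
// registration order and executed in reverse (LIFO) order.
type cleanups struct{ fns []func() }

func (c *cleanups) deferCleanup(fn func()) { c.fns = append(c.fns, fn) }

func (c *cleanups) runAll() {
	for i := len(c.fns) - 1; i >= 0; i-- { // reverse registration order
		c.fns[i]()
	}
}

func main() {
	var c cleanups
	c.deferCleanup(func() { fmt.Println("delete namespace") })       // registered first, runs last
	c.deferCleanup(func() { fmt.Println("remove non-admin users") }) // registered last, runs first
	c.runAll()
}
```

This is why the ordering in the test is safe: the non-admin users are torn down while the namespace still exists, and the namespace deletion happens afterwards.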
Keep the target-side mutations on the namespace-admin runner.
After setting up namespace-admin access, the test switches back to scenario.KubectlTgt for creating pvc-reader and unsuspending the CronJob. That bypasses the non-admin path the scenario is supposed to validate.
Suggested change

```diff
-kubectlSrcNonAdmin, _, cleanup, err := SetupNamespaceAdminUsersForScenario(scenario, namespace)
+kubectlSrcNonAdmin, kubectlTgtNonAdmin, cleanup, err := SetupNamespaceAdminUsersForScenario(scenario, namespace)
 Expect(err).NotTo(HaveOccurred())
 @@
-_, err = scenario.KubectlTgt.Run(
+_, err = kubectlTgtNonAdmin.Run(
 	"run", "pvc-reader",
 	"-n", namespace,
 	"--image=busybox",
 @@
-_, err = scenario.KubectlTgt.Run(
+_, err = kubectlTgtNonAdmin.Run(
 	"patch", "cronjob", appName,
 	"-n", namespace,
 	"-p", `{"spec":{"suspend":false}}`,
 )
```

Also applies to: 199-246
```go
By("Wait for at least one job pod to succeed and write to PVC")
waitForLatestCronPod := func(k KubectlRunner) string {
	var podName string
	Eventually(func() string {
		out, err := k.Run(
			"get", "pod",
			"-n", namespace,
			"-l", "cronowner="+appName,
			"--sort-by=.metadata.creationTimestamp",
			"-o", "jsonpath={.items[*].metadata.name}",
		)
		if err != nil {
			return ""
		}
		pods := strings.Fields(out)
		if len(pods) == 0 {
			return ""
		}
		podName = pods[len(pods)-1]
		return podName
	}, "3m", "10s").ShouldNot(BeEmpty())
	return podName
}

assertPodLogsContain := func(k KubectlRunner, podName, substr string) {
	Eventually(func() string {
		out, err := k.Run("logs", podName, "-n", namespace)
		if err != nil {
			return ""
		}
		return out
	}, "2m", "10s").Should(ContainSubstring(substr))
}
```
Wait for the CronJob pod to finish before exporting or validating PVC state.
waitForLatestCronPod only waits for pod creation, then log output. The source pod can still be running when you suspend/export, which makes the PVC copy race with the active writer and turns this into a flaky partial-write check.
Suggested change

```diff
 waitForLatestCronPod := func(k KubectlRunner) string {
 	var podName string
 	Eventually(func() string {
 		out, err := k.Run(
 			"get", "pod",
 			"-n", namespace,
 			"-l", "cronowner="+appName,
 			"--sort-by=.metadata.creationTimestamp",
 			"-o", "jsonpath={.items[*].metadata.name}",
 		)
 		if err != nil {
 			return ""
 		}
 		pods := strings.Fields(out)
 		if len(pods) == 0 {
 			return ""
 		}
 		podName = pods[len(pods)-1]
 		return podName
 	}, "3m", "10s").ShouldNot(BeEmpty())
+	Eventually(func() string {
+		out, err := k.Run(
+			"get", "pod", podName,
+			"-n", namespace,
+			"-o", "jsonpath={.status.phase}",
+		)
+		if err != nil {
+			return ""
+		}
+		return out
+	}, "3m", "10s").Should(Equal("Succeeded"))
 	return podName
 }
```
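The pod-selection step inside the helper reduces to a small pure function: split the jsonpath output on whitespace and take the last entry, which is the newest pod because the kubectl query sorts by `.metadata.creationTimestamp`. A minimal standalone sketch of that logic (the `latestPod` name is illustrative, not from the PR):

```go
package main

import (
	"fmt"
	"strings"
)

// latestPod picks the newest pod from a space-separated, creation-time-sorted
// pod list, mirroring the selection logic inside waitForLatestCronPod.
func latestPod(jsonpathOut string) string {
	pods := strings.Fields(jsonpathOut)
	if len(pods) == 0 {
		return ""
	}
	return pods[len(pods)-1] // last entry is the most recently created pod
}

func main() {
	fmt.Println(latestPod("cron-28412 cron-28413 cron-28414"))
	fmt.Println(latestPod("") == "") // empty output yields the empty string
}
```

Because CronJobs can spawn a new pod between polls, factoring the selection out like this makes it easy to reason about which pod the later phase and log checks will target.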
```go
By("Verify PVC exists and is Bound on target")
tgtPVCs, err := ListPVCs(tgtApp.Namespace, "", tgtApp.Context)
Expect(err).NotTo(HaveOccurred())
Expect(tgtPVCs).NotTo(BeEmpty(), "expected PVC to be present on target after transfer")
log.Printf("PVC(s) confirmed on target cluster\n")
```
Verify the transferred PVCs by name and phase.
This only proves the target namespace has some PVC. In a fixed namespace, stale resources can satisfy NotTo(BeEmpty()), and the test never asserts that each migrated claim exists and reaches Bound.
Suggested change

```diff
-tgtPVCs, err := ListPVCs(tgtApp.Namespace, "", tgtApp.Context)
-Expect(err).NotTo(HaveOccurred())
-Expect(tgtPVCs).NotTo(BeEmpty(), "expected PVC to be present on target after transfer")
-log.Printf("PVC(s) confirmed on target cluster\n")
+for _, pvc := range pvcs {
+	Eventually(func() string {
+		out, err := scenario.KubectlTgt.Run(
+			"get", "pvc", pvc.Name,
+			"-n", tgtApp.Namespace,
+			"-o", "jsonpath={.status.phase}",
+		)
+		if err != nil {
+			return ""
+		}
+		return out
+	}, "2m", "5s").Should(Equal("Bound"), "expected PVC %q to be Bound on target", pvc.Name)
+}
+log.Printf("PVC(s) confirmed on target cluster\n")
```
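The name-and-phase check the review asks for can be sketched as a standalone helper. This is an illustrative sketch, not code from the PR: the `pvc` struct stands in for `corev1.PersistentVolumeClaim`, and the `"Bound"` literal corresponds to `corev1.ClaimBound`.

```go
package main

import "fmt"

// pvc stands in for corev1.PersistentVolumeClaim in this sketch.
type pvc struct {
	Name  string
	Phase string // corev1.ClaimBound is the string "Bound"
}

// allBound returns an error for the first expected claim that is missing or
// not Bound, or nil when every expected claim is present and Bound.
func allBound(expected []string, listed []pvc) error {
	phases := make(map[string]string, len(listed))
	for _, p := range listed {
		phases[p.Name] = p.Phase
	}
	for _, name := range expected {
		phase, ok := phases[name]
		if !ok {
			return fmt.Errorf("PVC %q missing on target", name)
		}
		if phase != "Bound" {
			return fmt.Errorf("PVC %q in phase %q, want Bound", name, phase)
		}
	}
	return nil
}

func main() {
	listed := []pvc{{Name: "cron-data", Phase: "Bound"}, {Name: "stale-pvc", Phase: "Pending"}}
	fmt.Println(allBound([]string{"cron-data"}, listed))
	fmt.Println(allBound([]string{"cron-data", "other"}, listed))
}
```

Checking by name closes the stale-resource loophole the review describes: a leftover `stale-pvc` in a fixed namespace satisfies `NotTo(BeEmpty())` but fails this per-claim check.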
This update modifies the e2e test for CronJob migration to dynamically retrieve the PVC name from the list of available PVCs instead of using a hardcoded value. This change makes the test more flexible and ensures it accurately reflects the current state of the environment during execution.

Signed-off-by: midays@redhat.com <midays@redhat.com>