Rework multi-volume test to use StatefulSet #66925
Conversation
/assign @davidz627
/retest
Mostly looks good, just some questions and readability things.
for _, pvc := range pvcs {
framework.Logf("Created PVC %q", pvc.Name)
framework.ExpectNoError(framework.WaitForPersistentVolumeClaimPhase(v1.ClaimBound, c, ns, pvc.Name, framework.Poll, framework.ClaimProvisionTimeout))
writeCmd += "&& sleep 10000"
why are we sleeping for so long?
I just want to simulate a long-running process.
By("Recreating the pod and validating the data")
framework.ExpectNoError(framework.DeletePodWithWait(f, c, pod))
By("Deleting the StatefulSet but not the volumes")
ss, err = ssTester.Scale(ss, 0)
Do we have to scale down before deleting? If so, can you add a comment explaining why?
Mainly because I wanted to use the StatefulSet testing library, which doesn't have a method to wait for deletion, and this is what they do.
framework.ExpectNoError(framework.WaitForPodSuccessInNamespace(c, pod.Name, ns))
ssTester.WaitForRunningAndReady(1, ss)
By("Deleting the pod and revalidating the data")
Is it necessary to double-validate here? We're already deleting the StatefulSet and bringing a new one up.
Not strictly. I can remove it.
func getVolumeFile(i int) string {
// mountPath is /mnt/vol<i+1>
return fmt.Sprintf("/mnt/vol%v/data%v", i+1, i+1)
just use i
for i := 0; i < pvcCount; i++ {
pvc := framework.MakePersistentVolumeClaim(framework.PersistentVolumeClaimConfig{}, ns)
pvc.GenerateName = ""
pvc.Name = fmt.Sprintf("vol%v", i+1)
just use i
},
},
VolumeClaimTemplates: claims,
ServiceName: "test-service",
Do we need this? There is no corresponding service being created.
}
pod := framework.MakePod(ns, nil, pvcs, false, writeCmd)
pod, err = c.CoreV1().Pods(ns).Create(pod)
spec := makeStatefulSetWithPVCs(ns, writeCmd, numVols, probe)
From the top level of this test it is not clear where the volume mount names are coming from. It would be better to extract the volume mount paths to this top level and pass them into this function, so that the test is slightly more readable.
#testingOnTheToiletEp529
done
Updated
small comment then lgtm
Expect(err).NotTo(HaveOccurred())
framework.ExpectNoError(framework.WaitForPodSuccessInNamespace(c, pod.Name, ns))
ssTester.WaitForRunningAndReady(1, ss)
Clean up the StatefulSet after the test?
claims := []v1.PersistentVolumeClaim{}
for i := 0; i < numVols; i++ {
pvc := framework.MakePersistentVolumeClaim(framework.PersistentVolumeClaimConfig{}, ns)
pvc.GenerateName = ""
Do we need to set this explicitly to ""?
Nope, I've removed it.
/sig storage
updated
/test pull-kubernetes-e2e-gce-device-plugin-gpu
/lgtm
[APPROVALNOTIFIER] This PR is APPROVED This pull-request has been approved by: davidz627, msau42 The full list of commands accepted by this bot can be found here. The pull request process is described here
Needs approval from an approver in each of these files:
Approvers can indicate their approval by writing
Automatic merge from submit-queue (batch tested with PRs 66933, 66925). If you want to cherry-pick this change to another branch, please follow the instructions here.
…-upstream-release-1.9 Automatic merge from submit-queue. Automated cherry pick of #66832: Detect if GCE PD udev link is wrong and try to correct it Cherry pick of #66832 on release-1.9. #66832: Detect if GCE PD udev link is wrong and try to correct it #66925: Rework multi-volume test to use StatefulSet
What this PR does / why we need it:
The e2e test that got added as part of #66832 fails in a multi-zone environment because the volumes get provisioned in random zones. This PR reworks the test to use StatefulSet instead, which handles provisioning multiple PVCs in the same zone.
Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged): Fixes #
Special notes for your reviewer:
Release note: