
End2End tests for DynamicVolumeProvisioning of EBS #67102

Merged (1 commit, Aug 11, 2018)
Conversation

@ddebroy (Member) commented Aug 7, 2018

What this PR does / why we need it:
Add end2end tests to exercise the DynamicProvisioningScheduling features for EBS. The tests make sure that WaitForFirstConsumer and AllowedTopologies specified in an EBS storage class have the desired effect.

Which issue(s) this PR fixes (optional, in fixes #<issue number>(, fixes #<issue_number>, ...) format, will close the issue(s) when PR gets merged):
Fixes #

Special notes for your reviewer:
Tests features added to 217a3d8

Release note:

NONE

/sig storage
/assign @msau42 @jsafrane

@k8s-ci-robot k8s-ci-robot added release-note-none Denotes a PR that doesn't merit a release note. sig/storage Categorizes an issue or PR as relevant to SIG Storage. size/L Denotes a PR that changes 100-499 lines, ignoring generated files. needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. cncf-cla: yes Indicates the PR's author has signed the CNCF CLA. labels Aug 7, 2018
@msau42 (Member) commented Aug 8, 2018

/ok-to-test

@k8s-ci-robot k8s-ci-robot removed the needs-ok-to-test Indicates a PR that requires an org member to verify it is safe to test. label Aug 8, 2018
defer func() {
framework.DeletePodOrFail(client, pod.Namespace, pod.Name)
}()

Member:

should we validate that the pod came up running?

Member Author:

CreateClientPod -> CreatePod -> WaitForPodNameRunningInNamespace polls until the pod reaches the PodRunning state, so I think that should be covered?

func addAllowedTopologiesToStorageClass(c clientset.Interface, sc *storage.StorageClass) sets.String {
zones, err := framework.GetClusterZones(c)
Expect(err).ToNot(HaveOccurred())
zoneCount := rand.Intn(zones.Len())
Member:

what happens if zones.Len() == 0?

instead of randomly picking a number of zones, I would make the test deterministic and have a test for 1 zone for now. Maybe in the future, we can have a test case for multiple zones, and we can deterministically control which zones to use (ie through pod node selector or something)
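A deterministic selection with the missing empty-set guard could look like the sketch below. Note that `rand.Intn(n)` panics when `n == 0`, which is exactly the `zones.Len() == 0` hazard raised above. This is illustrative only; the real test works with `framework.GetClusterZones` and a `sets.String`, not a plain slice.

```go
package main

import (
	"errors"
	"fmt"
	"sort"
)

// pickZone deterministically selects a single zone from the cluster's
// zone set, guarding against an empty set (rand.Intn panics on 0).
// Sorting first makes the choice reproducible across runs.
func pickZone(zones []string) (string, error) {
	if len(zones) == 0 {
		return "", errors.New("no zones found in cluster")
	}
	sort.Strings(zones)
	return zones[0], nil
}

func main() {
	z, err := pickZone([]string{"us-east-1c", "us-east-1a"})
	fmt.Println(z, err)
}
```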

Member Author:

Assuming the nodes in the cluster where the tests are launched can span multiple zones (i.e. we cannot assume a single zone): for the WaitForFirstConsumer + AllowedTopology case, the test will (1) pick a random schedulable node from the cluster, (2) use the selected node's zone to set the allowed topology, and (3) use a node selector in the test pod to schedule it on the selected node and perform checks. For the AllowedTopology (without WaitForFirstConsumer) case, steps (2) and (3) will be skipped.

Member:

Why do you need to do 3)? The purpose of volume scheduling is that the pod will go where the storage is

Member Author:

True. AllowedTopology is ignored during provisioning, so we can set it to any random zone and the test should still succeed; (3) above is unnecessary.
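Either way, the assertion the test ultimately needs is that the provisioned PV's zone falls inside the allowed set. A minimal sketch of that check, using plain maps in place of the real `v1.PersistentVolume` labels (the zone-label key is the beta one EBS used at the time):

```go
package main

import "fmt"

// zoneAllowed reports whether the zone recorded on a provisioned PV's
// labels falls inside the storage class's allowed topology values.
// Illustrative stand-in for the test's PV checks.
func zoneAllowed(pvLabels map[string]string, allowedZones []string) bool {
	zone, ok := pvLabels["failure-domain.beta.kubernetes.io/zone"]
	if !ok {
		return false // no zone label recorded on the PV
	}
	for _, z := range allowedZones {
		if z == zone {
			return true
		}
	}
	return false
}

func main() {
	labels := map[string]string{"failure-domain.beta.kubernetes.io/zone": "us-east-1a"}
	fmt.Println(zoneAllowed(labels, []string{"us-east-1a"}))
}
```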

})
})
Describe("EBS allowedTopology [Feature:DynamicProvisioningScheduling]", func() {
It("should create persistent volume in the zones specified in allowedTopology of storageclass", func() {
Member:

one more test case for allowedTopology + delayed binding?

@ddebroy (Member Author) commented Aug 9, 2018

Addressed review comments and squashed commits

@@ -874,6 +972,85 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
Expect(err).NotTo(HaveOccurred())
})
})
Describe("EBS delayed binding [Feature:DynamicProvisioningScheduling]", func() {
Member:

It might be better to structure it like the "should provision storage with different parameters" test cases, where all the providers are under a single test case.

So, something like Describe("DynamicProvisioner delayed [Feature...]")
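The suggested structure is essentially a table-driven test: one entry per provider, iterated inside a single Describe. A sketch under stated assumptions — the case names and fields here are hypothetical, and the Ginkgo Describe/It wrappers and provider-skip logic are elided:

```go
package main

import "fmt"

// delayedBindingCase is a hypothetical table entry: one per cloud
// provider that supports delayed-binding dynamic provisioning.
type delayedBindingCase struct {
	name        string
	provisioner string
	cloud       string // provider that must be active for this case to run
}

// delayedBindingCases returns the provider table a single
// "DynamicProvisioner delayed [Feature...]" Describe would iterate over.
func delayedBindingCases() []delayedBindingCase {
	return []delayedBindingCase{
		{name: "EBS delayed binding", provisioner: "kubernetes.io/aws-ebs", cloud: "aws"},
		{name: "GCE PD delayed binding", provisioner: "kubernetes.io/gce-pd", cloud: "gce"},
	}
}

func main() {
	for _, tc := range delayedBindingCases() {
		// In the real suite each entry would be skipped unless tc.cloud
		// matches the framework's active provider, then run the checks.
		fmt.Println(tc.name, "->", tc.provisioner)
	}
}
```

This keeps the per-provider skip logic in one place instead of one Describe block per cloud.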

@@ -874,6 +972,85 @@ var _ = utils.SIGDescribe("Dynamic Provisioning", func() {
Expect(err).NotTo(HaveOccurred())
})
})
Describe("EBS delayed binding [Feature:DynamicProvisioningScheduling]", func() {
It("should create persistent volume in the same zone as node after a pod mounting the claim is started", func() {
Member:

How long does this test take? It may need [Slow] tag if it's more than 2 minutes

Member Author:

When run by itself it typically took about 100 seconds, and I did see the test infra marking it slow ([SLOW TEST:96.519 seconds]), so I have put the [Slow] tag on the delayed binding tests.

Member:

I think under 2 minutes is fineeeeee :-)

Signed-off-by: Deep Debroy <ddebroy@docker.com>
@msau42 (Member) commented Aug 11, 2018

/lgtm

@k8s-ci-robot k8s-ci-robot added the lgtm "Looks good to me", indicates that a PR is ready to be merged. label Aug 11, 2018
@k8s-ci-robot (Contributor):

[APPROVALNOTIFIER] This PR is APPROVED

This pull-request has been approved by: ddebroy, msau42

The full list of commands accepted by this bot can be found here.

The pull request process is described here

Needs approval from an approver in each of these files:

Approvers can indicate their approval by writing /approve in a comment
Approvers can cancel approval by writing /approve cancel in a comment

@k8s-ci-robot k8s-ci-robot added the approved Indicates a PR has been approved by an approver from all required OWNERS files. label Aug 11, 2018
@k8s-github-robot:

/test all [submit-queue is verifying that this PR is safe to merge]

@k8s-github-robot:

Automatic merge from submit-queue. If you want to cherry-pick this change to another branch, please follow the instructions here.

@k8s-github-robot k8s-github-robot merged commit 89e57b5 into kubernetes:master Aug 11, 2018