Increase provisioning test timeouts. #24185

Merged
8 changes: 4 additions & 4 deletions test/e2e/volume_provisioning.go
@@ -110,16 +110,16 @@ var _ = framework.KubeDescribe("Dynamic provisioning", func() {
 // 10 minutes here. There is no way how to see if kubelet is
 // finished with cleaning volumes. A small sleep here actually
 // speeds up the test!
-// One minute should be enough to clean up the pods properly.
-// Detaching e.g. a Cinder volume takes some time.
+// Three minutes should be enough to clean up the pods properly.
+// We've seen GCE PD detach to take more than 1 minute.
 By("Sleeping to let kubelet destroy all pods")
-time.Sleep(time.Minute)
+time.Sleep(3 * time.Minute)
Contributor: Is it possible to use waitForPodTerminated to monitor pod termination instead of sleeping?

Member Author: No, it is not. As you can see a few lines above, the test already waits for pod termination. And as described in the comment, when a pod is deleted, all its volumes are still attached to the node and kubelet (slowly) starts detaching them. There is no event that would signal that kubelet has finished and all volumes are detached.

Member: You could get fancy: retrieve the volume name from the PV object, use it to figure out which node it gets attached to, and monitor for detach. See https://github.com/kubernetes/kubernetes/blob/master/test/e2e/pd.go#L498
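
A rough sketch of what such a wait-for-detach loop could look like, assuming it lives in the e2e test package (which already imports fmt and time). The diskIsAttached helper here is hypothetical — a per-cloud-provider check of which only the GCE variant exists in pd.go, which is exactly the gap raised below:

// waitForVolumeDetach is a sketch only: it polls a hypothetical
// diskIsAttached(volumeName, nodeName) helper until the cloud provider
// reports the volume as detached, or the timeout expires.
func waitForVolumeDetach(volumeName, nodeName string, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(10 * time.Second) {
		attached, err := diskIsAttached(volumeName, nodeName)
		if err != nil {
			return err
		}
		if !attached {
			return nil // kubelet has finished detaching the volume
		}
	}
	return fmt.Errorf("volume %q is still attached to node %q after %v", volumeName, nodeName, timeout)
}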

Member Author: Well, I could, but as you can see, AWS and Cinder are not implemented there. A common cloud volume interface, as suggested in #21984, would help a lot (with an IsDiskAttached() method).

I'd propose fixing the flake now and polishing the timers later.
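
The kind of interface hinted at above could look roughly like this; the names below are illustrative only, not the actual API proposed in #21984:

// Volumes is an illustrative sketch of a common cloud volume interface.
// With something like this, the test could poll IsDiskAttached on any
// cloud provider instead of sleeping for a fixed duration.
type Volumes interface {
	// AttachDisk attaches the named disk to the given node.
	AttachDisk(diskName, nodeName string) error
	// DetachDisk detaches the named disk from the given node.
	DetachDisk(diskName, nodeName string) error
	// IsDiskAttached reports whether the named disk is currently
	// attached to the given node.
	IsDiskAttached(diskName, nodeName string) (bool, error)
}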

Member: Good point, agreed.


By("deleting the claim")
framework.ExpectNoError(c.PersistentVolumeClaims(ns).Delete(claim.Name))

// Wait for the PV to get deleted too.
framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(c, pv.Name, 1*time.Second, 10*time.Minute))
framework.ExpectNoError(framework.WaitForPersistentVolumeDeleted(c, pv.Name, 5*time.Second, 20*time.Minute))
})
})
})
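
For reference, the two arguments changed in the last hunk are the poll interval and the overall timeout. A simplified sketch of such a wait loop — not the framework's actual WaitForPersistentVolumeDeleted implementation, and assuming the unversioned client and apierrs (pkg/api/errors) helpers used elsewhere in this file — looks roughly like:

// waitForPVDeleted is a simplified sketch of a "wait until the PV is gone"
// helper: it checks every `poll` whether the PersistentVolume still exists
// and gives up after `timeout`.
func waitForPVDeleted(c *client.Client, pvName string, poll, timeout time.Duration) error {
	for start := time.Now(); time.Since(start) < timeout; time.Sleep(poll) {
		_, err := c.PersistentVolumes().Get(pvName)
		if apierrs.IsNotFound(err) {
			return nil // the PV was reclaimed and deleted
		}
		if err != nil {
			return err // unexpected API error
		}
	}
	return fmt.Errorf("PersistentVolume %s still exists after %v", pvName, timeout)
}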